CHAPTER 5
Bivariate and Marginal Probability Distributions
Let X1 = 0 if the child survived the accident and X1 = 1 if not, and let X2 record the restraint in use:
X2 = 0 if no belt used
X2 = 1 if adult belt used
X2 = 2 if child seat used
The following table represents the joint probability distribution of X1
and X2 . In general we write P(X1 = x1 , X2 = x2 ) = p(x1 , x2) and call
p(x1 , x2) the joint probability function of (X1 , X2).
                X1
             0      1
       -----------------------
    0 |   0.38   0.17 |  0.55
X2  1 |   0.14   0.02 |  0.16
    2 |   0.24   0.05 |  0.29
       -----------------------
          0.76   0.24
The probability that a child will both survive and be in a child seat when
involved in an accident is:
P(X1 = 0, X2 = 2) = 0.24
The conditional probability distribution for X1 given X2 fixes a value of X2
and looks at the relative frequencies for the values of X1. For example,
conditioning on X2 = 0 produces
P(X1 = 0 | X2 = 0) = 0.38/0.55 = 0.69,
and similarly
P(X1 = 1 | X2 = 0) = 0.17/0.55 = 0.31.
These two values show how survivorship relates to the situation in which no
seat belt is in use: 69% of the children in such situations survived fatal accidents.
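As a quick illustration (my own sketch, not part of the original notes), the joint table above can be stored as a dictionary and used to recover the marginal and conditional probabilities:

```python
# the joint table above as a dict: (x1, x2) -> P(X1 = x1, X2 = x2)
joint = {
    (0, 0): 0.38, (1, 0): 0.17,
    (0, 1): 0.14, (1, 1): 0.02,
    (0, 2): 0.24, (1, 2): 0.05,
}

# marginal of X2: sum the joint probabilities over x1 (the row totals)
p_x2 = {k: sum(p for (x1, x2), p in joint.items() if x2 == k)
        for k in (0, 1, 2)}

# conditional distribution of X1 given X2 = 0
p_x1_given_x2_0 = {x1: joint[(x1, 0)] / p_x2[0] for x1 in (0, 1)}

print(p_x2)              # row totals: ≈ {0: 0.55, 1: 0.16, 2: 0.29}
print(p_x1_given_x2_0)   # ≈ {0: 0.69, 1: 0.31}
```

The conditional values match the 0.69 and 0.31 computed above.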
Independent Random Variables
Exercise
Let X and Y denote the proportions of two different chemicals in a sample
mixture of chemicals used as an insecticide. Suppose X and Y have the joint
probability density given by:
f(x, y) = 2, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ x + y ≤ 1
f(x, y) = 0, elsewhere
(Note that X + Y must be at most unity since the random variables denote
proportions within the same sample).
1) Find the marginal density functions for X and Y.
2) Are X and Y independent?
3) Find P(X > 1/2 | Y =1/4).
1)
f1(x) = ∫₀^(1−x) 2 dy = 2(1 − x), 0 ≤ x ≤ 1, and 0 otherwise
f2(y) = ∫₀^(1−y) 2 dx = 2(1 − y), 0 ≤ y ≤ 1, and 0 otherwise
2) f1(x) f2(y) = 2(1 − x) · 2(1 − y) ≠ 2 = f(x, y) for 0 ≤ x ≤ 1 − y.
Therefore X and Y are not independent.
3)
P(X > 1/2 | Y = 1/4) = ∫_(1/2)^(3/4) f(x | y = 1/4) dx = ∫_(1/2)^(3/4) [f(x, 1/4) / f2(1/4)] dx
                     = ∫_(1/2)^(3/4) [2 / (2(1 − 1/4))] dx = (4/3)(3/4 − 1/2) = 1/3
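A numerical cross-check of part 3 (a sketch of my own, not part of the exercise): approximate both the numerator and the marginal f2(1/4) by midpoint Riemann sums over the joint density from the exercise.

```python
def f(x, y):
    # joint density: 2 inside the triangle 0 <= x, 0 <= y, x + y <= 1
    return 2.0 if x >= 0 and y >= 0 and x + y <= 1 else 0.0

n = 100_000
y = 0.25
# numerator: integral of f(x, 1/4) over x in (1/2, 1)
numer = sum(f(0.5 + (i + 0.5) * (0.5 / n), y) for i in range(n)) * (0.5 / n)
# denominator: f2(1/4) = integral of f(x, 1/4) over x in (0, 1)
denom = sum(f((i + 0.5) * (1.0 / n), y) for i in range(n)) * (1.0 / n)
print(numer / denom)  # ≈ 1/3
```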
If X and Y are independent, then it follows that
E[G(X)H(Y)] = E[G(X)] E[H(Y)].
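A Monte Carlo sketch of this product rule; the Uniform(0, 1) distributions and the particular G and H below are arbitrary choices for illustration, not taken from the text.

```python
import random

random.seed(0)

def G(x):
    return x * x            # arbitrary function of X

def H(y):
    return 1.0 + y          # arbitrary function of Y

n = 200_000
xs = [random.random() for _ in range(n)]   # X drawn independently of Y
ys = [random.random() for _ in range(n)]

lhs = sum(G(x) * H(y) for x, y in zip(xs, ys)) / n
rhs = (sum(G(x) for x in xs) / n) * (sum(H(y) for y in ys) / n)
print(lhs, rhs)  # both ≈ E[X²]·E[1 + Y] = (1/3)(3/2) = 0.5
```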
If, on the other hand, Y tends to be small when X is large and large
when X is small, then X and Y will have a negative covariance.
While covariance measures the direction of the association between two
random variables, correlation measures the strength of the association.
Clearly, the correlation between the two variables should not be dependent
on the actual units used, but simply how the variables are related to each
other. This can be achieved simply by normalizing or scaling the
covariance, which yields the correlation coefficient.
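The effect of this normalization can be seen in a small sketch: dividing the covariance by the product of the standard deviations gives a unit-free quantity, so rescaling or shifting a variable leaves the correlation unchanged. The data-generating recipe below is an arbitrary construction for illustration.

```python
import math
import random

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    # covariance scaled by the two standard deviations
    return cov(xs, ys) / math.sqrt(cov(xs, xs) * cov(ys, ys))

random.seed(1)
xs = [random.random() for _ in range(10_000)]
ys = [x + 0.5 * random.random() for x in xs]   # positively related to xs

scaled = [100.0 * y + 7.0 for y in ys]         # same data, different units
print(corr(xs, ys), corr(xs, scaled))  # equal up to rounding
```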
Example
The proportions X and Y of two chemicals found in samples of an
insecticide have the joint probability density function
f(x, y) = 2, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ x + y ≤ 1
f(x, y) = 0, elsewhere
Let Z = X + Y. To apply Tchebysheff's theorem to Z we need E(Z) and V(Z). First,
E[(X + Y)²] = ∫₀¹ ∫₀^(1−x) (x + y)² · 2 dy dx = ∫₀¹ (2/3)(1 − x³) dx = (2/3)(x − x⁴/4)|₀¹ = (2/3)(3/4) = 1/2
and, since E(Z) = E(X) + E(Y) = 1/3 + 1/3 = 2/3,
V(Z) = E(Z²) − [E(Z)]² = 1/2 − (2/3)² = 1/2 − 4/9 = 1/18
Solution
Using Tchebysheff's theorem with k = √2 we have
P(2/3 − √(2/18) < X + Y < 2/3 + √(2/18)) ≥ 1 − 1/k² = 0.5
Since √(2/18) = 1/3, the desired interval is (1/3, 1).
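A simulation sketch (the rejection sampler is my own construction, not from the notes): draw (X, Y) uniformly over the triangle and estimate how often X + Y actually lands in the Tchebysheff interval (1/3, 1).

```python
import random

random.seed(2)

def sample_triangle():
    # rejection sampling: keep only points inside x + y <= 1
    while True:
        x, y = random.random(), random.random()
        if x + y <= 1:
            return x, y

n = 100_000
hits = 0
for _ in range(n):
    x, y = sample_triangle()
    if 1/3 < x + y < 1:
        hits += 1
print(hits / n)  # ≈ 8/9, comfortably above the 0.5 guaranteed by the bound
```

The bound is conservative here: the exact probability, from P(X + Y ≤ z) = z², is 1 − 1/9 = 8/9.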
Solution
f1(x) = ∫₀^(1−x) f(x, y) dy = ∫₀^(1−x) 2 dy = 2(1 − x), 0 ≤ x ≤ 1
f2(y) = ∫₀^(1−y) f(x, y) dx = ∫₀^(1−y) 2 dx = 2(1 − y), 0 ≤ y ≤ 1
E(X) = E(Y) = ∫₀¹ z · 2(1 − z) dz = 2 ∫₀¹ (z − z²) dz = 2 (z²/2 − z³/3)|₀¹ = 2 (1/2 − 1/3) = 2/6 = 1/3
E(X²) = E(Y²) = ∫₀¹ z² · 2(1 − z) dz = 2 ∫₀¹ (z² − z³) dz = 2 (z³/3 − z⁴/4)|₀¹ = 2 (1/3 − 1/4) = 2/12 = 1/6
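These two moments can be checked with a midpoint Riemann sum (my own sketch), using the marginal density 2(1 − z) on [0, 1] found above.

```python
n = 100_000
dz = 1.0 / n
mids = [(i + 0.5) * dz for i in range(n)]    # midpoints of the subintervals
e_z = sum(z * 2 * (1 - z) for z in mids) * dz       # ≈ E(X) = 1/3
e_z2 = sum(z * z * 2 * (1 - z) for z in mids) * dz  # ≈ E(X²) = 1/6
print(e_z, e_z2)
```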
Solution
Var(X) = Var(Y) = E(X²) − [E(X)]² = 1/6 − (1/3)² = 1/6 − 1/9 = 1/18
E(XY) = ∫₀¹ ∫₀^(1−x) xy · 2 dy dx = ∫₀¹ x (y²)|₀^(1−x) dx = ∫₀¹ x (1 − x)² dx
      = ∫₀¹ (x − 2x² + x³) dx = (x²/2 − 2x³/3 + x⁴/4)|₀¹ = 1/2 − 2/3 + 1/4 = 1/12
ρ = Cov(X, Y) / √(Var(X) Var(Y)) = (1/12 − 1/9) / √((1/18)(1/18)) = (−1/36) / (1/18) = −1/2
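A Monte Carlo cross-check (a sketch, not part of the notes): for (X, Y) uniform over the triangle x ≥ 0, y ≥ 0, x + y ≤ 1, the sample correlation should land near the value −1/2 derived above.

```python
import math
import random

random.seed(3)
pts = []
while len(pts) < 100_000:
    x, y = random.random(), random.random()
    if x + y <= 1:              # keep only points inside the triangle
        pts.append((x, y))

xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in pts) / len(pts)
vx = sum((x - mx) ** 2 for x in xs) / len(xs)
vy = sum((y - my) ** 2 for y in ys) / len(ys)
rho = cov / math.sqrt(vx * vy)
print(rho)  # ≈ -0.5
```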
HOMEWORK 3 – Due January 25, 2007
4.57
4.63
5.3
5.9
5.19
6.15
6.24
6.35