UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION

B.Sc. Mathematics
II Semester
Complementary Course (STATISTICS)

PROBABILITY DISTRIBUTIONS

Calicut University P.O.
Malappuram, Kerala, India 673 635
Prepared by:
Sri. GIREESH BABU. M.
Department of Statistics
Government Arts & Science College, Calicut 18

Scrutinised by:
Sri. C.P. MOHAMMED (Rtd.)
Poolakkandy House
Nanmanda P.O.
Calicut District

Layout: Computer Section, SDE
Reserved
CONTENTS

Chapter 1  BIVARIATE PROBABILITY DISTRIBUTIONS
Chapter 2  MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES
Chapter 3  STANDARD DISTRIBUTIONS
Chapter 4  LAW OF LARGE NUMBERS
SYLLABUS
Module 1
Bivariate random variable: definition (discrete and continuous type), joint probability mass function and probability density function, marginal and conditional distributions, independence of random variables. (15 hours)

Module 2
Bivariate moments: definition of raw and central product moments, conditional mean and conditional variance, covariance, correlation and regression coefficients. Mean and variance of a random variable in terms of conditional mean and conditional variance. (15 hours)

Module 3
Standard Distributions: Discrete type - Bernoulli, Binomial, Poisson distributions (definition, properties and applications), Geometric and Discrete Uniform (definition, mean, variance and mgf only). Continuous type - Normal (definition, properties and applications), Rectangular, Exponential, Gamma, Beta (definition, mean, variance and mgf only), Lognormal, Pareto and Cauchy distributions (definition only). (30 hours)

Module 4
Law of Large Numbers: Chebyshev's inequality, convergence in probability, Weak Law of Large Numbers for iid random variables, Bernoulli's Law of Large Numbers, Central Limit Theorem for independent and identically distributed random variables (Lindeberg-Levy form). (12 hours)
Chapter 1
BIVARIATE PROBABILITY DISTRIBUTIONS
1.1 BIVARIATE RANDOM VARIABLES
1.1.1 Definition:
Let S be the sample space associated with a random experiment E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. Then (X, Y) is called a bivariate random variable or two-dimensional random variable.
If the possible values of (X, Y) are finite or countably infinite, (X, Y) is called a
bivariate discrete RV. When (X, Y) is a bivariate discrete RV the possible values of (X,
Y) may be represented as (xi, yj), i = 1,2, ..., m, ...; j = 1,2, ..., n, .... If (X, Y) can assume all
values in a specified region R in the xy plane, (X, Y) is called a bivariate continuous
RV.
1.1.2 Joint Probability Mass Function
Let (X, Y) be a pair of discrete bivariate random variables assuming pairs of values (x1, y1), (x2, y2), ..., (xn, yn) from the real plane. Then the function
f(xi, yj) = P(X = xi, Y = yj), also denoted pij,
is called the joint probability mass function of (X, Y), provided
1. f(xi, yj) ≥ 0 for all i, j, and
2. Σi Σj f(xi, yj) = 1.
1.1.3 Joint Probability Density Function
If (X, Y) is a two-dimensional continuous random variable, a function f(x, y) is called the joint probability density function of (X, Y), provided it satisfies
1. f(x, y) ≥ 0 for all (x, y) ∈ R, and
2. ∬_R f(x, y) dx dy = 1.
In particular, P{a ≤ X ≤ b, c ≤ Y ≤ d} = ∫_a^b ∫_c^d f(x, y) dy dx.
1.1.4 Cumulative Distribution Function
If (X, Y) is a bivariate random variable (discrete or continuous), then
F(x, y) = P{X ≤ x and Y ≤ y} is called the cdf of (X, Y).
Properties of F(x, y)
(i) F(-∞, y) = 0 = F(x, -∞), and F(∞, ∞) = 1.
(ii) P{a < X ≤ b, Y ≤ y} = F(b, y) - F(a, y).
(iii) P{X ≤ x, c < Y ≤ d} = F(x, d) - F(x, c).

1.1.5 Marginal Probability Distributions
If (X, Y) is a bivariate discrete random variable, the marginal pmf of X is
f1(xi) = P(X = xi) = Σj pij = pi*,
and similarly the marginal pmf of Y is f2(yj) = Σi pij = p*j.
If (X, Y) is a bivariate continuous random variable with joint pdf f(x, y), the marginal pdf of X is
f1(x) = ∫ f(x, y) dy,
and similarly the marginal pdf of Y is f2(y) = ∫ f(x, y) dx.
Note:
P(a ≤ X ≤ b) = ∫_a^b f1(x) dx, and similarly P(c ≤ Y ≤ d) = ∫_c^d f2(y) dy.
1.1.6 Conditional Probability Distribution
In the discrete case, the conditional pmf of X given Y = yj is the collection of pairs {xi, P(X = xi/Y = yj)}, where
P(X = xi/Y = yj) = pij / p*j.
Similarly, the collection of pairs {yj, pij/pi*} gives the conditional distribution of Y given X = xi. In the continuous case, the conditional pdf of X given Y = y is f(x/y) = f(x, y)/f2(y).
If pij = pi*·p*j for all i, j, then X and Y are said to be independent random variables. Similarly, if (X, Y) is a bivariate continuous random variable such that f(x, y) = f1(x)·f2(y), then X and Y are said to be independent random variables.
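The marginal, conditional and independence definitions above can be made concrete with a small Python sketch. The table f(x, y) = (x + 2y)/18 on {1, 2} × {1, 2} is just an assumed illustration; any table of non-negative entries summing to 1 works the same way.

```python
from fractions import Fraction as F

# Hypothetical joint pmf f(x, y) = (x + 2y)/18 on {1, 2} x {1, 2}.
joint = {(x, y): F(x + 2 * y, 18) for x in (1, 2) for y in (1, 2)}
assert sum(joint.values()) == 1          # condition 2 for a joint pmf

# Marginals: f1(x) = sum over y of f(x, y); f2(y) = sum over x.
f1 = {x: joint[(x, 1)] + joint[(x, 2)] for x in (1, 2)}
f2 = {y: joint[(1, y)] + joint[(2, y)] for y in (1, 2)}

# Conditional pmf of X given Y = 1: f(x/1) = f(x, 1)/f2(1).
cond = {x: joint[(x, 1)] / f2[1] for x in (1, 2)}
print(f1, f2, cond)

# Independence would require f(x, y) = f1(x) f2(y) in every cell.
independent = all(joint[(x, y)] == f1[x] * f2[y]
                  for x in (1, 2) for y in (1, 2))
print(independent)  # False: this table is not a product of its marginals
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point doubt when checking that the table is a valid pmf.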
1.2 SOLVED PROBLEMS

Problem 1
Given a joint pmf f(x, y), x = 1, 2; y = 1, 2, find the marginal pmfs of X and Y and the conditional pmf of X given Y = y.
Solution:
The marginal pmf of X is
f1(x) = Σy f(x, y), x = 1, 2,
and the marginal pmf of Y is
f2(y) = Σx f(x, y), y = 1, 2.
The conditional pmf of X given Y = y is f(x/y) = f(x, y)/f2(y).

Problem 2
If the conditional pdf of X given Y = y is f(x/y) = ax, 0 < x < 1, and the marginal pdf of Y is f2(y) = by^4, 0 < y < 1, find the constants a and b and hence the joint pdf.
Solution:
Since the conditional pdf f(x/y) is a pdf, we have
∫_0^1 ax dx = 1, i.e., a[x²/2]_0^1 = 1, so a = 2.
Similarly, since f2(y) is a pdf,
∫_0^1 by^4 dy = 1, i.e., b[y^5/5]_0^1 = 1, so b = 5.
Therefore the joint pdf is
f(x, y) = f2(y)·f(x/y) = 5y^4 · 2x = 10xy^4, 0 < x < 1, 0 < y < 1.
Problem 3
Given a joint pdf f(x, y) on 0 < x < 1, 0 < y < 1, find the marginal pdfs of X and Y.
Solution:
f1(x) = ∫_0^1 f(x, y) dy, 0 < x < 1,
= 0, elsewhere.
f2(y) = ∫_0^1 f(x, y) dx, 0 < y < 1,
= 0, elsewhere.
Problem 4
Find the constant k so that the given function f(x, y) is a joint pdf.
Solution:
Since f(x, y) must be a pdf, ∬ f(x, y) dx dy = 1. Carrying out the integration over the given support leads to
50k = 1, i.e., k = 1/50.
Problem 5
The joint pdf of (X, Y) is f(x, y) = (1/8)(6 - x - y), 0 < x < 2, 2 < y < 4. Find (i) P(X < 1, Y < 3), (ii) P(X + Y < 3), (iii) P(X < 1 / Y < 3).
Solution:
(i) P(X < 1, Y < 3) = (1/8) ∫_2^3 ∫_0^1 (6 - x - y) dx dy = 3/8.
(ii) P(X + Y < 3) = (1/8) ∫_2^3 ∫_0^(3-y) (6 - x - y) dx dy = 5/24.
(iii) P(Y < 3) = (1/8) ∫_2^3 ∫_0^2 (6 - x - y) dx dy = 5/8, so
P(X < 1 / Y < 3) = P(X < 1, Y < 3)/P(Y < 3) = (3/8)/(5/8) = 3/5.
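These double integrals can be cross-checked numerically. The sketch below assumes the standard form of this exercise, f(x, y) = (6 - x - y)/8 on 0 < x < 2, 2 < y < 4, and uses midpoint Riemann sums:

```python
# Midpoint Riemann sums over the support 0 < x < 2, 2 < y < 4 of the
# assumed density f(x, y) = (6 - x - y)/8.
n = 300
hx = hy = 2.0 / n

def mass(pred):
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx            # midpoint in (0, 2)
        for j in range(n):
            y = 2.0 + (j + 0.5) * hy  # midpoint in (2, 4)
            if pred(x, y):
                total += (6 - x - y) / 8 * hx * hy
    return total

p1 = mass(lambda x, y: x < 1 and y < 3)  # target 3/8
p2 = mass(lambda x, y: x + y < 3)        # target 5/24
p3 = p1 / mass(lambda x, y: y < 3)       # target 3/5
print(p1, p2, p3)
```

The indicator-function approach is crude but matches the exact answers to two or three decimals at this grid size.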
Problem 6
The joint pdf of (X, Y) is f(x, y) = 4xy e^(-(x² + y²)), x ≥ 0, y ≥ 0, and f(x, y) = 0 elsewhere. Find the marginal pdfs and the conditional pdf of X given Y = y.
Solution:
f1(x) = ∫_0^∞ 4xy e^(-(x² + y²)) dy
= 4x e^(-x²) ∫_0^∞ y e^(-y²) dy   (put y² = t)
= 4x e^(-x²) · (1/2) = 2x e^(-x²), x ≥ 0.
Similarly,
f2(y) = ∫_0^∞ 4xy e^(-(x² + y²)) dx = 2y e^(-y²), y ≥ 0.
Since f(x, y) = f1(x)·f2(y), X and Y are independent, and hence
f(X = x / Y = y) = f1(x) = 2x e^(-x²), x ≥ 0.
Chapter 2
MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES
2.1 Definition:
Let X1, X2, ..., Xn be n random variables with joint pdf f(x1, x2, ..., xn), and let g(x1, x2, ..., xn) be any function of these random variables. Then,
E[g(X1, X2, ..., Xn)] = Σ_{x1} ... Σ_{xn} g(x1, x2, ..., xn) f(x1, x2, ..., xn), if the rvs are discrete,
= ∫...∫ g(x1, x2, ..., xn) f(x1, x2, ..., xn) dx1 dx2 ... dxn, if the rvs are continuous,
provided the sum or integral on the RHS is absolutely convergent.
2.2 Properties of Expectation
2.2.1 Property
If X is a random variable and g(X) is any measurable function of X, then for two constants a and b, E(a·g(X) + b) = a·E(g(X)) + b.
Proof:
E(a·g(X) + b) = Σx [a·g(x) + b] f(x)
= a·Σx g(x) f(x) + b·Σx f(x)
= a·E(g(X)) + b, since Σx f(x) = 1.
2.2.2 Addition Theorem
Let X and Y be two discrete random variables with joint pmf f(x, y). Then E(X + Y) = E(X) + E(Y), provided the expectations exist.
Proof: By definition,
E(X + Y) = Σx Σy (x + y) f(x, y)
= Σx Σy x f(x, y) + Σx Σy y f(x, y)
= Σx x f1(x) + Σy y f2(y)
= E(X) + E(Y).
In the case of two continuous random variables, use integration instead of summation.
Remarks:
If X1, X2, ..., Xn are any finite number of random variables, then
E(X1 + X2 + ... + Xn) = E(X1) + E(X2) + ... + E(Xn).
2.2.3 Multiplication Theorem
Let X and Y be two independent random variables. Then E(XY) = E(X)·E(Y), provided all the expectations exist.
Proof
Let X and Y be two independent discrete random variables with joint pmf f(x, y); by independence, f(x, y) = f1(x)·f2(y).
Then,
E(XY) = Σx Σy xy f(x, y)
= Σx Σy xy f1(x)·f2(y)
= [Σx x f1(x)]·[Σy y f2(y)]
= E(X)·E(Y).
In the case of continuous random variables, use integration instead of summation.
Remarks:
The converse of the multiplication theorem need not be true. That is, for two random variables, E(XY) = E(X)·E(Y) need not imply that X and Y are independent.
Example 1:
Consider two random variables X and Y with joint probability function
f(x, y) = (1/5)(1 - |x|·|y|), where x = -1, 0, 1 and y = -1, 0, 1,
= 0, elsewhere.
Here the marginal pmf of X is
f1(x) = Σy f(x, y) = (1/5)(3 - 2|x|), x = -1, 0, 1,
so that
E(X) = Σx x f1(x) = (1/5)[-1·(3 - 2) + 0·(3) + 1·(3 - 2)] = 0.
By symmetry, E(Y) = 0 as well. Also, XY ≠ 0 only when |x| = |y| = 1, and at those four points f(x, y) = 0; hence E(XY) = 0 = E(X)·E(Y). But f(1, 1) = 0 while f1(1)·f2(1) = (1/5)(1/5) = 1/25, so although E(XY) = E(X)·E(Y), X and Y are not independent.
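Example 1 can be verified exactly with rational arithmetic (the 1/5 normalising constant makes the table a genuine pmf):

```python
from fractions import Fraction as F

# The example's pmf: f(x, y) = (1 - |x||y|)/5 on x, y in {-1, 0, 1}.
pts = (-1, 0, 1)
f = {(x, y): F(1 - abs(x) * abs(y), 5) for x in pts for y in pts}
assert sum(f.values()) == 1

E = lambda g: sum(g(x, y) * p for (x, y), p in f.items())
EX, EY, EXY = E(lambda x, y: x), E(lambda x, y: y), E(lambda x, y: x * y)
print(EX, EY, EXY)               # all zero, so E(XY) = E(X)E(Y)

# ...yet X and Y are dependent: a corner cell fails to factorise.
f1 = {x: sum(f[(x, y)] for y in pts) for x in pts}
f2 = {y: sum(f[(x, y)] for x in pts) for y in pts}
print(f[(1, 1)], f1[1] * f2[1])  # 0 versus 1/25
```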
2.2.5 Property:
If X and Y are two random variables, then [E(XY)]² ≤ E(X²)·E(Y²) (Cauchy-Schwarz inequality).
Proof:
Consider the real-valued function E(X + tY)² of t. Since (X + tY)² ≥ 0,
E(X + tY)² = E(X²) + 2t·E(XY) + t²·E(Y²) ≥ 0.
The LHS is a quadratic expression in t, and since it is always greater than or equal to zero, it cannot have two distinct real roots. Hence its discriminant must be less than or equal to zero:
[2E(XY)]² - 4E(Y²)·E(X²) ≤ 0,
i.e., 4[E(XY)]² ≤ 4E(X²)·E(Y²).
Hence, [E(XY)]² ≤ E(X²)·E(Y²).
2.2.6 Property:
If X and Y are two random variables such that Y ≤ X, then E(Y) ≤ E(X).
Proof:
Consider Y ≤ X; then Y - X ≤ 0, i.e., X - Y ≥ 0, so E(X - Y) ≥ 0.
E(X - Y) = E(X + (-Y)) = E(X) + E(-Y) = E(X) - E(Y) ≥ 0,
i.e., E(Y) ≤ E(X).
2.2.7 Property:
For a random variable X, |E(X)| ≤ E(|X|).
Proof:
We have X ≤ |X|, so E(X) ≤ E(|X|). .......... (1)
Again, -X ≤ |X|, so E(-X) ≤ E(|X|), i.e., -E(X) ≤ E(|X|). .......... (2)
From (1) and (2), |E(X)| ≤ E(|X|).
2.2.8 Property:
If the possible values of a random variable X are 0, 1, 2, ..., then
E(X) = Σ_{x=1}^∞ P(X ≥ x).
Proof:
Σ_{x=1}^∞ P(X ≥ x) = Σ_{x=1}^∞ Σ_{k=x}^∞ P(X = k) = Σ_{k=1}^∞ k·P(X = k) = E(X),
since each term P(X = k) is counted exactly k times in the double sum.

2.3 Moments
The rth moment of X about a point A is defined as E(X - A)^r, computed as Σ (x - A)^r f(x) in the discrete case and ∫ (x - A)^r f(x) dx in the continuous case. When A = 0, E(X - A)^r = E(X^r), which is known as the rth raw moment of X and is denoted by μr'. E(X - μ)^r, where μ = E(X), is the rth central moment of X and is denoted by μr.
For a discrete random variable, μr' = E(X^r) = Σi xi^r pi.
In particular,
μ2 = E(X - E(X))²
= E[X² + (E(X))² - 2X·E(X)]
= E(X²) + [E(X)]² - 2E(X)·E(X)
= E(X²) - [E(X)]² = μ2' - (μ1')².
Relation between raw and central moments:
The first central moment, μ1 = E(X - E(X)) = E(X) - E(X) = 0.
The second central moment, or variance,
μ2 = μ2' - (μ1')².
The third central moment,
μ3 = E(X - E(X))³
= E(X³ - 3X²·E(X) + 3X·[E(X)]² - [E(X)]³)
= μ3' - 3μ2'μ1' + 2(μ1')³.
2.4 Moment Generating Function
If X and Y are independent random variables, then M_{X+Y}(t) = MX(t)·MY(t).
Proof:
By definition,
M_{X+Y}(t) = E[e^(t(X+Y))] = E[e^(tX)·e^(tY)]
= E[e^(tX)]·E[e^(tY)] = MX(t)·MY(t),
since X and Y are independent.
More generally, if X1, X2, ..., Xn are independent, the same argument gives
M_{X1+X2+...+Xn}(t) = E[e^(t(X1+...+Xn))] = M_{X1}(t)·M_{X2}(t)···M_{Xn}(t),
i.e., the mgf of a sum of n independent rvs is equal to the product of their mgfs.
Remarks 1
For a pair of rvs (X, Y), the covariance between X and Y (or product moment between X and Y) is defined as
Cov(X, Y) = E[X - E(X)][Y - E(Y)] = E(XY) - E(X)·E(Y).
Remarks 2
The correlation coefficient between the two random variables X and Y is defined as
ρXY = Cov(X, Y)/√(V(X)·V(Y)),
where
Cov(X, Y) = E(XY) - E(X)E(Y),
V(X) = E(X²) - [E(X)]²,
V(Y) = E(Y²) - [E(Y)]².
2.6 SOLVED PROBLEMS

Problem 1
Given a joint pmf f(x, y), find E(X²Y).
Solution
E(X²Y) = Σx Σy x²y f(x, y); substituting the given probabilities and summing the terms gives the required value.
Problem 2
For two random variables X and Y, the joint pdf is f(x, y) = x + y, 0 < x < 1, 0 < y < 1. Find E(3XY²).
Solution:
E(3XY²) = 3 ∫_0^1 ∫_0^1 xy² f(x, y) dy dx
= 3 ∫_0^1 ∫_0^1 xy²(x + y) dy dx
= 3 ∫_0^1 (x²[y³/3]_0^1 + x[y⁴/4]_0^1) dx
= 3 ∫_0^1 (x²/3 + x/4) dx
= 3[x³/9]_0^1 + 3[x²/8]_0^1 = 1/3 + 3/8 = 17/24.
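A midpoint-rule double sum reproduces the answer 17/24 to several decimals:

```python
# Midpoint-rule check that E(3XY^2) = 17/24 for f(x, y) = x + y on the
# unit square.
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        total += 3 * x * y**2 * (x + y) * h * h
print(total)  # close to 17/24 = 0.70833...
```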
Problem 3
Two discrete random variables X and Y have the joint pmf
f(x, y) = (x + 2y)/18, x = 1, 2; y = 1, 2.
Find the correlation coefficient between X and Y, and the conditional variance V(X/Y = 1).
Solution:
Correlation(X, Y) = ρXY = Cov(X, Y)/√(V(X)·V(Y)), where Cov(X, Y) = E(XY) - E(X)E(Y).
E(XY) = Σx Σy xy·(x + 2y)/18 = [1·3 + 2·5 + 2·4 + 4·6]/18 = 45/18 = 5/2.
The marginal pmf of X is
f1(x) = Σy (x + 2y)/18 = (2x + 6)/18, x = 1, 2, so f1(1) = 4/9 and f1(2) = 5/9.
E(X) = 1·(4/9) + 2·(5/9) = 14/9.
E(X²) = 1·(4/9) + 4·(5/9) = 24/9.
V(X) = 24/9 - (14/9)² = 20/81.
The marginal pmf of Y is
f2(y) = Σx (x + 2y)/18 = (3 + 4y)/18, y = 1, 2, so f2(1) = 7/18 and f2(2) = 11/18.
E(Y) = 1·(7/18) + 2·(11/18) = 29/18.
E(Y²) = 1·(7/18) + 4·(11/18) = 51/18.
V(Y) = 51/18 - (29/18)² = 77/324.
Cov(X, Y) = 5/2 - (14/9)(29/18) = -1/162.
Correlation(X, Y) = (-1/162)/√((20/81)(77/324)) = -1/√1540 ≈ -0.026.
For the conditional variance,
V(X/Y = 1) = E(X²/Y = 1) - [E(X/Y = 1)]².
The conditional pmf is f(x/Y = 1) = f(x, 1)/f2(1) = (x + 2)/7, x = 1, 2.
E(X/Y = 1) = 1·(3/7) + 2·(4/7) = 11/7.
E(X²/Y = 1) = 1·(3/7) + 4·(4/7) = 19/7.
Therefore,
V(X/Y = 1) = 19/7 - (11/7)² = 12/49.
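Every figure in this kind of discrete computation can be reproduced with exact fractions; the pmf (x + 2y)/18 on {1, 2} × {1, 2} is used here as the working example:

```python
from fractions import Fraction as F
from math import sqrt

f = {(x, y): F(x + 2 * y, 18) for x in (1, 2) for y in (1, 2)}
E = lambda g: sum(g(x, y) * p for (x, y), p in f.items())

EX, EY, EXY = E(lambda x, y: x), E(lambda x, y: y), E(lambda x, y: x * y)
VX = E(lambda x, y: x * x) - EX**2
VY = E(lambda x, y: y * y) - EY**2
cov = EXY - EX * EY
rho = cov / sqrt(VX * VY)
print(cov, rho)          # -1/162 and about -0.0255

# Conditional variance V(X / Y = 1).
f2_1 = f[(1, 1)] + f[(2, 1)]                  # f2(1) = 7/18
cond = {x: f[(x, 1)] / f2_1 for x in (1, 2)}  # (x + 2)/7
EX1 = sum(x * p for x, p in cond.items())
VX1 = sum(x * x * p for x, p in cond.items()) - EX1**2
print(VX1)               # 12/49
```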
Problem 4
The joint pdf of two random variables (X, Y) is given by f(x, y) = 2, 0 < x < y < 1,
= 0, elsewhere.
Find the conditional mean and variance of X given Y = y.
Solution:
The conditional pdf is f(x/y) = f(x, y)/f2(y), where
f2(y) = ∫_0^y 2 dx = 2y, 0 < y < 1.
Therefore,
f(x/y) = 2/(2y) = 1/y, 0 < x < y.
Then,
E(X/y) = ∫_0^y x·(1/y) dx = (1/y)[x²/2]_0^y = y/2, 0 < y < 1.
Also,
V(X/y) = E(X²/y) - [E(X/y)]², where
E(X²/y) = ∫_0^y x²·(1/y) dx = (1/y)[x³/3]_0^y = y²/3.
Therefore,
V(X/y) = y²/3 - y²/4 = y²/12, 0 < y < 1.
Problem 5
Two random variables X and Y have joint pdf
f(x, y) = 2 - x - y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1,
= 0, otherwise.
Find (i) f1(x), f2(y); (ii) f(x/Y = y), f(y/X = x); (iii) Cov(X, Y).
Solution:
(i) f1(x) = ∫_0^1 (2 - x - y) dy = 3/2 - x, 0 ≤ x ≤ 1.
Similarly, f2(y) = ∫_0^1 (2 - x - y) dx = 3/2 - y, 0 ≤ y ≤ 1.
(ii) f(x/Y = y) = f(x, y)/f2(y) = (2 - x - y)/(3/2 - y), 0 ≤ x ≤ 1.
f(y/X = x) = f(x, y)/f1(x) = (2 - x - y)/(3/2 - x), 0 ≤ y ≤ 1.
(iii) E(XY) = ∫_0^1 ∫_0^1 xy(2 - x - y) dy dx = 1/2 - 1/6 - 1/6 = 1/6.
E(X) = ∫_0^1 x(3/2 - x) dx = 3/4 - 1/3 = 5/12.
Similarly, E(Y) = 5/12.
Therefore,
Cov(X, Y) = E(XY) - E(X)E(Y) = 1/6 - 25/144 = -1/144.
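The covariance -1/144 can be confirmed numerically with the same midpoint-sum idea used earlier:

```python
# Midpoint-rule check of Cov(X, Y) = -1/144 for f(x, y) = 2 - x - y on
# the unit square.
n = 400
h = 1.0 / n
exy = ex = ey = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        w = (2 - x - y) * h * h   # probability mass of this cell
        exy += x * y * w
        ex += x * w
        ey += y * w
cov = exy - ex * ey
print(cov)  # close to -1/144 = -0.0069444...
```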
Problem 6
If two random variables X and Y are independent, then Cov(X, Y) = 0, which implies ρXY = 0. To show that the converse need not hold, consider a rv X having the pdf
f(x) = 1/2, -1 ≤ x ≤ 1,
= 0, otherwise.
Let Y = X², so that X and Y are not independent.
Here,
E(X) = ∫_{-1}^1 x·(1/2) dx = 0.
E(Y) = E(X²) = ∫_{-1}^1 x²·(1/2) dx = (1/2)[x³/3]_{-1}^1 = 1/3.
E(XY) = E(X³) = ∫_{-1}^1 x³·(1/2) dx = 0.
Therefore,
Cov(X, Y) = E(XY) - E(X)E(Y) = 0 - 0·(1/3) = 0,
i.e., ρXY = 0.
This shows that non-correlation need not imply independence.
Chapter 3
STANDARD DISTRIBUTIONS
3.1 DISCRETE DISTRIBUTIONS

3.1.1 Discrete Uniform Distribution
A random variable X follows the discrete uniform distribution over the points x1, x2, ..., xn if
P(X = x) = 1/n, when x = x1, x2, ..., xn,
= 0, otherwise.
Mean,
E(X) = Σ x·P(X = x) = (1/n)[x1 + x2 + ... + xn].
Also,
E(X²) = (1/n)[x1² + x2² + ... + xn²].
Then,
V(X) = (1/n)[x1² + ... + xn²] - [(1/n)(x1 + ... + xn)]².

3.1.2 Binomial Distribution
A random variable X follows the binomial distribution with parameters n and p, written X ~ B(n, p), if
P(X = x) = nCx p^x q^(n-x), x = 0, 1, 2, ..., n; q = 1 - p.
Mean,
E(X) = Σx x [nCx p^x q^(n-x)]
= np Σx (n-1)C(x-1) p^(x-1) q^(n-x)
= np(q + p)^(n-1) = np·(1) = np.
E(X²) = Σx x² [nCx p^x q^(n-x)]
= Σx [x(x - 1) + x] [nCx p^x q^(n-x)]
= n(n - 1)p² Σx [(n-2)!/((x-2)!(n-x)!)] p^(x-2) q^(n-x) + E(X)
= n(n - 1)p² + np.
Therefore the variance,
V(X) = E(X²) - [E(X)]²
= [n(n - 1)p² + np] - [np]²
= n²p² - np² + np - n²p²
= np - np² = np(1 - p) = npq.
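The formulas E(X) = np and V(X) = npq can be checked directly by enumerating a whole binomial pmf; B(10, 0.3) below is an arbitrary example:

```python
from math import comb

# Enumerate the pmf of B(10, 0.3) and compute its mean and variance.
n, p = 10, 0.3
q = 1 - p
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
assert abs(sum(pmf) - 1) < 1e-12          # probabilities sum to 1

mean = sum(x * pmf[x] for x in range(n + 1))
var = sum(x * x * pmf[x] for x in range(n + 1)) - mean**2
print(mean, var)  # np = 3.0 and npq = 2.1, up to float rounding
```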
Similarly, writing x³ = x(x - 1)(x - 2) + 3x(x - 1) + x,
E(X³) = Σx x³ [nCx p^x q^(n-x)]
= n(n - 1)(n - 2)p³ + 3n(n - 1)p² + np.
From the central moments obtained in this way,
β1 = μ3²/μ2³ = (q - p)²/(npq), and
β2 = μ4/μ2² = 3 + (1 - 6pq)/(npq).
Moment generating function:
MX(t) = E(e^(tX)) = Σx e^(tx) [nCx p^x q^(n-x)]
= Σx nCx (pe^t)^x q^(n-x)
= (q + pe^t)^n.
Additive property of the binomial distribution
If X is B(n1, p) and Y is B(n2, p) and they are independent, then their sum X + Y also follows B(n1 + n2, p).
Proof:
Since X ~ B(n1, p), MX(t) = (q + pe^t)^n1, and
Y ~ B(n2, p), MY(t) = (q + pe^t)^n2.
We have, since X and Y are independent,
M_{X+Y}(t) = MX(t)·MY(t)
= (q + pe^t)^n1 (q + pe^t)^n2
= (q + pe^t)^(n1+n2)
= mgf of B(n1 + n2, p).
Therefore,
X + Y ~ B(n1 + n2, p).
If the second parameter p is not the same for X and Y, then X + Y will not be binomial.
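The additive property can be checked by brute-force convolution of two binomial pmfs; n1 = 3, n2 = 5, p = 0.4 are arbitrary:

```python
from math import comb

def binom_pmf(n, p, x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

# X ~ B(3, 0.4) and Y ~ B(5, 0.4) independent: the convolution of their
# pmfs should equal the B(8, 0.4) pmf at every point.
n1, n2, p = 3, 5, 0.4
for s in range(n1 + n2 + 1):
    conv = sum(binom_pmf(n1, p, x) * binom_pmf(n2, p, s - x)
               for x in range(max(0, s - n2), min(n1, s) + 1))
    assert abs(conv - binom_pmf(n1 + n2, p, s)) < 1e-12
print("X + Y ~ B(8, 0.4) confirmed")
```

Repeating the experiment with different p values for X and Y makes the assertion fail, matching the remark above.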
Recurrence relation for central moments
If X ~ B(n, p), then
μ_{r+1} = pq[nr·μ_{r-1} + dμr/dp].
Proof: We have
μr = E[X - E(X)]^r = E(X - np)^r = Σx (x - np)^r nCx p^x q^(n-x).
Differentiating with respect to p (noting q = 1 - p),
dμr/dp = Σx r(x - np)^(r-1)·(-n)·nCx p^x q^(n-x) + Σx (x - np)^r nCx [x p^(x-1) q^(n-x) - (n - x) p^x q^(n-x-1)]
= -nr·μ_{r-1} + Σx (x - np)^r nCx p^x q^(n-x)·(x - np)/(pq)
= -nr·μ_{r-1} + μ_{r+1}/(pq).
Therefore,
μ_{r+1} = pq[nr·μ_{r-1} + dμr/dp].
Recurrence relation for binomial probabilities
Writing B(x; n, p) = nCx p^x q^(n-x), we have
B(x + 1; n, p) = nC(x+1) p^(x+1) q^(n-x-1).
Dividing,
B(x + 1; n, p)/B(x; n, p) = [nC(x+1)/nCx]·(p/q) = [(n - x)/(x + 1)]·(p/q).
Therefore,
B(x + 1; n, p) = [(n - x)/(x + 1)]·(p/q)·B(x; n, p).
3.1.4 Bernoulli Distribution
A random variable X follows the Bernoulli distribution with parameter p if
f(x) = p^x (1 - p)^(1-x), x = 0, 1,
= 0, otherwise.
Here X is a discrete random variable taking only two values 0 and 1 with the corresponding probabilities 1 - p and p respectively.
The rth moment about the origin is μr' = E(X^r) = 0^r·q + 1^r·p = p, r = 1, 2, ...
In particular μ2' = E(X²) = p.
Therefore, Variance = μ2 = p - p² = p(1 - p) = pq.
3.1.5 Poisson Distribution
Poisson distribution is a discrete probability distribution. This distribution was developed by the French mathematician Simeon Denis Poisson in 1837. It is used to represent rare events, and it arises as a limiting case of the binomial distribution under certain conditions.
Definition: A random variable X follows the Poisson distribution with parameter λ > 0 if
f(x) = e^(-λ) λ^x / x!, x = 0, 1, 2, ...,
= 0, otherwise.
Poisson distribution as a limit of the binomial:
Let X ~ B(n, p); then f(x) = nCx p^x q^(n-x), x = 0, 1, 2, ..., n; p + q = 1,
= [n(n-1)...(n-x+1)/x!] p^x (1 - p)^(n-x).
Put np = λ, i.e., p = λ/n, and let n → ∞ with λ fixed. Now,
lim [n(n-1)...(n-x+1)]/n^x = lim 1·(1 - 1/n)···(1 - (x-1)/n) = 1,
lim (1 - λ/n)^n = e^(-λ), and lim (1 - λ/n)^(-x) = 1.
Therefore,
f(x) → e^(-λ) λ^x / x!, x = 0, 1, 2, ...
Mean,
E(X) = Σx x·e^(-λ) λ^x / x! = λ e^(-λ) Σ_{x≥1} λ^(x-1)/(x-1)! = λ e^(-λ) e^λ = λ.
Variance,
V(X) = E(X²) - [E(X)]²,
where,
E(X²) = Σx [x(x - 1) + x] e^(-λ) λ^x / x! = λ² + λ.
Therefore,
V(X) = λ² + λ - λ² = λ.
Also, SD(X) = √λ.
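The limiting argument can be watched numerically: B(n, λ/n) probabilities approach the Poisson(λ) value as n grows. The choices λ = 2 and x = 3 below are arbitrary illustration values:

```python
from math import comb, exp, factorial

# Poisson as a limit of the binomial: hold lam = np fixed, let n grow.
lam, x = 2.0, 3
poisson = exp(-lam) * lam**x / factorial(x)
for n in (10, 100, 10_000):
    p = lam / n
    binom = comb(n, x) * p**x * (1 - p)**(n - x)
    print(n, binom, poisson)   # binom converges to the Poisson value
```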
Central moments:
Using x³ = x(x - 1)(x - 2) + 3x(x - 1) + x,
E(X³) = Σx x³ e^(-λ) λ^x / x! = λ³ + 3λ² + λ.
Therefore,
μ3 = E(X³) - 3E(X²)E(X) + 2[E(X)]³
= λ³ + 3λ² + λ - 3(λ² + λ)λ + 2λ³ = λ.
Hence,
β1 = μ3²/μ2³ = λ²/λ³ = 1/λ, and γ1 = √β1 = 1/√λ.
Also, μ4 = 3λ² + λ, so that
β2 = μ4/μ2² = (3λ² + λ)/λ² = 3 + 1/λ, and γ2 = β2 - 3 = 1/λ.
Moment generating function:
MX(t) = E(e^(tX)) = Σx e^(tx) e^(-λ) λ^x / x!
= e^(-λ) Σx (λe^t)^x / x!
= e^(-λ) e^(λe^t) = e^(λ(e^t - 1)).
Additive property of Poisson distribution:
Let X1 and X2 be two independent Poisson random variables with parameters λ1 and λ2 respectively. Then X = X1 + X2 follows the Poisson distribution with parameter λ1 + λ2.
Proof:
X1 ~ P(λ1) gives M_{X1}(t) = e^(λ1(e^t - 1)), and
X2 ~ P(λ2) gives M_{X2}(t) = e^(λ2(e^t - 1)).
Since X1 and X2 are independent,
MX(t) = M_{X1}(t)·M_{X2}(t) = e^((λ1 + λ2)(e^t - 1)),
which is the mgf of P(λ1 + λ2). Thus,
X = X1 + X2 ~ P(λ1 + λ2).
Remarks:
In general, if Xi ~ P(λi) for i = 1, 2, ..., k, and the Xi are independent, then
X = X1 + X2 + ... + Xk ~ P(λ1 + λ2 + ... + λk).
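As with the binomial case, the additive property can be verified by direct convolution; λ1 = 2 and λ2 = 3 are arbitrary:

```python
from math import exp, factorial

def pois(lam, x):
    return exp(-lam) * lam**x / factorial(x)

# X1 ~ P(2), X2 ~ P(3) independent: P(X1 + X2 = s) should equal the
# P(5) pmf at every point (checked on a truncated range).
l1, l2 = 2.0, 3.0
for s in range(15):
    conv = sum(pois(l1, x) * pois(l2, s - x) for x in range(s + 1))
    assert abs(conv - pois(l1 + l2, s)) < 1e-12
print("X1 + X2 ~ P(5) confirmed")
```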
3.1.6 Geometric Distribution
Definition:
A random variable X follows the geometric distribution with parameter p if
P(X = x) = q^x p, x = 0, 1, 2, ...; q = 1 - p,
= 0, otherwise.
Mean,
E(X) = Σx x q^x p = pq Σx x q^(x-1) = pq(1 - q)^(-2) = pq/p² = q/p.
Variance,
V(X) = E(X²) - [E(X)]², where
E(X²) = Σx [x(x - 1) + x] q^x p = 2pq²(1 - q)^(-3) + q/p = 2q²/p² + q/p.
Therefore,
V(X) = 2q²/p² + q/p - q²/p² = q²/p² + q/p = q(q + p)/p² = q/p².
Moment generating function:
MX(t) = E(e^(tX)) = Σx e^(tx) q^x p = p Σx (qe^t)^x
= p[1 + qe^t + (qe^t)² + ...]
= p(1 - qe^t)^(-1) = p/(1 - qe^t), for qe^t < 1.
Lack of memory property:
Since P(X ≥ k) = Σ_{x≥k} q^x p = q^k, for any non-negative integers s and t,
P[X ≥ s + t / X ≥ s] = P(X ≥ s + t)/P(X ≥ s)
= q^(s+t)/q^s
= q^t = P(X ≥ t).
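Both the tail formula and the memoryless identity are easy to sanity-check numerically; p = 0.3 and the lags s = 4, t = 6 are arbitrary:

```python
# Geometric tail and lack of memory: P(X >= k) = q^k for
# P(X = x) = q^x p, x = 0, 1, 2, ...
p = 0.3
q = 1 - p

k = 5
tail_sum = sum(q**x * p for x in range(k, 400))  # numerically = q^k
print(tail_sum, q**k)

s, t = 4, 6
lhs = q**(s + t) / q**s   # P(X >= s + t / X >= s)
rhs = q**t                # P(X >= t)
print(abs(lhs - rhs))     # essentially zero
```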
3.2 SOLVED PROBLEMS

Problem 1.
The mean and variance of a binomial random variable X are 12 and 6 respectively. (i) Find P(X = 0), and (ii) P(X > 1).
Solution:
Let X ~ B(n, p).
Given E(X) = np = 12 and V(X) = npq = 6.
Therefore q = npq/np = 6/12 = 1/2, and p = 1/2, since p + q = 1.
Also, np = 12 gives n = 24.
(i) P(X = 0) = 24C0 (1/2)^24 = (1/2)^24.
(ii) P(X > 1) = 1 - P(X ≤ 1) = 1 - [P(X = 0) + P(X = 1)]
= 1 - [(1/2)^24 + 24·(1/2)^24] = 1 - 25(1/2)^24.
Problem 2.
If X ~ B(n, p), find Cov(X/n, (n - X)/n).
Solution:
Cov(X/n, (n - X)/n) = E[(X/n)·((n - X)/n)] - E(X/n)·E((n - X)/n)
= (1/n²)[E(nX - X²) - E(X)·E(n - X)]
= (1/n²)[[E(nX) - E(X²)] - E(X)[n - E(X)]]
= (1/n²)[n·np - [n(n - 1)p² + np] - np(n - np)]
= (1/n²)[np² - np] = -p(1 - p)/n = -pq/n.
Problem 3.
If X ~ B(n, p), show that Y = n - X follows B(n, q).
Solution:
The pmf of X is p(x) = nCx p^x q^(n-x), x = 0, 1, 2, ..., n.
Given Y = n - X, i.e., X = n - Y,
f(y) = nC(n-y) p^(n-y) q^(n-(n-y)), y = n, n - 1, n - 2, ..., 0
= [n!/(y!(n - y)!)] q^y p^(n-y), y = 0, 1, 2, ..., n
= nCy q^y p^(n-y).
Therefore, Y ~ B(n, q).
Problem 4
If X and Y are independent Poisson variates such that P(X=1) = P(X=2) and P(Y=2) = P(Y=3), find the variance of X - 2Y.
Solution:
Let X ~ P(λ1) and Y ~ P(λ2).
Given P(X = 1) = P(X = 2),
i.e., e^(-λ1) λ1/1! = e^(-λ1) λ1²/2!,
i.e., 1 = λ1/2, since λ1 > 0.
Therefore, λ1 = 2.
Also, P(Y = 2) = P(Y = 3),
i.e., e^(-λ2) λ2²/2! = e^(-λ2) λ2³/3!,
i.e., 1 = λ2/3, since λ2 > 0.
Therefore, λ2 = 3.
Hence V(X) = λ1 = 2 and V(Y) = λ2 = 3.
Then,
V(X - 2Y) = V(X) + 4V(Y), since X and Y are independent,
= 2 + 4·3 = 14.
Problem 5.
If X and Y are independent Poisson variates, show that the conditional distribution of X given X + Y is binomial.
Solution:
Given X ~ P(λ1) and Y ~ P(λ2); then X + Y ~ P(λ1 + λ2), and
P(X = x / X + Y = n) = P(X = x, Y = n - x)/P(X + Y = n)
= [e^(-λ1) λ1^x/x!]·[e^(-λ2) λ2^(n-x)/(n - x)!] / [e^(-(λ1+λ2)) (λ1 + λ2)^n/n!]
= [n!/(x!(n - x)!)]·λ1^x λ2^(n-x)/(λ1 + λ2)^n
= nCx p^x q^(n-x),
where p = λ1/(λ1 + λ2) and q = λ2/(λ1 + λ2).
Hence X given X + Y = n follows B(n, λ1/(λ1 + λ2)).
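This conditional-binomial fact can be checked pointwise; λ1 = 2, λ2 = 3 and n = 6 (so p = 0.4) are arbitrary:

```python
from math import comb, exp, factorial

def pois(lam, x):
    return exp(-lam) * lam**x / factorial(x)

# X ~ P(2), Y ~ P(3) independent; condition on X + Y = 6 and compare
# against the B(6, 2/5) pmf.
l1, l2, n = 2.0, 3.0, 6
pn = pois(l1 + l2, n)            # P(X + Y = n), by the additive property
p = l1 / (l1 + l2)
for x in range(n + 1):
    lhs = pois(l1, x) * pois(l2, n - x) / pn   # P(X = x / X + Y = n)
    rhs = comb(n, x) * p**x * (1 - p)**(n - x)
    assert abs(lhs - rhs) < 1e-12
print("conditional distribution is B(6, 0.4)")
```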
Problem 6.
Let two independent random variables X and Y have the same geometric distribution. Show that the conditional distribution of X given X + Y = n is uniform.
Solution:
Given P(X = k) = P(Y = k) = q^k p, k = 0, 1, 2, ...
P(X = x / X + Y = n) = P(X = x, Y = n - x)/P(X + Y = n) = q^x p·q^(n-x) p / P(X + Y = n),
where
P(X + Y = n) = P(X = 0, Y = n) + P(X = 1, Y = n - 1) + P(X = 2, Y = n - 2) + ... + P(X = n, Y = 0)
= q^0 p q^n p + q^1 p q^(n-1) p + q^2 p q^(n-2) p + ... + q^n p q^0 p = (n + 1) q^n p².
Therefore,
P(X = x / X + Y = n) = q^n p² / [(n + 1) q^n p²] = 1/(n + 1), x = 0, 1, 2, ..., n,
which is the discrete uniform distribution on {0, 1, ..., n}.
Problem 7.
For a random variable following the geometric distribution with parameter p, prove the recurrence formula P(x + 1) = q·P(x).
Solution:
P(x + 1)/P(x) = q^(x+1) p / (q^x p) = q.
Hence
P(x + 1) = q·P(x).
3.3 CONTINUOUS DISTRIBUTIONS

3.3.1 Normal Distribution
A random variable X follows the normal distribution with mean μ and standard deviation σ, written X ~ N(μ, σ), if its pdf is
f(x) = [1/(σ√(2π))] e^(-(x-μ)²/(2σ²)), -∞ < x < ∞.
Mean,
E(X) = ∫ x f(x) dx.
Put (x - μ)/σ = u, so that x = μ + σu and dx = σ du. Then
E(X) = [1/√(2π)] ∫ (μ + σu) e^(-u²/2) du
= μ·[1/√(2π)] ∫ e^(-u²/2) du + σ·[1/√(2π)] ∫ u e^(-u²/2) du
= μ·(1) + 0 = μ,
since u e^(-u²/2) is an odd function of u, so ∫ u e^(-u²/2) du = 0.
Variance,
V(X) = E(X - μ)² = [1/(σ√(2π))] ∫ (x - μ)² e^(-(x-μ)²/(2σ²)) dx
= [σ²/√(2π)] ∫ u² e^(-u²/2) du   (putting (x - μ)/σ = u)
= [2σ²/√π] Γ(3/2)   (putting u²/2 = z)
= [2σ²/√π]·(√π/2) = σ²,
i.e., V(X) = σ².
Therefore the Standard Deviation SD(X) = √V(X) = σ.
Central moments:
The odd-order central moments all vanish:
μ_{2r+1} = [1/(σ√(2π))] ∫ (x - μ)^(2r+1) e^(-(x-μ)²/(2σ²)) dx = 0,
since putting (x - μ)/σ = z makes the integrand an odd function of z.
For the even-order central moments, putting (x - μ)/σ = z and then z²/2 = u,
μ_{2r} = [1/(σ√(2π))] ∫ (x - μ)^(2r) e^(-(x-μ)²/(2σ²)) dx
= [2^r σ^(2r)/√π] Γ(r + 1/2)
= [2^r σ^(2r)/√π]·(r - 1/2)(r - 3/2)···(1/2)·√π
= 1·3·5···(2r - 1) σ^(2r).
Therefore,
μ_{2r+2}/μ_{2r} = (2r + 1)σ²,
i.e., μ_{2r+2} = (2r + 1)σ²·μ_{2r}.
This is the recurrence relation for the even-order central moments of the normal distribution. Using this relationship we can find out the 2nd and 4th moments:
Put r = 0; then μ2 = σ².
Put r = 1; then μ4 = 3σ⁴.
Since μ3 = 0, β1 = 0 and γ1 = 0.
Also, β2 = μ4/μ2² = 3σ⁴/σ⁴ = 3, and γ2 = β2 - 3 = 0.
Moment generating function:
MX(t) = E(e^(tX)) = [1/(σ√(2π))] ∫ e^(tx) e^(-(x-μ)²/(2σ²)) dx.
Put (x - μ)/σ = z, so that x = μ + σz:
MX(t) = e^(μt)·[1/√(2π)] ∫ e^(σtz) e^(-z²/2) dz
= e^(μt)·[1/√(2π)] ∫ e^(-(z² - 2σtz)/2) dz
= e^(μt + σ²t²/2)·[1/√(2π)] ∫ e^(-(z - σt)²/2) dz   (completing the square)
= e^(μt + σ²t²/2),
since, putting z - σt = u, the remaining integral is that of the standard normal density and equals 1.
Thus,
MX(t) = e^(μt + (1/2)σ²t²).
Additive property:
Let X1 ~ N(μ1, σ1), X2 ~ N(μ2, σ2), and let X1 and X2 be independent. Then
X1 + X2 ~ N(μ1 + μ2, √(σ1² + σ2²)).
Proof:
M_{X1+X2}(t) = M_{X1}(t)·M_{X2}(t)
= e^(μ1 t + σ1²t²/2)·e^(μ2 t + σ2²t²/2)
= e^((μ1 + μ2)t + (σ1² + σ2²)t²/2),
which is the mgf of a normal distribution with mean μ1 + μ2 and variance σ1² + σ2²,
i.e., X1 + X2 ~ N(μ1 + μ2, √(σ1² + σ2²)).
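A quick Monte Carlo sketch makes the additive property tangible. The parameters (mean 1, sd 2 and mean 3, sd 1) are arbitrary; the sum should then have mean 4 and variance 2² + 1² = 5:

```python
import random

# Simulate X1 + X2 for independent normals and check the moments.
random.seed(0)
N = 200_000
s = [random.gauss(1, 2) + random.gauss(3, 1) for _ in range(N)]
m = sum(s) / N
v = sum((x - m) ** 2 for x in s) / N
print(m, v)  # near 4 and 5
```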
Remarks 1
If X ~ N(μ, σ), then Z = (X - μ)/σ is called the standard normal variate, N(0, 1), with pdf
f(z) = [1/√(2π)] e^(-z²/2), -∞ < z < ∞.
Moment generating function:
MZ(t) = E[e^(t(X-μ)/σ)] = e^(-μt/σ)·MX(t/σ) = e^(-μt/σ)·e^(μt/σ + t²/2) = e^(t²/2).
Normal approximation to the binomial:
Let X ~ B(n, p) and Z = (X - np)/√(npq). Then
MZ(t) = e^(-npt/√(npq))·MX(t/√(npq)) = e^(-npt/√(npq))·(q + pe^(t/√(npq)))^n.
Then,
log MZ(t) = -npt/√(npq) + n log(q + pe^(t/√(npq)))
= -npt/√(npq) + n log[q + p(1 + t/√(npq) + t²/(2npq) + ...)]
= -npt/√(npq) + n log[1 + p(t/√(npq) + t²/(2npq) + ...)].
Expanding the logarithm and collecting terms, the coefficient of t vanishes, the coefficient of t² is 1/2, and the remaining terms are of order 1/√n or smaller; hence
log MZ(t) → t²/2 as n → ∞.
Therefore MZ(t) → e^(t²/2) as n → ∞. This is the mgf of a standard normal variate, so
Z = (X - np)/√(npq) → N(0, 1) as n → ∞,
i.e., for large n, X is approximately N(np, √(npq)).
3.3.2 Uniform (Rectangular) Distribution
A random variable X follows the uniform distribution on (a, b) if
f(x) = 1/(b - a), a ≤ x ≤ b,
= 0, elsewhere.
Properties:
1. a and b (a < b) are the two parameters of the uniform distribution on (a, b).
2. This distribution is also known as the rectangular distribution, since the curve y = f(x) describes a rectangle over the x-axis and between the ordinates at x = a and x = b.
3. The d.f. F(x) is given by
F(x) = 0, if -∞ < x < a,
= (x - a)/(b - a), a ≤ x ≤ b,
= 1, b < x < ∞.
Moments:
Mean = E(X) = ∫_a^b x/(b - a) dx = (a + b)/2.
Variance:
V(X) = E(X²) - [E(X)]², where
E(X²) = ∫_a^b x²/(b - a) dx = (b³ - a³)/(3(b - a)) = (a² + ab + b²)/3.
Therefore,
V(X) = (a² + ab + b²)/3 - ((a + b)/2)² = (b - a)²/12.
Also,
SD(X) = (b - a)/√12.
3.3.3 Gamma Distribution
f(x) = [m^p/Γ(p)] x^(p-1) e^(-mx), x > 0,
= 0, otherwise,
where m > 0, p > 0 are called the parameters of the gamma distribution.
Moments
Mean,
E(X) = ∫_0^∞ x·[m^p/Γ(p)] x^(p-1) e^(-mx) dx
= [m^p/Γ(p)] ∫_0^∞ x^p e^(-mx) dx
= [m^p/Γ(p)]·Γ(p + 1)/m^(p+1)
= p/m.
Variance,
V(X) = E(X²) - [E(X)]², where
E(X²) = [m^p/Γ(p)] ∫_0^∞ x^(p+1) e^(-mx) dx
= [m^p/Γ(p)]·Γ(p + 2)/m^(p+2)
= p(p + 1)/m².
Therefore,
V(X) = p(p + 1)/m² - p²/m² = p/m².
Moment generating function:
MX(t) = [m^p/Γ(p)] ∫_0^∞ e^(tx) x^(p-1) e^(-mx) dx
= [m^p/Γ(p)]·Γ(p)/(m - t)^p
= [m/(m - t)]^p = (1 - t/m)^(-p), for t < m.
3.3.4 Exponential Distribution
f(x) = λ e^(-λx), x > 0, where λ > 0,
= 0, otherwise.
This is the gamma distribution with p = 1 and m = λ.
Mean,
E(X) = ∫_0^∞ x λ e^(-λx) dx = Γ(2)/λ = 1/λ.
Variance,
V(X) = E(X²) - [E(X)]², where
E(X²) = ∫_0^∞ x² λ e^(-λx) dx = Γ(3)/λ² = 2/λ².
Therefore,
V(X) = 2/λ² - (1/λ)² = 1/λ².
Moment generating function:
MX(t) = ∫_0^∞ e^(tx) λ e^(-λx) dx = λ/(λ - t) = (1 - t/λ)^(-1), for t < λ.
3.3.5 Beta Distribution
f(x) = x^(m-1) (1 - x)^(n-1) / B(m, n), 0 < x < 1,
= 0, otherwise, where m > 0, n > 0.
Mean,
E(X) = [1/B(m, n)] ∫_0^1 x^m (1 - x)^(n-1) dx = B(m + 1, n)/B(m, n)
= [Γ(m + 1)Γ(n)/Γ(m + n + 1)]·[Γ(m + n)/(Γ(m)Γ(n))]
= m/(m + n).
Variance = E(X²) - [E(X)]², where
E(X²) = [1/B(m, n)] ∫_0^1 x^(m+1) (1 - x)^(n-1) dx = B(m + 2, n)/B(m, n)
= [Γ(m + 2)Γ(n)/Γ(m + n + 2)]·[Γ(m + n)/(Γ(m)Γ(n))]
= m(m + 1)/[(m + n)(m + n + 1)].
Therefore,
V(X) = m(m + 1)/[(m + n)(m + n + 1)] - m²/(m + n)² = mn/[(m + n)²(m + n + 1)].
3.3.6 Lognormal Distribution
If Y = log X follows N(μ, σ), then X is said to follow the lognormal distribution.
Moments:
μr' = E(X^r) = E(e^(rY)) = MY(r) = e^(rμ + r²σ²/2),
so that E(X) = e^(μ + σ²/2) and V(X) = e^(2μ + σ²)(e^(σ²) - 1).

3.3.7 Pareto Distribution
f(x) = α x0^α / x^(α+1), x ≥ x0, where α > 0, x0 > 0.
E(X) = α x0/(α - 1), for α > 1.
Variance,
V(X) = α x0²/[(α - 1)²(α - 2)], for α > 2.

3.3.8 Cauchy Distribution
f(x) = 1/[π(1 + x²)], -∞ < x < ∞.
Properties
1. For a Cauchy distribution mean does not exist.
2. For a Cauchy distribution variance does not exist.
3. mgf of Cauchy distribution does not exist.
3.4 SOLVED PROBLEMS

Problem 1.
If X ~ N(12, 4), find
(i) P(X ≤ 20),
(ii) P(0 ≤ X ≤ 12),
(iii) a such that P(X > a) = 0.24.
Solution:
We have Z = (X - 12)/4 ~ N(0, 1).
(i) P(X ≤ 20) = P(Z ≤ (20 - 12)/4) = P(Z ≤ 2) = 0.5 + 0.4772 = 0.9772.
(ii) P(0 ≤ X ≤ 12) = P((0 - 12)/4 ≤ Z ≤ 0) = P(-3 ≤ Z ≤ 0) = P(0 ≤ Z ≤ 3) = 0.4987.
(iii) Given P(X > a) = 0.24, i.e., P(Z > (a - 12)/4) = 0.24.
Hence P(0 < Z < (a - 12)/4) = 0.5 - 0.24 = 0.26.
From the normal table, (a - 12)/4 = 0.71, so a = 12 + 4 × 0.71 = 14.84.
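The table look-ups can be replaced by the standard identity Φ(z) = (1 + erf(z/√2))/2, which the Python standard library supports:

```python
from math import erf, sqrt

# Standard normal cdf in terms of the error function.
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 12, 4
print(Phi((20 - mu) / sigma))          # (i)  P(X <= 20) = Phi(2)
print(Phi(0) - Phi((0 - mu) / sigma))  # (ii) P(0 <= X <= 12)
print(1 - Phi((14.84 - mu) / sigma))   # (iii) P(X > 14.84), close to 0.24
```

The third value is not exactly 0.24 because 0.71 is itself a two-decimal table value.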
Problem 2.
Given that X ~ N(μ, σ), find k such that P(X ≤ k) = 2P(X > k).
Solution:
Since P(X ≤ k) + P(X > k) = 1, the given condition yields
2P(X > k) + P(X > k) = 1, i.e., 3P(X > k) = 1,
i.e., P(X > k) = 1/3 = 0.333,
i.e., P(Z > (k - μ)/σ) = 0.333.
From the table, (k - μ)/σ = 0.44.
Then k = μ + 0.44σ.
Problem 3.
Given X ~ N(6, 7) and constants a and b such that
P(3X + 8 ≥ a) = P(4X - 7 ≤ b), .......... (1)
P(2X + 1 ≥ a) = P(5X - 2 ≤ b), .......... (2)
find a and b.
Solution:
Since X ~ N(6, 7), Z = (X - 6)/7 ~ N(0, 1).
From (1),
P(3X + 8 ≥ a) = P(X ≥ (a - 8)/3) = P(Z ≥ (a - 26)/21), and
P(4X - 7 ≤ b) = P(X ≤ (b + 7)/4) = P(Z ≤ (b - 17)/28).
For a standard normal variate, if P(Z ≥ a) = P(Z ≤ b), then a = -b; hence
(a - 26)/21 = -(b - 17)/28,
i.e., 28(a - 26) = -21(b - 17),
i.e., 4a + 3b - 155 = 0. .......... (3)
From (2),
P(2X + 1 ≥ a) = P(Z ≥ (a - 13)/14) and P(5X - 2 ≤ b) = P(Z ≤ (b - 28)/35),
so that
(a - 13)/14 = -(b - 28)/35,
i.e., 35(a - 13) = -14(b - 28),
i.e., 5a + 2b - 121 = 0. .......... (4)
Solving (3) and (4) we get a = 7.57 and b = 41.571.
Problem 4.
Show that E(X) = 0 for a pdf f(x) that is symmetric about the origin.
Solution:
We have E(X) = ∫ x f(x) dx = 0, since x f(x) is an odd function over the symmetric range of integration.
Problem 5
If X1, X2, ..., Xn are n independent random variables following the exponential distribution with parameter λ, find the distribution of Y = X1 + X2 + ... + Xn.
Solution:
Given that each Xi is exponential with parameter λ,
M_Xi(t) = (1 - t/λ)^(-1).
Then, since the Xi are independent,
MY(t) = M_{X1}(t)·M_{X2}(t)···M_{Xn}(t) = (1 - t/λ)^(-n).
This is the mgf of a gamma distribution with parameters n and λ. Therefore the pdf of Y is given by
f(y) = [λ^n/Γ(n)] e^(-λy) y^(n-1), y ≥ 0,
= 0, elsewhere.
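Simulation gives a quick sanity check: a sum of n exponential(λ) variates should have the gamma moments n/λ and n/λ². The values n = 5, λ = 2 are arbitrary:

```python
import random

# Y = X1 + ... + Xn with Xi ~ exponential(lam) behaves like gamma(n, lam):
# E(Y) = n/lam, V(Y) = n/lam^2.
random.seed(1)
n, lam, N = 5, 2.0, 100_000
ys = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(N)]
m = sum(ys) / N
v = sum((y - m) ** 2 for y in ys) / N
print(m, v)  # near n/lam = 2.5 and n/lam^2 = 1.25
```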
Chapter 4
LAW OF LARGE NUMBERS

4.1 Chebyshev's Inequality
If X is a random variable with mean μ and finite variance σ², then for any t > 0,
P(|X - μ| ≥ tσ) ≤ 1/t², or equivalently, P(|X - μ| < tσ) ≥ 1 - 1/t².
Proof:
Since f(x) is a pdf, which is non-negative, [x - μ]² f(x) is always non-negative. Then
σ² = ∫ (x - μ)² f(x) dx ≥ ∫_{|x-μ|≥tσ} (x - μ)² f(x) dx.
In the region |x - μ| ≥ tσ we have (x - μ)² ≥ t²σ², so
σ² ≥ t²σ² ∫_{|x-μ|≥tσ} f(x) dx
= t²σ² [P(X ≤ μ - tσ) + P(X ≥ μ + tσ)]
= t²σ² P(|X - μ| ≥ tσ).
Therefore,
P(|X - μ| ≥ tσ) ≤ 1/t².
Also, since P(|X - μ| < tσ) = 1 - P(|X - μ| ≥ tσ),
P(|X - μ| < tσ) ≥ 1 - 1/t².
4.2 Bernoulli's Law of Large Numbers
Let X be the number of successes in n independent Bernoulli trials with probability of success p, so that X/n is the proportion of successes. Then E(X/n) = p and V(X/n) = pq/n.
By Chebyshev's inequality, for any t > 0,
P(|X/n - p| < t√(pq/n)) ≥ 1 - 1/t².
Put t√(pq/n) = ε, i.e., t = ε√(n/pq). Then
P(|X/n - p| < ε) ≥ 1 - pq/(nε²).
As n → ∞, pq/(nε²) → 0, and since a probability cannot exceed 1,
P(|X/n - p| < ε) → 1 as n → ∞,
i.e., the proportion of successes X/n converges in probability to p.
4.3 Weak Law of Large Numbers
A sequence {Yn} is said to converge in probability to a constant c if, for every ε > 0, P(|Yn - c| ≥ ε) → 0 as n → ∞.
Let X1, X2, ..., Xn be independent random variables with E(Xi) = μi and V(Xi) = σi², and let X̄ = (X1 + X2 + ... + Xn)/n. Then, for every ε > 0,
P(|X̄ - E(X̄)| < ε) → 1 as n → ∞,
provided V(X̄) → 0 as n → ∞. .......... (1)
Proof:
Here
E(X̄) = (μ1 + μ2 + ... + μn)/n and V(X̄) = (σ1² + σ2² + ... + σn²)/n².
By Chebyshev's inequality,
P(|X̄ - E(X̄)| < t√V(X̄)) ≥ 1 - 1/t².
Put t√V(X̄) = ε, i.e., t = ε/√V(X̄); hence
P(|X̄ - E(X̄)| < ε) ≥ 1 - V(X̄)/ε².
As n → ∞, provided V(X̄) → 0 as in (1), the right-hand side tends to 1, and therefore
P(|X̄ - E(X̄)| < ε) → 1 as n → ∞.
4.4 Central Limit Theorem (Lindeberg-Levy form)
Let X1, X2, ..., Xn be independent and identically distributed random variables with E(Xi) = μ and V(Xi) = σ², i = 1, 2, 3, ..., n, and let Sn = X1 + X2 + ... + Xn = Σ Xi. Then
Z = (Sn - nμ)/(σ√n) → N(0, 1) as n → ∞.
Proof:
Given E(Xi) = μ, V(Xi) = σ², i = 1, 2, 3, ..., n, write Z = Σ(Xi - μ)/(σ√n).
Since the Xi are iid,
MZ(t) = [M_{(X1-μ)}(t/(σ√n))]^n
= [1 + t²/(2n) + θ]^n,
where θ collects the third and higher order terms, each of order n^(-3/2) or smaller.
Therefore,
log MZ(t) = n log[1 + t²/(2n) + θ]
= n[t²/(2n) + θ - (1/2)(t²/(2n) + θ)² + ...]
= t²/2 + terms that tend to 0 as n → ∞.
Therefore,
MZ(t) → e^(t²/2) as n → ∞,
which is the mgf of the standard normal distribution. Hence Z → N(0, 1) as n → ∞.
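The theorem is easy to watch in simulation. Below, standardised sums of n = 30 iid Uniform(0, 1) variates (μ = 1/2, σ² = 1/12) are compared against the familiar normal coverage figures 68.3% and 95.4%; the choices of distribution and n are arbitrary:

```python
import random

# CLT sketch: standardised sums of iid Uniform(0, 1) variates should be
# approximately N(0, 1) for moderate n.
random.seed(2)
n, N = 30, 50_000
mu, sigma = 0.5, (1 / 12) ** 0.5
zs = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * n**0.5)
      for _ in range(N)]
within1 = sum(abs(z) < 1 for z in zs) / N
within2 = sum(abs(z) < 2 for z in zs) / N
print(within1, within2)  # near 0.683 and 0.954
```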
4.5 SOLVED PROBLEMS

Problem 1
For a random variable X with E(X) = 2 and E(X²) = 6, use Chebyshev's inequality to obtain an upper bound for P(|X - 2| ≥ 2).
Solution:
We have E(X) = 2 and E(X²) = 6, hence
V(X) = 6 - 4 = 2.
By Chebyshev's inequality,
P(|X - μ| ≥ t) ≤ σ²/t², i.e., P(|X - 2| ≥ t) ≤ 2/t².
Put t = 2; we get
P(|X - 2| ≥ 2) ≤ 2/4 = 1/2.
Problem 2
A random variable X has mean 4 and variance 4. Using Chebyshev's inequality, obtain a lower bound for P(1 < X < 7).
Solution:
By Chebyshev's inequality,
P(|X - μ| < kσ) ≥ 1 - 1/k².
Here μ = 4 and σ = 2, and
P(1 < X < 7) = P(1 - 4 < X - 4 < 7 - 4) = P(|X - 4| < 3).
Put 2k = 3; then k = 3/2.
Therefore,
P(1 < X < 7) = P(|X - 4| < 3) ≥ 1 - 1/k² = 1 - 4/9 = 5/9.
Problem 3
If X ~ B(100, 0.5), using Chebyshev's inequality obtain the lower bound for P(|X - 50| < 7.5).
Solution:
Here μ = np = 50 and σ² = npq = 25, so σ = 5.
By Chebyshev's inequality,
P(|X - μ| < tσ) ≥ 1 - 1/t²,
i.e., P(|X - 50| < 5t) ≥ 1 - 1/t².
Put 5t = 7.5; then t = 1.5.
Therefore,
P(|X - 50| < 7.5) ≥ 1 - 1/(1.5)² = 1 - 4/9 = 5/9.