A list of topics that will be covered is as follows:
Markov Inequality
Chebyshev Inequality
Bernoulli's Theorem
The Weak Law of Large Numbers
The Central Limit Theorem
For this lecture, I am borrowing derivations/discussions from the
following book:
Kishore S. Trivedi, Probability and Statistics with Reliability, Queuing, and
Computer Science Applications, 2005.
Copyright © Syed Ali Khayam 2009
Markov Inequality
Consider that you are given the mean E{X} = μ of a non-negative random variable X.
Define a function of X as:
Y = 0, if X < t
Y = t, if X ≥ t
Recall that a function of a random variable is also a random
variable
So what is the pmf of Y? It has mass at only two points:
pY(0) = Pr{X < t},  pY(t) = Pr{X ≥ t}
Since X ≥ Y, we have E{X} ≥ E{Y}:
E{X} ≥ E{Y} = t Pr{X ≥ t}
Rearranging gives:
Pr{X ≥ t} ≤ E{X}/t
Pr{X ≥ t} ≤ μ/t
This is called the Markov Inequality.
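As a quick numerical sanity check (a sketch, not part of the original derivation), the bound Pr{X ≥ t} ≤ μ/t can be verified by simulation. The exponential distribution with mean μ = 2 is an assumed example of a non-negative random variable; any other would do:

```python
import random

# Sketch (assumed setup): verify the Markov Inequality Pr{X >= t} <= mu/t
# empirically on a non-negative random variable -- here an exponential
# distribution with mean mu = 2.
random.seed(42)
mu = 2.0
samples = [random.expovariate(1.0 / mu) for _ in range(200_000)]

for t in (1.0, 2.0, 5.0, 10.0):
    tail = sum(x >= t for x in samples) / len(samples)  # empirical Pr{X >= t}
    print(f"t={t:5.1f}  Pr{{X >= t}} = {tail:.4f}   Markov bound mu/t = {mu / t:.4f}")
```

The bound is loose (the empirical tail is far below μ/t here), which is typical: Markov uses only the mean, no other information about the distribution.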
Chebyshev Inequality
σ² is small => values close to the mean have high probabilities
σ² is large => values far away from the mean have high probabilities
Applying the Markov Inequality to the non-negative random variable (X − μ)² with t = k²σ² gives:
Pr{|X − μ| ≥ kσ} ≤ σ²/(k²σ²) = 1/k²
Pr{|X − μ| < kσ} ≥ 1 − 1/k²
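The Chebyshev bound can also be checked empirically (a sketch under an assumed setup): the uniform(0,1) distribution, with μ = 0.5 and σ² = 1/12, serves as the test case:

```python
import random

# Sketch (assumed setup): check the Chebyshev Inequality
# Pr{|X - mu| >= k*sigma} <= 1/k^2 on uniform(0,1) samples,
# for which mu = 0.5 and sigma = sqrt(1/12).
random.seed(1)
mu, sigma = 0.5, (1 / 12) ** 0.5
samples = [random.random() for _ in range(200_000)]

for k in (1.5, 2.0, 3.0):
    p = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
    print(f"k={k:.1f}  Pr{{|X - mu| >= k*sigma}} = {p:.4f}   bound 1/k^2 = {1 / k**2:.4f}")
```

For the uniform distribution the true tail probability is well below 1/k² (and exactly 0 once kσ exceeds 0.5), again showing that Chebyshev is a worst-case bound over all distributions with the given mean and variance.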
Consider a binomial distribution X with parameters (n, p)
The mean and standard deviation of the distribution are given
by:
μ = np,  σ = √(np(1 − p))
Now applying the Chebyshev Inequality gives:
Pr{|X − np| < k√(np(1 − p))} ≥ 1 − 1/k²
Dividing through by n inside the probability:
Pr{|X/n − p| < k√(p(1 − p)/n)} ≥ 1 − 1/k²
Letting n → ∞, for any ε > 0:
lim(n→∞) Pr{|X/n − p| < ε} = 1
This is called Bernoulli's Theorem.
Bernoulli’s Theorem is a special case of the Weak Law of Large
Numbers discussed next
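Bernoulli's Theorem can be illustrated by simulation (a sketch; the success probability p = 0.3, tolerance ε = 0.02, and trial counts are assumed for illustration): the probability that the relative frequency X/n lands within ε of p climbs toward 1 as n grows:

```python
import random

# Sketch (assumed parameters): Bernoulli's Theorem -- the relative frequency
# X/n of successes in n Bernoulli(p) trials concentrates around p as n grows.
random.seed(7)
p, eps, trials = 0.3, 0.02, 200
freq = {}

for n in (100, 1_000, 10_000):
    hits = 0
    for _ in range(trials):
        x = sum(random.random() < p for _ in range(n))  # successes in n trials
        hits += abs(x / n - p) < eps                    # did X/n land within eps of p?
    freq[n] = hits / trials
    print(f"n={n:6d}  Pr{{|X/n - p| < {eps}}} ~ {freq[n]:.2f}")
```

The estimated probability increases with n, approaching 1, exactly as the limit statement predicts.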
The Weak Law of Large Numbers
Let X₁, X₂, …, Xₙ be independent, identically distributed random variables, and let μ represent the mean of their common distribution.
Then for large n, we would expect that the sample mean is close to μ:
x̄ = (x₁ + x₂ + ⋯ + xₙ)/n ≈ μ
Then the variance of the sample mean is:
var{X̄} = var{Sₙ/n}
       = var{(1/n) ∑ᵢ₌₁ⁿ Xᵢ}
       = (1/n²) ∑ᵢ₌₁ⁿ var{Xᵢ}
       = nσ²/n²
       = σ²/n
That is, the distribution of the sample mean gets more and more
concentrated about its mean as the number of trials (n)
approaches infinity
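The σ²/n scaling can be checked numerically (a sketch under an assumed setup): for uniform(0,1) samples, σ² = 1/12, and the empirical variance of the sample mean should track σ²/n:

```python
import random

# Sketch (assumed setup): the variance of the sample mean of n i.i.d.
# uniform(0,1) samples should shrink as sigma^2 / n, with sigma^2 = 1/12.
random.seed(3)
sigma2 = 1 / 12

def var_of_sample_mean(n, reps=2_000):
    """Empirical variance of the mean of n uniform(0,1) samples over reps runs."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
    m = sum(means) / reps
    return sum((x - m) ** 2 for x in means) / reps

for n in (1, 10, 100):
    print(f"n={n:4d}  empirical var = {var_of_sample_mean(n):.5f}  "
          f"sigma^2/n = {sigma2 / n:.5f}")
```

The empirical variance drops by roughly a factor of 10 each time n grows by a factor of 10, matching the 1/n rate in the derivation above.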
The Weak Law of Large Numbers states that for any ε > 0, the probability
that the sample mean lies within ε of the true mean tends to 1 as the
number of trials n grows:
lim(n→∞) Pr{|X̄ − μ| < ε} = 1
The Central Limit Theorem
Let X₁, X₂, …, Xₙ be independent random variables with means μᵢ and variances σᵢ². Define the standardized sum:
Zₙ = (∑ᵢ₌₁ⁿ Xᵢ − ∑ᵢ₌₁ⁿ μᵢ) / √(∑ᵢ₌₁ⁿ σᵢ²)
so that E{Zn}=0 and var{Zn}=1. Then, under certain regularity
conditions, the limiting distribution of Zn is standard normal:
lim(n→∞) F_Zₙ(t) = lim(n→∞) Pr{Zₙ ≤ t} = ∫₋∞ᵗ (1/√(2π)) e^(−y²/2) dy
i.e., Zₙ → N(0, 1)
For identically distributed Xᵢ with common mean μ and variance σ², this becomes:
Zₙ = (∑ᵢ₌₁ⁿ Xᵢ − nμ) / √(nσ²) = (nX̄ − nμ) / √(nσ²) = √n (X̄ − μ) / σ
Thus the distribution of the sample mean converges to normality as n → ∞.
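The convergence to N(0,1) can be seen in simulation (a sketch; the choice of uniform(0,1) summands and n = 30 is assumed for illustration). The empirical CDF of the standardized sum Zₙ is compared against the standard normal CDF Φ(t):

```python
import math
import random

# Sketch (assumed setup): Central Limit Theorem -- the standardized sum Z_n
# of n i.i.d. uniform(0,1) variables (mu = 0.5, sigma^2 = 1/12) approaches
# N(0,1); compare the empirical Pr{Z_n <= t} with the normal CDF Phi(t).
random.seed(5)
n, reps = 30, 20_000
mu, sigma = 0.5, math.sqrt(1 / 12)

z = [(sum(random.random() for _ in range(n)) - n * mu) / (math.sqrt(n) * sigma)
     for _ in range(reps)]

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

for t in (-1.0, 0.0, 1.0):
    emp = sum(zi <= t for zi in z) / reps
    print(f"t={t:+.1f}  Pr{{Z_n <= t}} = {emp:.3f}   Phi(t) = {phi(t):.3f}")
```

Even for n = 30 the empirical CDF agrees with Φ(t) to a few decimal places, since the uniform distribution is symmetric and short-tailed.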
Reference: http://www.intuitor.com/statistics/CentralLim.html
The distribution approaches normality as we keep averaging over more
and more samples