
Statistics G6105: Probability

Professor Philip Protter


1029 SSW
Lecture 12

October 14, 2013


Review for the Exam

• In this lecture we will go over the problems of Homework #6, the Review Problems
• Problem 1: State carefully the Monotone Convergence Theorem
• Monotone Convergence Theorem: If Xn, n ≥ 1, is a sequence of random variables increasing to X a.s. on a given complete probability space (Ω, F, P), and if each Xn ≥ 0, then

  lim_{n→∞} E(Xn) = E(lim_{n→∞} Xn) = E(X),

  whether or not E(X) = ∞.
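The theorem can be checked on a toy example. The sketch below is my own illustration, not part of the lecture: take X ~ Exponential(1) and Xn = min(X, n), so the Xn are nonnegative and increase to X a.s.; the closed-form expectations E(Xn) = 1 − e^(−n) increase to E(X) = 1, exactly as the theorem predicts.

```python
import math

# Hypothetical illustration of the Monotone Convergence Theorem:
# X ~ Exponential(1), X_n = min(X, n). Each X_n >= 0 and X_n increases to X.
# Closed form: E[min(X, n)] = integral_0^n x e^{-x} dx + n P(X > n) = 1 - e^{-n}.

def expected_truncated(n: int) -> float:
    """E[min(X, n)] for X ~ Exp(1)."""
    return 1.0 - math.exp(-n)

# The sequence E(X_n) is increasing and converges to E(X) = 1.
values = [expected_truncated(n) for n in range(1, 11)]
```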
Problem 2

• Problem 2: Let Qn be a sequence of probability measures on a complete probability space (Ω, F, P). Define the set function Q on (Ω, F) by

  Q(Λ) = Σ_{n=1}^{∞} (1/2^n) Qn(Λ) for any Λ ∈ F.

• Problem 2(a): Show that Q is a bona fide probability measure
Problem 2 (II)

• First we need to show that Q(Ω) = 1. We have

  Q(Ω) = Σ_{n=1}^{∞} (1/2^n) Qn(Ω) = Σ_{n=1}^{∞} 1/2^n = 1

• Next we need to show that Q is countably additive. Let Λk be a sequence of pairwise disjoint sets. That is, Λk ∩ Λj = ∅ if k ≠ j
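As a numerical sanity check, here is a toy example of my own (not from the lecture): take Qn to be the point mass at the integer n and truncate the series at N terms. Then Q(Ω) should be 1 up to the truncation error 2^(−N), and Q should be additive on disjoint sets.

```python
# Toy mixture Q(A) = sum_{n=1}^N 2^{-n} Q_n(A) with Q_n = point mass at n.
# N = 60 truncates the series below double precision.

N = 60

def Q(A):
    """Q(A) = sum_{n=1}^N 2^{-n} 1_{n in A} for a set A of positive integers."""
    return sum(2.0 ** -n for n in range(1, N + 1) if n in A)

omega = set(range(1, N + 1))
evens = {n for n in omega if n % 2 == 0}
odds = omega - evens          # evens and odds are disjoint with union omega

total = Q(omega)              # should be ~1 (exactly 1 - 2^{-N})
additive = Q(evens) + Q(odds) # should equal Q(omega) by additivity
```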
Problem 2 (III)

• We have

  Q(∪_{i=1}^{∞} Λi) = Σ_{n=1}^{∞} (1/2^n) Qn(∪_{i=1}^{∞} Λi)
                    = Σ_{n=1}^{∞} (1/2^n) (Σ_{i=1}^{∞} Qn(Λi))    (1)

  by the countable additivity of each probability measure Qn


• We next recall that an infinite sum is defined as the limit of
its partial sums, hence we can rewrite (1) in that way
Problem 2 (IV)

• Equation (1) becomes:

  Σ_{n=1}^{∞} (1/2^n) Σ_{i=1}^{∞} Qn(Λi) = lim_{n→∞} Σ_{m=1}^{n} (1/2^m) lim_{k→∞} Σ_{i=1}^{k} Qm(Λi)    (2)
• To handle the interchange of limits, since everything is
positive, it is natural to try to use the monotone convergence
theorem. This should work easily for the sum with the Qn ,
since they are probability measures, but what do we do with
the classical infinite series?
• Trick: Counting measures!
Problem 2 (V)

• Let ε{i} denote the point mass at the integer point {i}. Then ε{i} is a probability measure, and Σ_{i=1}^{∞} ε{i} is an infinite measure on (R, B), analogous to our treatment of Lebesgue measure
• Let µ(dx) = Σ_{i=1}^{∞} ε{i}(dx). Then

  lim_{n→∞} Σ_{m=1}^{n} 1/2^m = lim_{n→∞} ∫ fn(x) µ(dx)

  where fn(x) = (1/2^x) 1_{[0,n]}(x).
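The counting-measure trick can be verified numerically: with µ placing mass 1 at each positive integer, integrating fn(x) = 2^(−x) 1_{[0,n]}(x) against µ reproduces the partial sum Σ_{m=1}^{n} 2^(−m). In the sketch below (names and the finite support bound are my own), the integral against µ is literally a sum over the integers.

```python
# fn(x) = 2^{-x} on [0, n], zero elsewhere.
def f(n, x):
    return 2.0 ** -x if 0 <= x <= n else 0.0

def integral_against_counting_measure(n, support_bound=200):
    # mu puts mass 1 at each integer i = 1, 2, ..., so integrating against mu
    # is just summing f(n, i) over the integers (truncated at support_bound).
    return sum(f(n, i) for i in range(1, support_bound + 1))

partial_sum = sum(2.0 ** -m for m in range(1, 11))   # sum_{m=1}^{10} 2^{-m}
via_measure = integral_against_counting_measure(10)  # same number, as an integral
```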
Problem 2 (VI)
• We now can rewrite equation (2) as:

  lim_{n→∞} Σ_{m=1}^{n} (1/2^m) lim_{k→∞} Σ_{i=1}^{k} Qm(Λi)
    = lim_{n→∞} ∫ fn(x) ( lim_{k→∞} ∫ g_{k,x}(y) µ(dy) ) µ(dx)

  where g_{k,x}(y) = Q_x(Λ_y) 1_{[0,k]}(y)

    = lim_{n→∞} lim_{k→∞} ∫∫ fn(x) g_{k,x}(y) µ(dy) µ(dx)
    = lim_{n→∞} Σ_{m=1}^{n} (1/2^m) Σ_{i=1}^{∞} Qm(Λi)
    = Σ_{m=1}^{∞} (1/2^m) Σ_{i=1}^{∞} Qm(Λi) = Σ_{i=1}^{∞} Q(Λi)    (3)

  where the interchanges of limit and integral are justified by the Monotone Convergence Theorem, since all the integrands are nonnegative and increasing in n and k. Combining (1), (2), and (3) gives Q(∪_{i=1}^{∞} Λi) = Σ_{i=1}^{∞} Q(Λi), which is the countable additivity of Q
Problem 3

• Problem 3: A sequence of random variables (Xn)n=1,2,... is said to converge to X in L1 if E(|Xn − X|) → 0 as n → ∞.
• It is said to converge to X in probability if for any δ > 0 one
has limn→∞ P(|Xn − X | ≥ δ) = 0.
• Show that if Xn → X in L1 then Xn → X in probability, too.
• Solution: P(|Xn − X| ≥ δ) = E(1{|Xn −X |≥δ})
• Observe that on the event {|Xn − X| ≥ δ} we have

  |Xn − X| / δ ≥ 1
Problem 3 (II)

• Therefore

  E(1{|Xn −X |≥δ}) ≤ E( (|Xn − X|/δ) 1{|Xn −X |≥δ} )
                  = (1/δ) E(|Xn − X| 1{|Xn −X |≥δ})
                  ≤ (1/δ) E(|Xn − X|) → 0 as n → ∞
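A Monte Carlo sketch of this bound, in a toy setup of my own (not from the lecture): take Xn − X ~ Uniform(−1/n, 1/n), so E|Xn − X| = 1/(2n) → 0, and check empirically that P(|Xn − X| ≥ δ) stays below E|Xn − X|/δ.

```python
import random

random.seed(0)

# Toy setup: the difference X_n - X is Uniform(-1/n, 1/n), so X_n -> X in L1.
def empirical(n, delta, trials=100_000):
    """Empirical P(|X_n - X| >= delta) and empirical E|X_n - X|."""
    diffs = [random.uniform(-1.0 / n, 1.0 / n) for _ in range(trials)]
    prob = sum(abs(d) >= delta for d in diffs) / trials
    mean_abs = sum(abs(d) for d in diffs) / trials
    return prob, mean_abs

delta = 0.05
prob, mean_abs = empirical(n=50, delta=delta)
bound = mean_abs / delta  # the Markov-type bound from the solution
```

Here n = 50 makes |Xn − X| ≤ 0.02 < δ, so the empirical probability is 0 while the bound is roughly 0.2: the inequality holds with room to spare.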
Problem 4

• Problem 4: Let

  fn(x) = n 1_{[0,1/n]}(x), where fn : (R, B) → (R, B) for each n ≥ 1

• Also let (Ω = [0, 1], F = B([0, 1]), P(dx) = dx) be the probability space. Show that
1. E (fn ) = 1
2. limn→∞ fn = 0 a.s.
3. What can you conclude from this example? That is, it is a
counterexample, but to what?
Solution to Problem 4

• Let x0 be a fixed point in [0, 1], with x0 > 0. Then there exists an N such that for all n ≥ N we have 1/n < x0
• Hence for all n ≥ N we have fn(x0) = 0
• Since x0 > 0 was arbitrary, we have limn→∞ fn(x) = 0 for all x > 0
• For x = 0 we have limn→∞ fn (0) = ∞
• Therefore limn→∞ fn (x) = 0 a.e. (Indeed for all x except
x = 0)
• Next we calculate E (fn ).
Solution to Problem 4 (II)

• E(fn) = ∫ n 1_{[0,1/n]}(x) dx = 1 for all n
• We notice that this example satisfies Fatou’s Lemma:

  0 = E(lim inf_{n→∞} fn) ≤ lim inf_{n→∞} E(fn) = 1

• However this example does not let us write

  lim_{n→∞} E(fn) = E(lim_{n→∞} fn),

  since the left side equals 1 while the right side equals 0
• This is because, while the fn are all nonnegative, they are neither increasing a.s., nor are they dominated by an L1 random variable
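The counterexample is easy to check numerically. The sketch below is my own (the Riemann-sum discretization is an arbitrary choice): each fn integrates to 1, yet fn(x0) is eventually 0 for any fixed x0 > 0.

```python
# The counterexample f_n = n * 1_[0, 1/n] on [0, 1].
def f(n, x):
    return n if 0 <= x <= 1.0 / n else 0

def integral(n, steps=100_000):
    """Midpoint Riemann sum of f_n over [0, 1]; exact value is n * (1/n) = 1."""
    h = 1.0 / steps
    return sum(f(n, (i + 0.5) * h) for i in range(steps)) * h

# For x0 = 0.01 and n > 1/x0 = 100, f_n(x0) = 0: pointwise convergence to 0.
x0 = 0.01
pointwise_tail = [f(n, x0) for n in range(101, 111)]

# Yet the integral stays at 1 for every n, even n = 1000.
approx_one = integral(1000)
```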
Problem 5

• Problem 5: Let Xn be a sequence of independent random


variables all defined on the same space (Ω, F, P).
• Let Z also be defined on (Ω, F, P) and suppose Z has the
N(0, 1) distribution.
• Show that no matter what the distributions of the random variables Xn are, we cannot have limn→∞ Xn = Z a.s.
Solution to Problem 5

• We know that lim inf n→∞ Xn and lim supn→∞ Xn are both in
the tail σ algebra C∞
• Because the sequence of random variables Xn are all
independent, one from the other, we can apply the
Kolmogorov Zero-One Law to conclude that if Λ ∈ C∞ then
P(Λ) = 0 or 1.
• Therefore both lim inf n→∞ Xn and lim supn→∞ Xn must be constants a.s. If they are equal, then we have a limit, but it too will be constant a.s.
• Since a normally distributed random variable Z is not constant a.s., Z ≠ limn→∞ Xn a.s.
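A simulation (my own illustration, not a proof) makes the constancy of the lim sup plausible in one concrete case: for iid N(0, 1) variables the running maximum keeps growing (roughly like √(2 ln n)), so lim sup Xn = +∞ a.s. — a constant, consistent with the zero-one law, and certainly not an N(0, 1) random variable.

```python
import random

random.seed(2)

# Running maximum of one stream of iid N(0,1) samples, recorded at checkpoints.
# By construction the recorded maxima are non-decreasing, and they keep growing,
# illustrating that lim sup X_n = +infinity a.s. for iid standard normals.
checkpoints = {100, 1_000, 10_000, 100_000}
maxima = []
m = float("-inf")
for i in range(1, 100_001):
    m = max(m, random.gauss(0.0, 1.0))
    if i in checkpoints:
        maxima.append(m)
```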
Problem 6

• Problem 6: Let X be a uniform random variable on [0, 1] and let

  Y = tan(π(X − 1/2))

  Calculate E(Y) if it exists.
Solution to Problem 6

• Solution: X is uniform on [0, 1], so U = π(X − 1/2) is uniform on [−π/2, π/2]
• We have

  P(Y ≤ y) = P(tan U ≤ y) = P(U ≤ arctan y) = ∫_{−π/2}^{arctan y} (1/π) dx

  and differentiating gives

  (d/dy) ∫_{−π/2}^{arctan y} (1/π) dx = (1/π) (d/dy)(arctan y + π/2) = 1/(π(1 + y²))
Solution to Problem 6 (II)

• Therefore the positive part of the integral defining E(Y) satisfies

  ∫_0^∞ y/(π(1 + y²)) dy ≥ ∫_1^∞ y/(π(1 + y²)) dy ≥ ∫_1^∞ y/(π(y² + y²)) dy
    = ∫_1^∞ 1/(π(2y)) dy = (1/2π)(ln(∞) − 0) = ∞

  using 1 + y² ≤ y² + y² = 2y² for y ≥ 1
• An analogous calculation gives

  ∫_{−∞}^0 y/(π(1 + y²)) dy = −∞

• Therefore E(Y) = ∞ − ∞ and does not exist
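Both conclusions can be seen numerically in a sketch of my own: the empirical CDF of samples of tan(π(X − 1/2)) matches the derived F(y) = (arctan y + π/2)/π (the standard Cauchy CDF), and the truncated positive-part integrals ∫_0^M y/(π(1 + y²)) dy = ln(1 + M²)/(2π) grow without bound.

```python
import math
import random

random.seed(1)

# Sample Y = tan(pi * (X - 1/2)) for X ~ Uniform(0, 1).
samples = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(200_000)]

def F(y):
    """CDF derived in the solution: (arctan y + pi/2) / pi (standard Cauchy)."""
    return (math.atan(y) + math.pi / 2) / math.pi

# Compare the empirical CDF to F at a few points.
errs = []
for y in (-2.0, 0.0, 1.0, 3.0):
    empirical = sum(s <= y for s in samples) / len(samples)
    errs.append(abs(empirical - F(y)))

# Truncated positive-part integral int_0^M y/(pi(1+y^2)) dy = ln(1+M^2)/(2 pi),
# which grows without bound as M -> infinity.
truncated = [math.log(1 + M * M) / (2 * math.pi) for M in (10, 100, 1000)]
```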
