The transition matrix of the chain on the state space $\{0, 1, \ldots, N\}$ is

\[
P = \begin{pmatrix}
1 & 0 & 0 & 0 & \cdots & 0 & 0\\
q & 0 & p & 0 & \cdots & 0 & 0\\
0 & q & 0 & p & \cdots & 0 & 0\\
\vdots &  & \ddots & \ddots & \ddots &  & \vdots\\
0 & 0 & \cdots & q & 0 & p & 0\\
0 & 0 & \cdots & 0 & q & 0 & p\\
0 & 0 & \cdots & 0 & 0 & 0 & 1
\end{pmatrix},
\]

and $P_i(N)$, the probability that the gambler, starting with initial fortune $i$, reaches fortune $N$ before going broke, is given by

\[
P_i(N) = \begin{cases}
\dfrac{1-(q/p)^i}{1-(q/p)^N}, & \text{if } p \neq q;\\[6pt]
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases} \tag{1}
\]
There are three communication classes: $C_1 = \{0\}$, $C_2 = \{1, \ldots, N-1\}$, $C_3 = \{N\}$. $C_1$ and $C_3$ are recurrent, whereas $C_2$ is transient.
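Formula (1) is easy to evaluate numerically; here is a minimal sketch (the function name `gambler_win_prob` is my own, not from the text):

```python
def gambler_win_prob(i, N, p):
    """P_i(N) from formula (1): probability of reaching fortune N before 0,
    starting from fortune i, when each gamble is won with probability p."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:          # fair case: p = q = 0.5
        return i / N
    r = q / p
    return (1 - r ** i) / (1 - r ** N)

# The absorbing boundary states behave as expected.
print(gambler_win_prob(0, 10, 0.6), gambler_win_prob(10, 10, 0.6))  # 0.0 1.0
```

Starting from 0 the gambler is already ruined, and from $N$ already a winner, matching the two absorbing classes above.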
Proof: For our derivation, we let $P_i = P_i(N)$; that is, we suppress the dependence on $N$ for ease of notation. The key idea is to condition on the outcome of the first gamble, $\Delta_1 = 1$ or $\Delta_1 = -1$, yielding

\[
P_i = pP_{i+1} + qP_{i-1}. \tag{2}
\]

The derivation of this recursion is as follows: if $\Delta_1 = 1$, then the gambler's total fortune increases to $X_1 = i+1$ and so, by the Markov property, the gambler will now win with probability $P_{i+1}$. Similarly, if $\Delta_1 = -1$, then the gambler's fortune decreases to $X_1 = i-1$ and so, by the Markov property, the gambler will now win with probability $P_{i-1}$. The probabilities corresponding to the two outcomes are $p$ and $q$, yielding (2). Since $p + q = 1$, (2) can be rewritten as $pP_i + qP_i = pP_{i+1} + qP_{i-1}$, yielding

\[
P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}).
\]
Since $P_0 = 0$, iterating this difference equation gives $P_{i+1} - P_i = (q/p)^i P_1$, and summing the telescoping differences,

\[
P_{i+1} - P_1 = \sum_{k=1}^{i}(P_{k+1} - P_k) = \sum_{k=1}^{i}\left(\frac{q}{p}\right)^{k} P_1,
\]

yielding

\[
P_{i+1} = P_1 + P_1\sum_{k=1}^{i}\left(\frac{q}{p}\right)^{k} = P_1\sum_{k=0}^{i}\left(\frac{q}{p}\right)^{k}
= \begin{cases}
P_1\,\dfrac{1-(q/p)^{i+1}}{1-(q/p)}, & \text{if } p \neq q;\\[6pt]
P_1\,(i+1), & \text{if } p = q = 0.5.
\end{cases}
\]

Choosing $i = N-1$ and using $P_N = 1$ then gives

\[
1 = P_N = \begin{cases}
P_1\,\dfrac{1-(q/p)^{N}}{1-(q/p)}, & \text{if } p \neq q;\\[6pt]
P_1\,N, & \text{if } p = q = 0.5.
\end{cases} \tag{3}
\]
(Here we used the geometric series $\sum_{k=0}^{i} a^{k} = \frac{1-a^{i+1}}{1-a}$ for $a = q/p \neq 1$.) Solving (3) for $P_1$ gives

\[
P_1 = \begin{cases}
\dfrac{1-(q/p)}{1-(q/p)^N}, & \text{if } p \neq q;\\[6pt]
\dfrac{1}{N}, & \text{if } p = q = 0.5.
\end{cases}
\]
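As a numerical sanity check on this derivation, the closed-form values built from $P_1$ do satisfy recursion (2) at every interior state (the choices $p = 0.6$, $N = 10$ are arbitrary):

```python
p, N = 0.6, 10
q = 1.0 - p
r = q / p
P1 = (1 - r) / (1 - r ** N)                    # P_1 as just derived
P = [(1 - r ** i) / (1 - r ** N) for i in range(N + 1)]  # closed form
assert abs(P[1] - P1) < 1e-12
# Every interior state satisfies P_i = p*P_{i+1} + q*P_{i-1}:
for i in range(1, N):
    assert abs(P[i] - (p * P[i + 1] + q * P[i - 1])) < 1e-12
```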
Substituting this expression for $P_1$ back yields, as claimed,

\[
P_i = \begin{cases}
\dfrac{1-(q/p)^i}{1-(q/p)^N}, & \text{if } p \neq q;\\[6pt]
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases} \tag{4}
\]

1.1

Letting $N \to \infty$ in (1) yields $P_i(\infty)$, the probability that the gambler, starting with fortune $i$, becomes infinitely rich before going broke:

\[
P_i(\infty) = 1 - (q/p)^i, \quad \text{if } p > 0.5; \tag{5}
\]
\[
P_i(\infty) = 0, \quad \text{if } p \le 0.5. \tag{6}
\]
Thus, unless the gambles are strictly better than fair (p > 0.5), ruin is certain.
Proof: If $p > 0.5$, then $q/p < 1$; hence in the denominator of (1), $(q/p)^N \to 0$ as $N \to \infty$, yielding $P_i(\infty) = 1 - (q/p)^i$. If $p < 0.5$, then $q/p > 1$; hence the denominator of (1) tends to $-\infty$ while the numerator remains fixed, yielding $P_i(\infty) = 0$. Finally, if $p = 0.5$, then $P_i(N) = i/N \to 0$.
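The convergence behind this proof is easy to see numerically; a small illustration (the values of $i$, $p$, and $N$ below are arbitrary):

```python
i = 5
for p in (0.6, 0.5, 0.4):
    q = 1.0 - p
    for N in (10, 100, 1000):
        PiN = i / N if p == 0.5 else (1 - (q / p) ** i) / (1 - (q / p) ** N)
        print(f"p={p} N={N} Pi(N)={PiN:.6f}")
# For p=0.6 the values approach 1-(2/3)^5 ≈ 0.868; for p<=0.5 they shrink to 0.
```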
Examples
1. John starts with $2, and p = 0.6: What is the probability that John obtains a fortune of
N = 4 without going broke?
SOLUTION: $i = 2$, $N = 4$ and $q = 1 - p = 0.4$, so $q/p = 2/3$, and we want

\[
P_2(4) = \frac{1-(2/3)^2}{1-(2/3)^4} = \frac{5/9}{65/81} = \frac{9}{13} \approx 0.69.
\]
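Checking this arithmetic directly ($i = 2$, $N = 4$, $p = 0.6$ as in the example):

```python
i, N, p = 2, 4, 0.6
q = 1.0 - p
r = q / p                        # 2/3
P = (1 - r ** i) / (1 - r ** N)  # = (5/9)/(65/81) = 9/13
print(round(P, 2))               # 0.69
```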
1.2 Applications

1.3
Let $p(a)$ denote the probability that the simple random walk, starting at $R_0 = 0$, hits level $a$ before hitting level $-b$, where $a$ and $b$ are positive integers. This is precisely the gambler's ruin probability $P_b(a+b)$ (shift the walk up by $b$), so

\[
p(a) = \begin{cases}
\dfrac{1-(q/p)^b}{1-(q/p)^{a+b}}, & \text{if } p \neq q;\\[6pt]
\dfrac{b}{a+b}, & \text{if } p = q = 0.5.
\end{cases} \tag{7}
\]
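A sketch of (7) in code (the function name `hit_prob` is mine, not from the text):

```python
def hit_prob(a, b, p):
    """Formula (7): probability the walk, started at 0, reaches +a before -b.
    Equivalently the gambler's ruin probability P_b(a+b)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:              # symmetric case p = q = 0.5
        return b / (a + b)
    r = q / p
    return (1 - r ** b) / (1 - r ** (a + b))

# Agrees with the gambler's ruin formula (1) with i = b, N = a + b:
r = 0.4 / 0.6
assert abs(hit_prob(3, 4, 0.6) - (1 - r ** 4) / (1 - r ** 7)) < 1e-12
```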
Examples
1. Ellen bought a share of stock for \$10, and it is believed that the stock price moves (day by day) as a simple random walk with $p = 0.55$. What is the probability that Ellen's stock reaches the high value of \$15 before the low value of \$5?

SOLUTION: We want the probability that the stock goes up by 5 before going down by 5. This is equivalent to starting the random walk at 0 with $a = 5$ and $b = 5$, and computing $p(a)$. With $q/p = 0.45/0.55 \approx 0.82$,

\[
p(a) = \frac{1-(q/p)^b}{1-(q/p)^{a+b}} = \frac{1-(0.82)^5}{1-(0.82)^{10}} \approx 0.73.
\]
Here we equivalently want to know the probability that a gambler starting with $i = 10$ becomes infinitely rich before going broke. Just like Example 2 on Page 3:

\[
1 - (q/p)^i = 1 - (0.82)^{10} \approx 1 - 0.14 = 0.86.
\]
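Both of the numbers above can be reproduced in a couple of lines, using the rounded ratio $q/p \approx 0.82$ as the text does:

```python
r = 0.82                          # q/p = 0.45/0.55, rounded as in the text
reach_high_first = (1 - r ** 5) / (1 - r ** 10)   # $15 before $5
never_ruined = 1 - r ** 10                        # infinitely rich before $0
print(round(reach_high_first, 2), round(never_ruined, 2))  # 0.73 0.86
```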
1.4
Formula (7) can immediately be used for computing the probability that the simple random walk $\{R_n\}$, starting initially at $R_0 = 0$, will ever hit level $a$, for any given positive integer $a \ge 1$: keep $a$ fixed while taking the limit as $b \to \infty$ in (7). The result depends on whether $p < 0.50$ or $p \ge 0.50$. A little thought reveals that we can state this problem as computing the tail $P(M \ge a)$, $a \ge 0$, where $M \stackrel{\text{def}}{=} \max\{R_n : n \ge 0\}$ is the all-time maximum of the random walk, a non-negative random variable, because $\{M \ge a\} = \{R_n = a, \text{ for some } n \ge 1\}$.
Proposition 1.3 Let $M \stackrel{\text{def}}{=} \max\{R_n : n \ge 0\}$ for the simple random walk starting initially at the origin ($R_0 = 0$).

1. When $p < 0.50$,
\[
P(M \ge a) = (p/q)^a, \quad a \ge 0;
\]
$M$ has a geometric distribution with success probability $1 - (p/q)$:
\[
P(M = k) = (p/q)^k\,(1 - (p/q)), \quad k \ge 0.
\]
In this case, the random walk drifts down to $-\infty$, wp1, but before doing so reaches the finite maximum $M$.

2. If $p \ge 0.50$, then $P(M \ge a) = 1$, $a \ge 0$: $P(M = \infty) = 1$; the random walk will, with probability 1, reach any positive integer $a$ no matter how large.
Proof: Taking the limit in (7) as $b \to \infty$ yields the result by considering the two cases $p < 0.5$ and $p \ge 0.5$. If $p < 0.5$, then $(q/p) > 1$ and so both $(q/p)^b$ and $(q/p)^{a+b}$ tend to $\infty$ as $b \to \infty$. But before taking the limit, multiply both numerator and denominator by $(q/p)^{-b} = (p/q)^b$, yielding

\[
p(a) = \frac{(p/q)^b - 1}{(p/q)^b - (q/p)^a}.
\]

Since $(p/q)^b \to 0$ as $b \to \infty$, the result follows. If $p > 0.5$, then $(q/p) < 1$ and so both $(q/p)^b$ and $(q/p)^{a+b}$ tend to 0 as $b \to \infty$, yielding the limit in (7) as 1. If $p = 0.5$, then $p(a) = b/(b + a) \to 1$ as $b \to \infty$.
If $p < 0.5$, then $E(\Delta) < 0$, and if $p > 0.5$, then $E(\Delta) > 0$; so Proposition 1.3 is consistent with the fact that any random walk with $E(\Delta) < 0$ (called the negative drift case) satisfies $\lim_{n\to\infty} R_n = -\infty$, wp1, and any random walk with $E(\Delta) > 0$ (called the positive drift case) satisfies $\lim_{n\to\infty} R_n = +\infty$, wp1. But furthermore we learn that when $p < 0.5$, although wp1 the chain drifts off to $-\infty$, it first reaches a finite maximum $M$ before doing so, and this rv $M$ has a geometric distribution.
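A quick simulation agrees with the geometric law for $M$. This is only a sketch, assuming $p < 0.5$ so the maximum is finite wp1; the horizon and sample size are arbitrary choices:

```python
import random

def sample_max(p, rng, steps=400):
    """Run the walk long enough that, with p well below 0.5, the all-time
    maximum has almost surely already occurred; return the maximum seen."""
    x = best = 0
    for _ in range(steps):
        x += 1 if rng.random() < p else -1
        best = max(best, x)
    return best

rng = random.Random(42)
p = 0.3
samples = [sample_max(p, rng) for _ in range(10000)]
frac_zero = sum(s == 0 for s in samples) / len(samples)
# Proposition 1.3 predicts P(M = 0) = 1 - p/q = 4/7 ≈ 0.571
print(round(frac_zero, 3))
```

The empirical fraction of runs whose maximum is 0 should sit close to the predicted $1 - p/q$.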
Finally, Proposition 1.3 also offers us a proof that when $p = 0.5$, the symmetric case, the random walk will wp1 hit any positive value $a$: $P(M \ge a) = 1$.
By symmetry, we also obtain analogous results for the minimum, $m$:

Corollary 1.1 Let $m \stackrel{\text{def}}{=} \min\{R_n : n \ge 0\}$ for the simple random walk starting initially at the origin ($R_0 = 0$).

1. If $p > 0.5$, then
\[
P(m \le -b) = (q/p)^b, \quad b \ge 0.
\]
In this case, the random walk drifts up to $+\infty$, but before doing so drops down to a finite minimum $m \le 0$. Taking the absolute value $|m|$ makes it non-negative, and so we can express this result as $P(|m| \ge b) = (q/p)^b$, $b \ge 0$; $|m|$ has a geometric distribution with success probability $1 - (q/p)$: $P(|m| = k) = (q/p)^k\,(1 - (q/p))$, $k \ge 0$.

2. If $p \le 0.50$, then $P(m \le -b) = 1$, $b \ge 0$: $P(m = -\infty) = 1$; the random walk will, with probability 1, reach any negative integer $-b$ no matter how small.
Note that when $p < 0.5$, $P(M = 0) = 1 - (p/q) > 0$. This is because it is possible that the random walk will never enter the positive axis before drifting off to $-\infty$; with positive probability, $R_n \le 0$ for all $n \ge 0$. Similarly, if $p > 0.5$, then $P(m = 0) = 1 - (q/p) > 0$; with positive probability, $R_n \ge 0$ for all $n \ge 0$.
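Numerically, the two positive masses at 0 mirror each other; for instance, with $p = 0.3$ for the maximum and the mirrored $p = 0.7$ for the minimum:

```python
# P(M = 0) for a downward-drifting walk with p = 0.3: 1 - p/q.
p_down = 0.3
pM0 = 1 - p_down / (1 - p_down)          # = 4/7
# P(m = 0) for the mirrored upward-drifting walk with p = 0.7: 1 - q/p.
p_up = 0.7
pm0 = 1 - (1 - p_up) / p_up              # also 4/7, by symmetry
assert abs(pM0 - pm0) < 1e-12
print(round(pM0, 3))                     # 0.571
```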