
A First Course in Information Theory

Yuan Luo
December 3, 2009

Weak Typicality

Problem 1 (p70.5) Let $p$ and $q$ be two probability distributions on the same alphabet $\mathcal{X}$. Denote $\delta = |H(p) - H(q)|$. Then, for any $\epsilon > 0$,
\[
\lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log P(\mathbf{X}) - H(q) \right| < \epsilon \right\}
= \begin{cases} 0 & \text{if } \epsilon < \delta, \\ 1 & \text{if } \epsilon > \delta, \end{cases}
\tag{1.1}
\]
where $\mathbf{X} = (X_1, \ldots, X_n)$ is composed of i.i.d. random variables with common distribution $p$, and $P = p^n$.
Proof.
For the case $\epsilon < \delta$,
\begin{align*}
\left| -\frac{1}{n}\log P(\mathbf{X}) - H(q) \right|
&= \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) + H(p) - H(q) \right| \\
&\ge |H(p) - H(q)| - \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) \right| \\
&= \delta - \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) \right|.
\end{align*}
Then
\[
\lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log P(\mathbf{X}) - H(q) \right| < \epsilon \right\}
\le \lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) \right| > \delta - \epsilon \right\}
= 0,
\]
where the last equality follows from the weak AEP because $\delta - \epsilon > 0$.

For the case $\epsilon > \delta$,
\begin{align*}
\left| -\frac{1}{n}\log P(\mathbf{X}) - H(q) \right|
&= \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) + H(p) - H(q) \right| \\
&\le \delta + \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) \right|.
\end{align*}
Then
\[
\lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log P(\mathbf{X}) - H(q) \right| < \epsilon \right\}
\ge \lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log P(\mathbf{X}) - H(p) \right| < \epsilon - \delta \right\}
= 1.
\]
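The dichotomy in (1.1) is easy to see numerically: by the weak AEP, $-\frac{1}{n}\log P(\mathbf{X})$ concentrates around $H(p)$, which lies at distance $\delta$ from $H(q)$. A minimal Monte Carlo sketch; the distributions p and q below are illustrative assumptions, not taken from the problem:

```python
import math
import random

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {symbol: prob}."""
    return -sum(v * math.log2(v) for v in dist.values() if v > 0)

# Assumed toy distributions on the common alphabet {a, b}.
p = {"a": 0.9, "b": 0.1}
q = {"a": 0.5, "b": 0.5}
delta = abs(entropy(p) - entropy(q))  # delta = |H(p) - H(q)|

def empirical_prob(eps, n=1000, trials=300, seed=0):
    """Estimate Pr{ |-(1/n) log P(X) - H(q)| < eps } with X ~ p^n."""
    rng = random.Random(seed)
    symbols, weights = zip(*p.items())
    hq = entropy(q)
    hits = 0
    for _ in range(trials):
        x = rng.choices(symbols, weights=weights, k=n)
        # -(1/n) log P(X) = -(1/n) * sum_i log p(x_i)
        neg_log = -sum(math.log2(p[xi]) for xi in x) / n
        hits += abs(neg_log - hq) < eps
    return hits / trials

print(empirical_prob(eps=0.5 * delta))  # eps < delta: estimate near 0
print(empirical_prob(eps=1.5 * delta))  # eps > delta: estimate near 1
```

With $n = 1000$ the sample average deviates from $H(p)$ by only a few hundredths of a bit, so the indicator event flips sharply as $\epsilon$ crosses $\delta$.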


Problem 2 (p70.6) Let $p$ and $q$ be two probability distributions on the same alphabet $\mathcal{X}$ with the same support. Prove that for any $\epsilon > 0$,
\[
\left| \left\{ \mathbf{x} \in \mathcal{X}^n : \left| -\frac{1}{n}\log Q(\mathbf{x}) - (H(p) + D(p\|q)) \right| < \epsilon \right\} \right|
\le 2^{n(H(p) + D(p\|q) + \epsilon)},
\]
\[
\lim_{n\to\infty} \Pr\left\{ \left| -\frac{1}{n}\log Q(\mathbf{X}) - (H(p) + D(p\|q)) \right| < \epsilon \right\} = 1,
\]
where $\mathbf{X}$ is composed of i.i.d. random variables with generic random variable drawn according to distribution $p$, and $Q$, also denoted by $q^n$, is the corresponding $n$-dimensional product distribution. Note that $\mathbf{x} = (x_1, \ldots, x_n)$.
Proof. Denote $W = \left\{ \mathbf{x} \in \mathcal{X}^n : \left| -\frac{1}{n}\log Q(\mathbf{x}) - (H(p) + D(p\|q)) \right| < \epsilon \right\}$. For $\mathbf{x} \in W$, we have
\[
Q(\mathbf{x}) \ge 2^{-n(H(p) + D(p\|q) + \epsilon)}.
\]
Then it follows from $\sum_{\mathbf{x} \in W} Q(\mathbf{x}) \le 1$ that $|W| \le 2^{n(H(p) + D(p\|q) + \epsilon)}$.

The second part of this problem is easily verified by the weak law of large numbers together with
\[
E_P\left[ -\frac{1}{n}\log Q(\mathbf{X}) \right] - H(p)
= -\frac{1}{n}\sum_{\mathbf{x}} P(\mathbf{x})\log Q(\mathbf{x}) + \frac{1}{n}\sum_{\mathbf{x}} P(\mathbf{x})\log P(\mathbf{x})
= \frac{1}{n} D(P\|Q) = D(p\|q),
\]
where $P = p^n$.
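The identity above says that under $P = p^n$ the random variable $-\frac{1}{n}\log Q(\mathbf{X})$ has mean $H(p) + D(p\|q)$, so the weak law of large numbers drives the probability to 1. A short simulation sketch; the two distributions below are assumed toy examples:

```python
import math
import random

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(v * math.log2(v) for v in dist.values() if v > 0)

def kl(p, q):
    """D(p||q) in bits; p and q must have the same support."""
    return sum(p[a] * math.log2(p[a] / q[a]) for a in p if p[a] > 0)

# Assumed toy distributions with the same support.
p = {"a": 0.8, "b": 0.2}
q = {"a": 0.4, "b": 0.6}
target = entropy(p) + kl(p, q)  # H(p) + D(p||q)

rng = random.Random(1)
symbols, weights = zip(*p.items())
n, trials, eps = 1000, 300, 0.1
hits = 0
for _ in range(trials):
    x = rng.choices(symbols, weights=weights, k=n)  # X drawn i.i.d. from p
    neg_log_q = -sum(math.log2(q[xi]) for xi in x) / n  # -(1/n) log Q(X)
    hits += abs(neg_log_q - target) < eps
print(hits / trials)  # fraction of draws inside the eps-band; tends to 1
```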
Problem 3 (p70.6) Universal source coding ...
1. Proof.
\[
\lim_{n\to\infty} \Pr\{ \mathbf{X}(s) \in A_n(S) \}
\ge \lim_{n\to\infty} \Pr\big\{ \mathbf{X}(s) \in W^n_{[X(s)]\epsilon} \big\} = 1.
\]

2. Proof.
\begin{align*}
|A_n(S)| = \Big| \bigcup_{s\in S} W^n_{[X(s)]\epsilon} \Big|
&\le \sum_{s\in S} \big| W^n_{[X(s)]\epsilon} \big| \\
&\le \sum_{s\in S} 2^{n(H(X(s))+\epsilon)} && \text{($n$ sufficiently large)} \\
&\le \sum_{s\in S} 2^{n(\bar{H}+\epsilon)} \\
&= |S|\, 2^{n(\bar{H}+\epsilon)} \\
&\le 2^{n(\bar{H}+\epsilon')} && \text{($n$ sufficiently large)},
\end{align*}
where $\bar{H} = \max_{s\in S} H(X(s))$ and $\epsilon' > \epsilon$.
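The final step absorbs the finite factor $|S|$ into the exponent: $|S|\,2^{n(\bar{H}+\epsilon)} \le 2^{n(\bar{H}+\epsilon')}$ exactly when $n \ge \log_2|S| / (\epsilon' - \epsilon)$, independently of $\bar{H}$. A small sketch of this threshold (function name and numbers are illustrative):

```python
import math

def min_n(num_sources, eps, eps_prime):
    """Smallest n such that |S| * 2**(n*(H+eps)) <= 2**(n*(H+eps')).
    The entropy term cancels, leaving n*(eps' - eps) >= log2 |S|."""
    assert eps_prime > eps > 0
    return math.ceil(math.log2(num_sources) / (eps_prime - eps))

n0 = min_n(num_sources=8, eps=0.01, eps_prime=0.02)
print(n0)  # 300: from this n on, the factor |S| = 8 is absorbed
# Sanity check at the threshold (H cancels, so take H = 0):
assert 8 * 2 ** (n0 * 0.01) <= 2 ** (n0 * 0.02)
```

The check shows why "$n$ sufficiently large" suffices: the required $n$ grows only logarithmically in the number of sources.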

3. In the corresponding Shannon source coding scheme, we will show that, for any $\lambda > 0$, there exists a set $A_n$ such that:

(a) For each $s \in S$,
\[
\Pr\{ \mathbf{X}(s) \notin A_n \} < \lambda
\tag{1.2}
\]
when $n$ is sufficiently large;

(b)
\[
|R - \bar{H}| < \lambda, \quad \text{where } R = \frac{\log|A_n|}{n},
\tag{1.3}
\]
and $n$ is sufficiently large.

Furthermore, if there exist a $\nu$ such that $0 < \nu < \bar{H}$, and a coding scheme $B_n$ such that $R < \bar{H} - \nu$, where $R = \frac{\log|B_n|}{n}$ and $n$ is sufficiently large, then there is a source $X \in F$ such that

(c)
\[
\lim_{n\to\infty} \Pr\{ \mathbf{X} \notin B_n \} = 1.
\tag{1.4}
\]

Proof. For any $\lambda > 0$, choose $\epsilon > 0$ and $\epsilon' > \epsilon$ such that:
$2\epsilon < \lambda$ and $\epsilon' < \lambda$;
$1 - \lambda > 2^{-n\epsilon}$ when $n$ is sufficiently large.
Let $A_n = A_n(S)$.

a) Then for each $s \in S$, by using the first part of this problem,
\[
\Pr\{ \mathbf{X}(s) \notin A_n \} = \Pr\{ \mathbf{X}(s) \notin A_n(S) \} < \lambda
\tag{1.5}
\]
when $n$ is sufficiently large, and (1.2) is obtained.

b) By using the second part of this problem, we have
\[
|A_n| = |A_n(S)| < 2^{n(\bar{H}+\epsilon')}
\tag{1.6}
\]
when $n$ is sufficiently large. On the other hand, when $n$ is sufficiently large,
\[
|A_n| = |A_n(S)| \ge \big| W^n_{[X(s)]\epsilon} \big| \ge (1-\lambda)\, 2^{n(H(X(s))-\epsilon)}
\quad \text{for any } s \in S,
\]
and therefore
\[
|A_n| \ge (1-\lambda)\, 2^{n(\bar{H}-\epsilon)} > 2^{-n\epsilon}\, 2^{n(\bar{H}-\epsilon)} = 2^{n(\bar{H}-2\epsilon)}.
\tag{1.7}
\]
Thus, (1.3) follows from (1.6) and (1.7), since $\epsilon' < \lambda$ and $2\epsilon < \lambda$ give $\bar{H} - \lambda < R < \bar{H} + \lambda$.


c) Furthermore, assume that there exists a $\nu$ such that $0 < \nu < \bar{H}$, and a coding scheme $B_n$ such that $R < \bar{H} - \nu$, where $R = \frac{\log|B_n|}{n}$ and $n$ is sufficiently large. Then for the source $X \in F$ with the largest entropy $\bar{H}$, by using the converse part of Shannon's source coding theorem, we have
\[
\lim_{n\to\infty} \Pr\{ \mathbf{X} \notin B_n \} = 1.
\]
