

Chapter 3: Discrete memoryless channel and its capacity


Vahid Meghdadi, University of Limoges

meghdadi@ensil.unilim.fr


Outline

- Discrete memoryless channel
- Transmission rate over a noisy channel
  - Repetition code
  - Transmission rate
- Capacity of DMC
  - Capacity of a noisy channel
  - Examples


Discrete memoryless channel (DMC)


[Figure: block diagram, input random variable X → discrete memoryless channel → output random variable Y]

The input of a DMC is a random variable (RV) X which takes its values from a discrete, finite set X. The cardinality of X is the number of points in the constellation used. In an ideal channel, the output is equal to the input. In a non-ideal channel, the output can differ from the input with a given probability.

Discrete memoryless channel

All the transition probabilities from x_i to y_j are gathered in a transition matrix. The (i, j) entry of the matrix is P(Y = y_j | X = x_i), which is called the forward transition probability. In a DMC, the output of the channel depends only on the input of the channel at the same instant, and not on the inputs before or after.


DMC model
[Figure: transition diagram of a DMC with inputs 1, …, i, …, M and outputs 1, …, j, …, M; the branch from input i to output j is labeled p_ij]

Define p_i = P(X = i), q_j = P(Y = j) and p_ij = P(Y = j | X = i). And,

q_j = \sum_{i=1}^{M} p_i p_{ij}

DMC probability model


The last equation can be written in matrix form:

\begin{pmatrix} q_1 \\ q_2 \\ \vdots \\ q_M \end{pmatrix} =
\begin{pmatrix}
p_{11} & p_{21} & \cdots & p_{M1} \\
p_{12} & p_{22} & \cdots & p_{M2} \\
\vdots & \vdots & \ddots & \vdots \\
p_{1M} & p_{2M} & \cdots & p_{MM}
\end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ \vdots \\ p_M \end{pmatrix}

This equation can be compactly written as: Q_Y = P_{Y|X} P_X. Note:

\sum_{j=1}^{M} p_{ij} = 1 \quad \text{for each } i, \qquad p_e = \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \neq i}}^{M} p_i p_{ij}
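As a sanity check on these formulas, here is a minimal Python sketch (the channel and input distribution are illustrative, not from the slides) computing Q_Y and p_e from a transition matrix:

```python
import numpy as np

# Forward transition matrix: entry [i, j] = P(Y = j | X = i).
# Illustrative 3-symbol channel; each row must sum to 1.
P_Y_given_X = np.array([[0.8, 0.1, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8]])

# Input distribution p_i = P(X = i).
P_X = np.array([0.5, 0.3, 0.2])

# Output distribution: q_j = sum_i p_i * p_ij  (i.e., Q_Y = P_{Y|X}^T P_X).
Q_Y = P_Y_given_X.T @ P_X

# Symbol error probability: p_e = sum_i sum_{j != i} p_i * p_ij.
p_e = sum(P_X[i] * P_Y_given_X[i, j]
          for i in range(3) for j in range(3) if j != i)

print(Q_Y, p_e)  # Q_Y sums to 1; p_e = 0.2 here
```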


Hard- and soft-decision

Normally the sizes of the constellations at the input and at the output are the same, i.e., |X| = |Y|. In this case the receiver employs hard-decision decoding: the decoder makes a firm decision about the transmitted symbol. It is also possible that |X| ≠ |Y|. In this case the receiver employs soft-decision decoding.


Repetition code

Consider the binary symmetric channel above (figure not reproduced here) with p = 0.1. Observing a one at the output, are we sure what had been sent? We can only say that X was one with probability 90%, meaning that p_e = 0.1. If we send a one three times and make a majority decision at the output, the probability of error becomes:

p_e = 1 − p_c = 1 − [(1 − p)^3 + 3(1 − p)^2 p] = 0.028

It means that by reducing the rate, we can make the error probability arbitrarily small.
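To see these numbers concretely, here is a minimal Monte Carlo sketch in Python (not from the slides) of the 3-repetition code over a BSC with p = 0.1:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1           # BSC crossover probability
n_bits = 200_000  # number of information bits to simulate

bits = rng.integers(0, 2, n_bits)

# Repeat each bit 3 times and flip each copy independently with prob p.
sent = np.repeat(bits, 3).reshape(n_bits, 3)
flips = rng.random((n_bits, 3)) < p
received = sent ^ flips

# Majority decision: decode 1 if at least 2 of the 3 copies are 1.
decoded = (received.sum(axis=1) >= 2).astype(int)

error_rate = np.mean(decoded != bits)
print(error_rate)  # close to 1 - [(1-p)**3 + 3*(1-p)**2 * p] = 0.028
```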

Transmission rate
- H(X) is the amount of information per symbol at the input of the channel.
- H(Y) is the amount of information per symbol at the output of the channel.
- H(X|Y) is the amount of uncertainty remaining about X knowing Y.

The information transmission is given by:

I(X; Y) = H(X) − H(X|Y)  bits/channel use

For an ideal channel X = Y, there is no uncertainty about X when we observe Y. So all the information is transmitted at each channel use: I(X; Y) = H(X). If the channel is too noisy, X and Y are independent. The uncertainty about X then remains the same whether or not Y is known, i.e., no information passes through the channel: I(X; Y) = 0.
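A small Python sketch (with an assumed joint distribution, for illustration only) that computes I(X; Y) = H(X) − H(X|Y) from a joint probability table:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; ignores zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution P(X = i, Y = j) for a binary channel (illustrative:
# this table corresponds to a BSC with p = 0.1 and uniform input).
P_XY = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

P_X = P_XY.sum(axis=1)
P_Y = P_XY.sum(axis=0)

H_X = entropy(P_X)
H_XY = entropy(P_XY.flatten())      # joint entropy H(X, Y)
H_X_given_Y = H_XY - entropy(P_Y)   # chain rule: H(X|Y) = H(X,Y) - H(Y)

I = H_X - H_X_given_Y
print(I)  # about 0.531 bits per channel use
```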

Transmission over noisy channels


The question is: can we send information over a noisy channel without error, but at a non-zero rate? The answer is yes, and the limit on the rate is given by Shannon.

Example: for the following channel (a four-input, four-output channel; the figure is not reproduced here), two bits are sent per channel use. With a uniform input variable X, the input entropy is H(X) = 2 bits per symbol. There is no way to transmit 2 bits per channel use through this channel. However, it is possible to realize an ideal channel if we reduce the rate.


Transmission scheme
From the previous example, we change the probability distribution of X to P(X = 0) = P(X = 2) = 0.5 and P(X = 1) = P(X = 3) = 0, so H(X) = 1 bit per symbol. At the receiver, we can detect the transmitted symbol exactly, so the channel has been transformed into an ideal channel with a non-zero rate. This strategy gives an information transmission rate of 1 bit per symbol. Is it the maximum rate? The answer is yes, because:

- The entropy of Y is at most 2 bits: H(Y) ≤ 2.
- The conditional entropy H(Y|X) is the uncertainty about Y given X. If X is known, there are just two possibilities for Y, giving H(Y|X) = 1.

So I(X; Y) = H(Y) − H(Y|X) ≤ 2 − 1 = 1, and the information rate cannot be greater than 1 bit per channel use. The proposed scheme is therefore optimal. A numerical sketch of this argument follows below.
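The slides' figure is not reproduced, but the description matches a four-symbol "noisy typewriter" in which each input maps to one of two neighboring outputs with probability 1/2 (this exact channel is an assumption). A Python sketch comparing the uniform input with the proposed two-symbol input:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(P_X, P_Y_given_X):
    """I(X;Y) = H(Y) - H(Y|X) for a DMC given as a row-stochastic matrix."""
    P_Y = P_Y_given_X.T @ P_X
    H_Y_given_X = np.sum(P_X * np.array([entropy(row) for row in P_Y_given_X]))
    return entropy(P_Y) - H_Y_given_X

# Assumed channel: input i goes to output i or i+1 (mod 4), each with prob 1/2.
P = np.zeros((4, 4))
for i in range(4):
    P[i, i] = P[i, (i + 1) % 4] = 0.5

uniform = np.full(4, 0.25)
two_symbols = np.array([0.5, 0.0, 0.5, 0.0])  # P(X=0) = P(X=2) = 1/2

print(mutual_information(uniform, P))      # 1.0 bit, but decoding errors remain
print(mutual_information(two_symbols, P))  # 1.0 bit, now decodable error-free
```

The two-symbol input gives the same 1 bit per channel use, but its output sets {0, 1} and {2, 3} are disjoint, so the receiver decodes without error.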

Capacity of a noisy channel

Definition
The channel capacity of a discrete memoryless channel is defined by:

C = max_{p(x)} I(X; Y)
  = max_{p(x)} [H(Y) − H(Y|X)]
  = max_{p(x)} [H(X) − H(X|Y)]
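The maximization over p(x) can also be carried out numerically. Below is a minimal sketch of the Blahut-Arimoto algorithm, a standard method for this problem (it is not mentioned in the slides; this is an illustration, not the course's procedure):

```python
import numpy as np

def capacity(P, n_iter=200):
    """Blahut-Arimoto estimate (bits/use) for a DMC with row-stochastic P[i, j] = P(Y=j | X=i)."""
    M = P.shape[0]
    p = np.full(M, 1.0 / M)          # start from the uniform input distribution
    for _ in range(n_iter):
        q = P.T @ p                  # current output distribution q_j = sum_i p_i p_ij
        # D[i] = relative entropy D( P(.|X=i) || q ) in bits
        with np.errstate(divide="ignore", invalid="ignore"):
            D = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
        p = p * np.exp2(D)           # reinforce inputs with large divergence
        p /= p.sum()                 # renormalize the input distribution
    return float(np.sum(p * D))

# Example: BSC with crossover 0.1 -> capacity = 1 - H(0.1, 0.9) = 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(capacity(bsc))
```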


Example: Binary symmetric channel BSC


Choose P_X(x) to maximize I(X; Y).

[Figure: binary symmetric channel with X ∈ {0, 1} and Y ∈ {0, 1}; Y = X with probability 1 − p and Y ≠ X with probability p]

Solution: the channel can be modeled using a new binary RV Z with

Z = 1 with probability p,   Z = 0 with probability 1 − p.

Therefore we can write Y = X ⊕ Z. When X is given, the information on Y is the same as the information on Z, so H(Y|X) = H(Z), and H(Z) = H(p, 1 − p). Since H(Y) ≤ 1, we can write I(X; Y) ≤ 1 − H(Z). The maximum of this quantity is obtained when H(Y) = 1, i.e., when P_X(0) = P_X(1) = 0.5. The capacity is:

C = 1 − H(p, 1 − p)
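A quick check of C = 1 − H(p, 1 − p) in Python (the values of p are chosen for illustration):

```python
import numpy as np

def binary_entropy(p):
    """H(p, 1-p) in bits, with the convention H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for p in (0.0, 0.1, 0.5):
    print(p, 1 - binary_entropy(p))  # capacities: 1.0, ~0.531, 0.0
```

As expected, a noiseless channel (p = 0) carries 1 bit per use, while p = 0.5 makes Y independent of X and the capacity drops to zero.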

Binary erasure channel BEC


The binary erasure channel models the case where some bits are lost (rather than corrupted), and the receiver knows which bits have been erased. What is the capacity of the BEC?

C = max_{p(x)} [H(Y) − H(Y|X)] = max_{p(x)} [H(Y) − H(a, 1 − a)]

By symmetry, the maximizing input is P(X = 0) = P(X = 1) = 1/2. Then Y has three possible values, with probabilities P(Y = 0) = P(Y = 1) = (1 − a)/2 and P(Y = e) = a. So we can write:

C = H((1 − a)/2, (1 − a)/2, a) − H(a, 1 − a)
  = 1 − a bits per channel use
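The identity H((1 − a)/2, (1 − a)/2, a) − H(a, 1 − a) = 1 − a, checked numerically for an illustrative erasure probability:

```python
import numpy as np

def entropy(probs):
    p = np.array([x for x in probs if x > 0])
    return -np.sum(p * np.log2(p))

a = 0.3  # erasure probability (illustrative)
H_Y = entropy([(1 - a) / 2, (1 - a) / 2, a])
C = H_Y - entropy([a, 1 - a])
print(C, 1 - a)  # both 0.7
```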

Exercise
Another way to obtain the capacity of the BEC. Let us define the RV E for the BEC as

E = 1 if erasure (probability a),   E = 0 if no erasure (probability 1 − a).

Because C = max [H(Y) − H(Y|X)] and H(Y|X) = H(a, 1 − a), we calculate H(Y) using this new RV. Using the chain rule, we know that H(Y, E) = H(E) + H(Y|E) = H(Y) + H(E|Y). Since H(E|Y) = 0 (why?), H(Y) = H(E) + H(Y|E). Now H(E) = H(a, 1 − a) and H(Y|E) = P(E = 0) H(Y|E = 0) + P(E = 1) H(Y|E = 1). On the other hand, H(Y|E = 1) = 0 and H(Y|E = 0) = H(X). So H(Y) = H(a, 1 − a) + (1 − a) H(X), with H(X) = 1 at the maximum. So the capacity will be

C = H(a, 1 − a) + (1 − a) − H(a, 1 − a) = 1 − a.
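A short numeric check of the decomposition H(Y) = H(E) + H(Y|E) used above (values illustrative, uniform input assumed):

```python
import numpy as np

def entropy(probs):
    p = np.array([x for x in probs if x > 0])
    return -np.sum(p * np.log2(p))

a = 0.3                                      # erasure probability (illustrative)
P_Y = [(1 - a) / 2, (1 - a) / 2, a]          # P(Y=0), P(Y=1), P(Y=e)

H_E = entropy([a, 1 - a])                    # E is determined by whether Y = e
H_Y_given_E = (1 - a) * entropy([0.5, 0.5])  # H(Y|E=0) = H(X) = 1, H(Y|E=1) = 0

print(entropy(P_Y), H_E + H_Y_given_E)       # equal, since H(E|Y) = 0
```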

What is the transmission scheme?

We want to know how this rate can be achieved. Suppose a large packet of N symbols is transmitted and a return path is available. The receiver, knowing which symbols were not received, can ask the transmitter to retransmit the lost symbols. In each block, only N − aN symbols are passed to the receiver, so the rate 1 − a is achieved. Note: feedback does not increase the capacity of a DMC.
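A rough simulation of this feedback scheme (illustrative, assuming i.i.d. erasures), counting the total channel uses needed to deliver N symbols:

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.3          # erasure probability
N = 100_000      # symbols to deliver

uses = 0
remaining = N
while remaining > 0:
    erased = rng.random(remaining) < a  # which transmissions are erased
    uses += remaining
    remaining = int(erased.sum())       # only erased symbols are re-sent

print(N / uses, 1 - a)  # achieved rate approaches 1 - a
```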
