
Solutions of exercise problems

regarding the lecture


"Coding Theory"


Prof. Dr.-Ing. A. Czylwik

Yun Chen
Department Communication Systems
Room: BA 235, Phone: -1051, eMail: chen@nts.uni-due.de

Solution Problem 1
The discrete information source has a source alphabet X = {x1 , x2 , x3 }. To transmit the
symbols of the information source via a binary channel, the symbols are binary coded.
That means that a unique binary codeword is assigned to each symbol of the information
source.
The goal of source coding is to find a code that

• allows a unique decoding of the transmitted binary codewords

• minimizes the lengths of the binary codewords to reduce the redundancy of the
code

1.1 The information content I(xi ) of a symbol xi with the probability of occurrence
p(xi ) is defined as
 
        I(xi) = ld(1/p(xi)) = −ld(p(xi))                              (1)
              = −log2(p(xi))
              = −log10(p(xi)) / log10(2) = −log(p(xi)) / log(2)       (2)
So, the lower the probability of occurrence of a symbol is, the greater the information
content is and vice versa.
The unit of the information content is bit/symbol.
Using the given probabilities of occurrence of the symbols x1, x2 and x3 yields:

        x1 : p(x1) = 0.2  =⇒  I(x1) = −log(0.2)/log(2) = 2.32 bit/symbol    (3)
        x2 : p(x2) = 0.1  =⇒  I(x2) = −log(0.1)/log(2) = 3.32 bit/symbol    (4)
        x3 : p(x3) = 0.7  =⇒  I(x3) = −log(0.7)/log(2) = 0.51 bit/symbol    (5)

1.2 The entropy of the information source is a measure for the average information
content of the source. The definition of the entropy of the source with N symbols
x1 , x2 , . . . , xN is:

        H(X) = ⟨I(xi)⟩
             = Σ_{i=1}^{N} p(xi) · I(xi)                              (6)
             = Σ_{i=1}^{N} p(xi) · ld(1/p(xi))                        (7)

The entropy becomes maximum if the symbols are equiprobable.


Using the values for p(xi) and I(xi) yields:

        H(X) = (0.2 · 2.32 + 0.1 · 3.32 + 0.7 · 0.51) bit/symbol
             = 1.16 bit/symbol                                        (8)
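The numbers above can be checked with a few lines of Python (a sketch added for illustration, not part of the original solution):

```python
import math

# Source alphabet and probabilities of occurrence from Problem 1
p = {"x1": 0.2, "x2": 0.1, "x3": 0.7}

# Information content I(xi) = -ld(p(xi)) in bit/symbol, eqs. (3)-(5)
I = {s: -math.log2(q) for s, q in p.items()}
for s in ("x1", "x2", "x3"):
    print(f"I({s}) = {I[s]:.2f} bit/symbol")   # 2.32, 3.32, 0.51

# Entropy H(X) = sum of p(xi) * I(xi), eq. (8)
H = sum(p[s] * I[s] for s in p)
print(f"H(X) = {H:.2f} bit/symbol")            # 1.16
```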

1.3 The redundancy of the source Rs is a measure for the difference between the number
of binary decisions H0 that are necessary to select a symbol of the source (without
taking the probability of the symbols into account) and its entropy H(X).

Rs = H0 − H(X)

For a source with N symbols, the number of binary decisions H0 is calculated as

H0 = ld(N )

This is the maximum value for the entropy of an information source with N symbols
that are equiprobable.
The given source with N = 3 symbols yields:
        H0 = ld(3) = 1.58 bit/symbol
So, the redundancy of the source is:
        Rs = (1.58 − 1.16) bit/symbol = 0.42 bit/symbol

1.4 Shannon and Huffman codes are codes that fulfill the Shannon coding theorem:
“For every given information source with the entropy H(X), it is possible to find a
binary prefix code with an average codeword length L such that:”

H(X) ≤ L ≤ H(X) + 1 (9)

Thereby, a prefix code is a code where no codeword is the beginning of another
codeword.


1.4.1 Carrying out a Shannon-Coding

A Shannon code fulfills inequality (9) by choosing the length L(xi) of every
codeword in the range between its information content I(xi) and its
information content plus one:

        I(xi) ≤ L(xi) ≤ I(xi) + 1                                    (10)

The Shannon code can be derived in a schematic way:


(a) Sort the symbols xi such that their probabilities of occurrence p(xi) decrease.
(b) Calculate the information content I(xi) of the symbols.
(c) The length of every codeword has to fulfill inequality (10). Thus the length
    L(xi) of each codeword equals the smallest integer greater than or equal to
    the information content I(xi).
e.g.

        I(xi) = 1.3  =⇒  L(xi) = 2
        I(xi) = 1.7  =⇒  L(xi) = 2
        I(xi) = 2.1  =⇒  L(xi) = 3

(d) Calculate the accumulated probability P(xi) of every symbol. The accumulated
    probability is the sum of the probabilities of all previous symbols in the
    sorted list.
(e) The codeword for the symbol xi is the binary expansion of the accumulated
    probability P(xi), truncated after L(xi) digits.

Applying this schematic procedure yields:

        xi   p(xi)   P(xi)             I(xi)   L(xi)   2^−1 2^−2 2^−3 2^−4 2^−5 2^−6
        x3   0.7     0                 0.52    1       0    0    0    0    0    0
        x1   0.2     0.7               2.32    3       1    0    1    1    0    0
        x2   0.1     0.9 = 0.7 + 0.2   3.32    4       1    1    1    0    0    1

Example for the calculation of the binary expansion of P(x1) = 0.7 (symbol x1):

        0.7    − 2^−1 = 0.2     > 0   →  1
        0.2    − 2^−2           < 0   →  0
        0.2    − 2^−3 = 0.075   > 0   →  1
        0.075  − 2^−4 = 0.0125  > 0   →  1
        0.0125 − 2^−5           < 0   →  0
        0.0125 − 2^−6           < 0   →  0
        ...


The determined Shannon code for the given information source is:

        x1 : p(x1) = 0.2   →   101                                   (11)
        x2 : p(x2) = 0.1   →   1110                                  (12)
        x3 : p(x3) = 0.7   →   0                                     (13)

The symbol with the maximum probability has the minimum codeword length
and vice versa.
The Shannon code is not an optimal code, because not all possible end points
of the codeword tree are used.

[Figure 1: Code tree of the determined Shannon code. Only the end points
x3 (0), x1 (101) and x2 (1110) are used; the remaining end points are not
used (n.u.).]


1.4.2 Carrying out a Huffman coding

The schematic procedure for Huffman coding is shown by example:

STEP 1. The symbols are sorted by their probabilities, such that the probabilities
decrease.

xi x3 x1 x2
p(xi ) 0.7 0.2 0.1
Code

STEP 2. A “1” is assigned to the symbol with the minimum probability and a
“0” is assigned to the symbol with the second smallest probability.

xi x3 x1 x2
p(xi ) 0.7 0.2 0.1
Code 0 1

STEP 3. The two symbols with the smallest probabilities are combined into a
new symbol. The probability of the new combined symbol is the sum of the
single probabilities.

xi x3 x1 x2
p(xi ) 0.7 0.3 = 0.1 + 0.2
Code 0 1

Now, the sequence starts again with STEP 1. (sorting)

xi x3 x1 x2
p(xi ) 0.7 0.3
Code 0 1

The assignment of “0” and “1” to a combined symbol has to be made for every
symbol contained in the combined symbol; the newly assigned bit is placed on
the left side of the already assigned bits.

xi x3 x1 x2
p(xi ) 0.7 0.3
Code 0 10 11


STEP 4. Combining the symbols

xi x3 x1 x2
p(xi ) 1.0 = 0.7 + 0.3
Code 0 10 11

The coding is finished when all symbols are combined.


The probability of all combined symbols has to be 1.

The determined Huffman code for the given information source is:

        x1 : p(x1) = 0.2   →   10                                    (14)
        x2 : p(x2) = 0.1   →   11                                    (15)
        x3 : p(x3) = 0.7   →   0                                     (16)

The symbol with the maximum probability has the minimum codeword length
and vice versa.
The Huffman code is called an “optimal code”, because all possible end points
of the codeword tree are used.

[Figure 2: Code tree of the determined Huffman code. All end points are
used: x3 (0), x1 (10), x2 (11).]
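For illustration, a small Python sketch of the Huffman procedure described above (my addition; the helper `huffman` is a hypothetical name, not part of the original solution). For the given source it reproduces the codewords of eqs. (14)–(16), up to the arbitrary choice of which branch gets “0” and which gets “1”:

```python
import heapq

def huffman(probs):
    # heap entries: (probability, tie-break counter, {symbol: partial code});
    # the counter avoids comparing dicts when probabilities are equal
    heap = [(q, i, {s: ""}) for i, (s, q) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        q1, _, c1 = heapq.heappop(heap)   # smallest probability -> bit "1"
        q0, _, c0 = heapq.heappop(heap)   # second smallest      -> bit "0"
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (q0 + q1, count, merged))
        count += 1
    return heap[0][2]

print(huffman({"x1": 0.2, "x2": 0.1, "x3": 0.7}))
# {'x3': '0', 'x1': '10', 'x2': '11'}
```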


1.5 The redundancy of a code RC is a measure for the difference between the
    average codeword length L of the code and the entropy of the information
    source H(X):

        RC = L − H(X)

The average codeword length L is defined as the statistical average of the
lengths L(xi) of the codewords of the symbols xi:

        L = Σ_{i=1}^{N} p(xi) · L(xi)

• Shannon code

  Using the determined codewords (eqs. (11)–(13)) yields:

        LShannon = 0.2 · 3 + 0.1 · 4 + 0.7 · 1 = 1.7 bit/symbol

        =⇒ RC,Shannon = (1.7 − 1.16) bit/symbol = 0.54 bit/symbol

• Huffman code

  Using the determined codewords (eqs. (14)–(16)) yields:

        LHuffman = 0.2 · 2 + 0.1 · 2 + 0.7 · 1 = 1.3 bit/symbol

        =⇒ RC,Huffman = (1.3 − 1.16) bit/symbol = 0.14 bit/symbol

So, the “optimal” Huffman code has a significantly smaller redundancy than
the Shannon code.


Solution Problem 2
The information source used in exercise 1 creates its symbols statistically
independently. Thus, the probability of a sequence of two specific symbols xi, xj is:

p(xi , xj ) = p(xi ) · p(xj ) (17)

In this case, the probability of a symbol xi under the condition that the
previously transmitted symbol xj is known is just the probability of the symbol xi:

p(xi |xj ) = p(xi ) (18)

Now, the information source is more realistic: it creates the symbols statistically
dependently. This means that the probability of the currently transmitted symbol xi
is different if the sequence of the previously transmitted symbols changes.
Generally, the probability of the current symbol xi depends on the knowledge of the
previously transmitted sequence of symbols {xj , xk , xl , ...} and the probability
for this sequence:
 
        p(xi, xj, xk, xl, ...) = p(xi | {xj, xk, xl, ...}) · p({xj, xk, xl, ...})   (19)
                                 (the first factor is the transition probability)

Usually, and also in this case, the information source is a 1st-order Markov source.
This means: only the knowledge of the last transmitted symbol is necessary to
determine the transition probability. Thus, the probability of the current symbol xi
depends only on the last transmitted symbol xj and the probability for this symbol:

        p(xi | {xj, xk, xl, ...}) = p(xi | xj)

The following equations are always valid:

        p(xi) = Σ_{j=1}^{N} p(xi, xj) = Σ_{j=1}^{N} p(xi | xj) · p(xj)        (20)

        Σ_{i=1}^{N} p(xi | xj) = 1                                            (21)


2.1 The Markov diagram is shown below. The given probabilities are marked bold.

[Markov diagram with the three states x1, x2, x3 and the transition
probabilities p(x1|x1) = 0, p(x2|x1) = 0.5, p(x3|x1) = 0.5,
p(x1|x2) = 0.8, p(x2|x2) = 0, p(x3|x2) = 0.2,
p(x1|x3) = 0.171, p(x2|x3) = 0, p(x3|x3) = 0.829.]

The other (not bold) transition probabilities are determined as follows:

        p(x1|x2) + p(x2|x2) + p(x3|x2) = 1
        =⇒ p(x3|x2) = 1 − p(x1|x2) = 0.2                                       (I)

        p(x2) = p(x2|x1) · p(x1) + p(x2|x2) · p(x2) + p(x2|x3) · p(x3)
              = p(x2|x1) · p(x1)
        =⇒ p(x2|x1) = p(x2)/p(x1) = 0.1/0.2 = 0.5                              (II)

        =⇒ p(x3|x1) = 1 − p(x2|x1) − p(x1|x1)
                    = 1 − p(x2|x1) = 0.5                                       (III)

        p(x1) = p(x1|x2) · p(x2) + p(x1|x3) · p(x3)
        =⇒ p(x1|x3) = (p(x1) − p(x1|x2) · p(x2)) / p(x3) = 0.12/0.7 = 12/70
                    = 0.171                                                    (IV)

        =⇒ p(x3|x3) = 1 − p(x1|x3) − p(x2|x3)
                    = 1 − p(x1|x3) = 58/70 = 0.829                             (V)


2.2 Two symbols X, Y are combined into a new symbol Z, e.g. assume the output
    of the information source:

        ( x3 x1 , x3 x2 , x3 x2 , ... )
           zn      zn−1    zn−2

        zn =̂ current pair of symbols, zn−1 =̂ last pair of symbols,
        zn−2 =̂ pair of symbols before the last pair

        H(X, Y) = Σ_{i=1}^{3} Σ_{j=1}^{3} p(xi, xj) · ld(1/p(xi, xj))

    With p(xi, xj) = p(xi|xj) · p(xj) follows:

        p(x1, x1) = 0     · 0.2 = 0
        p(x1, x2) = 0.8   · 0.1 = 0.08
        p(x1, x3) = 0.171 · 0.7 = 0.12
        p(x2, x1) = 0.5   · 0.2 = 0.1
        p(x2, x2) = 0     · 0.1 = 0
        p(x2, x3) = 0     · 0.7 = 0
        p(x3, x1) = 0.5   · 0.2 = 0.1
        p(x3, x2) = 0.2   · 0.1 = 0.02
        p(x3, x3) = 0.829 · 0.7 = 0.58

        =⇒ H(X, Y) = 1.892 < 2 · H(X) = 2.314

    where 2 · H(X) =̂ H(X, Y) if X, Y are statistically independent.
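A short numerical check (my addition, not part of the original solution) of H(X, Y) against 2 · H(X), using the transition probabilities of the Markov diagram in 2.1:

```python
import math

p  = {"x1": 0.2, "x2": 0.1, "x3": 0.7}          # symbol probabilities
pt = {("x1", "x2"): 0.8,  ("x1", "x3"): 12/70,  # nonzero p(xi|xj)
      ("x2", "x1"): 0.5,  ("x3", "x1"): 0.5,
      ("x3", "x2"): 0.2,  ("x3", "x3"): 58/70}

# H(X,Y) from the joint probabilities p(xi, xj) = p(xi|xj) * p(xj)
H_joint = -sum(c * p[xj] * math.log2(c * p[xj]) for (xi, xj), c in pt.items())
H = -sum(q * math.log2(q) for q in p.values())
print(f"H(X,Y) = {H_joint:.3f} < 2*H(X) = {2*H:.3f}")   # 1.892 < 2.314
```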

2.3 Coding of statistically dependent symbols
    =⇒ coding using the knowledge of the transition probabilities

    Definition:  xi =̂ current transmitted symbol of the sequence
                 xj =̂ last transmitted symbol of the sequence
                 xk =̂ last but one transmitted symbol of the sequence
                 xl =̂ symbol that was transmitted before xk

    =⇒ e.g. assume the sequence of transmitted symbols

        ( xi xj , xk xl , ... )
           zn      zn−1

Date: 12. Juli 2011


NTS/IW Exercise problems for Page
UDE Coding Theory 11/57

    =⇒ zn   = {xi, xj} =̂ current transmitted pair of symbols
    =⇒ zn−1 = {xk, xl} =̂ last transmitted pair of symbols

    Coding uses:

        p(zn | zn−1) = p({xi, xj} | {xk, xl})
                     = p({xi, xj} | xk)
                     = p(xi, xj, xk) / p(xk)
                     = p(xi | {xj, xk}) · p(xj, xk) / p(xk)
                     = p(xi | {xj, xk}) · p(xj | xk)
                     = p(xi | xj) · p(xj | xk)

    The codeword for the pair of symbols zn = {xi, xj} depends on the symbol
    xk that was transmitted before {xi, xj}. The last column lists the
    contribution p({xi, xj}|xk) · L({xi, xj}|xk) to the average length in bit:

        xk   xj xi   p({xi, xj}|xk)          Code   p · L / bit
        x1   x2 x1   0.5 · 0.8     = 0.4     00     0.8
        x1   x2 x3   0.5 · 0.2     = 0.1     010    0.3
        x1   x3 x1   0.5 · 0.171   = 0.086   011    0.258
        x1   x3 x3   0.5 · 0.829   = 0.415   1      0.415
                                                    Lx1 = 1.77
        x2   x1 x2   0.8 · 0.5     = 0.4     1      0.4
        x2   x1 x3   0.8 · 0.5     = 0.4     00     0.8
        x2   x3 x1   0.2 · 0.171   = 0.034   011    0.102
        x2   x3 x3   0.2 · 0.829   = 0.166   010    0.498
                                                    Lx2 = 1.8
        x3   x1 x2   0.171 · 0.5   = 0.086   101    0.258
        x3   x1 x3   0.171 · 0.5   = 0.086   100    0.258
        x3   x3 x1   0.829 · 0.171 = 0.142   11     0.284
        x3   x3 x3   0.829 · 0.829 = 0.687   0      0.687
                                                    Lx3 = 1.487


        L{xi, xj} = p(x1) · Lx1 + p(x2) · Lx2 + p(x3) · Lx3
                  = 0.354 + 0.18 + 1.041
                  = 1.575 bit/pair of symbols

        =⇒ L = L{xi, xj} / 2 = 0.788 bit/symbol


Solution Problem 3

3.1 Generally, for a 1st-order Markov source, the probability of a symbol xi
    depends on the state of the source.

    Current state (point in time):  k
    Previous state (point in time): k − 1

        pk(xi) = Σ_{j=1}^{N} pk(xi | xj) · pk−1(xj)

    This equation can be written in matrix form by using:

    • the probability vector at the k-th state:      wk
    • the probability vector at the (k−1)-th state:  wk−1
    • the transition matrix:                         P

        wk   = ( pk(x1), pk(x2), ..., pk(xN) )
        wk−1 = ( pk−1(x1), pk−1(x2), ..., pk−1(xN) )

            [ p(x1|x1)  p(x2|x1)  ...  p(xN|x1) ]   [ p11  p12  ...  p1N ]
        P = [ p(x1|x2)  p(x2|x2)  ...  p(xN|x2) ] = [ p21  p22  ...  p2N ]
            [   ...       ...     ...    ...    ]   [ ...  ...  ...  ... ]
            [ p(x1|xN)  p(x2|xN)  ...  p(xN|xN) ]   [ pN1  pN2  ...  pNN ]

        =⇒ wk = wk−1 · P

    Here: stationary 1st-order Markov source in the steady state (the source
    was switched on a long time ago), so the probability of a symbol no
    longer depends on the state k:

        k → ∞  =⇒  pk(xi) = pk−1(xi) = p(xi)  =̂  wk = wk−1 = w

Date: 12. Juli 2011


NTS/IW Exercise problems for Page
UDE Coding Theory 14/57

    The matrix equation can be rewritten using the steady state probabilities
    wi = p(xi):

        w = w · P
        =⇒ (w1, w2, w3, w4) = (w1, w2, w3, w4) · P

    The transition matrix P can be obtained from the given Markov diagram
    (rows: previous symbol zn−1, columns: current symbol zn):

        zn−1 \ zn   x1     x2     x3    x4
        x1          0.75   0.25   0     0
        x2          0      0.75   0.25  0
        x3          0      0      0.5   0.5
        x4          0.5    0      0     0.5

               [ 0.75  0.25  0     0   ]
        =⇒ P = [ 0     0.75  0.25  0   ]
               [ 0     0     0.5   0.5 ]
               [ 0.5   0     0     0.5 ]

    The steady state probabilities wi = p(xi) are obtained by evaluating the
    matrix equation and using the normalization property of probabilities
    (the sum over all probabilities equals one):

        w1 = (3/4) w1 + (1/2) w4   =⇒  w1 = 2 w4
        w2 = (1/4) w1 + (3/4) w2   =⇒  w2 = w1 = 2 w4
        w3 = (1/4) w2 + (1/2) w3   =⇒  w3 = (1/2) w2
        w4 = (1/2) w3 + (1/2) w4   =⇒  w3 = w4

        w1 + w2 + w3 + w4 = 1
        =⇒ 2 w4 + 2 w4 + w4 + w4 = 1
        =⇒ w4 = p(x4) = 1/6
        =⇒ w3 = p(x3) = 1/6
        =⇒ w2 = p(x2) = 1/3
        =⇒ w1 = p(x1) = 1/3
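The steady state distribution can be cross-checked numerically, e.g. by power iteration of w = w · P (a sketch I added; it is not part of the original solution):

```python
P = [[0.75, 0.25, 0.0, 0.0],
     [0.0, 0.75, 0.25, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.5, 0.0, 0.0, 0.5]]

w = [0.25, 0.25, 0.25, 0.25]   # any start distribution converges here
for _ in range(200):           # iterate w <- w * P
    w = [sum(w[i] * P[i][j] for i in range(4)) for j in range(4)]

print([round(x, 4) for x in w])   # [0.3333, 0.3333, 0.1667, 0.1667]
```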


3.2 The steady state entropy is the expected value over all conditional
    entropies:

        H∞(z) = E{H(zn | zn−1)} = Σ_{i=1}^{N} wi · H(zn | zn−1 = xi)

        H(zn | zn−1 = xi) = Σ_{j=1}^{N} p(xj | xi) · ld(1/p(xj | xi))
                          = Σ_{j=1}^{N} pij · ld(1/pij)

        =⇒ H(zn | zn−1 = x1) = 0.75 · ld(1/0.75) + 0.25 · ld(1/0.25) = 0.811
           H(zn | zn−1 = x2) = 0.811
           H(zn | zn−1 = x3) = 0.5 · ld(1/0.5) + 0.5 · ld(1/0.5) = 1
           H(zn | zn−1 = x4) = 1

        =⇒ H∞(z) = (1/3) · 0.811 + (1/3) · 0.811 + (1/6) · 1 + (1/6) · 1
                 = 0.874 bit/symbol

3.3 Source without memory: the symbols are statistically independent.

        =⇒ H(X) = Σ_{i=1}^{N} p(xi) · ld(1/p(xi))
                = 0.528 + 0.528 + 0.431 + 0.431
                = 1.918 bit/symbol


Solution Problem 4
Lempel-Ziv algorithm: source encoding without knowing the statistical
properties of the information source (type of source, probabilities of the
symbols, transition probabilities, ...).
Input of the encoder: sequence of symbols (output of the information source).
Encoding exploits repetitions of subsequences within the input sequence to
encode the input sequence.

[Figure: sliding window consisting of the search buffer (positions 0–7) and
the look-ahead buffer (positions 0–7); the input sequence is shifted through
the window.]

The algorithm searches within the search buffer for repetitions of
subsequences of the look-ahead buffer.
Instead of the whole input sequence, it transmits codewords c containing the
starting position of the subsequence in the search buffer, the number of
repeated symbols, and the next symbol after the repetition sequence in the
look-ahead buffer.

General definitions:
Size of the sliding window:      n
Length of the look-ahead buffer: Ls
=⇒ Length of the search buffer:  n − Ls

Base of the symbol alphabet: N
e.g. binary alphabet: N = 2, octal alphabet: N = 8, hex alphabet: N = 16,
ASCII: N = 256

Length of the codeword (depends on the base of the symbol alphabet N):

• (n − Ls) different possible starting positions in the search buffer
  =⇒ logN(n − Ls) symbols are needed to encode the starting position

• possible number of repetitions: 0 ... Ls − 1
  (not 0 ... Ls, because one symbol in the look-ahead buffer is needed for
  the next symbol)
  =⇒ Ls different values for the number of repetitions
  =⇒ logN(Ls) symbols are needed to encode the number of repetitions


• 1 symbol is needed to encode the next symbol

=⇒ length of the codewords: logN(n − Ls) + logN(Ls) + 1

=⇒ c = ( starting position , number of repetitions , next symbol )
       with logN(n − Ls), logN(Ls) and 1 symbols, respectively

4.1 The parameters for the given problem are:

N = 8
n = 16
Ls = 8

So, the codewords consist of 3 symbols:

• one symbol for the starting position: logN (n − Ls ) = log8 (8) = 1


• one symbol for the number of repetitions: logN (Ls ) = log8 (8) = 1
• one symbol for the next symbol

Encoding is done as follows:

• At the beginning of the encoding procedure, the search buffer is filled up with
’0’.
• The input sequence is shifted into the look ahead buffer and the algorithm
searches for subsequences within the search buffer that match to the beginning
sequence of the look ahead buffer. The longest repetition sequence is used. If
there are two or more repetition sequences with the same maximum length,
one is chosen arbitrarily. The repetition sequence starting in the search buffer
can overlap into the look ahead buffer.
• The codeword is determined and the input sequence is shifted by (number of
repetition symbols +1 ) into the sliding window.
• ...


The encoding steps (search buffer | look-ahead buffer =⇒ codeword):

        step   search buffer      look-ahead buffer   codeword
        1      0 0 0 0 0 0 0 0    0 0 4 0 4 0 5 3     c1 = {5, 2, 4}
        2      0 0 0 0 0 0 0 4    0 4 0 5 3 4 0 5     c2 = {6, 3, 5}
        3      0 0 0 4 0 4 0 5    3 4 0 5 4 0 5 7     c3 = {2, 0, 3}
        4      0 0 4 0 4 0 5 3    4 0 5 3 4 0 5 7     c4 = {4, 7, 7}
        5      4 0 5 3 4 0 5 7    5 1                 c5 = {2, 1, 1}

So, the encoder uses 5 code words with 3 symbols each (altogether 15 symbols) to
encode an input sequence of 18 symbols. The performance of the encoder improves,
if there are many repetitions within the input sequence.
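A compact Python sketch of the described encoder (my addition, assuming the buffer sizes above; ties between equally long repetition sequences are broken arbitrarily, so some starting positions differ from the worked example):

```python
def lz77_encode(seq, sb=8, la=8):
    buf = [0] * sb + list(seq)              # search buffer preloaded with '0'
    pos, codewords = sb, []
    while pos < len(buf):
        best_start, best_len = 0, 0
        # at most Ls - 1 repetitions; one symbol is reserved as "next symbol"
        max_len = min(la - 1, len(buf) - pos - 1)
        for start in range(sb):
            length = 0
            while (length < max_len and
                   buf[pos - sb + start + length] == buf[pos + length]):
                length += 1                 # match may overlap the look-ahead
            if length > best_len:
                best_start, best_len = start, length
        codewords.append((best_start, best_len, buf[pos + best_len]))
        pos += best_len + 1
    return codewords

seq = [0,0,4,0,4,0,5,3,4,0,5,3,4,0,5,7,5,1]
print(lz77_encode(seq))
# [(0, 2, 4), (6, 3, 5), (0, 0, 3), (4, 7, 7), (2, 1, 1)] --
# c1 and c3 pick a different (equally long) starting position than the
# worked example, since such ties are resolved arbitrarily.
```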


4.2 Decoding is done as follows:

• At the beginning of the decoding procedure, the search buffer is filled up with
’0’ and the look ahead buffer is empty.
• The codeword to be decoded tells the starting position, the length of the
repetition sequence and the next symbol. Using this information, the repetition
sequence together with the next symbol is written into the look ahead buffer.
This is the decoded sequence.
• The symbol sequence is shifted into the search buffer such that the look ahead
buffer is empty.
• ...


The decoding steps (codeword, search buffer =⇒ decoded sequence):

        codeword         search buffer      decoded sequence
        c1 = {5, 2, 4}   0 0 0 0 0 0 0 0    0 0 4
        c2 = {6, 3, 5}   0 0 0 0 0 0 0 4    0 4 0 5
        c3 = {2, 0, 3}   0 0 0 4 0 4 0 5    3
        c4 = {4, 7, 7}   0 0 4 0 4 0 5 3    4 0 5 3 4 0 5 7
        c5 = {2, 1, 1}   4 0 5 3 4 0 5 7    5 1

The total decoded sequence is:

        s =̂ 0 0 4 0 4 0 5 3 4 0 5 3 4 0 5 7 5 1

This is the input sequence that was encoded.


Solution Problem 5

5.1
The probabilities for the input symbols are:
p(x1 ) = p1
p(x2 ) = 1 − p(x1 ) = 1 − p1

Using the transition probabilities, one gets the probabilities of the output
symbols:
        p(y1) = p(x1) · p(y1|x1) = p1 · (1 − perr)
        p(y2) = (1 − p1) · (1 − perr)
        p(y3) = p1 · perr + (1 − p1) · perr = perr

The transinformation flow is the output entropy minus the information added
on the channel:

        T(X, Y) = H(Y) − H(Y|X)

Using the definition of the output entropy and the probabilities of the
output symbols:

        H(Y) = Σ_{i=1}^{3} p(yi) · ld(1/p(yi))
             = p1 (1 − perr) · ld(1/(p1 (1 − perr)))
               + (1 − p1)(1 − perr) · ld(1/((1 − p1)(1 − perr)))
               + perr · ld(1/perr)
             = p1 (1 − perr) · [ld(1/p1) + ld(1/(1 − perr))]
               + (1 − p1)(1 − perr) · [ld(1/(1 − p1)) + ld(1/(1 − perr))]
               + perr · ld(1/perr)
             = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
               + perr · [ld(1/perr) − ld(1/(1 − p1)) − ld(1/(1 − perr))]
               + ld(1/(1 − p1)) + ld(1/(1 − perr))

Date: 12. Juli 2011


NTS/IW Exercise problems for Page
UDE Coding Theory 22/57

Using the definition of the irrelevance H(Y|X):

        H(Y|X) = Σ_{i=1}^{3} Σ_{j=1}^{2} p(yi, xj) · ld(1/p(yi|xj))
               = Σ_{i=1}^{3} Σ_{j=1}^{2} p(xj) · p(yi|xj) · ld(1/p(yi|xj))
               = p1 (1 − perr) · ld(1/(1 − perr))
                 + p1 perr · ld(1/perr)
                 + (1 − p1) perr · ld(1/perr)
                 + (1 − p1)(1 − perr) · ld(1/(1 − perr))
               = perr · [ld(1/perr) − ld(1/(1 − perr))] + ld(1/(1 − perr))

Using the output entropy and the irrelevance, the transinformation flow can
be determined:

        T(X, Y) = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
                  + perr · [ld(1/perr) − ld(1/(1 − p1)) − ld(1/(1 − perr))]
                  + ld(1/(1 − p1)) + ld(1/(1 − perr))
                  − perr · [ld(1/perr) − ld(1/(1 − perr))] − ld(1/(1 − perr))
                = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
                  + (1 − perr) · ld(1/(1 − p1))
                = (1 − perr) · [p1 · ld(1/p1) + (1 − p1) · ld(1/(1 − p1))]
                = −(1 − perr) · [p1 · ld(p1) + (1 − p1) · ld(1 − p1)]


5.2 The channel capacity is the maximum transinformation flow with respect
    to the probabilities of the input symbols:

        C = (1/∆T) · max_{p(xi)} T(X, Y)

    =⇒ wanted: the maximum of a function
    =⇒ first derivative:

        ∂T(X, Y)/∂p1 = T′(X, Y) = 0

        T′(X, Y) = −(1 − perr) · [ ld(p1) + p1 · 1/(p1 · ln 2)
                                   − ld(1 − p1) − (1 − p1) · 1/((1 − p1) · ln 2) ]
                 = −(1 − perr) · [ ld(p1) + 1/ln 2 − ld(1 − p1) − 1/ln 2 ]
                 = −(1 − perr) · [ ld(p1) − ld(1 − p1) ]  =  0

        =⇒ ld(p1) = ld(1 − p1)
        =⇒ p1 = 1 − p1
        =⇒ p1 = 1/2

        T(X, Y) | p1 = 1/2  =  (1 − perr)

        =⇒ C = (1 − perr) · (1/∆T)
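A quick numerical cross-check (my addition, not part of the original solution) that T(X, Y) is maximized at p1 = 1/2, where C · ∆T = 1 − perr:

```python
import math

def T(p1, perr):   # transinformation flow per channel use
    return -(1 - perr) * (p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))

perr = 0.1
best = max((T(k / 1000, perr), k / 1000) for k in range(1, 1000))
print(best)        # (0.9, 0.5): the maximum 1 - perr is reached at p1 = 1/2
```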

5.3 Shannon's channel coding theorem:
    If the average information flow R (in bits of information per second) of
    an information source is smaller than the channel capacity C, there is a
    source and channel coding/decoding method such that the information of
    the source can be transmitted via the channel and the residual error
    probability after decoding can be made arbitrarily small.
    Shannon stated that such a source and channel (de-)coding method exists,
    but he did not say anything about how to find it.
    The entropy H(X̂) of the source is the average information content per
    symbol. The source is the same as in Problem 1 and its entropy was
    determined to:

        H(X̂) = 1.16 bit/symbol


    The source emits symbols with a symbol rate Rsymbol, [Rsymbol] =
    symbol/second. So, the information flow of the source is:

        R = H(X̂) · Rsymbol = 1.16 bit/symbol · Rsymbol

    The channel capacity of the binary erasure channel is
    C = (1 − perr) · (1/∆T), where ∆T = 1/Rbinary,c is the binary symbol
    period after source and channel encoding (i.e. the binary symbol period
    of the transmission via the channel).
    With perr = 0.1 and a binary symbol rate over the channel of
    Rbinary,c = 1000 symbols/second, the channel capacity is:

        C = (1 − perr) · Rbinary,c = 900 bit/second

    So, the information flow of the source must be less than 900 bit/second:

        R = 1.16 bit/symbol · Rsymbol ≤ C = 900 bit/second

        =⇒ Rsymbol = (900 bit/second) / (1.16 bit/symbol)
                   = 775.862 symbol/second


Solution Problem 6

6.1 Each linear block code can be described by:

        c = u · G

    u : uncoded information word, k bits
    c : codeword for the information word u, n bits
    G : generator matrix, k × n matrix (k rows, n columns)

    Each information word corresponds to a unique codeword and vice versa.
    The number of rows of the generator matrix is the number of information
    bits k of the information words; the number of columns of the generator
    matrix is the number of codeword bits n of the codewords.

    =⇒ k = 3
    =⇒ n = 7
    =⇒ N = 2^k = 2^3 = 8 codewords

    code rate: RC = k/n = 3/7 = 0.43
    =̂ ratio of the number of information bits to the number of codeword bits

 
    With the information word u = ( u0 u1 u2 ), the matrix multiplication is
    as follows:

                                   [ 1 1 0 1 0 0 1 ]
        c = u · G = ( u0 u1 u2 ) · [ 1 0 1 0 0 1 1 ]
                                   [ 1 1 1 0 1 0 0 ]

          =   u0 · ( 1 1 0 1 0 0 1 )     (1st row of G)
            + u1 · ( 1 0 1 0 0 1 1 )     (2nd row of G)
            + u2 · ( 1 1 1 0 1 0 0 )     (3rd row of G)

    Summation and multiplication are done in the binary domain
    (0 + 0 = 0, 0 + 1 = 1, 1 + 1 = 0).

Date: 12. Juli 2011


NTS/IW Exercise problems for Page
UDE Coding Theory 26/57

    All codewords (with G as above) and their Hamming weights wH(ci):

        u0 = 0 0 0   =⇒  c0 = 0 0 0 0 0 0 0    wH(c0) = 0
        u1 = 0 0 1   =⇒  c1 = 1 1 1 0 1 0 0    wH(c1) = 4
        u2 = 0 1 0   =⇒  c2 = 1 0 1 0 0 1 1    wH(c2) = 4
        u3 = 0 1 1   =⇒  c3 = 0 1 0 0 1 1 1    wH(c3) = 4
        u4 = 1 0 0   =⇒  c4 = 1 1 0 1 0 0 1    wH(c4) = 4
        u5 = 1 0 1   =⇒  c5 = 0 0 1 1 1 0 1    wH(c5) = 4
        u6 = 1 1 0   =⇒  c6 = 0 1 1 1 0 1 0    wH(c6) = 4
        u7 = 1 1 1   =⇒  c7 = 1 0 0 1 1 1 0    wH(c7) = 4

    The complete code is given by all linear combinations of the rows of G.
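The code table can be generated with a short script (my addition), multiplying every information word by G over GF(2):

```python
import itertools

G = [[1, 1, 0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0, 1, 1],
     [1, 1, 1, 0, 1, 0, 0]]

for u in itertools.product([0, 1], repeat=3):
    c = [sum(u[i] * G[i][j] for i in range(3)) % 2 for j in range(7)]
    print(u, c, "wH =", sum(c))    # every nonzero codeword has weight 4
```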

6.2 The minimum distance dmin of the code is the minimum number of digits in
    which two codewords differ. It is shown in the lecture that the minimum
    distance equals the minimum weight of the nonzero codewords:

        dmin = min { wH(ci) | ci ≠ 0 } = 4

    =⇒ number of errors in a codeword that can be detected at the decoder
       side:

        te = dmin − 1 = 3

    number of errors that can be corrected at the decoder side:

        t = (dmin − 2)/2    if dmin is even
        t = (dmin − 1)/2    if dmin is odd

    Here dmin is even:

        =⇒ t = (dmin − 2)/2 = 1
6.3 “Each linear block code can be converted into an equivalent systematic
    code.”

        G → G′,  c → c′,  G′ = k × n matrix

    The generator matrix G′ of the systematic code has the following
    structure:

        G′ = ( Ik | P )     Ik : identity matrix (k × k)
                            P  : parity bit matrix ( k × (n − k) )


 
             [ 1 0 0 ]
        I3 = [ 0 1 0 ]
             [ 0 0 1 ]

    The rows of G′ are generated by combinations of the rows of G, such that
    the first part of G′ is the identity matrix Ik:

        (1st + 2nd + 3rd) row of G  →  [ 1 0 0 | 1 1 1 0 ]
        (2nd + 3rd) row of G        →  [ 0 1 0 | 0 1 1 1 ]   = G′
        (1st + 3rd) row of G        →  [ 0 0 1 | 1 1 0 1 ]

    So, for the given code the parity bit matrix P is:

            [ 1 1 1 0 ]
        P = [ 0 1 1 1 ]
            [ 1 1 0 1 ]

    The codewords of the systematic code are obtained by the matrix equation
    c′ = u · G′:

             [ 1 0 0 | 1 1 1 0 ]
        G′ = [ 0 1 0 | 0 1 1 1 ]
             [ 0 0 1 | 1 1 0 1 ]

        ua = ( 1 0 1 )  =⇒  c′a = ( 1 0 1 | 0 0 1 1 )
                                   = ua   parity check bits
        ub = ( 0 1 1 )  =⇒  c′b = ( 0 1 1 | 1 0 1 0 )
                                   = ub   parity check bits

6.4 The parity check matrix H′ is used for error detection and error
    correction.

    Property of every parity check matrix H:

        c · H^T = 0   if c is a valid codeword
        x · H^T ≠ 0   if x is not a valid codeword

    Generation of H′:


        G′ = ( Ik | P )        generator matrix
        H′ = ( P^T | In−k )    parity check matrix

    With the above determined parity bit matrix P:

            [ 1 1 1 0 ]            [ 1 0 1 ]
        P = [ 0 1 1 1 ]   →  P^T = [ 1 1 1 ]
            [ 1 1 0 1 ]            [ 1 1 0 ]
                                   [ 0 1 1 ]

                [ 1 0 1 | 1 0 0 0 ]
        =⇒ H′ = [ 1 1 1 | 0 1 0 0 ]
                [ 1 1 0 | 0 0 1 0 ]
                [ 0 1 1 | 0 0 0 1 ]

    H′ is the parity check matrix for the code c, the code c′ and all
    equivalent codes (codes with the same set of codewords)!
6.5 Transmission model:

        u → [G′] → x → channel → y → [H′] → x (error-free)

    =⇒ y is the output of the channel:

        y = x + e      (x: codeword, e: error vector)

    Syndrome vector:

        s = y · H′^T
          = (x + e) · H′^T
          = x · H′^T + e · H′^T = e · H′^T      since x · H′^T = 0


    =⇒ syndrome table for single errors (=̂ e has only one “1”)

    Error at bit no. 2, e.g. e = ( 0 0 1 0 0 0 0 ):

        =⇒ s = e · H′^T = ( 1 1 0 1 ) =̂ third column of H′

        error at bit no.   syndrome s
        0                  1 1 1 0
        1                  0 1 1 1
        2                  1 1 0 1
        3                  1 0 0 0
        4                  0 1 0 0
        5                  0 0 1 0
        6                  0 0 0 1
        no error           0 0 0 0

6.6 Received word y (perhaps with errors):

    step 1: calculate the syndrome s = y · H′^T

    step 2: check s

        if s = 0  =⇒  accept the received word
                      (perhaps there are more than te = 3 errors)

        if s ≠ 0  =⇒  search in the table
            a) s is included in the table
               =⇒ determine the error vector e
            b) s is not included in the table
               =⇒ more than t = 1 error
               =⇒ not correctable

    step 3: correction of the error

        ycorr = y + e


 
                [ 1 1 1 0 ]
                [ 0 1 1 1 ]
                [ 1 1 0 1 ]
        H′^T =  [ 1 0 0 0 ]
                [ 0 1 0 0 ]
                [ 0 0 1 0 ]
                [ 0 0 0 1 ]

        ya = ( 0 0 0 1 1 0 1 )   =⇒   sa = ( 1 1 0 1 )
        yb = ( 1 1 1 0 0 0 0 )   =⇒   sb = ( 0 1 0 0 )
        yc = ( 1 0 0 0 1 0 0 )   =⇒   sc = ( 1 0 1 0 )

    The corresponding error vectors are obtained from the syndrome table:

        sa = ( 1 1 0 1 )  =⇒  ea = ( 0 0 1 0 0 0 0 )
        sb = ( 0 1 0 0 )  =⇒  eb = ( 0 0 0 0 1 0 0 )
        sc = ( 1 0 1 0 )  =⇒  not included in the table

        =⇒ ya,corr = ya + ea = ( 0 0 1 1 1 0 1 )
           yb,corr = yb + eb = ( 1 1 1 0 1 0 0 )
           yc,corr  =⇒  not correctable

6.7
        s = y · H′^T
        =⇒ ( s0 s1 s2 s3 ) = ( y0 y1 y2 y3 y4 y5 y6 ) · H′^T

        s0 = y0 + y2 + y3
        s1 = y0 + y1 + y2 + y4      =⇒ parity equations
        s2 = y0 + y1 + y5
        s3 = y1 + y2 + y6
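A sketch (my addition, not part of the original solution) of the decoding procedure of 6.5/6.6, using the columns of H′ as the syndrome table:

```python
H = [[1, 0, 1, 1, 0, 0, 0],   # rows of H'; its columns are the
     [1, 1, 1, 0, 1, 0, 0],   # single-error syndromes
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]

def syndrome(y):
    return tuple(sum(y[j] * H[i][j] for j in range(7)) % 2 for i in range(4))

# syndrome table for single errors: syndrome -> error position
table = {tuple(H[i][pos] for i in range(4)): pos for pos in range(7)}

def decode(y):
    s = syndrome(y)
    if s == (0, 0, 0, 0):
        return y                    # accepted (or an undetectable error)
    if s in table:
        pos = table[s]              # flip the erroneous bit
        return y[:pos] + [1 - y[pos]] + y[pos + 1:]
    return None                     # more than t = 1 error, not correctable

print(decode([0, 0, 0, 1, 1, 0, 1]))   # [0, 0, 1, 1, 1, 0, 1]
print(decode([1, 1, 1, 0, 0, 0, 0]))   # [1, 1, 1, 0, 1, 0, 0]
print(decode([1, 0, 0, 0, 1, 0, 0]))   # None (not correctable)
```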


Solution Problem 7
Generally, for a binary (n, k) block code with the capability to correct t
errors:

        2^k · Σ_{i=0}^{t} (n over i) ≤ 2^n

Hamming codes are perfect codes, which satisfy the equality:

        2^k · Σ_{i=0}^{t} (n over i) = 2^n

=⇒ All received words fall into the decoding spheres with a codeword in the center.

[Figure: decoding spheres of radius t = 2 around codewords with Hamming
distance dH = 5.]

For each decoding sphere there is a valid codeword in the center, and the
sphere contains other words that differ from it in at most t bits. Each
received word that falls within a decoding sphere is decoded to the codeword
in its center.
=⇒ These invalid words can differ from the valid codeword in the center in

        dH = 1 bit
        dH = 2 bits
        ...
        dH = t bits

Generally, if an invalid word has a difference of t bits from a codeword,
t “1”s have to be distributed over the n bits.


 
=⇒ (n over t) possible invalid words differ from the codeword in t bits

=⇒ number of vectors within a decoding sphere:

        1 + (n over 1) + (n over 2) + ... + (n over t)
        (the codeword itself, plus the words different in 1, 2, ..., t bits)

With 2^k codewords, the number of vectors within all decoding spheres must
not exceed the total number of vectors:

        2^k · [ (n over 0) + (n over 1) + ... + (n over t) ]
        = 2^k · Σ_{i=0}^{t} (n over i) ≤ 2^n

7.1 Hamming code:

        2^k · Σ_{i=0}^{t} (n over i) ≤ 2^n

    given: k = 4, t = 1

        2^k · [ (n over 0) + (n over 1) ] = 2^n
        =⇒ 1 + n = 2^(n−k) = 2^m
        =⇒ 1 + (m + k) = 2^m
        =⇒ 1 + k = 2^m − m
        =⇒ 2^m − m = 5

        m    2^m − m
        1    1
        2    2
        3    5
        4    12

        =⇒ m = 3,  n = k + m = 4 + 3 = 7


7.2 Hamming codes can be written as systematic codes:

        G = ( Ik | P )   ⟺   H = ( P^T | In−k )

    syndrome: s = e · H^T

    • n = 7 different syndromes are used for the 7 positions of single
      errors; the all-zero vector =̂ no error
    • single error at position i
      =⇒ syndrome si is row no. i of H^T
      ⟺ syndrome si is column no. i of H

    =⇒ all n rows of H^T must be different and no row may be the all-zero
       vector
    =⇒ use all different vectors si (except s = 0) as the columns of H:

               [ 1 1 1 0 | 1 0 0 ]
        =⇒ H = [ 1 1 0 1 | 0 1 0 ]   = ( P^T | In−k )
               [ 1 0 1 1 | 0 0 1 ]

        G = ( Ik | P )

               [ 1 0 0 0 | 1 1 1 ]
        =⇒ G = [ 0 1 0 0 | 1 1 0 ]
               [ 0 0 1 0 | 1 0 1 ]
               [ 0 0 0 1 | 0 1 1 ]

7.3 Generally (without proof): all Hamming codes have dmin = 3.

    =⇒ number of errors that can be detected:

        te = dmin − 1 = 2

    =⇒ number of errors that can be corrected:

        t = (dmin − 1)/2 = 1    (dmin odd)
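A short check (my addition) that the constructed code indeed has dmin = 3, by enumerating all 2^4 codewords of G and taking the minimum nonzero weight:

```python
import itertools

G = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 0, 1],
     [0, 0, 0, 1, 0, 1, 1]]

weights = []
for u in itertools.product([0, 1], repeat=4):
    c = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    if any(c):
        weights.append(sum(c))
print(min(weights))   # 3 = d_min
```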


Solution Problem 8
Notation of the sum construction of a code:

Assume: ca = linear block code with ( n, ka, dmin,a )
        cb = linear block code with ( n, kb, dmin,b )

Creating a new linear block code cnew with
( nnew = 2n, knew = ka + kb, dmin,new ) by cnew = ca & cb:

    =⇒ cnew = ( ca | ca + cb )    ← all combinations of codewords
            = ( ca,0 , ca,1 , ... , ca,n−1 |
                (ca,0 + cb,0) , (ca,1 + cb,1) , ... , (ca,n−1 + cb,n−1) )
            = ( cnew,0 , cnew,1 , ... , cnew,2n−1 )    (2n bits)

        Gnew = [ Ga   Ga ]
               [ 0    Gb ]

Reed-Muller codes are linear block codes.

Notation: RM(r, m), 0 ≤ r ≤ m

    =⇒ n = 2^m ,   k = Σ_{i=0}^{r} (m over i) ,   dmin = 2^(m−r)

    =⇒ RM(0, 0)  =⇒  n = 1, k = 1, dmin = 1  =⇒  G00 = (1)
       RM(0, 1)  =⇒  n = 2, k = 1, dmin = 2  =⇒  G01 = (1 1)

General:
    r = 0  =⇒  k = 1, n = 2^m, dmin = 2^m
           =⇒  repetition code  =⇒  G0m = ( 1 1 ... 1 )    (2^m ones)

    r = m  =⇒  dmin = 1, n = 2^m, k = 2^m (without proof)  =⇒  n = k
           =⇒  uncoded  =⇒  Gmm = Im = identity matrix of size 2^m × 2^m


Recursive sum construction:

        RM(r + 1, m + 1) = RM(r + 1, m) & RM(r, m)

        =⇒ Gr+1,m+1 = [ Gr+1,m   Gr+1,m ]
                      [ 0        Gr,m   ]

Construction by submatrices:

        G0 = ( 1 1 ... 1 )    (n = 2^m ones)
        G1 = m × 2^m matrix whose columns contain all possible words of
             length m
        Gl = each row is the element-wise product of l different rows of G1

For 8.4: G23

        =⇒ G23 = [ G0 ]
                 [ G1 ]
                 [ G2 ]

        G0 = ( 1 1 1 1 1 1 1 1 )

             [ 0 0 0 0 1 1 1 1 ]    row 1
        G1 = [ 0 0 1 1 0 0 1 1 ]    row 2
             [ 0 1 0 1 0 1 0 1 ]    row 3

             [ 0 0 0 0 0 0 1 1 ]    row 1 × row 2
        G2 = [ 0 0 0 0 0 1 0 1 ]    row 1 × row 3
             [ 0 0 0 1 0 0 0 1 ]    row 2 × row 3

8.1
        G13 = [ G12   G12 ]
              [ 0     G02 ]

        G02 = ( 1 1 1 1 )


                 [ 1 0 1 0 | 1 0 1 0 ]
        =⇒ G13 = [ 0 1 0 1 | 0 1 0 1 ]
                 [ 0 0 1 1 | 0 0 1 1 ]
                 [ 0 0 0 0 | 1 1 1 1 ]

8.2
        G23 = [ G22   G22 ]
              [ 0     G12 ]

        G22 = ?
        r = 2, m = 2  =⇒  uncoded  =⇒  k = 2^2 = 4 = n

                 [ 1 0 0 0 ]
        =⇒ G22 = [ 0 1 0 0 ]
                 [ 0 0 1 0 ]
                 [ 0 0 0 1 ]

                 [ 1 0 0 0 | 1 0 0 0 ]    row 1
                 [ 0 1 0 0 | 0 1 0 0 ]    row 2
                 [ 0 0 1 0 | 0 0 1 0 ]    row 3
        =⇒ G23 = [ 0 0 0 1 | 0 0 0 1 ]    row 4     k = 7, n = 8
                 [ 0 0 0 0 | 1 0 1 0 ]    row 5
                 [ 0 0 0 0 | 0 1 0 1 ]    row 6
                 [ 0 0 0 0 | 0 0 1 1 ]    row 7

8.3 RM(2, 3)  =⇒  r = 2, m = 3

        n = 2^m = 8
        k = Σ_{i=0}^{r} (m over i) = (3 over 0) + (3 over 1) + (3 over 2)
          = 1 + 3!/(1!·2!) + 3!/(2!·1!)
          = 1 + 3 + 3 = 7
        dmin = 2^(3−2) = 2 = min { wH(c) | c ∈ RM(2, 3), c ≠ 0 }


8.4 Alternative way to determine G23:

        Grm = [ G0 ]
              [ G1 ]
              [ ...]
              [ Gr ]

                 [ 1 1 1 1 1 1 1 1 ]    a  =̂ rows 1+2+3+4 of G23 from 8.2
                 [ 0 0 0 0 1 1 1 1 ]    b  =̂ rows 5+6
                 [ 0 0 1 1 0 0 1 1 ]    c  =̂ rows 3+4
        =⇒ G23 = [ 0 1 0 1 0 1 0 1 ]    d  =̂ rows 2+4
                 [ 0 0 0 0 0 0 1 1 ]    e  =̂ row 7
                 [ 0 0 0 0 0 1 0 1 ]    f  =̂ row 6
                 [ 0 0 0 1 0 0 0 1 ]    g  =̂ row 4

        ≠ G23 from 8.2

    Conversely, the rows of G23 from 8.2 are obtained as:

        row 1 = a + c + d + g
        row 2 = d + g
        row 3 = c + g
        row 4 = g
        row 5 = b + f
        row 6 = f
        row 7 = e

By simple row operations it can be shown that this is equivalent to the
result from 8.2.
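The recursive sum construction can also be written down directly (a sketch I added; `rm_generator` is a hypothetical helper name, not from the lecture):

```python
# Recursive |u|u+v| construction:
# G_{r+1,m+1} = [[G_{r+1,m}, G_{r+1,m}], [0, G_{r,m}]]
def rm_generator(r, m):
    if r == 0:
        return [[1] * (2 ** m)]                  # repetition code
    if r == m:
        n = 2 ** m                               # uncoded: identity matrix
        return [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    top = [row + row for row in rm_generator(r, m - 1)]
    bot = [[0] * 2 ** (m - 1) + row for row in rm_generator(r - 1, m - 1)]
    return top + bot

for row in rm_generator(2, 3):                   # 7 x 8 matrix, as in 8.2
    print(row)
```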


Solution Problem 9

9.1 RM(1, 3): r = 1, m = 3

        n = 2^m = 8
        k = Σ_{i=0}^{r} (m over i) = (3 over 0) + (3 over 1) = 4
        dmin = 2^(m−r) = 2^2 = 4

    Submatrix construction:

                         [ 1 1 1 1 1 1 1 1 ]
        G13 = [ G0 ]  =  [ 0 0 0 0 1 1 1 1 ]
              [ G1 ]     [ 0 0 1 1 0 0 1 1 ]
                         [ 0 1 0 1 0 1 0 1 ]

9.2
u = (1 0 0 1)
x = u · G13 = (1 0 1 0 1 0 1 0)

9.3 Majority vote decoding

                                                    [ 1 1 1 1 1 1 1 1 ]
        (x0 x1 x2 x3 x4 x5 x6 x7) = (u0 u1 u2 u3) · [ 0 0 0 0 1 1 1 1 ]
                                                    [ 0 0 1 1 0 0 1 1 ]
                                                    [ 0 1 0 1 0 1 0 1 ]
        =⇒
        a) x0 = u0
        b) x1 = u0 + u3
        c) x2 = u0 + u2
        d) x3 = u0 + u2 + u3
        e) x4 = u0 + u1
        f) x5 = u0 + u1 + u3
        g) x6 = u0 + u1 + u2
        h) x7 = u0 + u1 + u2 + u3

Goal: determine the information word u from the received word x


        a) + b)  =⇒  u3 = x0 + x1        a) + c)  =⇒  u2 = x0 + x2
        c) + d)  =⇒  u3 = x2 + x3        b) + d)  =⇒  u2 = x1 + x3
        e) + f)  =⇒  u3 = x4 + x5        e) + g)  =⇒  u2 = x4 + x6
        g) + h)  =⇒  u3 = x6 + x7        f) + h)  =⇒  u2 = x5 + x7

        a) + e)  =⇒  u1 = x0 + x4
        b) + f)  =⇒  u1 = x1 + x5
        c) + g)  =⇒  u1 = x2 + x6
        d) + h)  =⇒  u1 = x3 + x7

    These are the equations to which the majority vote is applied. After the
    determination of u1, u2, u3, compute

        v = x + (0 u1 u2 u3) · G13

    and determine u0 by a majority vote on the elements of v, based on:

        a)  =⇒  u0 = x0
        b)  =⇒  u0 = x1 + u3
        c)  =⇒  u0 = x2 + u2
        d)  =⇒  u0 = x3 + u2 + u3
        e)  =⇒  u0 = x4 + u1
        f)  =⇒  u0 = x5 + u1 + u3
        g)  =⇒  u0 = x6 + u1 + u2
        h)  =⇒  u0 = x7 + u1 + u2 + u3

9.4
        x = (1 0 1 0 1 0 1 0)
        y = (1 0 1 1 1 0 1 0)    (error at the 4th digit)



        u3 = y0 + y1 = 1
        u3 = y2 + y3 = 0
        u3 = y4 + y5 = 1      majority vote: u3 = 1
        u3 = y6 + y7 = 1

        u2 = y0 + y2 = 0
        u2 = y1 + y3 = 1
        u2 = y4 + y6 = 0      majority vote: u2 = 0
        u2 = y5 + y7 = 0

        u1 = y0 + y4 = 0
        u1 = y1 + y5 = 0
        u1 = y2 + y6 = 0      majority vote: u1 = 0
        u1 = y3 + y7 = 1

    Now determine u0:

        v = y + (0 u1 u2 u3) · G13
                                              [ 1 1 1 1 1 1 1 1 ]
          = (1 0 1 1 1 0 1 0) + (0 0 0 1) ·   [ 0 0 0 0 1 1 1 1 ]
                                              [ 0 0 1 1 0 0 1 1 ]
                                              [ 0 1 0 1 0 1 0 1 ]
          = (1 1 1 0 1 1 1 1)

    majority vote on the elements of v:  =⇒  u0 = 1

        =⇒ u = (1 0 0 1)

9.5 dmin = 4
    td = dmin − 1 = 3 errors can be detected.
    tc = (dmin − 2)/2 = 1 error can be corrected.
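A sketch (my addition) of the majority-vote decoding of 9.3, applied to the received word of 9.4 (ties in a vote are resolved towards 0 here, which does not occur in this example):

```python
def majority(bits):
    return 1 if sum(bits) > len(bits) / 2 else 0

def decode_rm13(y):
    # majority votes for u3, u2, u1 from the check equations of 9.3
    u3 = majority([y[0]^y[1], y[2]^y[3], y[4]^y[5], y[6]^y[7]])
    u2 = majority([y[0]^y[2], y[1]^y[3], y[4]^y[6], y[5]^y[7]])
    u1 = majority([y[0]^y[4], y[1]^y[5], y[2]^y[6], y[3]^y[7]])
    G13 = [[1, 1, 1, 1, 1, 1, 1, 1],
           [0, 0, 0, 0, 1, 1, 1, 1],
           [0, 0, 1, 1, 0, 0, 1, 1],
           [0, 1, 0, 1, 0, 1, 0, 1]]
    # v = y + (0 u1 u2 u3) * G13, then a majority vote on v gives u0
    u = [0, u1, u2, u3]
    v = [y[j] ^ (sum(u[i] * G13[i][j] for i in range(4)) % 2)
         for j in range(8)]
    return [majority(v), u1, u2, u3]

print(decode_rm13([1, 0, 1, 1, 1, 0, 1, 0]))   # [1, 0, 0, 1]
```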


Solution Problem 10

10.1 n = 7  =⇒  7 codeword bits

     Generally: g(x) = g0 + g1 · x + g2 · x^2 + g3 · x^3 + ···
     Here:      g(x) = 1 + x + x^3 = 1 + 1 · x + 0 · x^2 + 1 · x^3
     =⇒ g0 = 1, g1 = 1, g2 = 0, g3 = 1

     7 codeword bits  =⇒  g4 = 0, g5 = 0, g6 = 0

              g0 g1 g2 g3 g4 g5 g6
            [ 1  1  0  1  0  0  0 ]
        =⇒  [ 0  1  1  0  1  0  0 ]   = G
            [ 0  0  1  1  0  1  0 ]
            [ 0  0  0  1  1  0  1 ]

10.2
        ua = ( 0 1 1 0 )
        =⇒ ca = ua · G = (2nd + 3rd row of G) = ( 0 1 0 1 1 1 0 )

        ub = ( 1 0 1 0 )
        =⇒ cb = ub · G = (1st + 3rd row of G) = ( 1 1 1 0 0 1 0 )

10.3 Conversion to the polynomial description:

     vector ua = ( 0 1 1 0 ) = ( u0 u1 u2 u3 )    (length k = 4, positions
                                                   x^0 x^1 x^2 x^3)


     polynomial  =⇒  ua(x) = u0 · x^0 + u1 · x^1 + u2 · x^2 + u3 · x^3
                           = 0 + 1 · x + 1 · x^2 + 0 · x^3
                           = x + x^2      (degree of the polynomial: k − 1)

     vector ub = ( 1 0 1 0 ) = ( u0 u1 u2 u3 )

     polynomial  =⇒  ub(x) = 1 + 0 · x + 1 · x^2 + 0 · x^3 = 1 + x^2

     Coding by multiplication of the information polynomial with the
     generator polynomial:

        c(x) = u(x) · g(x)

        ca(x) = ua(x) · g(x) = (x + x^2) · (1 + x + x^3)
              = x · (1 + x + x^3) + x^2 · (1 + x + x^3)
              = (x + x^2 + x^4) + (x^2 + x^3 + x^5)
                (modulo-2 summation: 1 + 1 = 0)
              = x + x^3 + x^4 + x^5
              = 0 + 1 · x + 0 · x^2 + 1 · x^3 + 1 · x^4 + 1 · x^5 + 0 · x^6
                (degree of the polynomial: n − 1 = 6)

        ⇒ ca = ( 0 1 0 1 1 1 0 )

        cb(x) = ub(x) · g(x) = (1 + x^2) · (1 + x + x^3)
              = 1 · (1 + x + x^3) + x^2 · (1 + x + x^3)
              = (1 + x + x^3) + (x^2 + x^3 + x^5)
              = 1 + x + x^2 + x^5
              = 1 + 1 · x + 1 · x^2 + 0 · x^3 + 0 · x^4 + 1 · x^5 + 0 · x^6

        ⇒ cb = ( 1 1 1 0 0 1 0 )

     Non-systematic coding: c(x) = u(x) · g(x)

10.4 Systematic coding

     1.) Multiply ua(x) by x^(n−k) (n − k = degree of the generator
         polynomial).

     2.) Perform a polynomial division by g(x):

            (ua(x) · x^(n−k)) : g(x) = q(x) + r(x)/g(x)

     3.) Add the remainder r(x):

            ⇒ codeword: ca(x) = ua(x) · x^(n−k) + r(x)
            ⇒ g(x) is a divisor of ca(x)
            ⇒ ca(x) =̂ ca = ( r | ua )

     ua = ( 0 1 1 0 ):

     1.) x^(n−k) = x^(7−4) = x^3
         ua(x) · x^3 = (x + x^2) · x^3 = x^4 + x^5

     2.) (x^5 + x^4) : (x^3 + x + 1) = x^2 + x + 1 + 1/(x^3 + x + 1)
          x^5 + x^3 + x^2              ← x^2 · (x^3 + x + 1)
          ───────────────
          x^4 + x^3 + x^2
          x^4 + x^2 + x                ← x · (x^3 + x + 1)
          ───────────────
          x^3 + x
          x^3 + x + 1                  ← 1 · (x^3 + x + 1)
          ───────────────
          1                            ← r(x)

     3.) ca,s(x) = ua(x) · x^(n−k) + r(x) = 1 + x^4 + x^5
                 = 1 + 0·x + 0·x^2 + 0·x^3 + 1·x^4 + 1·x^5 + 0·x^6

         ⇒ ca,s = ( 1 0 0 | 0 1 1 0 )    =̂ ( r | ua )

     ub = ( 1 0 1 0 ):

     1.) ub(x) · x^(n−k) = (1 + x^2) · x^3 = x^3 + x^5

     2.) (x^5 + x^3) : (x^3 + x + 1) = x^2 + x^2/(x^3 + x + 1)
          x^5 + x^3 + x^2              ← x^2 · (x^3 + x + 1)
          ───────────────
          x^2 = r(x)

     3.) cb,s(x) = ub(x) · x^(n−k) + r(x) = x^2 + x^3 + x^5
                 = 0 + 0·x + 1·x^2 + 1·x^3 + 0·x^4 + 1·x^5 + 0·x^6

         ⇒ cb,s = ( 0 0 1 | 1 0 1 0 )    =̂ ( r | ub )

10.5 Cyclic code: the codeword length n is equal to the period r of the
     generator polynomial.
     r is the smallest integer number that fulfills

        x^r + 1 = 0 mod g(x)
        =⇒ (x^r + 1) : g(x) = h(x) without remainder


     Determination of r by reversing the polynomial division:

        x^3 + x + 1          = g(x) · 1
        x^4 + x^2 + x        = g(x) · x
        x^5 + x^3 + x^2      = g(x) · x^2
        x^7 + x^5 + x^4      = g(x) · x^4
        ───────────────────
        sum:  x^7 + 1        =⇒  r = 7

     Proof: (x^7 + 1) : (x^3 + x + 1) = x^4 + x^2 + x + 1 = h(x)
             x^7 + x^5 + x^4
             ───────────────
             x^5 + x^4 + 1
             x^5 + x^3 + x^2
             ───────────────
             x^4 + x^3 + x^2 + 1
             x^4 + x^2 + x
             ───────────────
             x^3 + x + 1
             x^3 + x + 1
             ───────────────
             0

     Easier: write down only the coefficients (target: 1 0 0 0 0 0 0 1):

        x^7 x^6 x^5 x^4 x^3 x^2 x^1 x^0
                         1   0   1   1     g(x) · 1
                     1   0   1   1         g(x) · x
                 1   0   1   1             g(x) · x^2
         1   0   1   1                     g(x) · x^4
        ───────────────────────────────
         1   0   0   0   0   0   0   1    =̂ x^7 + 1

     Number of information bits k:

        degree of g(x) = n − k = 3
        =⇒ k = n − degree of g(x) = 7 − 3 = 4
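The systematic encoding of 10.4 can be reproduced with a small GF(2) polynomial division (my addition; coefficient lists are in ascending order of powers, and the helper names are hypothetical):

```python
def poly_mod(a, g):
    a = a[:]                                   # remainder of a(x) : g(x)
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            for j in range(len(g)):            # subtract x^(i - deg g) * g(x)
                a[i - (len(g) - 1) + j] ^= g[j]
    return a[:len(g) - 1]

def systematic_encode(u, g):
    shifted = [0] * (len(g) - 1) + list(u)     # u(x) * x^(n-k)
    r = poly_mod(shifted, g)                   # remainder r(x)
    return r + list(u)                         # c = ( r | u )

g = [1, 1, 0, 1]                               # g(x) = 1 + x + x^3
print(systematic_encode([0, 1, 1, 0], g))      # [1, 0, 0, 0, 1, 1, 0]
print(systematic_encode([1, 0, 1, 0], g))      # [0, 0, 1, 1, 0, 1, 0]
```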


Solution Problem 11

Galois fields
Direct fields:   GF(p),   p = prime number
Extended fields: GF(p^m), p = prime number, m = integer number, m > 1

here: direct field (p = 5, prime number):

        GF(p) = {0, 1, 2, ..., p − 1}    (valid only for direct fields)
        GF(5) = {0, 1, 2, 3, 4}

Properties of the elements of a Galois field (direct or extended):

        ai ⊕ ak = al ∈ GF     ⊕ =̂ modulo-p addition
        ai ⊗ ak = am ∈ GF     ⊗ =̂ modulo-p multiplication

Non-zero primitive elements of direct Galois fields:
Every non-zero element of the Galois field GF(p) can be written as
ak = (z^x) mod p with 0 ≤ x < p − 1; z is called a primitive element.

Properties of inverse elements:

• with respect to addition:
        a ⊕ (−a) = 0  =⇒  a + (−a) = n · p
        where (−a) is the additive inverse element, (−a) ∈ GF

• with respect to multiplication:
        a ⊗ (a^−1) = 1  =⇒  a · (a^−1) = n · p + 1
        where (a^−1) is the multiplicative inverse element, (a^−1) ∈ GF

11.1
        a      0   1   2   3   4
        −a     0   4   3   2   1
        a^−1   −   1   3   2   4

        a + (−a)   = 0 mod 5 = n · 5 mod 5,            with (−a) ∈ GF(5)
        a · (a^−1) = 1 mod 5 = (i · 5 + 1) mod 5
                   = 1; 6; 11; 16; ... mod 5,          with (a^−1) ∈ GF(5)

        (a = 0 has no multiplicative inverse.)


11.2 For Reed-Solomon codes:

        n = p^m − 1 = 5^1 − 1 = 4
        t = 1 = (dmin − 1)/2
        =⇒ dmin = 2t + 1 = 3

     Singleton bound:

        dmin ≤ n − k + 1    (reached with equality for RS codes)
        =⇒ dmin = n − k + 1
        =⇒ k = n + 1 − dmin = 2

     =⇒ t (and thereby dmin) can be increased as long as k > 0,
        but increasing t decreases k =̂ more redundancy.

11.3 Codeword vector in the time domain:      a = (a0, a1, a2, a3)
     Codeword vector in the frequency domain: A = (A0, A1, A2, A3)

     Matrix description:

        A^T = MDFT · a^T

        [ A0 ]       [ 1   1     1     1    ]   [ a0 ]
        [ A1 ]  = −  [ 1   z^−1  z^−2  z^−3 ] · [ a1 ]
        [ A2 ]       [ 1   z^−2  z^−4  z^−6 ]   [ a2 ]
        [ A3 ]       [ 1   z^−3  z^−6  z^−9 ]   [ a3 ]

     with z =̂ primitive element  =⇒  z = 2:
        2^4 = 16 = 1 mod 5,   z^n = z^0 = 1 mod 5

     In a Galois field GF(p^m), z^i can be written as

        z^i = z^(i mod (p^m − 1))

     To get rid of negative exponents, one can add k · (p^m − 1) to the
     exponent without changing the result:

        =⇒ z^i = z^([i + k·(p^m − 1)] mod (p^m − 1))

     here with p^m − 1 = 5^1 − 1 = 4


 
                 [ 1   1     1     1    ]     [ 1  1    1    1   ]
        MDFT = − [ 1   z^−1  z^−2  z^−3 ] = − [ 1  z^3  z^2  z^1 ]
                 [ 1   z^−2  z^−4  z^−6 ]     [ 1  z^2  z^0  z^2 ]
                 [ 1   z^−3  z^−6  z^−9 ]     [ 1  z^1  z^2  z^3 ]

                                              [ 1  1  1  1 ]
        (z = 2, modulo-5 calculation)     = − [ 1  3  4  2 ]
                                              [ 1  4  1  4 ]
                                              [ 1  2  4  3 ]

                                              [ 4  4  4  4 ]
        (additive inverse elements)       =   [ 4  2  1  3 ]
                                              [ 4  1  4  1 ]
                                              [ 4  3  1  2 ]

11.4 Inverse transform:

        a^T = MIDFT · A^T

        [ a0 ]   [ 1  1    1    1   ]   [ A0 ]
        [ a1 ] = [ 1  z^1  z^2  z^3 ] · [ A1 ]
        [ a2 ]   [ 1  z^2  z^4  z^6 ]   [ A2 ]
        [ a3 ]   [ 1  z^3  z^6  z^9 ]   [ A3 ]

        with z^i = z^(i mod (p^m − 1)):

                   [ 1  1    1    1   ]   [ 1  1  1  1 ]
        MIDFT  =   [ 1  z^1  z^2  z^3 ] = [ 1  2  4  3 ]
                   [ 1  z^2  z^0  z^2 ]   [ 1  4  1  4 ]
                   [ 1  z^3  z^2  z^1 ]   [ 1  3  4  2 ]

     Property of the matrices:

        MDFT · MIDFT = MIDFT · MDFT = I


11.5 Coding:
     Codeword in the frequency domain:

        A = ( A0 A1 | 0 0 ) = ( 2 3 0 0 )

     =⇒ codeword in the time domain:

        a^T = MIDFT · A^T

            [ 1  1  1  1 ]   [ 2 ]         [ 1 ]       [ 1 ]   [ 5  ]          [ 0 ]
          = [ 1  2  4  3 ] · [ 3 ]  =  2 · [ 1 ] + 3 · [ 2 ] = [ 8  ] mod 5 =  [ 3 ]
            [ 1  4  1  4 ]   [ 0 ]         [ 1 ]       [ 4 ]   [ 14 ]          [ 4 ]
            [ 1  3  4  2 ]   [ 0 ]         [ 1 ]       [ 3 ]   [ 11 ]          [ 1 ]

     =⇒ transmitted codeword: a = ( 0 3 4 1 )

11.6
        A^T = MDFT · a^T

            [ 4  4  4  4 ]   [ 0 ]   [ 32 ]          [ 2 ]
          = [ 4  2  1  3 ] · [ 3 ] = [ 13 ] mod 5 =  [ 3 ]
            [ 4  1  4  1 ]   [ 4 ]   [ 20 ]          [ 0 ]
            [ 4  3  1  2 ]   [ 1 ]   [ 15 ]          [ 0 ]

        =⇒ A = ( 2 3 0 0 )
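A sketch (my addition) of the transform pair over GF(5) with the primitive element z = 2, reproducing 11.5 and 11.6:

```python
p, n, z = 5, 4, 2

# M_DFT[i][j] = -z^(-i*j) mod p;  M_IDFT[i][j] = z^(i*j) mod p
M_dft  = [[(-pow(z, (-i * j) % (p - 1), p)) % p for j in range(n)]
          for i in range(n)]
M_idft = [[pow(z, (i * j) % (p - 1), p) for j in range(n)]
          for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) % p for i in range(n)]

A = [2, 3, 0, 0]             # information in the frequency domain
a = matvec(M_idft, A)        # time-domain codeword
print(a)                     # [0, 3, 4, 1]
print(matvec(M_dft, a))      # [2, 3, 0, 0] -- transforming back
```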


Solution Problem 12
Considered is: RS(4, 2) over GF(5)
General:       RS(n, k) over GF(p^m)

12.1 r = a + e (modulo p^m = 5 operation)
     r is the received vector in the time domain:

        r = ( 0 3 4 1 ) + ( 0 0 3 0 )
          = ( 0 3 7 1 )    (without modulo-5 operation)
          = ( 0 3 2 1 )
     Transforming r into the frequency domain (DFT), r ↔ R:

        R^T = MDFT · r^T

            [ 4  4  4  4 ]   [ 0 ]   [ 0+2+3+4 ]          [ 4 ]
          = [ 4  2  1  3 ] · [ 3 ] = [ 0+1+2+3 ]  mod 5 = [ 1 ]
            [ 4  1  4  1 ]   [ 2 ]   [ 0+3+3+1 ]          [ 2 ]
            [ 4  3  1  2 ]   [ 1 ]   [ 0+4+2+2 ]          [ 3 ]

        (MDFT as determined in Problem 11)
        =⇒ R = ( 4 1 | 2 3 )

     Without error:

        R = ( A0 A1 | 0 0 )
        (information word, length k | “parity frequencies”, length n − k)

     Here (with error):

        R = ( 4 1 | 2 3 )
        S = ( S0 S1 ) = ( 2 3 )    (length n − k = 2)

     General: R = A + E
        A =̂ codeword in the frequency domain;
        A consists of 0 in the last (n − k) digits
        =⇒ S consists of the last (n − k) digits of R
           =̂ the last (n − k) digits of E


        =⇒ S = 0  =⇒  error-free codeword
           S ≠ 0  =⇒  erroneous received word

12.2 Error position polynomial c(x) ↔ C(x):

     time domain:      ci · ei = 0  =⇒  ci = 0 if ei ≠ 0
                       =⇒ ci = 0 if an error occurs at position i
     frequency domain: C(x) · E(x) = 0 mod (x^n − 1)
                       ⇒ x = z^i (z is the primitive element) is a zero of
                         C(x) if an error occurs at position i
                       ⇒ C(x) = Π_{i, ei ≠ 0} ( x − z^i )

     Determination of the coefficients of C(x) = C0 + ··· + Ce · x^e
     (degree e, where e is the number of errors):
     t = 1 error can be corrected; assume e ≤ t.

     Matrix representation:

        [ Se      ...   S0        ]   [ C0 ]   [ 0 ]
        [ ...     ...   ...       ] · [ ...] = [ ...]        (∗)
        [ S2t−1   ...   S2t−e−1   ]   [ Ce ]   [ 0 ]

        (2t − e rows, e + 1 columns)
     The equation is fulfilled for the correct value of e (unknown).
     =⇒ Start with the most probable case: e = 1.

     t = 1 =⇒ index of the left element in the last row of the matrix:
              2t − 1 = 1
              index of the right element in the last row: 2t − e − 1 = 0

        =⇒ ( S1 S0 ) · ( C0 C1 )^T = 0
        =⇒ S1 · C0 + S0 · C1 = 0
        =⇒ 3 · C0 + 2 · C1 = 0    (under-determined: 1 equation, 2 unknowns)
        =⇒ normalize C(x), e.g. C1 = 1
           (C(x) can always be normalized; only its zeros are important)

        =⇒ 3 · C0 + 2 = 0    | −2 =̂ +3 (additive inverse element)
        =⇒ 3 · C0 = 3        | · 3^−1 =̂ · 2 (multiplicative inverse element)
        =⇒ C0 = 6 mod 5 = 1


        =⇒ C(x) = 1 + 1 · x

     If the matrix equation (∗) is not solvable for e = 1 =⇒ try e = 2, 3, ..., t.
     If the matrix equation (∗) is not solvable at all =⇒ no correction possible.

     Position of the errors:

        C(x = z^i) = 0  ⟺  error at position i    (i = 0 ... n − 1, z = 2)

        =⇒ C(z^0 = 1) = 2 ≠ 0
           C(z^1 = 2) = 3 ≠ 0
           C(z^2 = 4) [= 5] = 0 (modulo 5)  =⇒  error at position i = 2
           C(z^3 = 3) = 4 ≠ 0 (must be, since only a single error e = 1 is
                              assumed)

     The position is known, but what about the value?
12.3 Error vector in the frequency domain:

        r = a + e   ↔   R = A + E

     A is 0 in the last (n − k) digits
     =⇒ R and E are equal in the last (n − k) digits (equal to S, see above)

        =⇒ E = ( E0 E1 | E2 E3 ) = ( E0 E1 | S0 S1 ) = ( E0 E1 | 2 3 )

     Recursive determination of the left Ej:

        Ej = −(C0^−1) · Σ_{i=1}^{e} Ci · E_((j−i) mod n) ,   j = 0 ... k − 1

        =⇒ E0 = −(C0^−1) · C1 · E_((0−1) mod 4)
              = −(1^−1) · C1 · E_((−1) mod 4)      (x mod y = (x + y) mod y)
              = −1 · C1 · E3
              = 4 · 1 · 3 [= 12] = 2

           E1 = −(C0^−1) · C1 · E_((1−1) mod 4)
              = 4 · 1 · 2 = 3


=⇒ E = ( 2 3 2 3 )

12.4
        Â = R “−” E = ( 4 1 2 3 ) − ( 2 3 2 3 )      (modulo-5 operation)
            = ( 4 1 2 3 ) + ( 3 2 3 2 )
            = ( 7 3 5 5 ) mod 5
            = ( 2 3 0 0 )
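The complete single-error correction of Problem 12 can be traced with a few lines (my addition; inverse elements are computed via Fermat's little theorem, a^(p−2) = a^(−1) mod p, and C1 = 1 / e = 1 are assumed as in the solution):

```python
p, n, z = 5, 4, 2
M_dft = [[(-pow(z, (-i * j) % (p - 1), p)) % p for j in range(n)]
         for i in range(n)]

r = [0, 3, 2, 1]                                   # received word (time domain)
R = [sum(M_dft[i][j] * r[j] for j in range(n)) % p for i in range(n)]
S0, S1 = R[2], R[3]                                # syndrome = last n-k digits

# e = 1: S1*C0 + S0*C1 = 0 with C1 = 1  =>  C0 = -S1 * S0^(-1)
C0 = (-S1 * pow(S0, p - 2, p)) % p
err_pos = [i for i in range(n) if (C0 + pow(z, i, p)) % p == 0]

# recover E0, E1 from the recursion E_j = -C0^(-1) * C1 * E_((j-1) mod n)
E = [0, 0, S0, S1]
for j in (0, 1):
    E[j] = (-pow(C0, p - 2, p) * E[(j - 1) % n]) % p

A_hat = [(R[i] - E[i]) % p for i in range(n)]
print(err_pos, A_hat)                              # [2] [2, 3, 0, 0]
```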


Solution Problem 13
Notation for convolutional codes:

Information blocks (length k):
    ur   = (ur,1 , ur,2 , ur,3 , ... , ur,k )     : current block
    ur−1 = (ur−1,1 , ur−1,2 , ... , ur−1,k )      : previous block
    ur−2 = (ur−2,1 , ur−2,2 , ... , ur−2,k )      : block before the previous
    ...

Code block (length n):
    ar = (ar,1 , ar,2 , ar,3 , ... , ar,n )

Memory length:
    m is the memory length =̂ the number of information blocks of the past
    that are used to create ar.

13.1 Here:

     • The information blocks consist of only one bit:
       ur = (ur,1), ur−1 = (ur−1,1)   =⇒   k = 1

     • Only one information block of the past (ur−1) is used to create ar
       =⇒   m = 1

     • The code block consists of two bits:
       ar = (ar,1 , ar,2)   =⇒   n = 2

     • Code rate:  RC = k/n = 1/2


13.2 Meaning of the states: content of the memory (here: ur−1,1).

     The state diagram is obtained from the state table. The state table
     shows the output and the next state of the coder for every possible
     combination of input and memory (state):

        input   current state   output                               next state
        ur,1    ur−1,1          ar,1 = ur,1 ⊕ ur−1,1   ar,2 = ur−1,1   ur,1
        0       0               0                      0               0
        0       1               1                      1               0
        1       0               1                      0               1
        1       1               0                      1               1

     From this table the state diagram can be directly obtained
     (edge labels: input (output), nodes =̂ memory content = state):

     [State diagram: state 0 with self-loop 0 (00); transition from state 0
      to state 1 with 1 (10); transition from state 1 back to state 0 with
      0 (11); state 1 with self-loop 1 (01).]

     For example, consider the 2nd row of the state table: if the current
     state of the encoder is '1' and the input is '0', then the output is
     '11' and the next state will be '0'.
13.3 The trellis diagram also describes the states of the encoder, but with
     a temporal approach.
     • The starting state is always the zero state.
     • The reaction of the encoder (output, next state) in the current state
       is determined for every possible input.
     • Then the next states are considered and again the reaction of the
       encoder (output, next state) to every possible input is determined.
     • And so on ...


     [Trellis diagram for time steps 0 ... 5; in each step: from state 0,
      input 0 (output 00) leads back to state 0 and input 1 (output 10)
      leads to state 1; from state 1, input 0 (output 11) leads to state 0
      and input 1 (output 01) leads back to state 1.]

13.4 Fundamental way:

     • starting at the zero state and ending at the zero state, without
       being in the zero state in between
     • not the all-zero-state way!
     • the way has to be chosen such that the number of '1's at the output
       is minimized

     So, the fundamental way is highlighted in the next diagram:

     [Trellis diagram with the fundamental way highlighted: starting at
      state 0, input 1 (output 10) to state 1, then input 0 (output 11)
      back to state 0.]

     Every different way that starts at the zero state and ends at the zero
     state (except the all-zero-state way) has a higher number of '1's at
     the output.
     The free distance df is the number of '1's at the output along the
     fundamental way, so it is the distance to the all-zero sequence:

        =⇒ df = 3

     The free distance is a measure for the capability to correct errors:
     the higher df, the higher the capability to correct errors.


13.5 Every segment is terminated with a single '0'. So, the sequence that
     has to be encoded is:

        u = 1 0 1 1 | 0    (termination)

     The corresponding way through the trellis diagram is highlighted in the
     following diagram.

     [Trellis diagram with the highlighted way for the input sequence
      1 0 1 1 0: state sequence 0 → 1 → 0 → 1 → 1 → 0.]

        terminated uncoded sequence   1 0 1 1 0
        encoded sequence              10 11 10 01 11

13.6 For a non-terminated encoder:

        RC = k/n = 1/2 =̂ 50%

     Terminated after T = 4 bits with m = 1 '0':

        RC = (T information bits)/(code bits) = (T · k)/((T + m) · n)
           = 4/10 =̂ 40%

13.7 Decoding: Viterbi algorithm

     Maximum likelihood decision: which sequence is the most likely
     transmitted codeword if the sequence a is received?

     • Start at the zero state (left) of the trellis diagram.
     • Calculate the number of matches between the output bits and the
       corresponding received bits for every possible way (every arrow in
       the trellis diagram) to the next stage.
     • For every state on every stage, only the maximum number of matches is
       used for the further calculations. All other numbers are canceled
       (thus the ways that belong to these numbers are also canceled).


• Finally, the way with the highest number of matches is used and retraced to
the beginning of the trellis diagram.

Using this scheme yields:

     [Viterbi trellis for the received sequence a = 10 10 10 00 11: at every
      stage the accumulated number of matches is kept for each state, and
      the surviving way with the highest number of matches is retraced; it
      corresponds to the input sequence 1 0 1 1 0.]

     So, the received sequence is decoded as:

        decoded sequence          1 0 1 1 0
        corresponding codeword    1 0 1 1 1 0 0 1 1 1
        received word             1 0 1 0 1 0 0 0 1 1

     So, 2 bit errors are corrected: at the 4th digit and at the 8th digit.
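A sketch (my addition) of the Viterbi decoding of 13.7 for this rate-1/2, memory-1 encoder; the metric counts matching bits, as in the solution, and the terminated path is read out of the zero state:

```python
# transitions[state][input] = (output bits, next state), from the state table
trans = {0: {0: ((0, 0), 0), 1: ((1, 0), 1)},
         1: {0: ((1, 1), 0), 1: ((0, 1), 1)}}

def viterbi(received):                     # received: list of 2-bit tuples
    metric = {0: (0, [])}                  # start in the zero state
    for pair in received:
        new = {}
        for state, (m, hist) in metric.items():
            for u, (out, nxt) in trans[state].items():
                m2 = m + sum(o == r for o, r in zip(out, pair))
                if nxt not in new or m2 > new[nxt][0]:
                    new[nxt] = (m2, hist + [u])   # keep the survivor
        metric = new
    return metric[0]                       # terminated: end in the zero state

matches, decoded = viterbi([(1, 0), (1, 0), (1, 0), (0, 0), (1, 1)])
print(decoded)                             # [1, 0, 1, 1, 0]
```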
