
Digital Communication and Error Correcting Codes
Timothy J. Schulz
Professor and Chair
Engineering Exploration
Fall, 2004
Department of Electrical and Computer Engineering

Digital Data
ASCII Text

Character   ASCII code
A           01000001
B           01000010
C           01000011
D           01000100
E           01000101
F           01000110
...         ...

Digital Sampling
[Figure: an analog waveform is sampled and each sample quantized to one of eight 3-bit levels (000 through 111), producing the bit stream 00001000100000101101101101000011111011111111]

Digital Communication
Example: Frequency Shift Keying (FSK)
Transmit a tone with a frequency determined by each bit:
s(t) = b cos(2*pi*f0*t) + (1 - b) cos(2*pi*f1*t)
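A minimal numerical sketch of this mapping. The tone frequencies, sample rate, and bit period below are illustrative assumptions, not values from the slide:

```python
import numpy as np

def fsk_modulate(bits, f0=1000.0, f1=2000.0, fs=8000.0, t_bit=0.01):
    """Each bit b maps to s(t) = b*cos(2*pi*f0*t) + (1-b)*cos(2*pi*f1*t)."""
    t = np.arange(0, t_bit, 1.0 / fs)        # sample times within one bit
    tone = {1: np.cos(2 * np.pi * f0 * t),   # b = 1 -> tone at f0
            0: np.cos(2 * np.pi * f1 * t)}   # b = 0 -> tone at f1
    return np.concatenate([tone[b] for b in bits])

signal = fsk_modulate([0, 1, 1, 0])
print(signal.shape)                          # (320,): 4 bits x 80 samples
```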


Digital Channels
Binary Symmetric Channel

0 -> 0 with probability 1-p;  0 -> 1 with probability p
1 -> 1 with probability 1-p;  1 -> 0 with probability p

Error probability: p

Error Correcting Codes
3 channel bits per 1 information bit: rate = 1/3

encode book:
information bit    channel bits
0                  000
1                  111

decode book (majority vote):
channel bits    information bit
000             0
001             0
010             0
011             1
100             0
101             1
110             1
111             1

Error Correcting Codes

information bits:  0   0   1   0   1
channel code:      000 000 111 000 111
received bits:     010 000 100 001 110
decoded bits:      0   0   0   0   1

5 channel errors; 1 information error
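A sketch of this rate-1/3 code in plain Python, using only the encode and decode books from the slides above:

```python
def encode(info_bits):
    """Repeat each information bit three times (encode book: 0->000, 1->111)."""
    return [b for bit in info_bits for b in (bit, bit, bit)]

def decode(channel_bits):
    """Majority vote on each 3-bit block (the decode book above)."""
    blocks = [channel_bits[i:i + 3] for i in range(0, len(channel_bits), 3)]
    return [1 if sum(block) >= 2 else 0 for block in blocks]

received = [0,1,0, 0,0,0, 1,0,0, 0,0,1, 1,1,0]   # the received bits above
print(decode(received))                          # [0, 0, 0, 0, 1]
```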


Error Correcting Codes
A decoding error will only be made if the channel makes two or three errors on a block of 3 channel bits (c = correct, e = error):

situation             probability
no errors    (ccc)    (1-p)(1-p)(1-p) = 1 - 3p + 3p^2 - p^3
one error    (cce)    (1-p)(1-p)(p)   = p - 2p^2 + p^3
one error    (cec)    (1-p)(p)(1-p)   = p - 2p^2 + p^3
two errors   (cee)    (1-p)(p)(p)     = p^2 - p^3
one error    (ecc)    (p)(1-p)(1-p)   = p - 2p^2 + p^3
two errors   (ece)    (p)(1-p)(p)     = p^2 - p^3
two errors   (eec)    (p)(p)(1-p)     = p^2 - p^3
three errors (eee)    (p)(p)(p)       = p^3

error probability = 3p^2 - 2p^3

[Figure: decoded bit error probability 3p^2 - 2p^3 plotted against channel error probability p, for p from 0 to 0.5]
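A quick numerical check of this formula (plain Python; the p values are chosen only for illustration):

```python
# decoded-bit error probability of the rate-1/3 code vs. the raw channel
for p in (0.01, 0.1, 0.3, 0.5):
    coded = 3 * p**2 - 2 * p**3      # = P(2 errors) + P(3 errors)
    print(f"p = {p:<4}  coded = {coded:.6f}")
# at p = 0.01 the coded error probability is ~3e-4, a large improvement
```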


Error Correcting Codes
Codes are characterized by the number of channel bits (M) used for (N) information bits. This is called a rate N/M code.
An encode book has 2^N entries, and each entry is an M-bit codeword.
A decode book has 2^M entries, and each entry is an N-bit information-bit sequence.

Linear Block Codes

Basic Definitions
Let u be a k-bit information sequence and v be the corresponding n-bit codeword.
A total of 2^k n-bit codewords constitute an (n,k) code.
Linear code: The sum of any two codewords is a codeword.
Observation: The all-zero sequence is a codeword in every linear block code.

Generator Matrix
All 2^k codewords can be generated from a set of k linearly independent codewords.
Let g0, g1, ..., gk-1 be a set of k independent codewords:

    [ g0   ]   [ g0,0    g0,1    ...  g0,n-1   ]
G = [ ...  ] = [ ...                           ]
    [ gk-1 ]   [ gk-1,0  gk-1,1  ...  gk-1,n-1 ]

v = uG

Systematic Codes
Any linear block code can be put in systematic form:

[ n-k check bits | k information bits ]

In this case the generator matrix will take the form
G = [ P  Ik ]
This matrix corresponds to the set of k codewords produced by the information sequences that have a single nonzero element. Clearly this set is linearly independent.

Generator Matrix (contd)
EX: The generating set for the (7,4) code:
1000 ===> 1101000; 0100 ===> 0110100
0010 ===> 1110010; 0001 ===> 1010001
Every codeword is a linear combination of these 4 codewords. That is: v = uG, where

    [ 1 1 0 | 1 0 0 0 ]
G = [ 0 1 1 | 0 1 0 0 ] = [ P | Ik ],  P: k x (n-k),  Ik: k x k
    [ 1 1 1 | 0 0 1 0 ]
    [ 1 0 1 | 0 0 0 1 ]

Storage requirement reduced from 2^k (n+k) to k(n-k).
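A sketch of the encoding v = uG over GF(2), using the generator matrix above (numpy assumed):

```python
import numpy as np

# G = [P | I4] for the (7,4) code above
G = np.array([[1,1,0, 1,0,0,0],
              [0,1,1, 0,1,0,0],
              [1,1,1, 0,0,1,0],
              [1,0,1, 0,0,0,1]])

u = np.array([1, 0, 1, 1])   # information bits
v = u.dot(G) % 2             # v = uG over GF(2)
print(v)                     # [1 0 0 1 0 1 1]; last 4 bits reproduce u
```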



Parity-Check Matrix
For G = [ P | Ik ], define the matrix H = [ In-k | P^T ]
(The size of H is (n-k) x n.)
It follows that GH^T = 0.
Since v = uG, then vH^T = uGH^T = 0.
The parity check matrix of code C is the generator matrix of another code Cd, called the dual of C. For the (7,4) code above:

    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

Encoding Using H Matrix (Parity Check Equations)

[v1 v2 v3 v4 v5 v6 v7] H^T = 0, where v4 v5 v6 v7 are the information bits and

      [ 1 0 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ]
H^T = [ 1 1 0 ]
      [ 0 1 1 ]
      [ 1 1 1 ]
      [ 1 0 1 ]

v1 + v4 + v6 + v7 = 0        v1 = v4 + v6 + v7
v2 + v4 + v5 + v6 = 0   =>   v2 = v4 + v5 + v6
v3 + v5 + v6 + v7 = 0        v3 = v5 + v6 + v7

Encoding Circuit
[Figure: shift-register encoding circuit implementing the parity check equations above]

Minimum Distance
DF: The Hamming weight of a codeword v, denoted by w(v), is the number of nonzero elements in the codeword.
DF: The minimum weight of a code, wmin, is the smallest weight of the nonzero codewords in the code:
wmin = min { w(v) : v in C, v != 0 }
DF: The Hamming distance between v and w, denoted by d(v,w), is the number of locations where they differ.
Note that d(v,w) = w(v+w).
DF: The minimum distance of the code:
dmin = min { d(v,w) : v, w in C, v != w }
TH3.1: In any linear code, dmin = wmin.

Minimum Distance (contd)
TH3.2: For each codeword of Hamming weight l there exist l columns of H such that the vector sum of these columns is zero. Conversely, if there exist l columns of H whose vector sum is zero, there exists a codeword of weight l.
COL 3.2.2: The dmin of C is equal to the minimum number of columns in H that sum to zero.
EX:
    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

Decoding Linear Codes
Let v be transmitted and r be received, where
r = v + e
e = error pattern = e1 e2 ... en, where
ei = 1 if the error has occurred in the i-th location, and 0 otherwise.
The weight of e determines the number of errors.
We will attempt both processes: error detection, and error correction.

Error Detection
Define the syndrome
s = rH^T = (s0, s1, ..., sn-k-1)
If s = 0, then r = v and e = 0.
If e is identical to some codeword, then s = 0 as well, and the error is undetectable.
EX 3.4: with H as above,

[s0 s1 s2] = [r0 r1 r2 r3 r4 r5 r6] H^T   =>   s0 = r0 + r3 + r5 + r6
                                               s1 = r1 + r3 + r4 + r5
                                               s2 = r2 + r4 + r5 + r6
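A sketch of the syndrome computation for this code (numpy assumed):

```python
import numpy as np

H = np.array([[1,0,0, 1,0,1,1],
              [0,1,0, 1,1,1,0],
              [0,0,1, 0,1,1,1]])

r = np.array([1,0,0,1,0,0,1])   # received word (see Example 3.5 below)
s = r.dot(H.T) % 2              # s = rH^T over GF(2)
print(s)                        # [1 1 1]: nonzero, so errors are detected
```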

Error Correction
s = rH^T = (v + e)H^T = vH^T + eH^T = eH^T
The syndrome depends only on the error pattern.
Can we use the syndrome to find e, and hence do the correction?
Syndrome digits are linear combinations of the error digits, so they provide information about the error locations.
Unfortunately, for n-k equations and n unknowns there are 2^k solutions. Which one to use?

Example 3.5
Let r = 1001001  =>  s = 111
s0 = e0 + e3 + e5 + e6 = 1
s1 = e1 + e3 + e4 + e5 = 1
s2 = e2 + e4 + e5 + e6 = 1
There are 16 error patterns that satisfy the above equations; some of them are
0000010   1101010   1010011   1111101
The most probable one is the one with minimum weight.
Hence v* = 1001001 + 0000010 = 1001011

Standard Array Decoding
The transmitted codeword is any one of: v1, v2, ..., v2^k.
The received word r is any one of the 2^n n-tuples.
Partition the 2^n words into 2^k disjoint subsets D1, D2, ..., D2^k such that the words in subset Di are closer to codeword vi than to any other codeword.
Each subset is associated with one codeword.

Standard Array Construction
1. List the 2^k codewords in a row, starting with the all-zero codeword v1.
2. Select an error pattern e2 and place it below v1. This error pattern will be a correctable error pattern, therefore it should be selected such that:
(i) it has the smallest weight possible (most probable error), and
(ii) it has not appeared before in the array.
3. Add e2 to each codeword and place the sum below that codeword.
4. Repeat Steps 2 and 3 until all the possible error patterns have been accounted for. There will always be 2^n / 2^k = 2^(n-k) rows in the array.
Each row is called a coset. The leading error pattern is the coset leader.
Note that choosing any element in the coset as coset leader does not change the elements in the coset; it simply permutes them.
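A sketch of this construction for any small binary code given as a list of codeword tuples. It is demonstrated on the rate-1/3 repetition code, which is an assumed example rather than the slide's:

```python
from itertools import product

def standard_array(codewords, n):
    """Build the array row by row; each row is led by the lowest-weight
    error pattern that has not yet appeared (steps 1-4 above)."""
    add = lambda a, b: tuple(x ^ y for x, y in zip(a, b))
    used, rows = set(), []
    for e in sorted(product((0, 1), repeat=n), key=sum):  # by weight
        if e not in used:
            row = [add(e, c) for c in codewords]
            used.update(row)
            rows.append(row)
    return rows

for row in standard_array([(0,0,0), (1,1,1)], 3):   # 2^(n-k) = 4 rows
    print(*(''.join(map(str, w)) for w in row))
```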

Standard Array

v1 = 0       v2            v3            ...   v2^k
e2           e2 + v2       e2 + v3       ...   e2 + v2^k
e3           e3 + v2       e3 + v3       ...   e3 + v2^k
...
e2^(n-k)     e2^(n-k)+v2   e2^(n-k)+v3   ...   e2^(n-k)+v2^k

TH 3.3
No two n-tuples in the same row are identical.
Every n-tuple appears in one and only one row.

Standard Array Decoding is Minimum Distance Decoding
Let the received word r fall in subset Di and the l-th coset. Then r = el + vi.
r will be decoded as vi. We will show that r is closer to vi than to any other codeword:
d(r,vi) = w(r + vi) = w(el + vi + vi) = w(el)
d(r,vj) = w(r + vj) = w(el + vi + vj) = w(el + vs)
As el and el + vs are in the same coset, and el is selected to be of the minimum weight among patterns that did not appear before, then
w(el) <= w(el + vs)
Therefore d(r,vi) <= d(r,vj).

Standard Array Decoding (contd)
TH 3.4
Every (n,k) linear code is capable of correcting exactly 2^(n-k) error patterns, including the all-zero error pattern.
EX: The (7,4) Hamming code
# of correctable error patterns = 2^3 = 8
# of single-error patterns = 7
Therefore, all single-error patterns, and only single-error patterns, can be corrected. (Recall the Hamming bound, and the fact that Hamming codes are perfect.)

Standard Array Decoding (contd)
EX 3.6: The (6,3) code defined by the H matrix:

    [ 1 0 0 0 1 1 ]      v1 = v5 + v6
H = [ 0 1 0 1 0 1 ]      v2 = v4 + v6
    [ 0 0 1 1 1 0 ]      v3 = v4 + v5

Codewords: 000000 110001 101010 011011 011100 101101 110110 000111
dmin = 3

Standard Array Decoding (contd)
Can correct all single errors and one double-error pattern:
000000 110001 101010 011011 011100 101101 110110 000111
000001 110000 101011 011010 011101 101100 110111 000110
000010 110011 101000 011001 011110 101111 110100 000101
000100 110101 101110 011111 011000 101001 110010 000011
001000 111001 100010 010011 010100 100101 111110 001111
010000 100001 111010 001011 001100 111101 100110 010111
100000 010001 001010 111011 111100 001101 010110 100111
100100 010101 001110 111111 111000 001001 010010 100011

The Syndrome
Huge storage memory (and searching time) is required by standard array decoding.
Recall the syndrome:
s = rH^T = (v + e)H^T = eH^T
The syndrome depends only on the error pattern and not on the transmitted codeword.
TH 3.6
All the 2^k n-tuples of a coset have the same syndrome. The syndromes of different cosets are different.
(el + vi)H^T = elH^T   (1st part)
Let ej and el be leaders of two cosets, j < l, and assume they have the same syndrome. Then
ejH^T = elH^T  =>  (ej + el)H^T = 0.
This implies ej + el = vi, or el = ej + vi.
This means that el is in the j-th coset. Contradiction.

The Syndrome (contd)
There are 2^(n-k) cosets and 2^(n-k) syndromes (one-to-one correspondence).
Instead of forming the standard array we form a decoding table of the correctable error patterns and their syndromes:

Error Pattern    Syndrome
0000000          000
1000000          100
0100000          010
0010000          001
0001000          110
0000100          011
0000010          111
0000001          101

Syndrome Decoding
Decoding Procedure:
1. For the received vector r, compute the syndrome s = rH^T.
2. Using the table, identify the coset leader (error pattern) el.
3. Add el to r to recover the transmitted codeword v.
EX:
r = 1110101 ==> s = 001 ==> e = 0010000
Then, v = 1100101.
Syndrome decoding reduces the storage memory from n*2^n to 2^(n-k)(2n-k). It also reduces the searching time considerably.
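A sketch of the full table-lookup decoder for the (7,4) code, combining the decoding table above with steps 1-3 (numpy assumed):

```python
import numpy as np

H = np.array([[1,0,0,1,0,1,1],
              [0,1,0,1,1,1,0],
              [0,0,1,0,1,1,1]])
n = 7

# decoding table: syndrome -> coset leader (all-zero and single-error patterns)
table = {(0, 0, 0): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int); e[i] = 1
    table[tuple(e.dot(H.T) % 2)] = e

r = np.array([1,1,1,0,1,0,1])
s = tuple(r.dot(H.T) % 2)       # step 1: s = (0,0,1)
v = (r + table[s]) % 2          # steps 2-3: add the coset leader
print(v)                        # [1 1 0 0 1 0 1], as in the example above
```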

Hardware Implementation
Let r = r0 r1 r2 r3 r4 r5 r6 and s = s0 s1 s2.
From the H matrix:
s0 = r0 + r3 + r5 + r6
s1 = r1 + r3 + r4 + r5
s2 = r2 + r4 + r5 + r6
From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed.
A combinational logic circuit with s0, s1, s2 as inputs and e0, e1, e2, e3, e4, e5, e6 as outputs can be designed.

Decoding Circuit for the (7,4) HC
[Figure: syndrome-forming circuit followed by combinational error-locating logic and correction adders]

Error Detection Capability
A code with minimum distance dmin can detect all error patterns of weight dmin - 1 or less. It can detect many higher-weight error patterns as well, but not all.
In fact the number of undetectable error patterns is 2^k - 1 out of the 2^n - 1 nonzero error patterns.
DF: Ai = number of codewords of weight i.
{Ai; i = 0, 1, ..., n} = weight distribution of the code.
Note that A0 = 1; Aj = 0 for 0 < j < dmin.

Pu(E) = Sum (i = dmin to n) Ai p^i (1-p)^(n-i)

EX: Undetectable error probability of the (7,4) HC
A0 = A7 = 1; A1 = A2 = A5 = A6 = 0; A3 = A4 = 7
Pu(E) = 7p^3(1-p)^4 + 7p^4(1-p)^3 + p^7
For p = 10^-2:  Pu(E) is approximately 7x10^-6

Define the weight enumerator:
A(z) = Sum (i = 0 to n) Ai z^i
Then
Pu(E) = Sum (i = 1 to n) Ai p^i (1-p)^(n-i) = (1-p)^n Sum (i = 1 to n) Ai [p/(1-p)]^i
Let z = p/(1-p), and noting that A0 = 1:
Sum (i = 1 to n) Ai [p/(1-p)]^i = A(p/(1-p)) - 1
Pu(E) = (1-p)^n [ A(p/(1-p)) - 1 ]
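A quick check that the direct sum and the weight-enumerator form agree at p = 10^-2 (plain Python):

```python
p = 1e-2
A = {0: 1, 3: 7, 4: 7, 7: 1}     # weight distribution of the (7,4) code

# direct sum over the nonzero-weight codewords
pu_direct = sum(a * p**i * (1 - p)**(7 - i) for i, a in A.items() if i > 0)

# via the weight enumerator: Pu = (1-p)^n (A(z) - 1), z = p/(1-p)
z = p / (1 - p)
pu_enum = (1 - p)**7 * (sum(a * z**i for i, a in A.items()) - 1)

print(pu_direct, pu_enum)        # both ~6.8e-06, i.e. about 7e-06
```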

The probability of undetected error can also be found from the weight enumerator of the dual code:
Pu(E) = 2^-(n-k) B(1-2p) - (1-p)^n
where B(z) is the weight enumerator of the dual code.
When neither A(z) nor B(z) is available, Pu may be upper bounded by
Pu(E) <= 2^-(n-k) [1 - (1-p)^n]
For good channels (p -> 0), Pu(E) <= 2^-(n-k).

Error Correction Capability
An (n,k) code with minimum distance dmin can correct up to t errors, where
t = floor( (dmin - 1) / 2 )
It may be able to correct some higher-weight error patterns, but not all.
The total number of patterns it can correct is 2^(n-k).
If Sum (i = 0 to t) C(n,i) = 2^(n-k), the code is perfect.

P(E) = Sum (i = t+1 to n) C(n,i) p^i (1-p)^(n-i) = 1 - Sum (i = 0 to t) C(n,i) p^i (1-p)^(n-i)

Hamming Codes
Hamming codes constitute a family of single-error correcting codes defined by:
n = 2^m - 1, k = n - m, m >= 3
The minimum distance of the code is dmin = 3.
Construction rule of H:
H is an (n-k) x n matrix, i.e. it has 2^m - 1 columns of m-tuples.
The all-zero m-tuple cannot be a column of H (otherwise dmin = 1).
No two columns are identical (otherwise dmin = 2).
Therefore, the H matrix of a Hamming code of order m has as its columns all non-zero m-tuples.
The sum of any two columns is a column of H. Therefore the sum of some three columns is zero, i.e. dmin = 3.
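A sketch of this construction for any order m, plus single-error correction by matching the syndrome to a column of H (numpy assumed; the column ordering is an arbitrary choice):

```python
import numpy as np

def hamming_H(m):
    """H whose columns are all 2^m - 1 nonzero m-tuples."""
    cols = [[(j >> b) & 1 for b in range(m)] for j in range(1, 2**m)]
    return np.array(cols).T                  # shape: m x (2^m - 1)

def correct(r, H):
    s = r.dot(H.T) % 2                       # syndrome of the received word
    if s.any():                              # nonzero -> locate the column
        i = next(j for j in range(H.shape[1]) if (H[:, j] == s).all())
        r = r.copy(); r[i] ^= 1              # invert the offending bit
    return r

H = hamming_H(3)                             # the (7,4) Hamming code
r = np.zeros(7, dtype=int); r[3] = 1         # all-zero codeword, one error
print(correct(r, H))                         # error removed -> all zeros
```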

Systematic Hamming Codes
In systematic form:
H = [ Im  Q ]
The columns of Q are all m-tuples of weight >= 2.
Different arrangements of the columns of Q produce different codes, but with the same distance property.
Hamming codes are perfect codes:
Sum (i = 0 to t) C(n,i) = 2^(n-k)
Left side (t = 1): 1 + n.  Right side: 2^m = n + 1.

Decoding of Hamming Codes
Consider a single-error pattern e(i), where i is a number determining the position of the error.
s = e(i) H^T = Hi^T = the transpose of the i-th column of H.
Example: for e = 0100000,

[0 1 0 0 0 0 0] H^T = [0 1 0],

the transpose of the second column of H.

Decoding of Hamming Codes (contd)
That is, the (transpose of the) i-th column of H is the syndrome corresponding to a single error in the i-th position.
Decoding rule:
1. Compute the syndrome s = rH^T.
2. Locate the error (i.e. find i for which s^T = Hi).
3. Invert the i-th bit of r.

Weight Distribution of Hamming Codes
The weight enumerator of Hamming codes is:
A(z) = [ (1+z)^n + n(1-z)(1-z^2)^((n-1)/2) ] / (n+1)
The weight distribution could as well be obtained from the recursive equations:
A0 = 1, A1 = 0
(i+1)Ai+1 + Ai + (n-i+1)Ai-1 = C(n,i),   i = 1, 2, ..., n
The dual of a Hamming code is a (2^m - 1, m) linear code. Its weight enumerator is
B(z) = 1 + (2^m - 1) z^(2^(m-1))

History
In the late 1940s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability to not only detect errors, but correct them. His search for error-correcting codes led to the Hamming Codes, perfect 1-error correcting codes, and the extended Hamming Codes, 1-error correcting and 2-error detecting codes.

Uses
Hamming Codes are still widely used in computing, telecommunication, and other applications.
Hamming Codes are also applied in:
Data compression
Some solutions to the popular puzzle The Hat Game
Block Turbo Codes

A [7,4] binary Hamming Code
Let our codeword be (x1 x2 ... x7) in F2^7.
x3, x5, x6, x7 are chosen according to the message (perhaps the message itself is (x3 x5 x6 x7)).
x4 := x5 + x6 + x7 (mod 2)
x2 := x3 + x6 + x7
x1 := x3 + x5 + x7

[7,4] binary Hamming codewords
[Table: the 16 codewords of the [7,4] binary Hamming code]

A [7,4] binary Hamming Code
Let a = x4 + x5 + x6 + x7 (= 1 iff one of these bits is in error)
Let b = x2 + x3 + x6 + x7
Let c = x1 + x3 + x5 + x7
If there is an error (assuming at most one) then abc will be the binary representation of the subscript of the offending bit.
If (y1 y2 ... y7) is received and abc != 000, then we assume the bit with subscript abc is in error and switch it. If abc = 000, we assume there were no errors (so if there are three or more errors we may recover the wrong codeword).
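A sketch of this rule in plain Python (1-based indexing as on the slide; a, b, c are exactly the three sums above):

```python
def decode_74(y):
    """y = [y1,...,y7]; abc read as a binary number locates the error."""
    x = [None] + list(y)                 # pad so x[1]..x[7] match the slide
    a = (x[4] + x[5] + x[6] + x[7]) % 2
    b = (x[2] + x[3] + x[6] + x[7]) % 2
    c = (x[1] + x[3] + x[5] + x[7]) % 2
    pos = 4 * a + 2 * b + c              # abc = 000 means no error
    if pos:
        x[pos] ^= 1                      # switch the offending bit
    return x[1:]

print(decode_74([1, 0, 1, 0, 0, 1, 0]))  # -> [1, 0, 1, 1, 0, 1, 0]
```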

Definition: Generator and Check Matrices
For an [n, k] linear code, the generator matrix is a k x n matrix for which the row space is the given code.
A check matrix for an [n, k] code is a generator matrix for the dual code. In other words, an (n-k) x n matrix M for which Mx^T = 0 for all x in the code.

A Construction for binary Hamming Codes
For a given r, form an r x (2^r - 1) matrix M, the columns of which are the binary representations (r bits long) of 1, ..., 2^r - 1.
The linear code for which this is the check matrix is a [2^r - 1, 2^r - 1 - r] binary Hamming Code = {x = (x1 x2 ... xn) : Mx^T = 0}.

Example Check Matrix
A check matrix for a [7,4] binary Hamming Code:
[Figure: the 3 x 7 matrix whose columns are the binary representations of 1 through 7]

Syndrome Decoding
Let y = (y1 y2 ... yn) be a received codeword.
The syndrome of y is S := My^T. If S = 0 then there was no error. If S != 0 then S is the binary representation of some integer 1 <= t <= n = 2^r - 1, and the intended codeword is recovered by flipping the t-th bit of y.

Example Using L3
Suppose (1 0 1 0 0 1 0) is received; the syndrome works out to 100.
100 is 4 in binary, so the intended codeword was (1 0 1 1 0 1 0).

Extended [8,4] binary Hamming Code
As with the [7,4] binary Hamming Code:
x3, x5, x6, x7 are chosen according to the message.
x4 := x5 + x6 + x7
x2 := x3 + x6 + x7
x1 := x3 + x5 + x7
Add a new bit x0 such that
x0 = x1 + x2 + x3 + x4 + x5 + x6 + x7, i.e., the new bit makes the sum of all the bits zero. x0 is called a parity check.

Extended binary Hamming Code
The minimum distance between any two codewords is now 4, so an extended Hamming Code is a 1-error correcting and 2-error detecting code.
The general construction of a [2^r, 2^r - 1 - r] extended code from a [2^r - 1, 2^r - 1 - r] binary Hamming Code is the same: add a parity check bit.

Check Matrix Construction of Extended Hamming Code
The check matrix of an extended Hamming Code can be constructed from the check matrix of a Hamming code by adding a zero column on the left and a row of 1s to the bottom.

q-ary Hamming Codes
The binary construction generalizes to Hamming Codes over an alphabet A = {0, ..., q-1}, q >= 2.
For a given r, form an r x (q^r - 1)/(q - 1) matrix M over A, any two columns of which are linearly independent.
M determines a [(q^r - 1)/(q - 1), (q^r - 1)/(q - 1) - r] (= [n,k]) q-ary Hamming Code for which M is the check matrix.

Example: ternary [4, 2] Hamming
Two check matrices for some [4, 2] ternary Hamming Codes:
[Figure: two 2 x 4 check matrices over GF(3)]

Syndrome decoding: the q-ary case
The syndrome of received word y, S := My^T, will be a multiple of one of the columns of M, say S = a*mi, a a scalar, mi the i-th column of M. Assume an error vector of weight 1 was introduced: y = x + (0 ... 0 a 0 ... 0), with a in the i-th spot.

Example: q-ary Syndrome
[4,2] ternary code with the check matrix above; word (0 1 1 1) received.
The syndrome is twice the third column of M, so decode (0 1 1 1) as
(0 1 1 1) - (0 0 2 0) = (0 1 2 1).

Perfect 1-error correcting
Hamming Codes are perfect 1-error correcting codes. That is, any received word with at most one error will be decoded correctly, and the code has the smallest possible size of any code that does this.
For a given r, any perfect 1-error correcting linear code of length n = 2^r - 1 and dimension n - r is a Hamming Code.

Proof: 1-error correcting
A code will be 1-error correcting if
spheres of radius 1 centered at codewords cover the codespace, and
the minimum distance between any two codewords is >= 3, since then spheres of radius 1 centered at codewords will be disjoint.
Suppose codewords x, y differ by 1 bit. Then x - y is a codeword of weight 1, and M(x-y) != 0. Contradiction. If x, y differ by 2 bits, then M(x-y) is the difference of two multiples of columns of M. No two columns of M are linearly dependent, so M(x-y) != 0, another contradiction. Thus the minimum distance is at least 3.

Perfect
A sphere of radius r centered at x is
S_r(x) = {y in A^n : dH(x,y) <= r}, where A is the alphabet, Fq, and dH is the Hamming distance.
A sphere of radius e contains Sum (i = 0 to e) C(n,i)(q-1)^i words.
If C is an e-error correcting code, the spheres of radius e about its codewords are disjoint, so
|C| * Sum (i = 0 to e) C(n,i)(q-1)^i <= q^n.

Perfect
This last inequality is called the sphere packing bound for an e-error correcting code C of length n over Fq:
|C| * Sum (i = 0 to e) C(n,i)(q-1)^i <= q^n
where n is the length of the code, and in this case e = 1.
A code for which equality holds is called perfect.

Proof: Perfect
The right side of this, for e = 1, is q^n / (1 + n(q-1)).
The left side is q^(n-r), where n = (q^r - 1)/(q - 1).
q^(n-r) (1 + n(q-1)) = q^(n-r) (1 + (q^r - 1)) = q^n.

Applications
Data compression.
Turbo Codes
The Hat Game

Data Compression
Hamming Codes can be used for a form of lossy compression.
If n = 2^r - 1 for some r, then any n-tuple of bits x is within distance at most 1 from a Hamming codeword c. Let G be a generator matrix for the Hamming Code, and mG = c.
For compression, store x as m. For decompression, decode m as c. This saves r bits of space but corrupts (at most) 1 bit.

The Hat Game
A group of n players enter a room, whereupon they each receive a hat. Each player can see everyone else's hat but not his own.
The players must each simultaneously guess a hat color, or pass.
The group loses if any player guesses the wrong hat color or if every player passes.
Players are not necessarily anonymous; they can be numbered.
Assignment of hats is assumed to be random.
The players can meet beforehand to devise a strategy.
The goal is to devise the strategy that gives the highest probability of winning.

EE 551/451, Fall, 2007
Communication Systems
Zhu Han
Department of Electrical and Computer Engineering
Class 25, Dec. 6th, 2007

Outline
Project 2
ARQ Review
Linear Code: Hamming Code Revisit, Reed-Muller code
Cyclic Code: CRC Code, BCH Code, RS Code

ARQ, FEC, HEC
ARQ: an error detection code; tx -> rx, with ACK/NACK feedback from receiver to transmitter.
Forward Error Correction (error correcting coding): an error correction code; tx -> rx, no feedback.
Hybrid Error Correction: an error detection/correction code; tx -> rx, with ACK/NACK feedback.

Hamming Code
H(n,k): k information bit length, n overall code length
n = 2^m - 1, k = 2^m - m - 1:
H(7,4), rate 4/7; H(15,11), rate 11/15; H(31,26), rate 26/31
H(7,4): distance d = 3, correction ability 1, detection ability 2.
Remember that it is good to have larger distance and rate.
Larger n means larger delay, but usually a better code.

Hamming Code Example
H(7,4)
Generator matrix G: begins with the 4-by-4 identity matrix
Message information vector p
Transmission vector x
Received vector r and error vector e
Parity check matrix H

Error Correction
If there is no error, the syndrome vector z = zeros.
If there is one error at location 2, the new syndrome vector z corresponds to the second column of H. Thus, an error has been detected in position 2, and can be corrected.

Exercise
Same problem as the previous slide, but p = (1001) and the error occurs at location 4 instead.
Pause for 5 minutes.
Might be 10 points in the finals.

Important Hamming Codes
Hamming (7,4,3)-code. It has 16 codewords of length 7. It can be used to send 2^7 = 128 messages and can be used to correct 1 error.
Golay (23,12,7)-code. It has 4 096 codewords. It can be used to transmit 2^23 = 8 388 608 messages and can correct 3 errors.
Quadratic residue (47,24,11)-code. It has 16 777 216 codewords and can be used to transmit 2^47 = 140 737 488 355 328 messages and correct 5 errors.

Reed-Muller code

Cyclic code
Cyclic codes are of interest and importance because:
They possess a rich algebraic structure that can be utilized in a variety of ways.
They have extremely concise specifications.
They can be efficiently implemented using simple shift registers.
Many practically important codes are cyclic.
In practice, cyclic codes are often used for error detection (Cyclic Redundancy Check, CRC):
Used for packet networks.
When an error is detected by the receiver, it requests retransmission (ARQ).

BASIC DEFINITION of Cyclic Code

FREQUENCY of CYCLIC CODES

EXAMPLE of a CYCLIC CODE

POLYNOMIALS over GF(q)

EXAMPLE

Cyclic Code Encoder

Cyclic Code Decoder
Divider: similar structure to the multiplier used in the encoder.

Cyclic Redundancy Checks (CRC)

Example of CRC

Checking for errors

Capability of CRC
An error E(x) is undetectable if it is divisible by G(x). The following can be detected:
All single-bit errors, if G(x) has more than one nonzero term.
All double-bit errors, if G(x) has a factor with three terms.
Any odd number of errors, if G(x) contains the factor x + 1.
Any burst of length less than or equal to n - k.
A fraction of error bursts of length n - k + 1; the fraction is 1 - 2^-(n-k-1).
A fraction of error bursts of length greater than n - k + 1; the fraction is 1 - 2^-(n-k).
Powerful error detection, at more computational complexity compared to the Internet checksum.
Page 652
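A sketch of CRC generation as polynomial long division over GF(2), with bit strings held in Python integers. The data word and the generator G(x) = x^3 + x + 1 below are illustrative choices, not values from the slide:

```python
def poly_mod(val: int, gen: int) -> int:
    """Remainder of val(x) divided by gen(x) over GF(2); XOR = subtraction."""
    gb = gen.bit_length()
    for i in range(val.bit_length() - 1, gb - 2, -1):
        if val & (1 << i):
            val ^= gen << (i - gb + 1)
    return val

data, gen = 0b1101011011, 0b1011       # G(x) = x^3 + x + 1 (3 check bits)
r = poly_mod(data << 3, gen)           # divide data(x) * x^(n-k)
frame = (data << 3) | r                # transmit data followed by the CRC
print(bin(r), poly_mod(frame, gen))    # 0b100, then 0 -> frame checks clean
```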

BCH Code
Bose, Ray-Chaudhuri, Hocquenghem
Multiple error correcting ability
Ease of encoding and decoding
(Page 653)
Most powerful cyclic codes:
For any positive integers m and t < 2^(m-1), there exists a t-error correcting (n,k) code with n = 2^m - 1 and n - k <= mt.
Industry standards:
(511, 493) BCH code in ITU-T Rec. H.261, a video codec for audiovisual services at p x 64 kbit/s, a video coding standard used for video conferencing and video phone.
(40, 32) BCH code in ATM (Asynchronous Transfer Mode)

BCH Performance
[Figure: error-rate performance curves for BCH codes]

Reed-Solomon Codes
An important subclass of non-binary BCH (Page 654)
Wide range of applications:
Storage devices (tape, CD, DVD)
Wireless or mobile communication
Satellite communication
Digital television / Digital Video Broadcast (DVB)
High-speed modems (ADSL, xDSL)

Examples
10.2, page 639
10.3, page 648
10.4, page 651
Might be 4 points in the final.

1971: Mariner 9
Mariner 9 used a [32,6,16] Reed-Muller code to transmit its grey images of Mars.
camera rate: 100,000 bits/second
transmission speed: 16,000 bits/second

1979+: Voyagers I & II
Voyagers I & II used a [24,12,8] Golay code to send their color images of Jupiter and Saturn.
Voyager 2 traveled further, to Uranus and Neptune. Because of the higher error rate it switched to the more robust Reed-Solomon code.

Modern Codes
More recently turbo codes were invented, which are used in 3G cell phones, (future) satellites, and in the Cassini-Huygens space probe [1997].
Other modern codes: Fountain, Raptor, LT, online codes.
(Next, next class)

Error Correcting Codes
The imperfectness of a given code is the difference between the code's required Eb/N0 to attain a given word error probability (Pw) and the minimum possible Eb/N0 required to attain the same Pw, as implied by the sphere-packing bound for codes with the same block size k and code rate r.

Radio System Propagation

Satellite Communications
Large communication area: any two places within the coverage of the satellite's radio transmission can communicate with each other.
Seldom affected by land disasters (high reliability).
A circuit can be started as soon as an earth station is established (prompt circuit starting).
Transmissions can be received at many places simultaneously, realizing broadcast and multi-access communication economically (multi-access feature).
Very flexible circuit installment; can disperse over-centralized traffic at any time.
One channel can be used in different directions or areas (multi-access connecting).

GPS
Just a timer, 24 satellites
Position calculation

IV054
CHAPTER 3: Cyclic and convolution codes
Cyclic codes are of interest and importance because:
They possess a rich algebraic structure that can be utilized in a variety of ways.
They have extremely concise specifications.
They can be efficiently implemented using simple shift registers.
Many practically important codes are cyclic.
Convolution codes allow encoding streams of data (bits).

IV054
BASIC DEFINITION AND EXAMPLES
Definition: A code C is cyclic if
(i) C is a linear code;
(ii) any cyclic shift of a codeword is also a codeword, i.e. whenever a0 ... an-1 is in C, then also an-1 a0 ... an-2 is in C.

Example
(i) Code C = {000, 101, 011, 110} is cyclic.
(ii) Hamming code Ham(3, 2) with the generator matrix

    [ 1 0 0 0 0 1 1 ]
G = [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 ]

is equivalent to a cyclic code.
(iii) The binary linear code {0000, 1001, 0110, 1111} is not cyclic, but it is equivalent to a cyclic code.
(iv) Is Hamming code Ham(2, 3) with the generator matrix

[ 1 0 1 1 ]
[ 0 1 1 2 ]

(a) cyclic?
(b) equivalent to a cyclic code?

IV054
FREQUENCY of CYCLIC CODES
Compared with linear codes, cyclic codes are quite scarce. For example, there are 11 811 linear (7,3) binary codes, but only two of them are cyclic.
Trivial cyclic codes. For any field F and any integer n >= 3 there are always the following cyclic codes of length n over F:
No-information code - code consisting of just one all-zero codeword.
Repetition code - code consisting of codewords (a, a, ..., a) for a in F.
Single-parity-check code - code consisting of all codewords with parity 0.
No-parity code - code consisting of all codewords of length n.
For some cases, for example for n = 19 and F = GF(2), the above four trivial cyclic codes are the only cyclic codes.

IV054
EXAMPLE of a CYCLIC CODE
The code with the generator matrix

    [ 1 0 1 1 1 0 0 ]
G = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

has codewords
c1 = 1011100    c2 = 0101110    c3 = 0010111
c1 + c2 = 1110010    c1 + c3 = 1001011    c2 + c3 = 0111001
c1 + c2 + c3 = 1100101
and it is cyclic because the right shifts have the following impacts:
c1 -> c2,  c2 -> c3,  c3 -> c1 + c3
c1 + c2 -> c2 + c3,  c1 + c3 -> c1 + c2 + c3,  c2 + c3 -> c1
c1 + c2 + c3 -> c1 + c2

IV054
POLYNOMIALS over GF(q)
Fq[x] denotes the set of all polynomials over GF(q).
deg(f(x)) = the largest m such that x^m has a non-zero coefficient in f(x).
Multiplication of polynomials: If f(x), g(x) are in Fq[x], then
deg(f(x)g(x)) = deg(f(x)) + deg(g(x)).
Division of polynomials: For every pair of polynomials a(x), b(x) != 0 in Fq[x] there exists a unique pair of polynomials q(x), r(x) in Fq[x] such that
a(x) = q(x)b(x) + r(x), deg(r(x)) < deg(b(x)).
Example: Divide x^3 + x + 1 by x^2 + x + 1 in F2[x].
Definition: Let f(x) be a fixed polynomial in Fq[x]. Two polynomials g(x), h(x) are said to be congruent modulo f(x), notation
g(x) = h(x) (mod f(x)),
if g(x) - h(x) is divisible by f(x).

IV054
RING of POLYNOMIALS
The set of polynomials in Fq[x] of degree less than deg(f(x)), with addition and multiplication modulo f(x), forms a ring denoted Fq[x]/f(x).
Example: Calculate (x + 1)^2 in F2[x]/(x^2 + x + 1). It holds
(x + 1)^2 = x^2 + 2x + 1 = x^2 + 1 = x (mod x^2 + x + 1).
How many elements does Fq[x]/f(x) have?
Result: |Fq[x]/f(x)| = q^deg(f(x)).
Example: Addition and multiplication in F2[x]/(x^2 + x + 1)
[Table: the addition and multiplication tables over the four elements 0, 1, x, 1+x; not recoverable from the extracted text]
Definition: A polynomial f(x) in Fq[x] is said to be reducible if f(x) = a(x)b(x), where a(x), b(x) are in Fq[x] and
deg(a(x)) < deg(f(x)), deg(b(x)) < deg(f(x)).
If f(x) is not reducible, it is irreducible in Fq[x].
Theorem: The ring Fq[x]/f(x) is a field if f(x) is irreducible in Fq[x].

IV054
FIELD Rn, Rn = Fq[x]/(x^n - 1)
Computation modulo x^n - 1:
Since x^n = 1 (mod x^n - 1) we can compute f(x) mod (x^n - 1) as follows:
in f(x) replace x^n by 1, x^(n+1) by x, x^(n+2) by x^2, x^(n+3) by x^3, ...
Identification of words with polynomials:
a0 a1 ... an-1  <->  a0 + a1 x + a2 x^2 + ... + an-1 x^(n-1)
Multiplication by x in Rn corresponds to a single cyclic shift:
x (a0 + a1 x + ... + an-1 x^(n-1)) = an-1 + a0 x + a1 x^2 + ... + an-2 x^(n-1)

IV054 Algebraic characterization of cyclic codes
Theorem: A code C is cyclic if C satisfies two conditions:
(i) a(x), b(x) in C  =>  a(x) + b(x) in C
(ii) a(x) in C, r(x) in Rn  =>  r(x)a(x) in C
Proof
(1) Let C be a cyclic code. C is linear => (i) holds.
(ii) Let a(x) be in C, r(x) = r0 + r1x + ... + rn-1x^(n-1). Then
r(x)a(x) = r0a(x) + r1xa(x) + ... + rn-1x^(n-1)a(x)
is in C by (i), because the summands are cyclic shifts of a(x).
(2) Let (i) and (ii) hold.
Taking r(x) to be a scalar, the conditions imply linearity of C.
Taking r(x) = x, the conditions imply cyclicity of C.

IV054
CONSTRUCTION of CYCLIC CODES
Notation: If f(x) is in Rn, then
<f(x)> = {r(x)f(x) | r(x) in Rn}
(multiplication is modulo x^n - 1).
Theorem: For any f(x) in Rn, the set <f(x)> is a cyclic code (generated by f).
Proof: We check conditions (i) and (ii) of the previous theorem.
(i) If a(x)f(x) and b(x)f(x) are in <f(x)>, then
a(x)f(x) + b(x)f(x) = (a(x) + b(x))f(x) is in <f(x)>.
(ii) If a(x)f(x) is in <f(x)> and r(x) in Rn, then
r(x)(a(x)f(x)) = (r(x)a(x))f(x) is in <f(x)>.
Example: C = <1 + x^2>, n = 3, q = 2.
We have to compute r(x)(1 + x^2) for all r(x) in R3.
R3 = {0, 1, x, 1 + x, x^2, 1 + x^2, x + x^2, 1 + x + x^2}.
Result:
C = {0, 1 + x, 1 + x^2, x + x^2} = {000, 011, 101, 110}

IV054
Characterization theorem for cyclic codes
We show that all cyclic codes C have the form C = <f(x)> for some f(x) in Rn.
Theorem: Let C be a non-zero cyclic code in Rn. Then
there exists a unique monic polynomial g(x) of the smallest degree such that
C = <g(x)>, and
g(x) is a factor of x^n - 1.
Proof
(i) Suppose g(x) and h(x) are two monic polynomials in C of the smallest degree. Then the polynomial g(x) - h(x) is in C, has a smaller degree, and a multiplication by a scalar makes out of it a monic polynomial. If g(x) != h(x) we get a contradiction.
(ii) Suppose a(x) is in C. Then
a(x) = q(x)g(x) + r(x)   (deg r(x) < deg g(x))
and
r(x) = a(x) - q(x)g(x) is in C.
By minimality r(x) = 0 and therefore a(x) is in <g(x)>.

IV054
Characterization theorem for cyclic codes (contd)
(iii) Clearly,
x^n - 1 = q(x)g(x) + r(x) with deg r(x) < deg g(x),
and therefore
r(x) = -q(x)g(x) (mod x^n - 1), so
r(x) in C  =>  r(x) = 0  =>  g(x) is a factor of x^n - 1.

GENERATOR POLYNOMIALS
Definition: If for a cyclic code C it holds
C = <g(x)>,
then g is called the generator polynomial for the code C.

IV054
HOW TO DESIGN CYCLIC CODES?
The last claim of the previous theorem gives a recipe to get all cyclic codes of a given length n. Indeed, all we need to do is to find all factors of x^n - 1.
Problem: Find all binary cyclic codes of length 3.
Solution: Since
x^3 - 1 = (x + 1)(x^2 + x + 1)
(both factors are irreducible in GF(2)), we have the following generator polynomials and codes:

Generator polynomial    Code in R3                       Code in V(3,2)
1                       R3                               V(3,2)
x + 1                   {0, 1 + x, x + x^2, 1 + x^2}     {000, 110, 011, 101}
x^2 + x + 1             {0, 1 + x + x^2}                 {000, 111}
x^3 - 1 (= 0)           {0}                              {000}
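A sketch that generates <g(x)> by multiplying g(x) by every r(x) in Rn modulo x^n - 1, reproducing the table's second row (plain Python; polynomials as length-n bit tuples, lowest degree first):

```python
from itertools import product

def poly_mul_mod(a, b, n):
    """a(x) * b(x) mod (x^n - 1) over GF(2): x^n wraps around to 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] ^= ai & bj
    return tuple(out)

def cyclic_code(g, n):
    """<g(x)> = { r(x) g(x) mod (x^n - 1) : r(x) in Rn }."""
    return {poly_mul_mod(r, g, n) for r in product((0, 1), repeat=n)}

print(sorted(cyclic_code((1, 1, 0), 3)))   # g(x) = 1 + x
# -> [(0,0,0), (0,1,1), (1,0,1), (1,1,0)], i.e. {000, 011, 101, 110}
```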

IV054 Design of generator matrices for cyclic codes
Theorem: Suppose C is a cyclic code of codewords of length n with the generator polynomial
g(x) = g0 + g1x + ... + grx^r.
Then dim(C) = n - r and a generator matrix G1 for C is

     [ g0 g1 g2 ... gr 0  0  ... 0  ]
     [ 0  g0 g1 g2 ... gr 0  ... 0  ]
G1 = [ 0  0  g0 g1 g2 ... gr ... 0  ]
     [ ...                          ]
     [ 0  0  ... 0  g0 g1 g2 ... gr ]

Proof
(i) All rows of G1 are linearly independent.
(ii) The n - r rows of G1 represent the codewords
g(x), xg(x), x^2g(x), ..., x^(n-r-1)g(x)    (*)
(iii) It remains to show that every codeword in C can be expressed as a linear combination of vectors from (*).
Indeed, if a(x) is in C, then
a(x) = q(x)g(x).
Since deg a(x) < n we have deg q(x) < n - r. Hence
q(x)g(x) = (q0 + q1x + ... + qn-r-1 x^(n-r-1)) g(x)
         = q0 g(x) + q1 xg(x) + ... + qn-r-1 x^(n-r-1) g(x).

IV054
EXAMPLE
The task is to determine all ternary codes of length 4 and generators for them.
Factorization of x^4 - 1 over GF(3) has the form
x^4 - 1 = (x - 1)(x^3 + x^2 + x + 1) = (x - 1)(x + 1)(x^2 + 1)
Therefore there are 2^3 = 8 divisors of x^4 - 1, and each generates a cyclic code.

Generator polynomial                   Generator matrix
1                                      I4
x - 1                                  [ -1 1 0 0 ; 0 -1 1 0 ; 0 0 -1 1 ]
x + 1                                  [ 1 1 0 0 ; 0 1 1 0 ; 0 0 1 1 ]
x^2 + 1                                [ 1 0 1 0 ; 0 1 0 1 ]
(x - 1)(x + 1) = x^2 - 1               [ -1 0 1 0 ; 0 -1 0 1 ]
(x - 1)(x^2 + 1) = x^3 - x^2 + x - 1   [ -1 1 -1 1 ]
(x + 1)(x^2 + 1)                       [ 1 1 1 1 ]
x^4 - 1 = 0                            [ 0 0 0 0 ]

IV054
Check polynomials and parity check matrices for cyclic codes
Let C be a cyclic [n,k]-code with the generator polynomial g(x) (of degree n - k). By the last theorem, g(x) is a factor of x^n - 1. Hence
x^n - 1 = g(x)h(x)
for some h(x) of degree k (where h(x) is called the check polynomial of C).
Theorem: Let C be a cyclic code in Rn with a generator polynomial g(x) and a check polynomial h(x). Then c(x) in Rn is a codeword of C if and only if c(x)h(x) = 0 (this and the next congruences are modulo x^n - 1).
Proof: Note that g(x)h(x) = x^n - 1 = 0.
(i) c(x) in C  =>  c(x) = a(x)g(x) for some a(x) in Rn  =>
c(x)h(x) = a(x)g(x)h(x) = a(x)*0 = 0.
(ii) c(x)h(x) = 0:
c(x) = q(x)g(x) + r(x), deg r(x) < n - k = deg g(x)
c(x)h(x) = 0  =>  r(x)h(x) = 0 (mod x^n - 1)
Since deg(r(x)h(x)) < n - k + k = n, we have r(x)h(x) = 0 in F[x], and therefore
r(x) = 0  =>  c(x) = q(x)g(x) in C.

IV054
POLYNOMIAL REPRESENTATION of DUAL CODES
Since dim(<h(x)>) = n - k = dim(C-perp) we might easily be fooled into thinking that the check polynomial h(x) of the code C generates the dual code C-perp.
Reality is "slightly different":
Theorem: Suppose C is a cyclic [n,k]-code with the check polynomial
h(x) = h0 + h1x + ... + hkx^k.
Then
(i) a parity-check matrix for C is

    [ hk hk-1 ... h0 0  ... 0  ]
H = [ 0  hk  ... h1 h0 ... 0  ]
    [ ...                     ]
    [ 0  0   ... 0  hk ... h0 ]

(ii) C-perp is the cyclic code generated by the polynomial
h~(x) = hk + hk-1 x + ... + h0 x^k,
i.e. the reciprocal polynomial of h(x).

IV054 POLYNOMIAL REPRESENTATION of DUAL CODES (contd)
Proof: A polynomial c(x) = c0 + c1x + ... + cn-1x^(n-1) represents a codeword of C if c(x)h(x) = 0. For c(x)h(x) to be 0, the coefficients at x^k, ..., x^(n-1) must be zero, i.e.
c0hk + c1hk-1 + ... + ckh0 = 0
c1hk + c2hk-1 + ... + ck+1h0 = 0
...
cn-k-1hk + cn-khk-1 + ... + cn-1h0 = 0
Therefore, any codeword c0 c1 ... cn-1 in C is orthogonal to the word hk hk-1 ... h0 0 0 ... 0 and to its cyclic shifts.
Rows of the matrix H are therefore in C-perp. Moreover, since hk = 1, these row-vectors are linearly independent. Their number is n - k = dim(C-perp). Hence H is a generator matrix for C-perp, i.e. a parity-check matrix for C.
In order to show that C-perp is a cyclic code generated by the polynomial
h~(x) = hk + hk-1x + ... + h0x^k,
it is sufficient to show that h~(x) is a factor of x^n - 1.
Observe that h~(x) = x^k h(x^-1), and since
h(x^-1) g(x^-1) = (x^-1)^n - 1,
we have that
x^k h(x^-1) x^(n-k) g(x^-1) = x^n ((x^-1)^n - 1) = 1 - x^n,
and therefore h~(x) is indeed a factor of x^n - 1.

IV054
ENCODING with CYCLIC CODES I
Encoding using a cyclic code can be done by a multiplication of two polynomials - a message polynomial and the generating polynomial for the cyclic code.
Let C be an (n,k)-code over a field F with the generator polynomial
g(x) = g0 + g1x + ... + grx^r of degree r = n - k.
If a message vector m is represented by a polynomial m(x) of degree less than k and m is encoded by
m  =>  c = mG1,
then the following relation between m(x) and c(x) holds:
c(x) = m(x)g(x).
Such an encoding can be realized by the shift register shown in the figure below, where the input is the k-bit message to be encoded followed by n - k 0's, and the output will be the encoded message.
[Figure: Shift-register encodings of cyclic codes. Small circles represent multiplication by the corresponding constant, nodes represent modular addition, squares are delay elements.]

IV054
ENCODING of CYCLIC CODES II
Another method for encoding of cyclic codes is based on the following (so-called systematic) representation of the generator and parity-check matrices for cyclic codes.
Theorem: Let C be an (n,k)-code with generator polynomial g(x) and r = n - k. For i = 0, 1, ..., k - 1, let G2,i be the length-n vector whose polynomial is G2,i(x) = x^(r+i) - (x^(r+i) mod g(x)). Then the k x n matrix G2 with row vectors G2,i is a generator matrix for C.
Moreover, if H2,j is the length-n vector corresponding to the polynomial H2,j(x) = x^j mod g(x), then the r x n matrix H2 with row vectors H2,j is a parity check matrix for C. If the message vector m is encoded by
m  =>  c = mG2,
then the relation between the corresponding polynomials is
c(x) = x^r m(x) - [x^r m(x)] mod g(x).
On this basis one can construct the following shift-register encoder for the case of a systematic representation of the generator for a cyclic code:
[Figure: Shift-register encoder for systematic representation of cyclic codes. Switch A is closed for the first k ticks and open for the last r ticks; switch B is down for the first k ticks and up for the last r ticks.]

IV054
Hamming codes as cyclic codes
Definition (again): Let r be a positive integer and let H be an r x (2^r - 1) matrix whose columns are the distinct non-zero vectors of V(r,2). Then the code having H as its parity-check matrix is called the binary Hamming code, denoted Ham(r,2).
It can be shown that binary Hamming codes are equivalent to cyclic codes.
Theorem: The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
Definition: If p(x) is an irreducible polynomial of degree r such that x is a primitive element of the field F[x]/p(x), then p(x) is called a primitive polynomial.
Theorem: If p(x) is a primitive polynomial over GF(2) of degree r, then the cyclic code <p(x)> is the code Ham(r,2).

IV054
Hamming codes as cyclic codes (contd)
Example: The polynomial x^3 + x + 1 is irreducible over GF(2) and x is a primitive element of the field F2[x]/(x^3 + x + 1):
F2[x]/(x^3 + x + 1) = {0, 1, x, x^2, x^3 = x + 1, x^4 = x^2 + x, x^5 = x^2 + x + 1, x^6 = x^2 + 1}
The parity-check matrix for a cyclic version of Ham(3,2):

    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

IV054
PROOF of THEOREM
The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
It is known from algebra that if p(x) is an irreducible polynomial of degree r, then the ring F2[x]/p(x) is a field of order 2^r.
In addition, every finite field has a primitive element. Therefore, there exists an element a of F2[x]/p(x) such that
F2[x]/p(x) = {0, 1, a, a^2, ..., a^(2^r - 2)}.
Let us identify an element a0 + a1x + ... + ar-1x^(r-1) of F2[x]/p(x) with the column vector
(a0, a1, ..., ar-1)^T
and consider the binary r x (2^r - 1) matrix
H = [ 1 a a^2 ... a^(2^r - 2) ].
Let now C be the binary linear code having H as a parity check matrix.
Since the columns of H are all distinct non-zero vectors of V(r,2), C = Ham(r,2).
Putting n = 2^r - 1 we get
C = {f0 f1 ... fn-1 in V(n,2) | f0 + f1 a + ... + fn-1 a^(n-1) = 0}    (2)
  = {f(x) in Rn | f(a) = 0 in F2[x]/p(x)}    (3)
If f(x) is in C and r(x) in Rn, then r(x)f(x) is in C because
r(a)f(a) = r(a)*0 = 0,
and therefore, by one of the previous theorems, this version of Ham(r,2) is cyclic.

IV054
BCH codes and Reed-Solomon codes
To the most important cyclic codes for applications belong BCH codes and Reed-Solomon codes.
Definition: A polynomial p is said to be minimal for a complex number x in Zq if p(x) = 0 and p is irreducible over Zq.
Definition: A cyclic code of codewords of length n over Zq, q = p^r, p a prime, is called a BCH code(1) of distance d if its generator g(x) is the least common multiple of the minimal polynomials for
w^l, w^(l+1), ..., w^(l+d-2)
for some l, where w is the primitive n-th root of unity.
If n = q^m - 1 for some m, then the BCH code is called primitive.
Definition: A Reed-Solomon code is a primitive BCH code with n = q - 1.
Properties:
Reed-Solomon codes are self-dual.

(1) BCH stands for Bose and Ray-Chaudhuri and Hocquenghem, who discovered these codes.

IV054
CONVOLUTION CODES
Very often it is important to encode an infinite stream or several streams of data, say bits.
Convolution codes, with simple encoding and decoding, are quite a simple generalization of linear codes and have encodings as cyclic codes.
An (n,k) convolution code (CC) is defined by a k x n generator matrix, entries of which are polynomials over F2.
For example,
G1 = [ x^2 + 1, x^2 + x + 1 ]
is the generator matrix for a (2,1) convolution code CC1, and

G2 = [ 1  0  x + 1 ]
     [ 0  1  x     ]

is the generator matrix for a (3,2) convolution code CC2.

IV054
ENCODING of FINITE POLYNOMIALS
An (n,k) convolution code with a k x n generator matrix G can be used to encode a k-tuple of plain-polynomials (polynomial input information)
I = (I0(x), I1(x), ..., Ik-1(x))
to get an n-tuple of crypto-polynomials
C = (C0(x), C1(x), ..., Cn-1(x))
as follows:
C = I . G

EXAMPLES
EXAMPLE 1
(x^3 + x + 1).G1 = (x^3 + x + 1).(x^2 + 1, x^2 + x + 1)
                 = (x^5 + x^2 + x + 1, x^5 + x^4 + 1)
EXAMPLE 2
(x^2 + x, x + 1).G2 = (x^2 + x, x + 1, (x^2 + x)(x + 1) + (x + 1)x) = (x^2 + x, x + 1, x^3 + x^2)
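A sketch of C = I.G for Example 1, with polynomials as low-to-high coefficient lists over GF(2) (plain Python):

```python
def pmul(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

G1 = ([1, 0, 1], [1, 1, 1])      # (x^2 + 1, x^2 + x + 1)
I = [1, 1, 0, 1]                 # I(x) = x^3 + x + 1
C = [pmul(I, g) for g in G1]
print(C)   # [[1,1,1,0,0,1], [1,0,0,0,1,1]]
           # i.e. x^5 + x^2 + x + 1 and x^5 + x^4 + 1, matching Example 1
```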

IV054
ENCODING of INFINITE INPUT STREAMS
The way infinite streams are encoded using convolution codes will be illustrated on the code CC1.
An input stream I = (I0, I1, I2, ...) is mapped into the output stream
C = (C00, C10, C01, C11, ...) defined by
C0(x) = C00 + C01x + ... = (x^2 + 1) I(x)
and
C1(x) = C10 + C11x + ... = (x^2 + x + 1) I(x).
The first multiplication can be done by the first shift register from the next figure; the second multiplication can be performed by the second shift register on the next slide, and it holds
C0i = Ii + Ii-2,
C1i = Ii + Ii-1 + Ii-2.
That is, the output streams C0 and C1 are obtained by convolving the input stream with the polynomials of G1.

IV054
ENCODING
The first shift register
[Figure: input feeding two delay elements, with taps at 1 and x^2 summed into the output]
will multiply the input stream by x^2 + 1, and the second shift register
[Figure: the same register with an additional tap at x]
will multiply the input stream by x^2 + x + 1.

IV054
ENCODING and DECODING
The following shift register will therefore be an encoder for the code CC1:
[Figure: a single two-stage shift register producing the output streams C00, C01, C02, ... and C10, C11, C12, ...]
For decoding of convolution codes the so-called Viterbi algorithm is used.

Cyclic Linear Codes
Rong-Jaye Chen

OUTLINE
[1] Polynomials and words
[2] Introduction to cyclic codes
[3] Generating and parity check matrices for cyclic codes
[4] Finding cyclic codes
[5] Dual cyclic codes

Cyclic Linear Codes
[1] Polynomials and words
1. Polynomial of degree n over K:
K[x] = {a0 + a1x + a2x^2 + a3x^3 + ... + anx^n}, a0, ..., an in K, deg(f(x)) = n
2. Eg 4.1.1
Let f(x) = 1 + x + x^3 + x^4, g(x) = x + x^2 + x^3, h(x) = 1 + x^2 + x^4. Then
(a) f(x) + g(x) = 1 + x^2 + x^4
(b) f(x) + h(x) = x + x^2 + x^3
(c) f(x)g(x) = (x + x^2 + x^3) + x(x + x^2 + x^3) + x^3(x + x^2 + x^3) + x^4(x + x^2 + x^3) = x + x^7

Cyclic Linear Codes
3. [Algorithm 4.1.8] Division algorithm
Let f(x) and h(x) be in K[x] with h(x) != 0. Then there exist unique polynomials q(x) and r(x) in K[x] such that
f(x) = q(x)h(x) + r(x), with r(x) = 0 or deg(r(x)) < deg(h(x)).
4. Eg. 4.1.9
f(x) = x + x^2 + x^6 + x^8, h(x) = 1 + x + x^2 + x^4
q(x) = x^3 + x^4, r(x) = x + x^2 + x^3
f(x) = h(x)(x^3 + x^4) + (x + x^2 + x^3)
deg(r(x)) < deg(h(x)) = 4

Cyclic Linear Codes
5. Code represented by a set of polynomials
A code C of length n can be represented as a set of polynomials over K of degree at most n - 1:
f(x) = a0 + a1x + a2x^2 + ... + an-1x^(n-1) over K
  <->  c = a0a1a2...an-1 of length n in K^n
6. Eg 4.1.12

Codeword c    Polynomial c(x)
0000          0
1010          1 + x^2
0101          x + x^3
1111          1 + x + x^2 + x^3

Cyclic Linear Codes
7. f(x) and p(x) are equivalent modulo h(x):
f(x) mod h(x) = r(x) = p(x) mod h(x), i.e. f(x) = p(x) (mod h(x))
8. Eg 4.1.15
f(x) = 1 + x^4 + x^9 + x^11, h(x) = 1 + x^5, p(x) = 1 + x^6
f(x) mod h(x) = r(x) = 1 + x = p(x) mod h(x)
=> f(x) and p(x) are equivalent mod h(x)!!
9. Eg 4.1.16
f(x) = 1 + x^2 + x^6 + x^9 + x^11, h(x) = 1 + x^2 + x^5, p(x) = x^2 + x^8
f(x) mod h(x) = x + x^4, p(x) mod h(x) = 1 + x^3
=> f(x) and p(x) are NOT equivalent mod h(x)!!

Cyclic Linear Codes
10. Lemma 4.1.17
If f(x) = g(x) (mod h(x)), then
f(x) + p(x) = g(x) + p(x) (mod h(x)), and
f(x)p(x) = g(x)p(x) (mod h(x)).
11. Eg. 4.1.18
f(x) = 1 + x + x^7, g(x) = 1 + x + x^2, h(x) = 1 + x^5, p(x) = 1 + x^6,
so f(x) = g(x) (mod h(x)). Then:
f(x) + p(x) and g(x) + p(x):
((1 + x + x^7) + (1 + x^6)) mod h(x) = x^2 = ((1 + x + x^2) + (1 + x^6)) mod h(x)
f(x)p(x) and g(x)p(x):
((1 + x + x^7)(1 + x^6)) mod h(x) = 1 + x^3 = ((1 + x + x^2)(1 + x^6)) mod h(x)

Cyclic Linear Codes
[2] Introduction to cyclic codes
1. Cyclic shift pi(v):
v = 010110, pi(v) = 001011
pi(10110) = 01011, pi(111000) = 011100, pi(0000) = 0000, pi(1011) = 1101
2. Cyclic code
A code C is a cyclic code (or linear cyclic code) if (1) the cyclic shift of each codeword is also a codeword and (2) C is a linear code.
C1 = {000, 110, 101, 011} is a cyclic code.
C2 = {000, 100, 011, 111} is NOT a cyclic code:
v = 100, pi(v) = 010 is not in C2.

Cyclic Linear Codes
3. Cyclic shift is a linear transformation
Lemma 4.2.3: pi(v + w) = pi(v) + pi(w), and pi(av) = a pi(v), a in K = {0,1}.
Thus to show a linear code C is cyclic, it is enough to show that pi(v) is in C for each word v in a basis for C.
If S = {v, pi(v), pi^2(v), ..., pi^(n-1)(v)} and C = <S>, then v is a generator of the linear cyclic code C.

Cyclic Linear Codes
4. Cyclic codes in terms of polynomials:
v -> pi(v) corresponds to v(x) -> xv(x)
Eg 4.2.11: v = 1101000, n = 7, v(x) = 1 + x + x^3

word        polynomial (mod 1 + x^7)
0110100     xv(x) = x + x^2 + x^4
0011010     x^2 v(x) = x^2 + x^3 + x^5
0001101     x^3 v(x) = x^3 + x^4 + x^6
1000110     x^4 v(x) = x^4 + x^5 + x^7 = 1 + x^4 + x^5 (mod 1 + x^7)
0100011     x^5 v(x) = x^5 + x^6 + x^8 = x + x^5 + x^6 (mod 1 + x^7)
1010001     x^6 v(x) = x^6 + x^7 + x^9 = 1 + x^2 + x^6 (mod 1 + x^7)

Cyclic Linear Codes
5. Lemma 4.2.12
Let C be a cyclic code and let v be in C. Then for any polynomial a(x), c(x) = a(x)v(x) mod (1 + x^n) is a codeword in C.
6. Theorem 4.2.13
C: a cyclic code of length n.
g(x): the generator polynomial, which is the unique nonzero polynomial of minimum degree in C.
If degree(g(x)) = n - k, then:
1. C has dimension k.
2. g(x), xg(x), x^2 g(x), ..., x^(k-1) g(x) are a basis for C.
3. If c(x) is in C, then c(x) = a(x)g(x) for some polynomial a(x) with degree(a(x)) < k.

Cyclic Linear Codes
7. Eg 4.2.16
The smallest linear cyclic code C of length 6 containing g(x) = 1 + x^3 <-> 100100 is
{000000, 100100, 010010, 001001, 110110, 101101, 011011, 111111}
8. Theorem 4.2.17
g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n (so 1 + x^n = g(x)h(x)).
9. Corollary 4.2.18
The generator polynomial g(x) for the smallest cyclic code of length n containing the word v (polynomial v(x)) is g(x) = gcd(v(x), 1 + x^n).
10. Eg 4.2.19
n = 8, v = 11011000, so v(x) = 1 + x + x^3 + x^4
g(x) = gcd(1 + x + x^3 + x^4, 1 + x^8) = 1 + x^2
Thus g(x) = 1 + x^2 generates the smallest cyclic linear code containing v(x), which has dimension 6.
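A sketch of Corollary 4.2.18, computing gcd(v(x), 1 + x^n) with binary polynomials stored as Python integers (bit i = coefficient of x^i):

```python
def poly_mod(a: int, b: int) -> int:
    """Remainder of a(x) / b(x) over GF(2)."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def poly_gcd(a: int, b: int) -> int:
    """Euclidean algorithm on GF(2) polynomials."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

v = 0b11011          # v(x) = 1 + x + x^3 + x^4
m = (1 << 8) | 1     # 1 + x^8
print(bin(poly_gcd(v, m)))   # 0b101 -> g(x) = 1 + x^2, as in Eg 4.2.19
```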

Cyclic Linear Codes
[3] Generating and parity check matrices for cyclic codes
1. An effective way to find a generating matrix
The simplest generator matrix (Theorem 4.2.13), where n is the length of the code and k = n - deg(g(x)):

    [ g(x)        ]
G = [ xg(x)       ]
    [ ...         ]
    [ x^(k-1)g(x) ]

Cyclic Linear Codes
2. Eg 4.3.2
C: the linear cyclic code of length n = 7 with generator polynomial
g(x) = 1 + x + x^3, deg(g(x)) = 3 => k = 4

g(x)     = 1 + x + x^3            [ 1101000 ]
xg(x)    = x + x^2 + x^4      G = [ 0110100 ]
x^2 g(x) = x^2 + x^3 + x^5        [ 0011010 ]
x^3 g(x) = x^3 + x^4 + x^6        [ 0001101 ]

Cyclic Linear Codes
3. Efficient encoding for cyclic codes
Let C be a cyclic code of length n and dimension k (so the generator polynomial g(x) has degree n - k).
Message polynomial: a(x) = a0 + a1x + ... + ak-1 x^(k-1) (representing the source message (a0, a1, ..., ak-1)).
Encoding algorithm: c(x) = a(x)g(x)
This is more time-efficient than the encoding of a general linear code (c = aG).

Cyclic Linear Codes
4. Parity check matrix
H: wH = 0 if and only if w is a codeword.
Syndrome polynomial s(x):
c(x): a codeword, e(x): the error polynomial, and w(x) = c(x) + e(x).
s(x) = w(x) mod g(x) = e(x) mod g(x), because c(x) = a(x)g(x).
H: the i-th row ri is the word of length n - k with ri(x) = x^i mod g(x).
wH = (c + e)H => c(x) mod g(x) + e(x) mod g(x) = s(x)

Cyclic Linear Codes
5. Eg 4.3.7
n = 7, g(x) = 1 + x + x^3, n - k = 3

r0(x) = 1 mod g(x) = 1               100
r1(x) = x mod g(x) = x               010
r2(x) = x^2 mod g(x) = x^2           001
r3(x) = x^3 mod g(x) = 1 + x         110
r4(x) = x^4 mod g(x) = x + x^2       011
r5(x) = x^5 mod g(x) = 1 + x + x^2   111
r6(x) = x^6 mod g(x) = 1 + x^2       101

    [ 100 ]
    [ 010 ]
    [ 001 ]
H = [ 110 ]
    [ 011 ]
    [ 111 ]
    [ 101 ]

Cyclic Linear Codes
[4] Finding cyclic codes
1. To construct a linear cyclic code of length n:
Find a factor g(x) of 1 + x^n with deg(g(x)) = n - k.
Irreducible polynomials: f(x) in K[x] with deg(f(x)) >= 1 such that there are no a(x), b(x) with f(x) = a(x)b(x), deg(a(x)) >= 1, deg(b(x)) >= 1.
For n <= 31, the factorization of 1 + x^n is tabulated (see Appendix B).
Improper cyclic codes: K^n and {0}.

Cyclic Linear Codes
2. Theorem 4.4.3
If n = 2^r s, then 1 + x^n = (1 + x^s)^(2^r).
3. Coro 4.4.4
Let n = 2^r s, where s is odd, and let 1 + x^s be the product of z irreducible polynomials.
Then there are (2^r + 1)^z - 2 proper linear cyclic codes of length n.

Cyclic Linear Codes
4. Idempotent polynomials I(x)
I(x) = I(x)^2 mod (1 + x^n), for odd n.
To find a basic set of idempotents, form the sets
Ci = { 2^j * i (mod n) | j = 0, 1, ..., r }, where r is the smallest integer with 2^(r+1) * i = i (mod n).
Then every idempotent has the form
I(x) = Sum_i ai ci(x), ai in {0,1}, where ci(x) = Sum (j in Ci) x^j.

Cyclic Linear Codes


5. Eg 4.4.12

For n = 7:

    C0 = {0},                  so c0(x) = x^0 = 1
    C1 = {1, 2, 4} = C2 = C4,  so c1(x) = x + x^2 + x^4
    C3 = {3, 5, 6} = C5 = C6,  so c3(x) = x^3 + x^5 + x^6

I(x) = a0 c0(x) + a1 c1(x) + a3 c3(x),  a_i in {0, 1},  I(x) != 0
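The coset computation is easy to automate; a sketch (my own helper, not part of the course material):

    def cyclotomic_cosets(n):
        """2-cyclotomic cosets C_i = {i, 2i, 4i, ...} mod n, for odd n."""
        seen, cosets = set(), []
        for i in range(n):
            if i in seen:
                continue
            c, s = [], i
            while s not in c:
                c.append(s)
                s = (2 * s) % n
            seen.update(c)
            cosets.append(sorted(c))
        return cosets

    print(cyclotomic_cosets(7))   # [[0], [1, 2, 4], [3, 5, 6]]
    print(cyclotomic_cosets(9))   # [[0], [1, 2, 4, 5, 7, 8], [3, 6]]  (Eg 4.4.14)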

6. Theorem 4.4.13
Every cyclic code contains a unique idempotent polynomial which generates the code.


Cyclic Linear Codes


7. Eg. 4.4.14: find all cyclic codes of length 9

C0 = {0}, C1 = {1, 2, 4, 8, 7, 5}, C3 = {3, 6}
c0(x) = 1, c1(x) = x + x^2 + x^4 + x^5 + x^7 + x^8, c3(x) = x^3 + x^6
I(x) = a0 c0(x) + a1 c1(x) + a3 c3(x)

==>

    Idempotent polynomial I(x)       Generator polynomial g(x) = gcd(I(x), 1 + x^9)
    1                                1
    x+x^2+x^4+x^5+x^7+x^8            1+x+x^3+x^4+x^6+x^7
    x^3+x^6                          1+x^3
    1+x+x^2+x^4+x^5+x^7+x^8          1+x+x^2
    ...                              ...

Cyclic Linear Codes

[5]. Dual cyclic codes

1. The dual code of a cyclic code is also cyclic.
2. Lemma 4.5.1
a <-> a(x), b <-> b(x), and b' <-> b'(x) = x^n b(x^-1) mod (1 + x^n).
Then a(x)b(x) mod (1 + x^n) = 0 if and only if pi^k(a) . b' = 0
for k = 0, 1, ..., n-1 (pi^k(a): the k-th cyclic shift of a).

3. Theorem 4.5.2
C: a linear cyclic code of length n and dimension k with generator g(x).
If 1 + x^n = g(x)h(x), then the dual code C⊥ is a linear cyclic code of dimension n - k
with generator x^k h(x^-1).


Cyclic Linear Codes


4. Eg. 4.5.3
g(x) = 1 + x + x^3, n = 7, k = 7 - 3 = 4
h(x) = 1 + x + x^2 + x^4
The generator for C⊥ is g⊥(x) = x^4 h(x^-1) = x^4 (1 + x^-1 + x^-2 + x^-4) = 1 + x^2 + x^3 + x^4

5. Eg. 4.5.4
g(x) = 1 + x + x^2, n = 6, k = 6 - 2 = 4
h(x) = 1 + x + x^3 + x^4
The generator for C⊥ is g⊥(x) = x^4 h(x^-1) = 1 + x + x^3 + x^4
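A sketch of Theorem 4.5.2 (mine; polynomial-as-integer convention as before): divide 1 + x^n by g(x) to get h(x), then reverse its coefficients to form x^k h(x^-1):

    def gf2_divmod(a, b):
        """Quotient and remainder of a(x) / b(x) over GF(2)."""
        q = 0
        while a and a.bit_length() >= b.bit_length():
            shift = a.bit_length() - b.bit_length()
            q |= 1 << shift
            a ^= b << shift
        return q, a

    def reciprocal(p, deg):
        """x^deg * p(1/x): reverse the low deg+1 coefficient bits."""
        return int(format(p, '0%db' % (deg + 1))[::-1], 2)

    n, g = 7, 0b1011                    # g(x) = 1 + x + x^3
    h, rem = gf2_divmod((1 << n) | 1, g)
    assert rem == 0                     # g(x) divides 1 + x^7
    k = n - (g.bit_length() - 1)
    print(bin(h))                       # 0b10111 -> h(x) = 1 + x + x^2 + x^4
    print(bin(reciprocal(h, k)))        # 0b11101 -> 1 + x^2 + x^3 + x^4, as in Eg 4.5.3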


Modulation, Demodulation and Coding Course
Period 3 - 2005
Sorour Falahati
Lecture 8

Last time we talked about:

- Coherent and non-coherent detection
- Evaluating the average probability of symbol error for different bandpass modulation schemes
- Comparing different modulation schemes based on their error performance

Today, we are going to talk about:

- Channel coding
- Linear block codes
- The error detection and correction capability
- Encoding and decoding
- Hamming codes
- Cyclic codes

What is channel coding?

Channel coding:
Transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...).
- Waveform coding: transforming waveforms into better waveforms.
- Structured sequences: transforming data sequences into better sequences that have structured redundancy.
"Better" in the sense of making the decision process less subject to errors.

Error control techniques

- Automatic Repeat reQuest (ARQ)
  - Full-duplex connection, error detection codes
  - The receiver sends feedback to the transmitter indicating whether an error is detected in the received packet (Not-Acknowledgement (NACK)) or not (Acknowledgement (ACK)).
  - The transmitter retransmits the previously sent packet if it receives a NACK.
- Forward Error Correction (FEC)
  - Simplex connection, error correction codes
  - The receiver tries to correct some errors.
- Hybrid ARQ (ARQ+FEC)
  - Full-duplex, error detection and correction codes

Why use error correction coding?

- Error performance vs. bandwidth
- Power vs. bandwidth
- Data rate vs. bandwidth
- Capacity vs. bandwidth

Coding gain: for a given bit-error probability, the reduction in the Eb/N0 that can be realized through the use of the code:

    G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB]

[Figure: PB versus Eb/N0 (dB); the coded curve lies to the left of the uncoded curve, and the horizontal gap at equal PB is the coding gain.]

Channel models

- Discrete memoryless channels: discrete input, discrete output
- Binary symmetric channels: binary input, binary output
- Gaussian channels: discrete input, continuous output

Linear block codes

Some definitions

Binary field:
The set {0, 1}, under modulo-2 binary addition and multiplication, forms a field.

    Addition (XOR):      Multiplication (AND):
    0 + 0 = 0            0 . 0 = 0
    0 + 1 = 1            0 . 1 = 0
    1 + 0 = 1            1 . 0 = 0
    1 + 1 = 0            1 . 1 = 1

The binary field is also called the Galois field, GF(2).

Some definitions cont'd

Fields:
Let F be a set of objects on which two operations + and . are defined.
F is said to be a field if and only if:
1. F forms a commutative group under the + operation. The additive identity element is labeled 0.
       for all a, b in F:  a + b = b + a, and a + b is in F
2. F - {0} forms a commutative group under the . operation. The multiplicative identity element is labeled 1.
       for all a, b in F:  a . b = b . a, and a . b is in F
3. The operations + and . distribute:
       a . (b + c) = (a . b) + (a . c)

Some definitions cont'd

Vector space:
Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
1. Commutative: for all u, v in V: u + v = v + u, in V
2. For all a in F and v in V: a . v is in V
3. Distributive: (a + b) . v = a . v + b . v and a . (u + v) = a . u + a . v
4. Associative: for all a, b in F and v in V: (a . b) . v = a . (b . v)
5. For all v in V: 1 . v = v

Some definitions cont'd

Examples of vector spaces:
The set of binary n-tuples, denoted by Vn.

V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
      (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}

Vector subspace:
A subset S of the vector space Vn is called a subspace if:
- The all-zero vector is in S.
- The sum of any two vectors in S is also in S.
Example:
{(0000), (0101), (1010), (1111)} is a subspace of V4.

Some definitions cont'd

Spanning set:
A collection of vectors G = {v1, v2, ..., vn}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
Example:
{(1000), (0110), (1100), (0011), (1001)} spans V4.

Bases:
A spanning set for V that has minimal cardinality is called a basis for V.
(The cardinality of a set is the number of objects in the set.)
Example:
{(1000), (0100), (0010), (0001)} is a basis for V4.

Linear block codes

Linear block code (n,k):
A set C in Vn with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space Vn:

    Vk -> C, with C a subspace of Vn

- Members of C are called codewords.
- The all-zero codeword is a codeword.
- Any linear combination of codewords is a codeword.

Linear block codes cont'd

[Figure: the mapping from the message space Vk onto the codeword set C, a subspace of Vn, defined by the bases of C.]

Linear block codes cont'd

- The information bit stream is chopped into blocks of k bits.
- Each block is encoded to a larger block of n bits.
- The coded bits are modulated and sent over the channel.
- The reverse procedure is done at the receiver.

    Data block (k bits) -> Channel encoder -> Codeword (n bits)

    n - k redundant bits;  code rate Rc = k/n

Linear block codes cont'd

- The Hamming weight of a vector U, denoted by w(U), is the number of non-zero elements in U.
- The Hamming distance between two vectors U and V is the number of elements in which they differ:

      d(U, V) = w(U + V)

- The minimum distance of a block code is

      d_min = min_{i != j} d(U_i, U_j) = min_i w(U_i)
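A sketch (mine; brute force over all 2^k messages) of computing d_min as the minimum weight of the non-zero codewords:

    from itertools import product

    def d_min(G):
        """Minimum Hamming weight over the non-zero codewords generated by G."""
        best = len(G[0])
        for m in product([0, 1], repeat=len(G)):
            if any(m):
                c = [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]
                best = min(best, sum(c))
        return best

    G = [[1, 1, 0, 1, 0, 0],
         [0, 1, 1, 0, 1, 0],
         [1, 0, 1, 0, 0, 1]]      # the (6,3) code used in the examples below
    print(d_min(G))               # 3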


Linear block codes cont'd

- The error-detecting capability of a code is

      e = d_min - 1

- The error-correcting capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is

      t = floor( (d_min - 1) / 2 )

Linear block codes cont'd

For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by

      P_M <= sum_{j=t+1}^{n} C(n,j) p^j (1-p)^(n-j)

where p is the transition probability (bit error probability) of the channel.
The decoded bit error probability is approximately

      P_B ~= (1/n) sum_{j=t+1}^{n} j C(n,j) p^j (1-p)^(n-j)
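These bounds are straightforward to evaluate numerically; a sketch (my own, for a t-error-correcting code over a BSC):

    from math import comb

    def p_block(n, t, p):
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

    def p_bit(n, t, p):
        return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
                   for j in range(t + 1, n + 1)) / n

    print(p_block(7, 1, 0.01))    # ~2.0e-3 for a (7,4) Hamming code, p = 0.01
    print(p_bit(7, 1, 0.01))      # ~6.0e-4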


Linear block codes cont'd

Discrete, memoryless, symmetric channel model:

    Tx bits        Rx bits
    0 --(1-p)--> 0,   0 --( p )--> 1
    1 --(1-p)--> 1,   1 --( p )--> 0

Note that for coded systems the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):

    p ~= (2 / log2 M) Q( sqrt(2 (log2 M) Ec / N0) sin(pi / M) )
      = (2 / log2 M) Q( sqrt(2 (log2 M) Rc Eb / N0) sin(pi / M) )

where Ec = Rc Eb is the energy per coded bit.

Linear block codes cont'd

[Figure: the mapping from Vk onto C via the bases of C, as before.]

A matrix G is constructed by taking as its rows the basis vectors {V1, V2, ..., Vk}:

        | V1 |   | v11 v12 ... v1n |
    G = | V2 | = | v21 v22 ... v2n |
        | .. |   | ...             |
        | Vk |   | vk1 vk2 ... vkn |

Linear block codes cont'd

Encoding in an (n,k) block code:

    U = mG

                                          | V1 |
    (u1, u2, ..., un) = (m1, m2, ..., mk) | V2 |
                                          | .. |
                                          | Vk |

    (u1, u2, ..., un) = m1 V1 + m2 V2 + ... + mk Vk

The rows of G are linearly independent.

Linear block codes cont'd

Example: block code (6,3)

        | V1 |   | 1 1 0 1 0 0 |
    G = | V2 | = | 0 1 1 0 1 0 |
        | V3 |   | 1 0 1 0 0 1 |

    Message vector   Codeword
    000              000000
    100              110100
    010              011010
    110              101110
    001              101001
    101              011101
    011              110011
    111              000111
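The table can be regenerated with a few lines (a sketch of U = mG over GF(2), mine, not from the slides):

    from itertools import product

    G = [[1, 1, 0, 1, 0, 0],
         [0, 1, 1, 0, 1, 0],
         [1, 0, 1, 0, 0, 1]]

    def encode(m, G):
        """U = mG with modulo-2 arithmetic."""
        return [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]

    for m in product([0, 1], repeat=3):
        print(''.join(map(str, m)), ''.join(map(str, encode(m, G))))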


Linear block codes cont'd

Systematic block code (n,k):
For a systematic code, the first (or last) k elements in the codeword are information bits.

    G = [P | I_k]
    I_k : k x k identity matrix
    P   : k x (n-k) matrix

    U = (u1, u2, ..., un) = (p1, p2, ..., p_(n-k), m1, m2, ..., mk)
                             parity bits            message bits

Linear block codes cont'd

For any linear code we can find an (n-k) x n matrix H whose rows are orthogonal to the rows of G:

    G H^T = 0

H is called the parity check matrix, and its rows are linearly independent.
For systematic linear block codes:

    H = [I_(n-k) | P^T]

Linear block codes cont'd

    Data source -> Format -> Channel encoding -> Modulation -> channel
    Data sink  <- Format <- Channel decoding <- Demodulation / Detection

    r = U + e
    r = (r1, r2, ..., rn)  received codeword or vector
    e = (e1, e2, ..., en)  error pattern or vector

Syndrome testing:
S is the syndrome of r, corresponding to the error pattern e:

    S = r H^T = e H^T

Linear block codes cont'd

Standard array:
1. For row i = 2, 3, ..., 2^(n-k), find a vector in Vn of minimum weight that is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset:

    U1 (zero codeword)   U2                ...   U_(2^k)
    e2                   e2 + U2           ...   e2 + U_(2^k)
    ...
    e_(2^(n-k))          e_(2^(n-k)) + U2  ...   e_(2^(n-k)) + U_(2^k)

(The first column contains the coset leaders; each row is a coset.)

Linear block codes cont'd

Standard array and syndrome table decoding:
1. Calculate S = r H^T.
2. Find the coset leader ê = e_i corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.

Note that Û = r + ê = (U + e) + ê = U + (e + ê):
- If ê = e, the error is corrected.
- If ê != e, an undetectable decoding error occurs.
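A sketch of the three decoding steps for the (6,3) code of these slides (my code; the coset leaders are the six single-error patterns plus 010001, matching the table further below):

    def syndrome(v, H):
        return tuple(sum(vi * hi for vi, hi in zip(v, row)) % 2 for row in H)

    H = [[1, 0, 0, 1, 0, 1],
         [0, 1, 0, 1, 1, 0],
         [0, 0, 1, 0, 1, 1]]                      # H = [I3 | P^T] for this code

    leaders = [[0] * 6] + \
              [[int(i == j) for i in range(6)] for j in range(6)] + \
              [[0, 1, 0, 0, 0, 1]]
    table = {syndrome(e, H): e for e in leaders}  # syndrome -> coset leader

    r = [0, 0, 1, 1, 1, 0]                        # received vector
    e_hat = table[syndrome(r, H)]                 # steps 1 and 2
    u_hat = [(ri + ei) % 2 for ri, ei in zip(r, e_hat)]   # step 3
    print(e_hat, u_hat)    # [1,0,0,0,0,0] [1,0,1,1,1,0]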


Linear block codes cont'd

Example: standard array for the (6,3) code

    000000  110100  011010  101110  101001  011101  110011  000111
    000001  110101  011011  101111  101000  011100  110010  000110
    000010  110110  011000  101100  101011  011111  110001  000101
    000100  110000  011110  101010  101101  011001  110111  000011
    001000  111100  010010  100110  100001  010101  111011  001111
    010000  100100  001010  111110  111001  001101  100011  010111
    100000  010100  111010  001110  001001  111101  010011  100111
    010001  100101  001011  111111  111000  001100  100010  010110

(first row: the codewords; first column: the coset leaders; each row is a coset)

Linear block codes cont'd

    Error pattern   Syndrome
    000000          000
    000001          101
    000010          011
    000100          110
    001000          001
    010000          010
    100000          100
    010001          111

U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed:

    S = r H^T = (001110) H^T = (100)

The error pattern corresponding to this syndrome is ê = (100000).
The corrected vector is estimated as

    Û = r + ê = (001110) + (100000) = (101110)

Hamming codes

Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer m >= 2:

    Code length:                 n = 2^m - 1
    Number of information bits:  k = 2^m - m - 1
    Number of parity bits:       n - k = m
    Error correction capability: t = 1

The columns of the parity-check matrix H consist of all non-zero binary m-tuples.
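That property makes H trivial to construct; a sketch (mine) that lists all non-zero m-tuples as columns:

    def hamming_H(m):
        """m x (2^m - 1) parity-check matrix: columns are all non-zero m-tuples."""
        cols = [[(i >> j) & 1 for j in range(m)] for i in range(1, 2**m)]
        return [[c[r] for c in cols] for r in range(m)]

    for row in hamming_H(3):      # the (7,4) Hamming code: n = 7, k = 4, t = 1
        print(row)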


Hamming codes

Example: systematic Hamming code (7,4)

    H = | 1 0 0   0 1 1 1 |   = [I_3x3 | P^T]
        | 0 1 0   1 0 1 1 |
        | 0 0 1   1 1 0 1 |

    G = | 0 1 1   1 0 0 0 |   = [P | I_4x4]
        | 1 0 1   0 1 0 0 |
        | 1 1 0   0 0 1 0 |
        | 1 1 1   0 0 0 1 |

Cyclic block codes

- Cyclic codes are a subclass of linear block codes.
- Encoding and syndrome calculation are easily performed using feedback shift registers.
- Hence, relatively long block codes can be implemented with reasonable complexity.
- BCH and Reed-Solomon codes are cyclic codes.

Cyclic block codes

A linear (n,k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords.
Example:

    U = (u0, u1, u2, ..., u_(n-1))
    U^(i) = (u_(n-i), u_(n-i+1), ..., u_(n-1), u0, u1, ..., u_(n-i-1))    (i cyclic shifts of U)

    U = (1101)
    U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U

Cyclic block codes

The algebraic structure of cyclic codes suggests expressing codewords in polynomial form:

    U(X) = u0 + u1 X + u2 X^2 + ... + u_(n-1) X^(n-1)      (degree n-1)

Relationship between a codeword and its cyclic shift:

    X U(X) = u0 X + u1 X^2 + ... + u_(n-2) X^(n-1) + u_(n-1) X^n
           = u_(n-1) + u0 X + ... + u_(n-2) X^(n-1) + u_(n-1) (X^n + 1)
           = U^(1)(X) + u_(n-1) (X^n + 1)

Hence:

    U^(1)(X) = X U(X)  modulo (X^n + 1)

and by extension:

    U^(i)(X) = X^i U(X)  modulo (X^n + 1)

Cyclic block codes

Basic properties of cyclic codes:
Let C be a binary (n,k) linear cyclic code.
1. Within the set of code polynomials in C, there is a unique monic polynomial g(X) with minimal degree r < n. g(X) is called the generator polynomial:

       g(X) = g0 + g1 X + ... + gr X^r

2. Every code polynomial U(X) in C can be expressed uniquely as U(X) = m(X) g(X).
3. The generator polynomial g(X) is a factor of X^n + 1.

Cyclic block codes

4. The orthogonality of G and H in polynomial form is expressed as g(X)h(X) = X^n + 1. This means h(X) is also a factor of X^n + 1.
5. Row i (i = 1, ..., k) of the generator matrix is formed by the coefficients of the (i-1)-th cyclic shift of the generator polynomial:

        |  g(X)         |   | g0 g1 ... gr               |
    G = |  X g(X)       | = |    g0 g1 ... gr            |
        |  ...          |   |        ...                 |
        |  X^(k-1) g(X) |   |           g0 g1 ... gr     |

Cyclic block codes

Systematic encoding algorithm for an (n,k) cyclic code:
1. Multiply the message polynomial m(X) by X^(n-k).
2. Divide the result of step 1 by the generator polynomial g(X). Let p(X) be the remainder.
3. Add p(X) to X^(n-k) m(X) to form the codeword U(X).

Cyclic block codes

Example: for the systematic (7,4) cyclic code with generator polynomial g(X) = 1 + X + X^3:

1. Find the codeword for the message m = (1011).

    n = 7, k = 4, n - k = 3
    m = (1011)  =>  m(X) = 1 + X^2 + X^3
    X^(n-k) m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6

Divide X^(n-k) m(X) by g(X):

    X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                       quotient q(X)      generator g(X)  remainder p(X)

Form the codeword polynomial:

    U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6
    U = (1 0 0 1 0 1 1)
         parity bits, then message bits
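A sketch of the three-step encoder (my code; polynomials as integer bit vectors, bit i = coefficient of X^i):

    def gf2_mod(a, b):
        """Remainder of a(X) divided by b(X) over GF(2)."""
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        return a

    def cyclic_encode(m, g, nk):
        shifted = m << nk            # step 1: X^(n-k) m(X)
        p = gf2_mod(shifted, g)      # step 2: remainder p(X)
        return shifted | p           # step 3: U(X) = p(X) + X^(n-k) m(X)

    u = cyclic_encode(0b1101, 0b1011, 3)   # m(X) = 1 + X^2 + X^3, g(X) = 1 + X + X^3
    print(format(u, '07b')[::-1])          # '1001011' -> U = (1 0 0 1 0 1 1)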


Cyclic block codes

2. Find the generator and parity check matrices, G and H, respectively.

    g(X) = 1 + 1.X + 0.X^2 + 1.X^3  =>  (g0, g1, g2, g3) = (1101)

        | 1 1 0 1 0 0 0 |
    G = | 0 1 1 0 1 0 0 |      Not in systematic form. We do the following:
        | 0 0 1 1 0 1 0 |      row(1) + row(3) -> row(3)
        | 0 0 0 1 1 0 1 |      row(1) + row(2) + row(4) -> row(4)

        | 1 1 0   1 0 0 0 |
    G = | 0 1 1   0 1 0 0 |    = [P | I_4x4]
        | 1 1 1   0 0 1 0 |
        | 1 0 1   0 0 0 1 |

        | 1 0 0   1 0 1 1 |
    H = | 0 1 0   1 1 1 0 |    = [I_3x3 | P^T]
        | 0 0 1   0 1 1 1 |

Cyclic block codes

Syndrome decoding for cyclic codes:
The received codeword in polynomial form is given by

    r(X) = U(X) + e(X)      (received codeword = transmitted codeword + error pattern)

The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:

    r(X) = q(X) g(X) + S(X)      (S(X): syndrome)

With the syndrome and the standard array, the error is estimated.
In cyclic codes, the size of the standard array is considerably reduced.

Example of the block codes

[Figure: PB versus Eb/N0 (dB) for block-coded 8PSK and QPSK.]

ADVANTAGE of the GENERATOR MATRIX:

- We need to store only the k rows of G instead of the 2^k vectors of the code.
- For the example we have looked at, a generator array of dimensions (3 x 6) replaces the original code vector table of dimensions (8 x 6).
- This is a definite reduction in complexity.

Systematic Linear Block Codes

A systematic (n,k) linear block code has a mapping such that part of the generated sequence coincides with the k message digits. The remaining (n-k) digits are parity digits.
A systematic linear block code has a generator matrix of the form:

        | p11 p12 ... p1,(n-k)   1 0 ... 0 |
    G = | p21 p22 ... p2,(n-k)   0 1 ... 0 |
        | ...                              |
        | pk1 pk2 ... pk,(n-k)   0 0 ... 1 |

- P is the parity array portion of the generator matrix, with pij = (0 or 1).
- Ik is the (k x k) identity matrix.
- With the systematic generator, encoding complexity is further reduced since we do not need to store the identity matrix.

Since U = mG:

                                          | p11 p12 ... p1,(n-k)   1 0 ... 0 |
    (u1, u2, ..., un) = (m1, m2, ..., mk) | p21 p22 ... p2,(n-k)   0 1 ... 0 |
                                          | ...                              |
                                          | pk1 pk2 ... pk,(n-k)   0 0 ... 1 |

where

    u_i = m1 p1i + m2 p2i + ... + mk pki     for i = 1, ..., (n-k)
    u_i = m_(i-(n-k))                        for i = (n-k)+1, ..., n

And the parity bits are

    p1 = m1 p11 + m2 p21 + ... + mk pk1
    p2 = m1 p12 + m2 p22 + ... + mk pk2
    ...
    p_(n-k) = m1 p1,(n-k) + m2 p2,(n-k) + ... + mk pk,(n-k)

Given the message k-tuple m = (m1, ..., mk) and the general code vector n-tuple U = (u1, u2, ..., un), the systematic code vector is:

    U = (p1, p2, ..., p_(n-k), m1, m2, ..., mk)

Example:
For a (6,3) code the code vectors are described as

                         | 1 1 0   1 0 0 |
    U = (m1, m2, m3)  .  | 0 1 1   0 1 0 |      ( = [P | I3] )
                         | 1 0 1   0 0 1 |

    U = (m1+m3, m1+m2, m2+m3, m1, m2, m3)
      = (u1, u2, u3, u4, u5, u6)

Parity Check Matrix (H)

We define a parity-check matrix since it will enable us to decode received vectors.
For a (k x n) generator matrix G, there exists an ((n-k) x n) matrix H such that the rows of G are orthogonal to the rows of H, i.e.

    G H^T = 0

To satisfy the orthogonality requirement, the H matrix is written as:

    H = [I_(n-k) | P^T]

Hence

    H = | 1 0 ... 0   p11 p21 ... pk1           |
        | 0 1 ... 0   p12 p22 ... pk2           |
        | ...                                   |
        | 0 0 ... 1   p1,(n-k) ... pk,(n-k)     |

The product U H^T of each code vector is a zero vector:

    U H^T = (p1 + p1, p2 + p2, ..., p_(n-k) + p_(n-k)) = 0

Once the parity check matrix H is formed we can use it to test whether a received vector is a valid member of the codeword set: U is a valid code vector if and only if U H^T = 0.

Syndrome Testing
Let r = (r1, r2, ..., rn) be a received code vector (one of 2^n n-tuples) resulting from the transmission of U = (u1, u2, ..., un) (one of the 2^k codewords):

    r = U + e

where e = (e1, e2, ..., en) is the error vector or error pattern introduced by the channel.
In the space of 2^n n-tuples there are a total of (2^n - 1) potential nonzero error patterns.
The SYNDROME of r is defined as:

    S = r H^T

The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the codeword set.

If r contains detectable errors, the syndrome has some nonzero value.
The syndrome of r is seen as

    S = (U + e) H^T = U H^T + e H^T

Since U H^T = 0 for all code words:

    S = e H^T

An important property of linear block codes, fundamental to the decoding process, is that the mapping between correctable error patterns and syndromes is one-to-one.

The parity check matrix must satisfy:
1. No column of H can be all zeros, or else an error in the corresponding code vector position would not affect the syndrome and would be undetectable.
2. All columns of H must be unique. If two columns are identical, errors corresponding to these codeword locations will be indistinguishable.

Example:
Suppose that the code vector U = [1 0 1 1 1 0] is transmitted and the vector r = [0 0 1 1 1 0] is received. Note one bit is in error.
Find the syndrome vector S and verify that it is equal to e H^T.
The (6,3) code has the generator matrix G we have seen before:

        | 1 1 0   1 0 0 |
    G = | 0 1 1   0 1 0 |
        | 1 0 1   0 0 1 |

(P is the parity matrix and I is the identity matrix.)

          | 1 0 0 |
          | 0 1 0 |
    H^T = | 0 0 1 |
          | 1 1 0 |
          | 0 1 1 |
          | 1 0 1 |

    S = r H^T = [0 0 1 1 1 0] H^T = [1, 1+1, 1+1] = [1 0 0]
    (syndrome of the corrupted code vector)

Now we can verify that the syndrome of the corrupted code vector is the same as the syndrome of the error pattern:

    S = e H^T = [1 0 0 0 0 0] H^T = [1 0 0]
    (= syndrome of the error pattern)

Error Correction
Since there is a one-to-one correspondence between correctable error patterns and syndromes, we can correct such error patterns.
Assume the 2^n n-tuples that represent possible received vectors are arranged in an array called the standard array:
1. The first row contains all the code vectors, starting with the all-zeros vector.
2. The first column contains all the correctable error patterns.
The standard array for an (n,k) code is:

    U1            U2                ...   U_i                ...   U_(2^k)
    e2            U2 + e2           ...   U_i + e2           ...   U_(2^k) + e2
    ...
    e_j           U2 + e_j          ...   U_i + e_j          ...   U_(2^k) + e_j
    ...
    e_(2^(n-k))   U2 + e_(2^(n-k))  ...   U_i + e_(2^(n-k))  ...   U_(2^k) + e_(2^(n-k))

- Each row, called a coset, consists of an error pattern in the first column (the coset leader), followed by the code vectors perturbed by that error pattern.
- The array contains all 2^n n-tuples in the space Vn; each coset consists of 2^k n-tuples.
- There are 2^n / 2^k = 2^(n-k) cosets.
- If the error pattern caused by the channel is a coset leader, the received vector will be decoded correctly into the transmitted code vector Ui. If the error pattern is not a coset leader, the decoding will produce an error.

Syndrome of a Coset
If ej is the coset leader of the j-th coset, then Ui + ej is an n-tuple in this coset.
The syndrome of this coset is:

    S = (Ui + ej) H^T = Ui H^T + ej H^T = ej H^T

All members of a coset have the same syndrome, and in fact the syndrome is used to estimate the error pattern.

Error Correction Decoding
The procedure for error correction decoding is as follows:
1. Calculate the syndrome of r using S = r H^T.
2. Locate the coset leader (error pattern) ej whose syndrome equals r H^T. This error pattern is the corruption caused by the channel.
3. The corrected received vector is identified as U = r + ej.
4. We retrieve the valid code vector by subtracting out the identified error.
Note: in modulo-2 arithmetic, subtraction is identical to addition.

Example:

Locating the error pattern:
For the (6,3) linear block code we have seen before, the standard array can be arranged as:

    000000  110100  011010  101110  101001  011101  110011  000111
    000001  110101  011011  101111  101000  011100  110010  000110
    000010  110110  011000  101100  101011  011111  110001  000101
    000100  110000  011110  101010  101101  011001  110111  000011
    001000  111100  010010  100110  100001  010101  111011  001111
    010000  100100  001010  111110  111001  001101  100011  010111
    100000  010100  111010  001110  001001  111101  010011  100111
    010001  100101  001011  111111  111000  001100  100010  010110

- The valid code vectors are the eight vectors in the first row, and the correctable error patterns are the eight coset leaders in the first column.
- Decoding will be correct if and only if the error pattern caused by the channel is one of the coset leaders.
- We now compute the syndrome corresponding to each correctable error sequence by computing S = ej H^T for each coset leader, with H^T as before.

Syndrome look-up table:

    Error pattern   Syndrome
    000000          000
    000001          101
    000010          011
    000100          110
    001000          001
    010000          010
    100000          100
    010001          111

Error Correction
- We receive the vector r and calculate its syndrome S.
- We then use the syndrome look-up table to find the corresponding error pattern. This error pattern is an estimate of the error; we denote it as ê.
- The decoder then adds ê to r to obtain an estimate of the transmitted code vector:

    Û = r + ê = (U + e) + ê = U + (e + ê)

- If the estimated error pattern is the same as the actual error pattern, that is if ê = e, then Û = U.
- If ê != e, the decoder will estimate a code vector that was not transmitted, and hence we have an undetectable decoding error.

Example
Assume code vector U = [1 0 1 1 1 0] is transmitted and the vector r = [0 0 1 1 1 0] is received.
The syndrome of r is computed as:

    S = [0 0 1 1 1 0] H^T = [1 0 0]

From the look-up table, 100 has the corresponding error pattern:

    ê = [1 0 0 0 0 0]

The corrected vector is then

    Û = r + ê = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0]   (corrected)

In this example the actual error pattern equals the estimated error pattern, hence Û = U.

3F4 Error Control Coding

Dr. I. J. Wassell

Introduction
Error Control Coding (ECC):
- Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver.
- Done to prevent the output of erroneous bits despite noise and other imperfections in the channel.
- The positions of error control coding and decoding are shown in the transmission model below.

Transmission Model

Transmitter:
    Digital Source -> Source Encoder -> Error Control Coding -> Line Coding -> Modulator (Transmit Filter, etc.) -> Channel Hc(w)
Channel:
    Y(w) = X(w) + N(w)   (noise N(w) added)
Receiver:
    Demodulator (Receive Filter, etc.) -> Line Decoding -> Error Control Decoding -> Source Decoder -> Digital Sink

Error Models
Binary Symmetric Memoryless Channel:
- Assumes transmitted symbols are binary
- Errors affect 0s and 1s with equal probability (i.e., symmetric)
- Errors occur randomly and are independent from bit to bit (memoryless)

    IN 0 --(1-p)--> OUT 0,   IN 0 --( p )--> OUT 1
    IN 1 --(1-p)--> OUT 1,   IN 1 --( p )--> OUT 0

p is the probability of bit error, or the Bit Error Rate (BER), of the channel.

Error Models
Many other types:
- Burst errors, i.e., contiguous bursts of bit errors: output from DFE (error propagation); common in radio channels
- Insertion, deletion and transposition errors
We will consider mainly random errors.

Error Control Techniques
- Error detection in a block of data
  - Can then request a retransmission, known as automatic repeat request (ARQ), for sensitive data
  - Appropriate for low-delay channels and channels with a return path
  - Not appropriate for delay-sensitive data, e.g., real-time speech and data
- Forward Error Correction (FEC)
  - Coding designed so that errors can be corrected at the receiver
  - Appropriate for delay-sensitive and one-way transmission (e.g., broadcast TV) of data
  - Two main types, namely block codes and convolutional codes. We will only look at block codes.

Block Codes
- We will consider only binary data.
- Data is grouped into blocks of length k bits (dataword).
- Each dataword is coded into a block of length n bits (codeword), where in general n > k.
- This is known as an (n,k) block code.
- A vector notation is used for the datawords and codewords:
  - Dataword d = (d1 d2 ... dk)
  - Codeword c = (c1 c2 ... cn)
- The redundancy introduced by the code is quantified by the code rate:
  - Code rate = k/n
  - i.e., the higher the redundancy, the lower the code rate.

Block Code - Example
- Dataword length k = 4
- Codeword length n = 7
- This is a (7,4) block code with code rate = 4/7
- For example, d = (1101), c = (1101001)

Error Control Process

    Source data chopped into blocks
    Dataword (k bits) -> Channel coder -> Codeword (n bits) -> Channel
    -> Codeword + possible errors (n bits) -> Channel decoder -> Dataword (k bits) + error flags

- The decoder gives corrected data.
- It may also give error flags to:
  - indicate the reliability of the decoded data
  - help with schemes employing multiple layers of error correction

Parity Codes
Example of a simple block code: the Single Parity Check Code.
- In this case n = k+1, i.e., the codeword is the dataword with one additional bit.
- For even parity the additional bit is

      q = sum_{i=1}^{k} d_i  (mod 2)

- For odd parity the additional bit is 1 - q.
- That is, the additional bit ensures that there is an even or odd number of 1s in the codeword.

Parity Codes - Example 1
Even parity:
(i)  d = (10110) so c = (101101)
(ii) d = (11011) so c = (110110)
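A sketch of even-parity encoding and checking (mine, not from the notes):

    def sp_encode(d):
        """Append an even-parity bit to the dataword."""
        return d + [sum(d) % 2]

    def sp_ok(c):
        """True if the received codeword passes the even-parity check."""
        return sum(c) % 2 == 0

    print(sp_encode([1, 0, 1, 1, 0]))    # [1, 0, 1, 1, 0, 1]
    print(sp_ok([1, 1, 0, 1, 1, 0]))     # True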

Parity Codes - Example 2

Coding table for the (4,3) even parity code:

    Dataword   Codeword
    000        0000
    001        0011
    010        0101
    011        0110
    100        1001
    101        1010
    110        1100
    111        1111

Parity Codes
To decode:
- Calculate the sum of the received bits in the block (mod 2).
- If the sum is 0 (1) for even (odd) parity, then the dataword is the first k bits of the received codeword; otherwise flag an error.
- The code can detect single errors, but cannot correct them, since the error could be in any bit. For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively. Note the error could also lie in other positions, including the parity bit.
- Known as a single error detecting code (SED). Only useful if the probability of getting 2 errors is small, since parity will become correct again.
- Used in serial communications. Low overhead but not very powerful.
- The decoder can be implemented efficiently using a tree of XOR gates.

Hamming Distance
Error control capability is determined by the Hamming distance.
- The Hamming distance between two codewords is equal to the number of differences between them, e.g.,
      10011011
      11010010   have a Hamming distance = 3
- Alternatively, compute by adding the codewords (mod 2):
      = 01001001  (now count up the ones)
- The Hamming distance of a code is equal to the minimum Hamming distance between two codewords.
- If the Hamming distance is 1: no error control capability; i.e., a single error in a received codeword yields another valid codeword.

      X X X X X X X       (X is a valid codeword)

Note that this representation is diagrammatic only. In reality each codeword is surrounded by n codewords, one for every bit that could be changed.

Hamming Distance
- If the Hamming distance is 2: can detect single errors (SED); i.e., a single error will yield an invalid codeword.

      X O X O X O         (X is a valid codeword, O is not a valid codeword)

  See that 2 errors will yield a valid (but incorrect) codeword.
- If the Hamming distance is 3: can correct single errors (SEC) or can detect double errors (DED).

      X O O X O O X       (X is a valid codeword, O is not a valid codeword)

  See that 3 errors will yield a valid but incorrect codeword.

Hamming Distance - Example
Hamming distance 3 code, i.e., SEC/DED; it can perform single error correction (SEC):

    10011011    X   <- this code corrected this way
    11011011    O
    11010011    O
    11010010    X   <- this code corrected this way

(X is a valid codeword, O is an invalid codeword.)

Hamming Distance
The maximum number of detectable errors is

    d_min - 1

The maximum number of correctable errors is given by

    t = floor( (d_min - 1) / 2 )

where d_min is the minimum Hamming distance between 2 codewords and floor(.) means the largest integer not greater than its argument.

Linear Block Codes
- As seen from the second parity code example, it is possible to use a table to hold all the codewords for a code and to look up the appropriate codeword based on the supplied dataword.
- Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is now no longer the need to hold every possible codeword in the table.
- If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set.
- The easiest way to find k linearly independent codewords is to choose those which have 1 in just one of the first k positions and 0 in the other k-1 of the first k positions.

Linear Block Codes

For example, for a (7,4) code, only four codewords are required, e.g.,

    1 0 0 0   0 1 1
    0 1 0 0   1 0 1
    0 0 1 0   1 1 0
    0 0 0 1   1 1 1

So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together, giving 1011010.
This process will now be described in more detail.

Linear Block Codes

An (n,k) block code has code vectors d = (d1 d2 ... dk) and c = (c1 c2 ... cn).
The block coding process can be written as

    c = dG

where G is the Generator Matrix:

        | a11 a12 ... a1n |   | a1 |
    G = | a21 a22 ... a2n | = | a2 |
        | ...             |   | .. |
        | ak1 ak2 ... akn |   | ak |

Thus

    c = sum_{i=1}^{k} d_i a_i

The a_i must be linearly independent: since codewords are given by summations of the a_i vectors, then to avoid 2 datawords having the same codeword, the a_i vectors must be linearly independent.

Linear Block Codes

The sum (mod 2) of any 2 codewords is also a codeword: for datawords d1 and d2 we have d3 = d1 + d2, so

    c3 = sum_i d_3i a_i = sum_i (d_1i + d_2i) a_i = sum_i d_1i a_i + sum_i d_2i a_i
    c3 = c1 + c2

0 is always a codeword: since all-zeros is a dataword,

    c = sum_{i=1}^{k} 0 . a_i = 0

Error Correcting Power of LBC
- The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-0 codeword) of the non-zero codewords.
- Note d(c1, c2) = w(c1 + c2), as shown previously.
- For an LBC, c1 + c2 = c3.
- So min(d(c1, c2)) = min(w(c1 + c2)) = min(w(c3)).
- Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight: far simpler than doing a pairwise check over all possible codeword pairs.

Linear Block Codes - example 1

For example a (4,2) code; suppose

    G = | 1 0 1 1 |      a1 = [1011]
        | 0 1 0 1 |      a2 = [0101]

For d = [1 1]:

    c = 1 . a1 + 1 . a2 = [1 1 1 0]

Linear Block Codes - example 2

A (6,5) code with

        | 1 0 0 0 0   1 |
        | 0 1 0 0 0   1 |
    G = | 0 0 1 0 0   1 |
        | 0 0 0 1 0   1 |
        | 0 0 0 0 1   1 |

is an even single parity code.

Systematic Codes
For a systematic block code the dataword appears unaltered in the codeword, usually at the start.
The generator matrix has the structure (R = n - k):

    G = | 1 0 .. 0   p11 p12 .. p1R |
        | 0 1 .. 0   p21 p22 .. p2R |   = [I | P]
        | .. .. .. ..               |
        | 0 0 .. 1   pk1 pk2 .. pkR |

- P is often referred to as parity bits.
- I is the k*k identity matrix; it ensures the dataword appears at the beginning of the codeword.
- P is a k*R matrix.

Decoding Linear Codes
One possibility is a ROM look-up table, where the received codeword is used as an address.
Example for the even single parity check code:

    Address   Data
    000000    0
    000001    1
    000010    1
    000011    0
    ...

- The data output is the error flag, i.e., 0 = codeword OK.
- If there is no error, the dataword is the first k bits of the codeword.
- For an error correcting code the ROM can also store datawords.
Another possibility is algebraic decoding, i.e., the error flag is computed from the received codeword (as in the case of simple parity codes).
How can this method be extended to more complex error detection and correction codes?

Parity Check Matrix
- A linear block code is a linear subspace S_sub of the space S of all length-n vectors.
- Consider the subset S_null of all length-n vectors in S that are orthogonal to all vectors in S_sub.
- It can be shown that the dimensionality of S_null is n-k, where n is the dimensionality of S and k is the dimensionality of S_sub.
- It can also be shown that S_null is a valid subspace of S, and consequently S_sub is also the null space of S_null.
- S_null can be represented by its basis vectors. The matrix H whose rows are these basis vectors is the generator matrix for S_null, of dimension n-k = R.
- This matrix is called the parity check matrix of the code defined by G, where G is the generator matrix for S_sub, of dimension k.
- Note that the number of vectors in the basis defines the dimension of the subspace, so the dimension of H is n-k (= R), and all vectors in the null space are orthogonal to all the vectors of the code.
- Since the rows of H, namely the vectors b_i, are members of the null space, they are orthogonal to any code vector.
- So a vector y is a codeword only if yH^T = 0.
- Note that a linear block code can be specified by either G or H.

Parity Check Matrix

So H is used to check if a codeword is valid:

        | b11 b12 ... b1n |   | b1 |
    H = | b21 b22 ... b2n | = | b2 |      (R = n - k rows)
        | ...             |   | .. |
        | bR1 bR2 ... bRn |   | bR |

The rows of H, namely b_i, are chosen to be orthogonal to the rows of G, namely a_i.
Consequently the dot product of any valid codeword with any b_i is zero.

Parity Check Matrix

This is so since

    c = sum_{i=1}^{k} d_i a_i

and so

    b_j . c = b_j . sum_i d_i a_i = sum_i d_i (a_i . b_j) = 0

This means that a codeword is valid (but not necessarily correct) only if cH^T = 0. To ensure this, it is required that the rows of H are independent and orthogonal to the rows of G.
That is, the b_i span the remaining R (= n - k) dimensions of the codespace.

Parity Check Matrix
- For example, consider a (3,2) code. In this case G has 2 rows, a1 and a2.
- Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2.
- In this example the H matrix has only one row, namely b1. This vector is orthogonal to the plane containing the rows of the G matrix, i.e., a1 and a2.
- Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1, yielding a non-zero dot product between itself and b1.

Parity Check Matrix

Similarly, any received codeword which is in the plane containing a1 and a2 (i.e., a valid codeword) will not have a component in the direction of b1, yielding a zero dot product between itself and b1.

[Figure: codewords c1, c2, c3 in the plane spanned by a1 and a2, with b1 normal to that plane.]

Error Syndrome
For error correcting codes we need a method to compute the required correction.
To do this we use the error syndrome s of a received codeword cr:

    s = cr H^T

If cr is corrupted by the addition of an error vector e, then cr = c + e and

    s = (c + e) H^T = c H^T + e H^T = 0 + e H^T

The syndrome depends only on the error.

Error Syndrome
- That is, we can add the same error pattern to different codewords and get the same syndrome.
- There are 2^(n-k) syndromes but 2^n error patterns.
- For example, for a (3,2) code there are 2 syndromes and 8 error patterns: clearly no error correction is possible in this case.
- Another example: a (7,4) code has 8 syndromes and 128 error patterns.
- With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors.
- We now need to determine which error pattern caused the syndrome.

Error Syndrome
For systematic linear block codes, H is constructed as follows:

    G = [I | P]  and so  H = [-P^T | I]

where I is the k*k identity for G and the R*R identity for H.
Example, (7,4) code, dmin = 3:

    G = [I | P] = | 1 0 0 0   0 1 1 |
                  | 0 1 0 0   1 0 1 |
                  | 0 0 1 0   1 1 0 |
                  | 0 0 0 1   1 1 1 |

    H = [-P^T | I] = | 0 1 1 1   1 0 0 |
                     | 1 0 1 1   0 1 0 |
                     | 1 1 0 1   0 0 1 |

Error Syndrome - Example

For a correct received codeword cr = [1101001]:

                                  | 0 1 1 |
                                  | 1 0 1 |
                                  | 1 1 0 |
    s = cr H^T = [1 1 0 1 0 0 1]  | 1 1 1 |  = [0 0 0]
                                  | 1 0 0 |
                                  | 0 1 0 |
                                  | 0 0 1 |

Error Syndrome - Example

For the same codeword, this time with an error in the first bit position (counting from the right), i.e., cr = [1101000]:

    s = cr H^T = [1 1 0 1 0 0 0] H^T = [0 0 1]

In this case the syndrome 001 indicates an error in bit 1 of the codeword.

Comments about H
The minimum distance of the code is equal to the minimum number of (non-zero) columns of H which sum to zero.
We can express

                                       | d0   |
    cr H^T = [cr0, cr1, ..., cr,n-1]   | d1   |  = cr0 d0 + cr1 d1 + ... + cr,n-1 dn-1
                                       | ...  |
                                       | dn-1 |

where d0, d1, ..., dn-1 are the column vectors of H.
- Clearly cr H^T is a linear combination of the columns of H.
- For a codeword with weight w (i.e., w ones), cr H^T is a linear combination of w columns of H.
- Thus we have a one-to-one mapping between weight-w codewords and linear combinations of w columns of H.
- The minimum w for which such a combination yields cr H^T = 0 corresponds to a valid codeword cr of weight w, and so dmin = w.

Comments about H
- For the example code, a codeword with minimum weight (dmin = 3) is given by the first row of G, i.e., [1000011].
- Now form the linear combination of the first and last 2 columns in H: [011] + [010] + [001] = 0.
- So a minimum of 3 columns (= dmin) is needed to get a zero value of cH^T in this example.

Standard Array
- From the standard array we can find the most likely transmitted codeword given a particular received codeword, without having to keep a look-up table at the decoder containing all possible codewords in the standard array.
- Not surprisingly, it makes use of syndromes.

Standard Array
The standard array is constructed as follows:

    c1 (all zero)   c2        ...   cM        | s0
    e1              c2+e1     ...   cM+e1     | s1
    e2              c2+e2     ...   cM+e2     | s2
    e3              c2+e3     ...   cM+e3     | s3
    ...                                       | ...
    eN              c2+eN     ...   cM+eN     | sN

- All patterns in a row have the same syndrome.
- Different rows have distinct syndromes.
- The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes).

Standard Array
The standard array is formed by initially choosing ei to be:
- all 1-bit error patterns,
- then all 2-bit error patterns, ...
ensuring that each error pattern not already in the array has a new syndrome; stop when all syndromes are used.

- Imagine that the received codeword cr is c2 + e3.
- The most likely codeword is the one at the head of the column containing c2 + e3.
- The corresponding error pattern is the one at the beginning of the row containing c2 + e3.
- So in theory we could implement a look-up table (in a ROM) which maps every codeword in the array to the most likely codeword (the one at the head of its column).
- This could be quite a large table, so a simpler way is to use syndromes.

Standard Array
This block diagram shows the proposed implementation:

    cr -> [Compute syndrome] -> s -> [Look-up table] -> e -> (add e to cr) -> c

Standard Array
- For the same received codeword c2 + e3, note that the unique syndrome is s3.
- This syndrome identifies e3 as the corresponding error pattern.
- So we calculate the syndrome as described previously, i.e., s = cr H^T.
- All we need now is a relatively small table which associates each s with its respective error pattern: in the example, s3 will yield e3.
- Finally, we subtract (or equivalently add, in modulo-2 arithmetic) e3 from the received codeword (c2 + e3) to yield the most likely codeword, c2.

Hamming Codes
We will consider a special class of SEC codes (i.e., Hamming distance = 3) where:
- Number of parity bits R = n - k, and n = 2^R - 1
- The syndrome has R bits
- The value 0 implies zero errors
- The 2^R - 1 other syndrome values map one-to-one onto the bits that might need to be corrected
- This is achieved if each column of H is a different binary word; remember s = eH^T

Hamming Codes
The systematic form of the (7,4) Hamming code is

    G = [I | P] = | 1 0 0 0   0 1 1 |      H = [-P^T | I] = | 0 1 1 1   1 0 0 |
                  | 0 1 0 0   1 0 1 |                       | 1 0 1 1   0 1 0 |
                  | 0 0 1 0   1 1 0 |                       | 1 1 0 1   0 0 1 |
                  | 0 0 0 1   1 1 1 |

The original form is non-systematic:

    G = | 1 1 1 0 0 0 0 |      H = | 0 0 0 1 1 1 1 |
        | 1 0 0 1 1 0 0 |          | 0 1 1 0 0 1 1 |
        | 0 1 0 1 0 1 0 |          | 1 0 1 0 1 0 1 |
        | 1 1 0 1 0 0 1 |

Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H are a binary count.

Hamming Codes
The column order is now 7, 6, 1, 5, 2, 3, 4; i.e., column 1 in the non-systematic H is column 7 in the systematic H.

Hamming Codes - Example
For the non-systematic (7,4) code:

    d = 1011
    c = 1110000 + 0101010 + 1101001 = 0110011
    e = 0010000
    cr = 0100011
    s = cr H^T = e H^T = 011

Note the error syndrome is the binary address of the bit to be corrected.
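A sketch of that trick for the non-systematic code (mine): the syndrome bits, read as a binary number, index the bit to flip:

    H = [[0, 0, 0, 1, 1, 1, 1],
         [0, 1, 1, 0, 0, 1, 1],
         [1, 0, 1, 0, 1, 0, 1]]          # columns count 1..7 in binary

    def correct(cr):
        s = [sum(ci * hi for ci, hi in zip(cr, row)) % 2 for row in H]
        pos = s[0] * 4 + s[1] * 2 + s[2]  # syndrome as a binary address
        if pos:
            cr[pos - 1] ^= 1              # flip the addressed bit
        return s, cr

    print(correct([0, 1, 0, 0, 0, 1, 1]))  # ([0, 1, 1], [0, 1, 1, 0, 0, 1, 1])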

Hamming Codes
Double errors will always result in the wrong bit being corrected, since:
- A double error is the sum of 2 single errors.
- The resulting syndrome will be the sum of the corresponding 2 single-error syndromes.
- This syndrome will correspond with a third single-bit error.
- Consequently the "corrected" codeword will now contain 3 bit errors, i.e., the original double bit error plus the incorrectly corrected bit!

Bit Error Rates after Decoding
For a given channel bit error rate (BER), what is the BER after correction (assuming a memoryless channel, i.e., no burst errors)?
To do this we compute the probability of receiving 0, 1, 2, 3, ... errors, and then compute their effect.
Example: a (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01:

    P(0 errors received) = (1 - p)^7 = 0.9321
    P(1 error received)  = 7p(1 - p)^6 = 0.0659
    P(2 errors received) = (7.6/2) p^2 (1 - p)^5 = 21 p^2 (1 - p)^5 ~ 0.002
    P(3 or more errors)  = 1 - P(0) - P(1) - P(2) = 0.000034
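These numbers are quick to reproduce (a sketch, mine):

    from math import comb

    p, n = 0.01, 7
    prob = [comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n + 1)]
    print(round(prob[0], 4), round(prob[1], 4), round(prob[2], 4))  # 0.9321 0.0659 0.002
    print(round(1 - prob[0] - prob[1] - prob[2], 6))                # 3.4e-05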


Bit Error Rates after Decoding
- Single errors are corrected, so 0.9321 + 0.0659 = 0.998 of codewords are correctly decoded.
- Double errors cause 3 bit errors in a 7-bit codeword, i.e., (3/7)*4 bit errors per 4-bit dataword, that is 3/7 bit errors per bit.
- Therefore the double error contribution is 0.002 * 3/7 = 0.000856.

Bit Error Rates after Decoding
- The contribution of triple or more errors will be less than 0.000034 (since the worst that can happen is that every databit becomes corrupted).
- So the BER after decoding is approximately 0.000856 + 0.000034 = 0.0009 = 0.09%.
- This is an improvement over the channel BER by a factor of about 11.

Perfect Codes
If a codeword has n bits and we wish to correct up to t errors, how many parity bits (R) are needed?
Clearly we need sufficient error syndromes (2^R of them) to identify all error patterns of up to t errors:
- Need 1 syndrome to represent 0 errors
- Need n syndromes to represent all 1-bit errors
- Need n(n-1)/2 syndromes to represent all 2-bit errors
- Need nCe = n!/((n-e)!e!) syndromes to represent all e-bit errors

Perfect Codes
So,

    2^R >= 1 + n                                to correct up to 1 error
    2^R >= 1 + n + n(n-1)/2                     to correct up to 2 errors
    2^R >= 1 + n + n(n-1)/2 + n(n-1)(n-2)/6     to correct up to 3 errors

If equality holds, the code is Perfect.
The only known perfect codes are the SEC Hamming codes and the TEC Golay (23,12) code (dmin = 7). Using the previous equation:

    1 + 23 + 23(23-1)/2 + 23(23-1)(23-2)/6 = 2048 = 2^11 = 2^(23-12)
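A one-line check of the Golay equality (sketch, mine):

    from math import comb

    n, k, t = 23, 12, 3
    print(sum(comb(n, e) for e in range(t + 1)), 2**(n - k))   # 2048 2048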

Summary
In this section we have:
- Used block codes to add redundancy to messages to control the effects of transmission errors
- Encoded and decoded messages using Hamming codes
- Determined overall bit error rates as a function of the error control strategy

Error Correction Codes & Multi-user Communications

Agenda
- Shannon Theory
- History of Error Correction Codes
- Linear Block Codes
- Decoding
- Convolutional Codes
- Multiple-Access Techniques
- Capacity of Multiple Access
- Random Access Methods

Shannon Theory:
R < C => reliable communication. Redundancy (parity bits) in the transmitted data stream gives error correction capability.

Encoding:
- Block code: code length is fixed
- Convolutional code: coding rate is fixed

Decoding:
- Hard decoding: digital information
- Soft decoding: analog information

History of Error Correction Codes

- Shannon (1948): random coding, orthogonal waveforms
- Golay (1949): Golay code, perfect code
- Hamming (1950): Hamming code (single error correction, double error detection)
- Gilbert (1952): Gilbert bound on coding rate

- Muller (1954): combinatorial digital functions and error correction codes
- Elias (1954): tree codes, convolutional codes
- Reed and Solomon (1960): Reed-Solomon code (maximum distance separable code)
- Hocquenghem (1959) and Bose and Chaudhuri (1960): BCH code (multiple error correction)
- Peterson (1960): binary BCH decoding, error location polynomial

- Wozencraft and Reiffen (1961): sequential decoding for convolutional codes
- Gallager (1962): LDPC codes
- Fano (1963): Fano decoding algorithm for convolutional codes
- Zigangirov (1966): stack decoding algorithm for convolutional codes
- Forney (1966): generalized minimum distance decoding (error and erasure decoding)
- Viterbi (1967): optimal decoding algorithm for convolutional codes

- Berlekamp (1968): fast iterative BCH decoding
- Forney (1966): concatenated codes
- Goppa (1970): Goppa code (rational function code)
- Justesen (1972): Justesen code (asymptotically good code)
- Ungerboeck and Csajka (1976): trellis coded modulation, bandwidth-constrained channels
- Goppa (1980): algebraic-geometry codes

- Welch and Berlekamp (1983): remainder decoding algorithm without using syndromes
- Araki, Sorger and Kotter (1993): fast GMD decoding algorithm
- Berrou (1993): turbo codes, parallel concatenated convolutional codes

Basics of Decoding

a) Hamming distance d(ci, cj) >= 2t + 1: t errors are correctable.
b) Hamming distance d(ci, cj) >= 2t.
The received vector is denoted by r.

Linear Block Codes

(n, k, dmin) code:
    n: code length
    k: number of information bits
    dmin: minimum distance
    k/n: coding rate

- Large dmin: good error correction capability, but low rate k/n.
- r = n - k: number of redundant bits (by the Singleton bound below, r >= dmin - 1).
- An (n, k, d) linear block code is a linear subspace of dimension k in the n-dimensional linear space.

Arithmetic operations (+, -, *, /) for encoding and decoding are over a finite field GF(Q), where Q = p^r, p: prime number, r: positive integer.

Example GF(2):

    addition (XOR):        multiplication (AND):
    + | 0 1                * | 0 1
    0 | 0 1                0 | 0 0
    1 | 1 0                1 | 0 1

[Encoder]

The generator matrix G and the parity check matrix H:

    k information bits X -> encoder (G) -> n-bit codeword C
    C = XG

Dual (n, n-k) code:
- Complementary orthogonal subspace
- Parity check matrix H = generator matrix of the dual code
- CH^t = 0, GH^t = 0

Error vector & syndrome

    c: codeword vector
    e: error vector
    r: received vector (after hard decision)
    s: syndrome

    s = rH^t = (c + e)H^t = eH^t
    s -> ê (decoding process)

[Minimum Distance]
Singleton bound:
Since any dmin - 1 columns of H are linearly independent and H has rank n - k,

    dmin <= n - k + 1    (Singleton bound)

Maximum distance separable (MDS) code: dmin = n - k + 1, e.g. Reed-Solomon codes.

Some specific linear block codes:
- Hamming code: (n, k, dmin) = (2^m - 1, 2^m - 1 - m, 3)
- Hadamard code: (n, k, dmin) = (2^m, m + 1, 2^(m-1))

Cyclic Codes
Easy encoding.
C = (c_(n-1), ..., c0) is a codeword => (c_(n-2), ..., c0, c_(n-1)) is also a codeword.
Codeword polynomial: C(p) = c_(n-1) p^(n-1) + ... + c0

    pC(p) mod (p^n + 1)  <=>  cyclic shift

Encoding:
Message polynomial

    X(p) = x_(k-1) p^(k-1) + ... + x_0

Codeword polynomial

    C(p) = X(p) g(p)

where g(p) is the generator polynomial of degree n - k, and

    p^n + 1 = g(p) h(p)      (h(p): parity polynomial)

The encoder is implemented by shift registers.

[Figure: encoder for an (n, k) cyclic code.]

[Figure: syndrome calculator for an (n, k) cyclic code.]

Digital to analog (BPSK):

    c = 1  =>  s = +1
    c = 0  =>  s = -1
    s = 2c - 1

Soft-Decoding & Maximum Likelihood

    r = s_k + n = (s_1, ..., s_n) + (n_1, ..., n_n) = (r_1, ..., r_n)

    Prob(r | s_k): likelihood

    max_k Prob(r | s_k)  <=>  min_k || r - s_k ||  <=>  max_k correlation(r, c_k)

Optimum Soft-Decision Decoding of Linear Block Codes
The optimum receiver has M = 2^k matched filters, giving M correlation metrics:

    C(r, C_i) = sum_{j=1}^{n} (2 c_ij - 1) r_j

where C_i is the i-th codeword, c_ij is the j-th bit of the i-th codeword, and r_j is the j-th received signal.
The largest matched filter output is selected.
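A brute-force sketch of this receiver (mine; the generator matrix and the noisy BPSK samples are illustrative):

    from itertools import product

    def soft_decode(r, G):
        """Pick the codeword with the largest metric sum_j (2c_j - 1) r_j."""
        best, best_c = float('-inf'), None
        for m in product([0, 1], repeat=len(G)):
            c = [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]
            metric = sum((2 * cj - 1) * rj for cj, rj in zip(c, r))
            if metric > best:
                best, best_c = metric, c
        return best_c

    G = [[1, 1, 0, 1, 0, 0], [0, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 1]]
    r = [0.9, -1.1, 0.8, 0.95, 1.2, -0.2]     # received samples (+1 <-> bit 1)
    print(soft_decode(r, G))                  # [1, 0, 1, 1, 1, 0]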

Error probability for soft-decision decoding (coherent PSK):

    P_M <= exp( -(gamma_b Rc dmin - k ln 2) )

where gamma_b is the SNR per bit and Rc = k/n is the coding rate.
Uncoded binary PSK:

    P_e ~ (1/2) exp( -gamma_b )

Coding gain:

    C_g = 10 log10( Rc dmin - (k ln 2) / gamma_b )  [dB]

[Figure: coding gain C_g versus dmin.]

Hard-Decision Decoding

Discrete-time channel = modulator + AWGN channel + demodulator = BSC with crossover probability

    p = Q( sqrt(2 gamma_b Rc) )          coherent PSK
    p = Q( sqrt(gamma_b Rc) )            coherent FSK
    p = (1/2) exp( -gamma_b Rc / 2 )     noncoherent FSK

Maximum-Likelihood Decoding = Minimum Distance Decoding
Syndrome calculation with the parity check matrix H:

    S = YH^t = (C_m + e)H^t = eH^t

where C_m is the transmitted codeword, Y is the received codeword at the demodulator, and e is the binary error vector.

Comparison of performance between hard-decision and soft-decision decoding: at most about 2 dB difference.

Bounds on the minimum distance of linear block codes (Rc vs. dmin):

Hamming upper bound (2t < dmin):

    1 - Rc >= (1/n) log2( sum_{i=0}^{t} C(n,i) )

Plotkin upper bound (asymptotically):

    dmin / n <= (1 - Rc) / 2

Elias upper bound:

    dmin / n <= 2A(1 - A),  where  Rc = 1 + A log2 A + (1 - A) log2(1 - A)

Gilbert-Varshamov lower bound: there exist codes with

    Rc >= 1 - H( dmin / n )


Interleaving of Coded Data for Channels with Burst Errors
- Multipath and fading channels cause burst errors.
- Burst error correction code: Fire code.
- Correctable burst length: b <= (n - k) / 2.
- Block and convolutional interleaving are effective against burst errors.

Convolutional Codes
Performance of convolutional codes > block codes, as shown by Viterbi's algorithm:

    P(e) ~ 2^(-n E(R)),   E(R): error exponent

[Figure: constraint length-3, rate-1/2 convolutional encoder.]

Parameters of a convolutional code:
- Constraint length, K
- Minimum free distance

Optimum decoding of convolutional codes: the Viterbi algorithm.
For K <= 10 this is practical.
Probability of error for soft-decision decoding:

[Figure: trellis for the convolutional encoder.]

    P_e <= sum_{d >= d_free} a_d Q( sqrt(2 gamma_b Rc d) )

where a_d is the number of paths of distance d.

Probability of error for hard-decision decoding: the Hamming distance is the metric for hard decisions.

Turbo Coding

2006/07/07

Wireless Communication Engineering I

304

RSC Encoder

Shannon Limit & Turbo Code

Multi-user Communications

Multiple Access Techniques

1. A common communication channel shared by many users: the up-link in a satellite communication system, a set of terminals connected to a central computer, a mobile cellular system.
2. A broadcast network: down-links in a satellite system, radio and TV broadcast systems.
3. Store-and-forward networks.
4. Two-way communication systems.

- FDMA (Frequency-Division Multiple Access)
- TDMA (Time-Division Multiple Access)
- CDMA (Code-Division Multiple Access): suited to bursty, low-duty-cycle information transmission. Spread-spectrum signals have small cross-correlations. Without spreading, random access leads to collisions and interference, handled by a retransmission protocol.

Capacity of Multiple Access Methods

In FDMA, the normalized total capacity C_n = K C_K / W (the total bit rate for all K users per unit of bandwidth) satisfies

C_n = log₂(1 + C_n E_b/N_0)

where W is the bandwidth, E_b the energy per bit, and N_0 the noise power spectral density.
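The relation above only defines C_n implicitly; a small Python sketch of our own (the E_b/N_0 value is an arbitrary example) solves it numerically by bisection:

# Solve C_n = log2(1 + C_n * Eb/N0) for the nonzero root by bisection.
import math

def normalized_capacity(ebn0, lo=1e-9, hi=50.0, iters=100):
    f = lambda c: math.log2(1.0 + c * ebn0) - c
    # f(lo) > 0 when Eb/N0 > ln 2; f(hi) < 0 for hi large enough.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(normalized_capacity(10 ** (6 / 10)))   # Eb/N0 = 6 dB (example)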

Normalized capacity as a function of E_b/N_0 for FDMA.


Total capacity per hertz as a function of E_b/N_0 for FDMA.


In TDMA, there is a practical limit on the transmitter power.

In noncooperative CDMA,

C_n = (log₂ e) / (E_b/N_0)
Normalized capacity as a function of E_b/N_0 for noncooperative CDMA.


Capacity region for multiple users

Capacity region of two-user CDMA multiple access Gaussian channel.


Code-Division Multiple Access

CDMA Signal and Channel Models
The Optimum Receiver
  Synchronous Transmission
  Asynchronous Transmission
- Suboptimum Detectors (computational complexity grows linearly with the number of users, K)
  Conventional Single-user Detector: the near-far problem
  Decorrelation Detector
  Minimum Mean-Square-Error Detector
  Other Types of Detectors
- Performance Characteristics of Detectors
Random Access Methods

ALOHA Systems and Protocols: channel access protocols
- synchronized (slotted) ALOHA
- unsynchronized (unslotted) ALOHA
Throughput for slotted ALOHA

Throughput & Delay Performance

Carrier Sense Systems and Protocols

CSMA/CD (carrier sense multiple access with collision detection)
- Nonpersistent CSMA
- 1-persistent CSMA
- p-persistent CSMA
Chapter 10
Error Detection
and
Correction


10-1 INTRODUCTION

We first discuss some issues related, directly or indirectly, to error detection and correction.
Topics discussed in this section:
Types of Errors
Redundancy
Detection Versus Correction
Modular Arithmetic


Figure 10.1 Single-bit error

In a single-bit error, only 1 bit in the data unit has changed.


Figure 10.2 Burst error of length 8

A burst error means that 2 or more bits in the data unit have changed.

Error detection versus error correction

Error detection: check whether any error has occurred; we do not care about the number of errors or their positions.
Error correction: we need to know the number of errors and their positions; this is more difficult.

Figure 10.3 The structure of encoder and decoder

To detect or correct errors, we need to send extra (redundant) bits with the data.

Modular Arithmetic


Modulus N: the upper limit.
In modulo-N arithmetic, we use only the integers in the range 0 to N − 1, inclusive.
If N is 2, we use only 0 and 1.
There is no carry in the calculations (addition and subtraction).

Figure 10.4 XORing of two single bits or two words


10-2 BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.
Topics discussed in this section:
Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

Figure 10.5 Datawords and codewords in block coding


Example 10.1

The 4B/5B block coding discussed in Chapter 4 is a good example of this type of coding. In this coding scheme, k = 4 and n = 5. As we saw, we have 2^k = 16 datawords and 2^n = 32 codewords. We saw that 16 out of the 32 codewords are used for message transfer and the rest are either used for other purposes or unused.

Figure 10.6 Process of error detection in block coding


Table 10.1 A code for error detection (Example 10.2)


Figure 10.7 Structure of encoder and decoder in error correction


Table 10.2 A code for error correction (Example 10.3)


Hamming Distance


The Hamming distance between two words is the number of differences between corresponding bits.
The minimum Hamming distance is the smallest Hamming distance between all possible pairs in a set of words.

We can count the number of 1s in the XOR of two words:
1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).


Example 10.5

Find the minimum Hamming distance of the coding scheme in Table 10.1.

Solution
We first find all Hamming distances. The d_min in this case is 2.

Example 10.6

Find the minimum Hamming distance of the coding scheme in Table 10.2.

Solution
We first find all the Hamming distances. The d_min in this case is 3.

Minimum Distance for Error Detection

To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be d_min = s + 1.
Why?

Example 10.7

The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees detection of only a single error. For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword. If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.

Example 10.8

Table 10.2 has d_min = 3. This code can detect up to two errors. When any of the valid codewords is sent, two errors create a codeword which is not in the table of valid codewords. The receiver cannot be fooled.
What if three errors occur?

Figure 10.8 Geometric concept for finding dmin in error detection


Figure 10.9 Geometric concept for finding dmin in error correction

To guarantee correction of up to t errors in all cases, the minimum Hamming distance in a block code must be d_min = 2t + 1.

Example 10.9

A code scheme has a Hamming distance d_min = 4. What is the error detection and correction capability of this scheme?

Solution
This code guarantees the detection of up to three errors (s = 3), but it can correct up to one error. In other words, if this code is used for error correction, part of its capability is wasted. Error correction codes need to have an odd minimum distance (3, 5, 7, …).

10-3 LINEAR BLOCK CODES

Almost all block codes used today belong to a subset called linear block codes. A linear block code is a code in which the exclusive OR (addition modulo-2 / XOR) of two valid codewords creates another valid codeword.

Example 10.10

Let us see if the two codes we defined in Table 10.1 and Table 10.2 belong to the class of linear block codes.
1. The scheme in Table 10.1 is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, the XORing of the second and third codewords creates the fourth one.
2. The scheme in Table 10.2 is also a linear block code. We can create all four codewords by XORing two other codewords.

Minimum Distance for Linear Block Codes

The minimum Hamming distance is the number of 1s in the nonzero valid codeword with the smallest number of 1s.

Linear Block Codes

Simple parity-check code
Hamming codes

Table 10.3 Simple parity-check code C(5, 4)

A simple parity-check code is a single-bit error-detecting code in which n = k + 1 with d_min = 2. The extra bit (parity bit) makes the total number of 1s in the codeword even. A simple parity-check code can detect an odd number of errors.
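As a concrete illustration, here is a minimal Python sketch of the C(5, 4) encoder and checker; the function names are ours, not Forouzan's.

# Simple parity-check code C(5, 4): append one bit that makes the number
# of 1s even, and check parity at the receiver.
def encode(dataword):                      # dataword: list of 4 bits
    parity = sum(dataword) % 2             # even-parity bit
    return dataword + [parity]             # 5-bit codeword

def syndrome(received):                    # received: list of 5 bits
    return sum(received) % 2               # 0 = accept, 1 = discard

codeword = encode([1, 0, 1, 1])            # -> [1, 0, 1, 1, 1]
print(syndrome(codeword))                  # 0: no error detected
codeword[2] ^= 1                           # flip one bit
print(syndrome(codeword))                  # 1: error detected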

Figure 10.10 Encoder and decoder for simple parity-check code


Example 10.12

Let us look at some transmission scenarios. Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1. No dataword is created.
3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1. No dataword is created.

Example 10.12 (continued)

4. An error changes r0 and a second error changes a3. The received codeword is 00110. The syndrome is 0. The dataword 0011 is created at the receiver. Note that here the dataword is wrongly created due to the syndrome value.
5. Three bits (a3, a2, and a1) are changed by errors. The received codeword is 01011. The syndrome is 1. The dataword is not created. This shows that the simple parity check, guaranteed to detect one single error, can also find any odd number of errors.

Figure 10.11 Two-dimensional parity-check code

Table 10.4 Hamming code C(7, 4)

1. All Hamming codes discussed in this book have d_min = 3.
2. The relationship between m and n in these codes is n = 2^m − 1.

Figure 10.12 The structure of the encoder and decoder for a Hamming code


Table 10.5 Logical decisions made by the correction logic analyzer

Parity bits:
r0 = a2 + a1 + a0
r1 = a3 + a2 + a1
r2 = a1 + a0 + a3

Syndrome bits:
s0 = b2 + b1 + b0 + q0
s1 = b3 + b2 + b1 + q1
s2 = b1 + b0 + b3 + q2

Example 10.13

Let us trace the path of three datawords from the sender to the destination:
1. The dataword 0100 becomes the codeword 0100011. The codeword 0100011 is received. The syndrome is 000; the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received. The syndrome is 011. After flipping b2 (changing the 1 to 0), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received. The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that our code cannot correct two errors.
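To make the procedure concrete, here is a small Python sketch that implements the encoder and syndrome logic of Table 10.5. The bit layouts (dataword a3 a2 a1 a0, codeword a3 a2 a1 a0 r2 r1 r0, received b3 b2 b1 b0 q2 q1 q0) follow the figures; the helper names and the flip table are ours, derived from the syndrome equations.

# Hamming C(7,4) per Table 10.5.
def encode(a3, a2, a1, a0):
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return [a3, a2, a1, a0, r2, r1, r0]

# syndrome (s2, s1, s0) -> index of the single bit to flip (None = no error)
FLIP = {(0,0,0): None, (0,0,1): 6, (0,1,0): 5, (1,0,0): 4,
        (0,1,1): 1, (1,0,1): 3, (1,1,0): 0, (1,1,1): 2}

def decode(y):
    b3, b2, b1, b0, q2, q1, q0 = y
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    pos = FLIP[(s2, s1, s0)]
    if pos is not None:
        y = y.copy()
        y[pos] ^= 1                    # correct the indicated bit
    return y[:4]                       # recovered dataword

print(encode(0, 1, 1, 1))              # -> [0, 1, 1, 1, 0, 0, 1]
print(decode([0, 0, 1, 1, 0, 0, 1]))   # one error at b2 -> [0, 1, 1, 1]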

10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.
Topics discussed in this section:
Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes

Table 10.6 A CRC code with C(7, 4)


Figure 10.14 CRC encoder and decoder


Figure 10.15 Division in CRC encoder


Figure 10.16 Division in the CRC decoder for two cases


Figure 10.21 A polynomial to represent a binary word


Figure 10.22 CRC division using polynomials

The divisor in a cyclic code is normally called the generator polynomial or simply the generator.
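The division in Figures 10.15 and 10.16 is easy to mechanize; below is a small Python sketch of CRC encoding and checking by modulo-2 (XOR) long division. The generator and dataword are our own example values.

# CRC encoding/checking with modulo-2 long division (a sketch).
def xor_divide(bits, divisor):
    """Return the remainder of bits / divisor in modulo-2 arithmetic."""
    bits = bits.copy()
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:                       # divide only when leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]          # the n - k remainder bits

def crc_encode(dataword, divisor):
    padded = dataword + [0] * (len(divisor) - 1)
    return dataword + xor_divide(padded, divisor)   # codeword = data + CRC

def crc_check(received, divisor):
    return all(b == 0 for b in xor_divide(received, divisor))

g = [1, 0, 1, 1]                               # example generator x^3 + x + 1
cw = crc_encode([1, 0, 0, 1], g)               # -> [1, 0, 0, 1, 1, 1, 0]
print(cw, crc_check(cw, g))                    # CRC appended; check passes
cw[1] ^= 1
print(crc_check(cw, g))                        # single error detected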

10-5 CHECKSUM

The last error detection method we discuss here is called the checksum. The checksum is used in the Internet by several protocols, although not at the data link layer. However, we briefly discuss it here to complete our discussion on error checking.

Topics discussed in this section:
Idea
One's Complement
Internet Checksum

Example 10.18

Suppose our data is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers. The receiver adds the five numbers and compares the result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and discards the sum. Otherwise, there is an error somewhere and the data are not accepted.

Example 10.19

We can make the job of the receiver easier if we send the negative (complement) of the sum, called the checksum. In this case, we send (7, 11, 12, 0, 6, −36). The receiver can add all the numbers received (including the checksum). If the result is 0, it assumes no error; otherwise, there is an error.

Example 10.20

How can we represent the number 21 in one's complement arithmetic using only four bits?

Solution
The number 21 in binary is 10101 (it needs five bits). We can wrap the leftmost bit and add it to the four rightmost bits. We have (0101 + 1) = 0110 or 6.

Example 10.21

How can we represent the number −6 in one's complement arithmetic using only four bits?

Solution
In one's complement arithmetic, the negative or complement of a number is found by inverting all bits. Positive 6 is 0110; negative 6 is 1001. If we consider only unsigned numbers, this is 9. In other words, the complement of 6 is 9. Another way to find the complement of a number in one's complement arithmetic is to subtract the number from 2^n − 1 (16 − 1 in this case).

Figure 10.24 Example 10.22 (one's complement addition)

Note
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

Note
Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
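The steps above translate directly into code; here is a minimal Python sketch of the 16-bit Internet checksum (the function names and example words are ours).

# 16-bit Internet checksum using one's complement addition
# (carries are wrapped back into the sum).
def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry
    return total

def make_checksum(words):
    return ~ones_complement_sum(words) & 0xFFFF    # complement of the sum

def verify(words_with_checksum):
    return ones_complement_sum(words_with_checksum) == 0xFFFF

data = [0x4672, 0x6F75]                            # example 16-bit words
cks = make_checksum(data)
print(verify(data + [cks]))                        # True: accepted
print(verify([0x4673, 0x6F75, cks]))               # False: corrupted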

Example 10.23

Let us calculate the checksum for a text of 8 characters ("Forouzan"). The text needs to be divided into 2-byte (16-bit) words. We use ASCII (see Appendix A) to change each byte to a 2-digit hexadecimal number. For example, F is represented as 0x46 and o is represented as 0x6F. Figure 10.25 shows how the checksum is calculated at the sender and receiver sites. In part a of the figure, the value of the partial sum for the first column is 0x36. We keep the rightmost digit (6) and insert the leftmost digit (3) as the carry in the second column. The process is repeated for each column. Note that if there is any corruption, the checksum recalculated by the receiver is not all 0s. We leave this as an exercise.

Figure 10.25 Example 10.23


Modern Coding Theory: LDPC Codes

Hossein Pishro-Nik

University of Massachusetts Amherst


November 7, 2006

Outline

Introduction and motivation


Error control coding
Block codes
Minimum distance
Modern coding: LDPC codes
Practical challenges
Application of LDPC codes to holographic data storage

Errors in Information Transmission

Digital communications: transporting information from one party to another using a sequence of symbols, e.g. bits.

Noise and interference: the received sequence may differ from the transmitted one.
Received bits = corrupted version of the transmitted bits.

Transmitted bits: 0110010101
Received bits:    0100010101

Errors in Information Transmission (cont.)

Magnetic recording (track, sector): some of the bits may change during the transmission from the disk to the disk drive.

Errors in Information Transmission (cont.)

These communication systems can be modeled as Binary Symmetric Channels (BSC):

Sender → BSC → Receiver
Information bits 1001010101 become corrupted bits 1011000101.

Each bit is flipped with probability p, 0 < p < 0.5, and received correctly with probability 1 − p.

Pioneers of Coding Theory

Bell Telephone Laboratories: Richard Hamming and Claude Shannon.

Error Control Coding: Repetition Codes

Error control coding: use redundancy to reduce the bit error rate.

1001010101 → BSC → 1011000101 (bit error probability p = 0.01)

Example: the three-fold repetition code sends each bit three times.

Repetition Codes (cont.)

Encoder: repeat each bit three times, producing the codeword (y1, y2, y3) = (x1, x1, x1).
The codeword passes through the BSC, giving the corrupted codeword (z1, z2, z3).
Decoder: majority voting recovers the estimate of x1.

Repetition Codes (cont.)

(x1) = (0) → Encoder → codeword (0, 0, 0) → BSC → corrupted codeword (1, 0, 0) → Decoder (majority voting) → (0). Successful decoding!

Decoding Error

Decoding error probability p_e = Prob{2 or 3 bits in the codeword received in error}:

p_e = p³ + 3p²(1 − p);  p = 0.01 → p_e ≈ 3 × 10⁻⁴

Advantage: reduced bit error rate.
Disadvantage: we lose bandwidth because each bit must be sent three times.
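A quick Monte Carlo sketch in Python (our own check) confirms the formula for the slide's p = 0.01:

# Monte Carlo check of p_e = p^3 + 3 p^2 (1 - p) for the 3-fold repetition code.
import random

p, trials, errors = 0.01, 1_000_000, 0
for _ in range(trials):
    # transmit bit 0 as (0,0,0); each bit flips independently with prob. p
    received = [1 if random.random() < p else 0 for _ in range(3)]
    if sum(received) >= 2:                 # majority voting decodes to 1
        errors += 1

print(errors / trials)                     # ~3e-4
print(p**3 + 3 * p**2 * (1 - p))           # 2.98e-4 (theory)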

Error Control Coding: Block Codes

Information block (x1, x2, …, xk) → Encoder → codeword (y1, y2, …, yn), with n > k.
The codeword passes through the BSC; the decoder recovers the information block from the corrupted codeword (z1, z2, …, zn).

Encoding: mapping the information block to the corresponding codeword.
Decoding: an algorithm for recovering the information block from the corrupted codeword.
Always n > k: there is redundancy in the codeword.

Error Control Coding: Block Codes (cont.)

k: code dimension, n: code length. This is called an (n, k) block code.

Code Rate

In general an (n, k) block code is a 1-1 mapping (x1, x2, …, xk) → (y1, y2, …, yn) from k bits to n bits:

R = code rate = dimension / code length = k/n,  0 < R ≤ 1

R shows the amount of redundancy in the codeword: higher R = lower redundancy.

Repetition Codes Revisited

For the repetition code: k = 1, n = 3, R = 1/3.
(x1) → Encoder → (y1, y2, y3) = (x1, x1, x1) → BSC → (z1, z2, z3) → Decoder.
The repetition code is a (3, 1) block code.

Block Codes (cont.)

There are two valid codewords in the repetition code:
(0) → (0, 0, 0)
(1) → (1, 1, 1)

A (5, 3) block code (n = 5, k = 3, R = 3/5) has 8 valid codewords; for example,
(0 1 1) → (0 1 0 1 1)
(1 0 0) → (1 0 1 1 1)

The number of valid codewords is equal to the number of possible information blocks, 2^k.

Block Codes (cont.)

The mapping (x1, …, xk) → (y1, …, yn) embeds the 2^k k-tuples into the space of all 2^n n-tuples; the valid codewords are a sparse subset.

Good Block Codes

There exist codes more efficient and more powerful than repetition.

Good codes have:
- High rate, i.e., lower redundancy (how high depends on the channel error rate p)
- Low error rate at the decoder
- Simple and practical encoding and decoding

Linear Block Codes

C: (x1, x2, …, xk) → (y1, y2, …, yn), a linear mapping:

(y1, y2, …, yn) = (x1, x2, …, xk) G

where G = (g_ij) is the k × n generator matrix.
Simple structure: easier to analyze


Simple encoding algorithms.
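Encoding is a single matrix-vector product over GF(2); here is a minimal numpy sketch with an illustrative generator matrix (the matrix values are our own, not from the talk).

# Linear block encoding y = x G over GF(2).
import numpy as np

# Example 3x5 generator in systematic form [I | P] (illustrative only).
G = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])

def encode(x):
    """Encode the information block x (length k) into a length-n codeword."""
    return (np.array(x) @ G) % 2           # matrix product modulo 2

print(encode([1, 0, 1]))                   # -> [1 0 1 1 0]
# Linearity: the XOR of two codewords is again a codeword.
print((encode([1, 0, 1]) + encode([0, 1, 1])) % 2, encode([1, 1, 0]))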

Linear Block Codes


There are many practical linear block codes:

Hamming codes
Cyclic codes
Reed-Solomon codes
BCH codes


Channel Capacity

(x1, …, xk) → Encoder → (y1, …, yn) → Noisy channel → (z1, …, zn) → Decoder → (x1, …, xk)

Channel capacity (Shannon): the maximum achievable data rate.
Shannon capacity is achievable using random codes.

Shannon Codes

(x1, …, xk) → (y1, …, yn): a random mapping of the 2^k information blocks into the 2^n n-tuples.

Shannon Random Codes

As n (the block length) goes to infinity, random codes achieve the channel capacity, i.e., the code rate R approaches C while the decoding error probability goes to zero.

Error Control Coding: Low-Density Parity-Check (LDPC) Codes

Ideal codes:
- Have efficient encoding
- Have efficient decoding
- Can approach channel capacity

Low-density parity-check (LDPC) codes:
- Random codes, based on random graphs
- Simple iterative decoding

t-Error-Correcting Codes

The repetition code can correct one error in the codeword; however, it fails to correct larger numbers of errors.

A code that is capable of correcting t errors in the codewords is called a t-error-correcting code. The repetition code is a 1-error-correcting code.

Minimum Distance

The minimum distance of a code is the minimum Hamming distance between its codewords:

d_min = min{ dist(u, v) : u and v are distinct codewords }

For the repetition code, since there are only two valid codewords, c1 = (0, 0, 0) and c2 = (1, 1, 1),

d_min = dist(c1, c2) = 3

Minimum Distance (cont.)

(Figure: the space of all vectors of length n, i.e. n-tuples.)

Minimum Distance (cont.)

Higher minimum distance = stronger code.
Example: for the repetition code, t = 1 and d_min = 3.

Modern Coding Theory

Random linear codes can achieve channel capacity.
Linear codes can be encoded efficiently.
Decoding of general linear codes is NP-hard.
Gallager's idea: find a subclass of random linear codes that can be decoded efficiently.

Modern Coding Theory

Iterative coding schemes: LDPC codes, Turbo codes.

Encoder → BSC → Iterative Decoder

Iterative decoding is used instead of distance-based decoding.

Introduction to Channel Coding

Noisy channels: information bits 1001010101 become corrupted bits 10e10e01e1.

Example: the binary erasure channel (BEC) maps inputs {0, 1} to outputs {0, 1, e}, where e denotes an erasure.

Other channels: the Gaussian channel, the binary symmetric channel, …

Low-Density Parity-Check Codes

Defined by random sparse graphs (Tanner graphs): variable (bit) nodes y1, y2, …, yn are connected to check (message) nodes, and each check node enforces a parity constraint such as y1 ⊕ y2 ⊕ y3 = 0.

Simple iterative decoding: the message-passing algorithm.

Important Recent Developments

Luby et al. and Richardson et al.: density evolution; optimization using density evolution.
Shokrollahi et al.: capacity-achieving LDPC codes for the binary erasure channel (BEC).
Richardson et al. and Jin et al.: efficient encoding; irregular repeat-accumulate codes.

Standard Iterative Decoding over the BEC

Codeword 01101001 is received as 01e0ee01.

Standard iterative algorithm:
Repeat for every check node {
  If only one of the neighbors is missing, recover it
}

Example: a check node with neighbors y1, y2, y3 and y3 erased recovers
y3 = y1 ⊕ y2 = 0 ⊕ 1 = 1.
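A minimal Python sketch of this peeling decoder, assuming the parity constraints are given as lists of variable indices (the small code and received word below are our own illustration):

# Iterative (peeling) erasure decoding on the BEC.
def peel(received, checks):
    y = list(received)                       # bits are 0, 1 or None (erased)
    progress = True
    while progress:
        progress = False
        for check in checks:
            missing = [i for i in check if y[i] is None]
            if len(missing) == 1:            # exactly one erased neighbor
                i = missing[0]
                y[i] = sum(y[j] for j in check if j != i) % 2
                progress = True
    return y

# Illustrative length-4 code with checks y0+y1+y2 = 0 and y1+y2+y3 = 0.
checks = [[0, 1, 2], [1, 2, 3]]
print(peel([0, None, 1, None], checks))      # recovers [0, 1, 1, 0]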


Standard Iterative Decoding (cont.)

(Figure: the erased bits are recovered one at a time.) Decoding is successful!

Algorithm A (cont.)

The algorithm may fail: if every unresolved check node has at least two erased neighbors, no further bits can be recovered. Such a set of erased variable nodes is called a stopping set S.

Practical Challenges: Finite-Length Codes

In practice we need to use short or moderate-length codes, and these do not perform as well as long codes.

Error Floor of LDPC Codes

(Figure: BER versus the average erasure probability of the channel, on a log scale from about 10⁻¹ down to 10⁻⁹.) Capacity-approaching LDPC codes suffer from the error floor problem: the BER curve flattens, producing a high error floor instead of a low one.

Volume Holographic Memory (VHM) Systems


Noise and Error Sources

Thermal noise, shot noise,


Limited diffraction
Aberration
Misalignment error
Inter-page interference (IPI)
Photovoltaic damage
Non-uniform erasure


The Scaling Law

The magnitudes of systematic errors and thermal noise are assumed to remain unchanged with respect to M (the number of pages), and the SNR is proportional to 1/M²:

SNR ∝ 1/M²

SNR decreases as the number of pages increases, so there exists an optimum number of pages that maximizes the storage capacity.

Raw Error Distribution over a Page

Bits in different regions of a page are affected by different noise powers; the noise power is higher at the edges.

Properties and Requirements

- Can use large block lengths
- Non-uniform error correction
- Error floor: target BER < 10⁻¹²
- Simple implementation: simple decoding

Ensembles for Non-uniform Error Correction

(Figure: a Tanner graph with check nodes c1, c2, …, ck and variable nodes; the bits from the first region of the page form their own class of variable nodes.)

Ensemble Properties

Threshold effect
Concentration theorem
Density evolution
Stability condition (BEC)

Design Methodology

The performance of the decoder is not directly related to the minimum distance. However, the minimum distance still plays an important role, for example in the error floor effect.

To eliminate the error floor, we avoid degree-two variable nodes so as to obtain a large minimum distance. For efficient decoding, and also for simplicity of design, we use low degrees for the variable nodes.

Performance on VHM

(Figure: BER from 10⁻² down to 10⁻⁹ for block lengths n = 10⁴ and n = 10⁵.) Rate = 0.85, average degree = 6; the gap from capacity at BER 10⁻⁹ is 0.6 dB.

Storage Capacity

Information-theoretic capacity for soft-decision decoding: 0.95 Gb.

(Figure: storage capacity in Gbits versus the number of pages, 2000 to 6000.)
LDPC, soft decision: 0.84 Gb
LDPC, hard decision: 0.76 Gb
RS, hard decision: 0.52 Gb

Conclusion

Carefully designed LDPC codes can result in a significant increase in storage capacity. By incorporating channel information in the design of LDPC codes we obtain:
- a small gap from capacity
- error floor reduction
- more efficient decoding

Modern Coding Theory

The performance of the decoder is not directly related to the minimum distance. However, the minimum distance still plays an important role, for example in the error floor effect.

Chapter 4

Digital Transmission

4.1 Line Coding

Some Characteristics
Line Coding Schemes
Some Other Schemes


Figure 4.1 Line coding

Figure 4.2 Signal level versus data level

Figure 4.3 DC component

Example 1

A signal has two data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
Pulse rate = 1/10⁻³ = 1000 pulses/s
Bit rate = pulse rate × log₂ L = 1000 × log₂ 2 = 1000 bps

Example 2

A signal has four data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
Pulse rate = 1/10⁻³ = 1000 pulses/s
Bit rate = pulse rate × log₂ L = 1000 × log₂ 4 = 2000 bps

Figure 4.4 Lack of synchronization

Example 3

In a digital transmission, the receiver clock is 0.1 percent faster than the sender clock. How many extra bits per second does the receiver receive if the data rate is 1 Kbps? How many if the data rate is 1 Mbps?

Solution
At 1 Kbps: 1000 bits sent, 1001 bits received → 1 extra bps.
At 1 Mbps: 1,000,000 bits sent, 1,001,000 bits received → 1000 extra bps.

Figure 4.5 Line coding schemes

Figure 4.6 Unipolar encoding

Note:
Unipolar encoding uses only one voltage level.

Figure 4.7 Types of polar encoding

Note:
Polar encoding uses two voltage levels (positive and negative).

Note:
In NRZ-L the level of the signal is dependent upon the state of the bit.

Note:
In NRZ-I the signal is inverted if a 1 is encountered.

Figure 4.8 NRZ-L and NRZ-I encoding

Figure 4.9 RZ encoding

Note:
A good encoded digital signal must contain a provision for synchronization.

Figure 4.10 Manchester encoding

Note:
In Manchester encoding, the transition at the middle of the bit is used for both synchronization and bit representation.
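For concreteness, here is a small Python sketch that produces the level sequences for NRZ-L and Manchester encoding. The level conventions (±1 levels, 0 → low for NRZ-L, and a low-to-high mid-bit transition for a 1 in Manchester) are assumptions for illustration only.

# Two line codes as level sequences (+1/-1), a sketch.
def nrz_l(bits):
    return [1 if b else -1 for b in bits]            # one level per bit

def manchester(bits):
    out = []
    for b in bits:
        out += [-1, 1] if b else [1, -1]             # two half-bit levels
    return out

bits = [0, 1, 0, 0, 1, 1]
print(nrz_l(bits))       # [-1, 1, -1, -1, 1, 1]
print(manchester(bits))  # [1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1]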


Figure 4.11 Differential Manchester encoding

Note:
In differential Manchester encoding, the transition at the middle of the bit is used only for synchronization. The bit representation is defined by the inversion or noninversion at the beginning of the bit.

Note:
In bipolar encoding, we use three levels: positive, zero, and negative.

Figure 4.12 Bipolar AMI encoding

Figure 4.13 2B1Q

Figure 4.14 MLT-3 signal

4.2 Block Coding

Steps in Transformation
Some Common Block Codes


Figure 4.15 Block coding

Figure 4.16 Substitution in block coding

Table 4.1 4B/5B encoding

Data  Code    Data  Code
0000  11110   1000  10010
0001  01001   1001  10011
0010  10100   1010  10110
0011  10101   1011  10111
0100  01010   1100  11010
0101  01011   1101  11011
0110  01110   1110  11100
0111  01111   1111  11101

Table 4.1 4B/5B encoding (continued)

Data                 Code
Q (Quiet)            00000
I (Idle)             11111
H (Halt)             00100
J (start delimiter)  11000
K (start delimiter)  10001
T (end delimiter)    01101
S (Set)              11001
R (Reset)            00111
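A table-driven encoder is one dictionary lookup per nibble; a minimal Python sketch (the input bits are an arbitrary example):

# 4B/5B encoder driven by Table 4.1.
FOUR_TO_FIVE = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits):
    """Encode a bit string whose length is a multiple of 4."""
    assert len(bits) % 4 == 0
    return "".join(FOUR_TO_FIVE[bits[i:i+4]] for i in range(0, len(bits), 4))

print(encode_4b5b("00010010"))   # -> "0100110100"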

Figure 4.17 Example of 8B/6T encoding

4.3 Sampling

Pulse Amplitude Modulation
Pulse Code Modulation
Sampling Rate: Nyquist Theorem
How Many Bits per Sample?
Bit Rate

Figure 4.18 PAM

Note:
Pulse amplitude modulation has some applications, but it is not used by itself in data communication. However, it is the first step in another very popular conversion method called pulse code modulation.

Figure 4.19 Quantized PAM signal

Figure 4.20 Quantizing by using sign and magnitude

Figure 4.21 PCM

Figure 4.22 From analog signal to PCM digital code

Note:
According to the Nyquist theorem, the sampling rate must be at least 2 times the highest frequency.

Figure 4.23 Nyquist theorem

Example 4

What sampling rate is needed for a signal with a bandwidth of 10,000 Hz (1000 to 11,000 Hz)?

Solution
The sampling rate must be twice the highest frequency in the signal:
Sampling rate = 2 × 11,000 = 22,000 samples/s

Example 5

A signal is sampled. Each sample requires at least 12 levels of precision (+0 to +5 and −0 to −5). How many bits should be sent for each sample?

Solution
We need 4 bits: 1 bit for the sign and 3 bits for the value. A 3-bit value can represent 2³ = 8 levels (000 to 111), which is more than what we need. A 2-bit value is not enough since 2² = 4. A 4-bit value is too much because 2⁴ = 16.

Example 6

We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?

Solution
The human voice normally contains frequencies from 0 to 4000 Hz.
Sampling rate = 4000 × 2 = 8000 samples/s
Bit rate = sampling rate × number of bits per sample = 8000 × 8 = 64,000 bps = 64 Kbps
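These calculations chain together naturally; a small Python helper of our own reproduces Examples 4 and 6:

# Nyquist-rate and PCM bit-rate arithmetic (a sketch).
def sampling_rate(f_max_hz):
    return 2 * f_max_hz                     # Nyquist: twice the highest frequency

def pcm_bit_rate(f_max_hz, bits_per_sample):
    return sampling_rate(f_max_hz) * bits_per_sample

print(sampling_rate(11_000))                # Example 4: 22000 samples/s
print(pcm_bit_rate(4_000, 8))               # Example 6: 64000 bps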
Note:
We can always change a band-pass signal to a low-pass signal before sampling. In this case, the sampling rate is twice the bandwidth.

4.4 Transmission Mode

Parallel Transmission
Serial Transmission


Figure 4.24 Data transmission

Figure 4.25 Parallel transmission

Figure 4.26 Serial transmission

Note:
In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between each byte.

Note:
Asynchronous here means asynchronous at the byte level, but the bits are still synchronized; their durations are the same.

Figure 4.27 Asynchronous transmission

Note:
In synchronous transmission, we send bits one after another without start/stop bits or gaps. It is the responsibility of the receiver to group the bits.

Figure 4.28 Synchronous transmission

PART III
Data Link Layer


Position of the data-link layer


Data link layer duties


LLC and MAC sublayers


IEEE standards for LANs


Chapters
Chapter 10  Error Detection and Correction
Chapter 11  Data Link Control and Protocols
Chapter 12  Point-To-Point Access
Chapter 13  Multiple Access
Chapter 14  Local Area Networks
Chapter 15  Wireless LANs
Chapter 16  Connecting LANs
Chapter 17  Cellular Telephone and Satellite Networks
Chapter 18  Virtual Circuit Switching

Chapter 10

Error Detection and Correction

Note:
Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected.

10.1 Types of Error

Single-Bit Error
Burst Error


Note:
In a single-bit error, only one bit in the data unit has changed.

10.1 Single-bit error

Note:
A burst error means that 2 or more bits in the data unit have changed.

10.2 Burst error of length 5

10.2 Detection
Redundancy
Parity Check
Cyclic Redundancy Check (CRC)
Checksum


Note:
Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.

10.3 Redundancy

10.4 Detection methods

10.5 Even-parity concept

Note:
In parity check, a parity bit is added to every data unit so that the total number of 1s is even (or odd for odd-parity).

Example 1

Suppose the sender wants to send the word "world". In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent (each character followed by its even-parity bit):
11101110 11011110 11100100 11011000 11001001

Example 2

Now suppose the word "world" in Example 1 is received by the receiver without being corrupted in transmission:
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4). The data are accepted.

Example 3

Now suppose the word "world" in Example 1 is corrupted during transmission:
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4). The receiver knows that the data are corrupted, discards them, and asks for retransmission.

Note:
Simple parity check can detect all single-bit errors. It can detect burst errors only if the total number of errors in each data unit is odd.

10.6 Two-dimensional parity

Example 4

Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted:
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity rule and the whole block is discarded.

Note:
In two-dimensional parity check, a block of bits is divided into rows and a redundant row of bits is added to the whole block.

10.7 CRC generator and checker

10.8 Binary division in a CRC generator

10.9 Binary division in CRC checker

10.10 A polynomial

10.11 A polynomial representing a divisor

Table 10.1 Standard polynomials

Name     Polynomial                                   Application
CRC-8    x^8 + x^2 + x + 1                            ATM header
CRC-10   x^10 + x^9 + x^5 + x^4 + x^2 + 1             ATM AAL
ITU-16   x^16 + x^12 + x^5 + 1                        HDLC
ITU-32   x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1   LANs

Example 5

It is obvious that we cannot choose x (binary 10) or x^2 + x (binary 110) as the polynomial because both are divisible by x. However, we can choose x + 1 (binary 11) because it is not divisible by x, but is divisible by x + 1. We can also choose x^2 + 1 (binary 101) because it is divisible by x + 1 (binary division).

Example 6

The CRC-12
x^12 + x^11 + x^3 + x^2 + x + 1
which has a degree of 12, will detect all burst errors affecting an odd number of bits, will detect all burst errors with a length less than or equal to 12, and will detect, 99.97 percent of the time, burst errors with a length of 12 or more.

10.12 Checksum

10.13 Data unit and checksum

Note:
The sender follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one's complement to get the sum.
3. The sum is complemented and becomes the checksum.
4. The checksum is sent with the data.

Note:
The receiver follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one's complement to get the sum.
3. The sum is complemented.
4. If the result is zero, the data are accepted; otherwise, they are rejected.

Example 7

Suppose the following block of 16 bits is to be sent using a checksum of 8 bits:
10101001 00111001
The numbers are added using one's complement:
  10101001
  00111001
  Sum       11100010
  Checksum  00011101
The pattern sent is 10101001 00111001 00011101.

Example 8

Now suppose the receiver receives the pattern sent in Example 7 and there is no error:
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and shows that there is no error:
  10101001
  00111001
  00011101
  Sum        11111111
  Complement 00000000  means that the pattern is OK.

Example 9

Now suppose there is a burst error of length 5 that affects 4 bits:
10101111 11111001 00011101
When the receiver adds the three sections, it gets
  10101111
  11111001
  00011101
  Partial sum  1 11000101
  Carry        1
  Sum          11000110
  Complement   00111001  means that the pattern is corrupted.

10.3 Correction

Retransmission
Forward Error Correction
Burst Error Correction

Table 10.2 Data and redundancy bits

Number of data bits (m)   Number of redundancy bits (r)   Total bits (m + r)
1                         2                               3
2                         3                               5
3                         3                               6
4                         3                               7
5                         4                               9
6                         4                               10
7                         4                               11

(r is the smallest integer satisfying 2^r ≥ m + r + 1.)
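The table follows directly from that inequality; a minimal Python sketch of our own regenerates it:

# Smallest number of redundancy bits r with 2^r >= m + r + 1.
def redundancy_bits(m):
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in range(1, 8):
    print(m, redundancy_bits(m), m + redundancy_bits(m))
# -> the rows of Table 10.2: (1,2,3), (2,3,5), ..., (7,4,11)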

10.14 Positions of redundancy bits in Hamming code

10.15 Redundancy bits calculation

10.16 Example of redundancy bit calculation

10.17 Error detection using Hamming code

10.18 Burst error correction example

Chapter 11: Error-Control Coding

Lecture edition by K. Heikkinen

Chapter 11 goals

To understand the error-correcting codes in use, the theorems behind them and their principles: block codes, convolutional codes, etc.

Chapter 11 contents

Introduction
Discrete Memoryless Channels
Linear Block Codes
Cyclic Codes
Maximum Likelihood Decoding of Convolutional Codes
Trellis-Coded Modulation
Coding for Compound-Error Channels

Introduction

A cost-effective facility for transmitting information at a given rate and a given level of reliability and quality, measured by the signal energy per bit to noise power density ratio, is achieved practically via error-control coding.
Error-control methods; error-correcting codes.

Discrete Memoryless Channels

Discrete memoryless channels (see Fig. 11.1) are described by a set of transition probabilities. In the simplest form, binary coding {0, 1} is used, of which the BSC is an appropriate example. Channel noise is modeled as an additive white Gaussian noise channel. The two above constitute so-called hard-decision decoding; other solutions are so-called soft-decision decoding.

Linear Block Codes

A code is said to be linear if any two words in the code can be added in modulo-2 arithmetic to produce a third code word in the code.
A linear block code has n bits, of which k bits are always identical to the message sequence. The remaining n − k bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code; these bits are also called parity bits.

Linear Block Codes

Normally the code equations are written in the form of matrices (with a 1-by-k message vector):
P is the k-by-(n − k) coefficient matrix;
I (of order k) is the k-by-k identity matrix;
G is the k-by-n generator matrix.
Another way to show the relationship between the message bits and parity bits: H is the parity-check matrix.

Linear Block Codes

In syndrome decoding the generator matrix (G) is used in the encoding at the transmitter and the parity-check matrix (H) at the receiver. If a bit is corrupted, r = c + e, which leads to two important properties:
- the syndrome depends only on the error pattern, not on the transmitted code word;
- all error patterns that differ by a code word have the same syndrome.

Linear Block Codes

The Hamming distance (or minimum distance) can be used to calculate the difference between code words.
We have a certain number (2^k) of code vectors, whose subsets constitute a standard array for an (n, k) linear block code.
We pick the error patterns of a given code; the coset leaders are the most obvious error patterns.

Linear Block Codes

Example: let us have a parity-check matrix H whose vectors are
(1110), (0101), (0011), (0001), (1000), (1111).
The code generator G gives us the following codes (c):
000000, 100101, 111010, 011111.
Let us find n, k and n − k. What will we find if we multiply Hc?

Linear Block Codes

Examples of (7,4) Hamming code words and error patterns.

Cyclic Codes

Cyclic codes form a subclass of linear block codes. A binary code is said to be cyclic if it exhibits the two following properties:
- the sum of any two code words in the code is also a code word (linearity), which means we are speaking of linear block codes;
- any cyclic shift of a code word in the code is also a code word (cyclic property).
Mathematically, cyclic codes are handled in polynomial notation.

Cyclic Codes

The generator polynomial plays a major role in the generation of cyclic codes. If we have a generator polynomial g(x) of an (n, k) cyclic code with certain k polynomials, we can create the generator matrix (G). The syndrome polynomial of the received code word corresponds to the error polynomial.


Cyclic Codes

Example: a (7,4) cyclic code has a block length of 7. Let us find the polynomials that generate the code (see Example 3 in the book): find the code polynomials, the generator matrix (G) and the parity-check matrix (H).

Other remarkable cyclic codes:
- Cyclic redundancy check (CRC) codes
- Bose-Chaudhuri-Hocquenghem (BCH) codes
- Reed-Solomon codes

Convolutional Codes

Convolutional codes work in a serial manner, which suits this kind of application better. The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders.
Convolutional codes are portrayed in graphical form by using three different diagrams:
- Code Tree
- Trellis
- State Diagram

Maximum Likelihood Decoding of Convolutional Codes

We can create a log-likelihood function for a convolutional code that has a certain Hamming distance. The book presents an example algorithm (Viterbi). The Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see Fig. 11.17). Its steps: initialization, the computation step, and the final step.

Trellis-Coded Modulation

Here coding is described as a process of imposing certain patterns on the transmitted signal. Trellis-coded modulation has three features:
- The number of signal points is larger than what is required, thereby allowing redundancy without sacrificing bandwidth.
- Convolutional coding is used to introduce a certain dependency between successive signal points.
- Soft-decision decoding is done in the receiver.

Coding for Compound-Error Channels

Compound-error channels exhibit both independent and burst error statistics (e.g. PSTN channels, radio channels). Error-protection methods: ARQ, FEC.

The Logical Domain

Chapter 6: Error Control in the Binary Channel

Fall 2007

The Exclusive OR

If A and B are binary variables, the XOR of A and B is defined as:
0 + 0 = 1 + 1 = 0
0 + 1 = 1 + 0 = 1

XOR with 1 complements a variable.
Note d_ij = w(x_i XOR x_j).

Hamming Distance

The weight of a code word is the number of 1s in it.
The Hamming distance between two code words is equal to the number of digits in which they differ.
The distance d_ij between x_i = 1110010 and x_j = 1011001 is 4.

x1 = 1110010
x2 = 1100001
y  = 1100010

d(x1, y) = w(1110010 + 1100010) = w(0010000) = 1
d(x2, y) = w(1100001 + 1100010) = w(0000011) = 2

The Binary Symmetric Channel

x0 → y0 with probability 1 − p, x0 → y1 with probability p;
x1 → y1 with probability 1 − p, x1 → y0 with probability p.

BSC and Hamming Distance

If x is the input code word to a BSC, and y is the output, then y = x + n, where the noise vector n has a 1 wherever an error has occurred:
x = 1110010, n = 0010000 → y = 1100010
An error causes a distance of 1 between input and output code words.

Geometric Point-of-View

Code set of all 8 3-digit words (the vertices of a cube: 000, 001, 010, 011, 100, 101, 110, 111). Minimum distance = 1.

In the BSC, any error changes a code word into another code word.

Reduced Rate Source

Code set of 4 3-digit words. Minimum distance = 2.

Error Correction and Detection Capability

The distance between two code words is the number of places in which they differ. d_min is the distance between the two code words which are closest together.
A code with minimum distance d_min may be used to detect d_min − 1 errors, or to correct ⌊(d_min − 1)/2⌋ errors.

Consider a 7-bit received word y lying between code words x1 and x2 (with q = 1 − p):
The probability that y came from x1 = Pr{1 error} = p q⁶.
The probability that y came from x2 = Pr{2 errors} = p² q⁵.
Since p(y|x1) > p(y|x2) for p < 1/2, the received word is more likely to have come from the closest code word. Decoding the received vector as the closest code word therefore gives error correction.

Error Detection and Correction

Shannon's Noisy Channel Coding Theorem: to achieve error-free communications,
- reduce the rate below capacity;
- add structured redundancy;
- increase the distance between code words.

Error Correcting Codes

Add redundancy in a structured way. For example, add a single parity check digit, choosing the value of the appended digit to make the number of 1s even (or odd).

Parity Check Equation

For message bits m0, m1, …, m_{n−1}, the check digit is

c1 = Σ_{i=0}^{n−1} m_i   (addition is modulo 2)

Modulo-2 addition table:
+ | 0  1
0 | 0  1
1 | 1  0

An (n,k) Group Code

Code word: x7 x6 x5 x4 x3 x2 x1 = m4 m3 m2 m1 c3 c2 c1

c1 = m1 + m2 + m4
c2 = m1 + m3 + m4
c3 = m2 + m3 + m4

Contribution of each message bit to the check bits:
      c3  c2  c1
m4:    1   1   1
m3:    1   1   0
m2:    1   0   1
m1:    0   1   1
(the message bits map onto themselves through an identity block)

Parity Check and Generator Matrices

H = [ 1 0 0 0 1 1 1
      0 1 0 0 1 1 0
      0 0 1 0 1 0 1
      0 0 0 1 0 1 1 ]

G = [ 1 1 1 0 1 0 0
      1 1 0 1 0 1 0
      1 0 1 1 0 0 1 ]

(In this deck's notation, the 4 × 7 matrix H generates the code word as x = mH, and the 3 × 7 matrix G forms the syndrome at the receiver as s = yGᵀ.)

Codes

Message m3 m2 m1 m0 → code word x6 x5 x4 x3 x2 x1 x0.

Transmitted code word: x = mH
Received word: y = x + e
The error event e has a 1 wherever an error has occurred.

Syndrome Calculation

y = mH + e
s = yGᵀ = mHGᵀ + eGᵀ = eGᵀ   (since HGᵀ = 0)
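A small numpy sketch of s = yGᵀ with the H and G written above (the encode/syndrome conventions follow this deck; the helper names are ours):

# Syndrome decoding with x = mH and s = y G^T (mod 2).
import numpy as np

H = np.array([[1,0,0,0,1,1,1],
              [0,1,0,0,1,1,0],
              [0,0,1,0,1,0,1],
              [0,0,0,1,0,1,1]])       # 4x7, plays the generator role here
G = np.array([[1,1,1,0,1,0,0],
              [1,1,0,1,0,1,0],
              [1,0,1,1,0,0,1]])       # 3x7, plays the parity-check role

m = np.array([1, 0, 1, 1])            # message m4 m3 m2 m1
x = m @ H % 2                         # transmitted code word
e = np.array([0, 0, 1, 0, 0, 0, 0])   # a single error
y = (x + e) % 2
s = y @ G.T % 2                       # syndrome
print(s, e @ G.T % 2)                 # equal: the syndrome depends only on e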

Hamming Code

In the example, n = 7 and k = 4. There are r = n − k = 3 parity check digits.
This code has a minimum distance of 3, thus all single errors can be corrected.
If no error occurs the syndrome is 000. If errors occur, the syndrome is another 3-bit sequence, and each single error gives a unique syndrome.
Any single error is more likely to occur than any double, triple, or higher-order error, so any non-zero syndrome is most likely to have occurred because the single error that could cause it occurred. Therefore, deciding that the single error occurred is most likely the correct decision. Hence the term error correction.

Properties of Binomial Variables

Given n bits with a probability of error p and a probability of no error q = 1 − p:
- The probability of no errors is qⁿ.
- The probability of a particular single-error pattern is p q^(n−1).
- The probability of a particular pattern of k errors is p^k q^(n−k).
It is no problem to show that if p < 1/2 then any k-error event is more likely than any (k+1)-error event. The most likely number of errors is np. When p is very low, the most likely error event is NO ERRORS; single errors are next most likely.
Single-error-correcting codes can be very effective!

Hamming Codes

Hamming codes are (n, k) group codes where n = k + r is the length of the code words, k is the number of data bits, r is the number of parity check bits, and 2^r = n + 1.
Typical codes are:
(7,4), r = 3
(15,11), r = 4 (2^4 = 16)
(63,57), r = 6
Hamming codes are ideal single-error-correcting codes.

Hamming Code Performance

If the probability of bit error is p_u without coding and p_c with coding:
- the probability of a word error without coding is 1 − (1 − p_u)^4 (for the four data bits);
- the probability of a word error using a (7,4) Hamming code is 1 − (1 − p_c)^7 − 7 p_c (1 − p_c)^6.
p_u is the uncoded channel error probability; p_c is the probability of bit error when E_b/N_0 is reduced to 4/7 of that at which p_u was calculated.
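A small Python sketch of our own compares the two expressions at equal transmit power, assuming BPSK over AWGN so that p = Q(√(2 E_b/N_0)); the operating point is an example value.

# Word error probability: uncoded 4-bit word vs (7,4) Hamming code.
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

ebn0 = 10 ** (7 / 10)                      # 7 dB, an example operating point
pu = Q(math.sqrt(2 * ebn0))                # uncoded bit error probability
pc = Q(math.sqrt(2 * ebn0 * 4 / 7))        # coded: Eb/N0 reduced to 4/7

word_uncoded = 1 - (1 - pu) ** 4
word_coded = 1 - (1 - pc) ** 7 - 7 * pc * (1 - pc) ** 6
print(word_uncoded, word_coded)            # coding wins at this SNR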


Cyclic Codes

Cyclic codes are algebraic group codes in which the code words form an ideal. If the bits are considered coefficients of a polynomial, every code word is divisible by a generator polynomial. The rows of the generator matrix are cyclic permutations of one another.

(An ideal is a group in which every member is the product of two others.)

Cyclic Code Generation

A message M = 110011 can be expressed as the polynomial M(X) = X^5 + X^4 + X + 1 (the code digits are the coefficients of the polynomial).
With a generator polynomial P(X) = X^4 + X^3 + 1, the code word can be generated as T(X) = X^n M(X) + R(X), where R(X) is the remainder when X^n M(X) is divided by P(X), i.e. X^n M(X)/P(X) = Q(X) + R(X)/P(X), and n is the order of P(X).
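The same modulo-2 division can be run directly on this example; a quick Python sketch (the expected remainder 1001 is our own calculation, not from the slides):

# Cyclic encoding T(X) = X^n M(X) + R(X) for M = 110011, P(X) = X^4 + X^3 + 1.
def mod2_remainder(dividend, divisor):
    bits = dividend.copy()
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i]:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]

M = [1, 1, 0, 0, 1, 1]                  # 110011
P = [1, 1, 0, 0, 1]                     # 11001 = X^4 + X^3 + 1, order n = 4
R = mod2_remainder(M + [0] * 4, P)      # remainder of X^4 M(X) / P(X)
T = M + R
print(R, T)                             # R = [1,0,0,1]; T = 1100111001
print(mod2_remainder(T, P))             # [0,0,0,0]: T is divisible by P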

Applications of Cyclic Codes

Cyclic codes (or cyclic redundancy check, CRC, codes) are used routinely to detect errors in data transmission. Typical codes are:
CRC-16: P(X) = X^16 + X^15 + X^2 + 1
CRC-CCITT: P(X) = X^16 + X^12 + X^5 + 1

Cyclic Code Capabilities

A cyclic code will detect:
- All single-bit errors.
- All double-bit errors.
- Any odd number of errors.
- Any burst error for which the length of the burst is less than the length of the CRC.
- Most longer burst errors.

Convolutional Codes

Block codes are memoryless codes: each output depends only on the current k-bit block being coded. The bits in a convolutional code depend on previous source bits: the source bits are convolved with the impulse response of a filter.

Why convolutional codes? Because the code set grows exponentially with code length, the hypothesis being that the rate could be maintained as n grew, unlike for all block codes (the Wozencraft contribution).

Convolutional Coder

Input 1 1 0 1 0 1 … enters a shift register (stages Xi, Xi−1, Xi−2); two modulo-2 adders form outputs O1 and O2, which are interleaved to give the encoded output 11 10 11 01 01 01 …

Rate 1/2 convolutional coder.
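Here is a minimal Python sketch of a rate-1/2, constraint-length-3 convolutional encoder. The slide does not give the adder connections, so the sketch assumes the common generators (7, 5) in octal, i.e. O1 = Xi ⊕ Xi−1 ⊕ Xi−2 and O2 = Xi ⊕ Xi−2; with different taps the output sequence differs (which is why it does not match the slide's example exactly), but the structure is the same.

# Rate-1/2, K = 3 convolutional encoder.
# Assumed taps (not specified on the slide): g1 = 111, g2 = 101 (octal 7, 5).
def conv_encode(bits):
    x1 = x2 = 0                         # shift register contents Xi-1, Xi-2
    out = []
    for x in bits:
        out.append(x ^ x1 ^ x2)         # O1 = Xi + Xi-1 + Xi-2 (mod 2)
        out.append(x ^ x2)              # O2 = Xi + Xi-2 (mod 2)
        x2, x1 = x1, x                  # shift
    return out

print(conv_encode([1, 1, 0, 1, 0, 1]))
# -> [1,1, 0,1, 0,1, 0,0, 1,0, 0,0] with the assumed taps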


Trellis Diagram

(Figure: the trellis for the rate-1/2 coder; states 00, 01, 10, 11, with branches labeled by input bit and output pair.)

Decoding

(Trellis with states 00, 01, 10, 11.) Exercise: insert an error in a sequence of transmitted bits and try to decode it.

Sequential Decoding

The decoder determines the most likely output sequence: it compares the received sequence with all possible sequences that might have been obtained with the coder, and selects the sequence that is closest to the received sequence.

Viterbi Decoding

Choose a decoding-window width b in excess of the block length. Compute all code words of length b and compare each to the received code word. Select the code word closest to the received word. Re-encode the decoded frame and subtract it from the received word.

Turbo Codes

Turbo codes were invented by Berrou, Glavieux and Thitimajshima in 1993. Turbo codes achieve excellent error correction capabilities at rates very close to the Shannon bound. Turbo codes are concatenated or product codes, and sequential decoding is used.

Interleaved Concatenated Code

Coding and Decoding

(Figure: an outer (convolutional) coder and an inner (block) coder at the transmitter; the corresponding inner and outer decoders at the receiver.)

Turbo Code Performance

[Figure: turbo code performance; the vertical axis is spectral efficiency, in bits per second per hertz.]

The Problem: Noise

[Figure: an information source passes a message to a transmitter; the receiver sees the transmitted signal plus noise from a noise source, and the destination gets a corrupted message.]
Message = [1 1 1 1]
Received = [1 1 0 1]
Noise = [0 0 1 0]

Poor solutions

Single checksum. Truth table for the X-OR check:
A  B  X-OR
0  0  0
0  1  1
1  0  1
1  1  0

Repeats:
Data = [1 1 1 1]
Message = [1 1 1 1] [1 1 1 1] [1 1 1 1]

General form (single checksum):
Data = [1 1 1 1]
Message = [1 1 1 1 0]

Why they are poor

Shannon efficiency:
C = W log2(1 + S/N)
C is the channel capacity
W is the raw channel bandwidth
S/N is the signal-to-noise ratio

Repeat 3 times:
This divides W by 3.
It divides overall capacity by at least a factor of 3.

Single checksum:
Allows an error to be detected, but requires the message to be discarded and resent.
Each error reduces the channel capacity by at least a factor of 2 because of the thrown-away message.

Hamming's Solution

Encoding with multiple checksums:
Message = [a b c d]
r = (a + b + d) mod 2
s = (a + b + c) mod 2
t = (b + c + d) mod 2
Code = [r s a t b c d]

Example:
Message = [1 0 1 0]
r = (1 + 0 + 0) mod 2 = 1
s = (1 + 0 + 1) mod 2 = 0
t = (0 + 1 + 0) mod 2 = 1
Code = [1 0 1 1 0 1 0]
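A direct Python transcription of this encoding (the function name is mine):

    def hamming_encode(a, b, c, d):
        r = (a + b + d) % 2
        s = (a + b + c) % 2
        t = (b + c + d) % 2
        return [r, s, a, t, b, c, d]

    print(hamming_encode(1, 0, 1, 0))   # -> [1, 0, 1, 1, 0, 1, 0], as on the slide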

Simulation

Stochastic simulation:
100,000 iterations
Add errors to (7,4) data
No repeated randoms
Measure error detection

Results (error detection):
One error: 100%
Two errors: 100%
Three errors: 83.43%
Four errors: 79.76%

How it works: 3 dots

Only 3 possible words:
Distance increment = 1
One excluded state (red)
Two valid code words (blue)

It is really a checksum:
Single error detection
No error correction

This is a graphic representation of the Hamming distance.

Hamming Distance

Definition: the number of elements that need to be changed (corrupted) to turn one codeword into another.
The Hamming distance from:
[0101] to [0110] is 2 bits
[1011101] to [1001001] is 2 bits
"butter" to "ladder" is 4 characters
"roses" to "toned" is 3 characters
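The definition translates into a one-line Python function, which reproduces all four examples:

    def hamming_distance(u, v):
        # Number of positions at which two equal-length sequences differ.
        return sum(a != b for a, b in zip(u, v))

    print(hamming_distance("0101", "0110"))        # 2
    print(hamming_distance("1011101", "1001001"))  # 2
    print(hamming_distance("butter", "ladder"))    # 4
    print(hamming_distance("roses", "toned"))      # 3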


Another Dot

The code space is now 4; the Hamming distance is still 1.

Allows:
Error DETECTION for Hamming distance = 1; for Hamming distances greater than 1, an error gives a false correction.
Error CORRECTION for Hamming distance = 1.

Even More Dots

Allows:
Error DETECTION for Hamming distance = 2; for Hamming distances greater than 2, an error gives a false correction.
Error CORRECTION for Hamming distance = 1; for a Hamming distance of 2, an error is detected but cannot be corrected.

Multi-dimensional Codes

Code space:
2-dimensional
5 element states
Circle packing makes more efficient use of the code space.

Cannon Balls

Efficient circle packing is the same as efficient 2-d code spacing.
Efficient sphere packing is the same as efficient 3-d code spacing.
Efficient n-dimensional sphere packing is the same as n-dimensional code spacing.

http://wikisource.org/wiki/Cannonball_stacking
http://mathworld.wolfram.com/SpherePacking.html

More on Codes

Hamming (11,7)
Golay codes
Convolutional codes
Reed-Solomon error correction
Turbo codes
Digital fountain codes

An Example

We will:
Encode a message
Add noise to the transmission
Detect the error
Repair the error

Encoding the message

To encode our message we multiply this matrix

        1 0 0 0
        0 1 0 0
        0 0 1 0
H  =    0 0 0 1
        0 1 1 1
        1 0 1 1
        1 1 0 1

by our message: code = H * message, where multiplication is the logical AND and addition is the logical XOR.

But why? You can verify that:
Hamming[1 0 0 0] = [1 0 0 0 0 1 1]
Hamming[0 1 0 0] = [0 1 0 0 1 0 1]
Hamming[0 0 1 0] = [0 0 1 0 1 1 0]
Hamming[0 0 0 1] = [0 0 0 1 1 1 1]

Add noise

If our message is Message = [0 1 1 0], multiplying yields Code = [0 1 1 0 0 1 1].

Let's add an error, so pick a digit to mutate:
[0 1 1 0 0 1 1]  =>  [0 1 0 0 0 1 1]

Testing the message

We receive the erroneous string:
Code = [0 1 0 0 0 1 1]

The matrix used to decode is:

            0 0 0 1 1 1 1
Decoder =   0 1 1 0 0 1 1
            1 0 1 0 1 0 1

To test if a code is valid, compute Decoder * Code^T:
[0 0 0] means it is valid; anything else means it has error(s).

We test it: Decoder * Code^T = [0 1 1], and indeed it has an error.

Repairing the message

To repair the code we find the column of the decoder matrix whose elements match the test (syndrome) vector.
Decoder * Code^T = [0 1 1], and [0 1 1] is the third column of the decoder matrix, so the third element of our code was flipped.
We change it back; our repaired code is [0 1 1 0 0 1 1].

Decoding the message

We trim our received code by 3 elements and we have our original message:
[0 1 1 0 0 1 1]  =>  [0 1 1 0]
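The whole test-and-repair procedure fits in a few lines of Python (a sketch of the slides' method; names are mine):

    DECODER = [[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]]

    def repair(code):
        # Syndrome = Decoder * code^T over GF(2); a nonzero syndrome equals
        # the decoder column of the corrupted position, so flip that bit.
        syndrome = [sum(r * c for r, c in zip(row, code)) % 2 for row in DECODER]
        if any(syndrome):
            bad = next(j for j in range(7)
                       if [DECODER[i][j] for i in range(3)] == syndrome)
            code[bad] ^= 1
        return code

    fixed = repair([0, 1, 0, 0, 0, 1, 1])
    print(fixed)        # -> [0, 1, 1, 0, 0, 1, 1]
    print(fixed[:4])    # trim the 3 parity bits -> [0, 1, 1, 0]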

Channel Coding in
IEEE802.16e
Student: Po-Sheng Wu
Advisor: David W. Lin


Outline

Overview
RS code
Convolutional code
LDPC code
Future Work

Overview

[Figure: overview of the IEEE 802.16e channel-coding chain.]

RS code
The RS code in 802.16a is derived from a systematic
RS (N=255, K=239, T=8) code on GF(2^8)



RS code

This code is then shortened and punctured to enable variable block size and variable error-correction capability:
Shortening: (n, k) -> (n-l, k-l)
Puncturing: (n, k) -> (n-l, k)
In general, the generator polynomial has the form g(x) = (x + a^h)(x + a^(h+1)) ... (x + a^(h+2T-1)); in IEEE 802.16a, h = 0.


RS code

The codes are shortened to K data bytes and punctured to permit T bytes to be corrected:
When a block is shortened to K, the first 239-K bytes of the encoder input shall be zero.
When a codeword is punctured to permit T bytes to be corrected, only the first 2T of the total 16 parity bytes shall be employed.
When shortened and punctured to (48, 36, 6), the first 203 (= 239-36) information bytes are assigned 0, and only the first 12 (= 2*6) bytes of R(X) are employed in the codeword.

Shortened and Punctured

[Figure: structure of the shortened and punctured RS codeword.]


RS code

Decoding: Euclid's (or the Berlekamp-Massey) algorithm is a common decoding algorithm for RS codes.
Four steps:
- compute the syndrome values
- compute the error-locator polynomial
- compute the error locations
- compute the error values


Convolutional code

Each RS codeword is encoded by a binary convolutional encoder, which has a native rate of 1/2 and a constraint length equal to 7.


Convolutional code

In the puncturing pattern, 1 means a transmitted bit and 0 denotes a removed bit; note that the punctured code differs from the native rate-1/2 convolutional code.
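A sketch of how puncturing works in Python (the 3/4 pattern below is illustrative only; the exact pattern is given by a table in the standard):

    X_KEEP = [1, 0, 1]   # 1 = transmitted bit, 0 = removed bit
    Y_KEEP = [1, 1, 0]

    def puncture(xy_pairs):
        out = []
        for i, (x, y) in enumerate(xy_pairs):   # native rate-1/2 output pairs
            if X_KEEP[i % 3]:
                out.append(x)
            if Y_KEEP[i % 3]:
                out.append(y)
        return out   # 4 coded bits survive per 3 information bits: rate 3/4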


Convolutional code
Decoding: Viterbi algorithm


Convolutional code

The convolutional code in IEEE 802.16a needs to be terminated in a block, and thus becomes a block code.
Three methods achieve this termination:
Direct truncation
Zero tail
Tail biting


RS-CC code

Outer code: RS code
Inner code: convolutional code
Input data streams are divided into RS blocks, then each RS block is encoded by a tail-biting convolutional code.
Between the convolutional coder and the modulator is a bit interleaver.


LDPC code

Low-density parity-check matrix.
LDPC codes are also linear codes; the codewords form the null space of H: Hx = 0.
Low density enables efficient decoding.
Better decoding performance than Turbo codes.
Close to the Shannon limit at long block lengths.
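The null-space statement is easy to demonstrate in Python with a toy matrix (this H is an illustration, not the 802.16e base matrix):

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def is_codeword(x):
        # x is a codeword iff H x^T = 0 over GF(2)
        return not np.any(H @ x % 2)

    print(is_codeword(np.array([1, 0, 1, 1, 1, 0])))   # True
    print(is_codeword(np.array([1, 0, 1, 1, 1, 1])))   # False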


LDPC code

n is the length of the code; m is the number of parity-check bits.


LDPC code

Base model: [Figure: the base-model parity-check matrix.]

LDPC code

The base-model entries p(f, i, j) are expanded to z x z submatrices:
if p(f, i, j) = -1, replace the entry by the z x z zero matrix;
otherwise p(f, i, j) is the circular shift size of a z x z permutation matrix:
p(f, i, j) = p(i, j)                         if p(i, j) <= 0
p(f, i, j) = floor(p(i, j) * z_f / z_0)      if p(i, j) > 0

LDPC code

Encoding: the codeword has the form [u p1 p2] (systematic data u plus two parity parts).
Decoding: Tanner graph; sum-product algorithm.

LDPC code

Tanner graph: [Figure: Tanner graph of the parity-check matrix.]

LDPC code

Sum-product algorithm: [Figure: message passing on the Tanner graph.]


LDPC code

Future Work

Realize these algorithms in software.
Find decoding algorithms that speed up the process.


Chapter 11

Data Link
Control
and
Protocols

11.1 Flow and Error Control


Flow Control
Flow control refers to a set of procedures used to restrict the amount of
data that the sender can send before waiting for acknowledgment.

Error Control
Error control in the data link layer is based on automatic repeat
request, which is the retransmission of data.


11.2 Stop-and-Wait ARQ

Operation
Bidirectional Transmission


Figure 11.1: Normal operation

Figure 11.2: Stop-and-Wait ARQ, lost frame

Figure 11.3: Stop-and-Wait ARQ, lost ACK frame

Note:
In Stop-and-Wait ARQ, numbering
frames prevents the retaining of
duplicate frames.


Figure 11.4: Stop-and-Wait ARQ, delayed ACK

Note:
Numbered acknowledgments are needed
if an acknowledgment is delayed and
the next frame is lost.


Figure 11.5: Piggybacking

11.3 Go-Back-N ARQ


Sequence Number
Sender and Receiver Sliding Window
Control Variables and Timers
Acknowledgment
Resending Frames
Operation

Figure 11.6: Sender sliding window

Figure 11.7: Receiver sliding window

Figure 11.8: Control variables

Figure 11.9: Go-Back-N ARQ, normal operation

Figure 11.10: Go-Back-N ARQ, lost frame

Figure 11.11: Go-Back-N ARQ, sender window size

Note:
In Go-Back-N ARQ, the size of the sender window must be less than 2^m; the size of the receiver window is always 1.


11.4 Selective-Repeat ARQ


Sender and Receiver Windows
Operation
Sender Window Size
Bidirectional Transmission
Pipelining

Figure 11.12: Selective Repeat ARQ, sender and receiver windows

Figure 11.13: Selective Repeat ARQ, lost frame

Note:
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.


Figure 11.14: Selective Repeat ARQ, sender window size

Example 1
In a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?

Solution
The bandwidth-delay product is
(1 x 10^6) x (20 x 10^-3) = 20,000 bits
The system can send 20,000 bits during the time it takes for the data to go from the sender to the receiver and then back again. However, the system sends only 1000 bits. We can say that the link utilization is only 1000/20,000, or 5%. For this reason, for a link with high bandwidth or long delay, use of Stop-and-Wait ARQ wastes the capacity of the link.
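The arithmetic, checked in Python:

    bandwidth = 1e6            # bits per second
    round_trip = 20e-3         # seconds
    frame = 1000               # bits per data frame

    bdp = bandwidth * round_trip       # 20,000 bits "in flight"
    print(frame / bdp)                 # 0.05 -> 5% utilization
    print(15 * frame / bdp)            # 0.75 -> 75% with a 15-frame window (Example 2)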

Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-Back-N ARQ with a 15-frame window?

Solution
The bandwidth-delay product is still 20,000 bits. The system can send up to 15 frames, or 15,000 bits, during a round trip. This means the utilization is 15,000/20,000, or 75 percent. Of course, if there are damaged frames, the utilization percentage is much less because frames have to be resent.


11.5 HDLC
Configurations and Transfer Modes
Frames
Frame Format
Examples
Data Transparency

Figure 11.15: NRM

Figure 11.16: ABM

Figure 11.17: HDLC frame

Figure 11.18: HDLC frame types

Figure 11.19: I-frame

Figure 11.20: S-frame control field in HDLC

Figure 11.21: U-frame control field in HDLC

Table 11.1: U-frame control commands and responses

Command/response   Meaning
SNRM               Set normal response mode
SNRME              Set normal response mode (extended)
SABM               Set asynchronous balanced mode
SABME              Set asynchronous balanced mode (extended)
UP                 Unnumbered poll
UI                 Unnumbered information
UA                 Unnumbered acknowledgment
RD                 Request disconnect
DISC               Disconnect
DM                 Disconnect mode
RIM                Request information mode
SIM                Set initialization mode
RSET               Reset
XID                Exchange ID
FRMR               Frame reject

Example 3
Figure 11.22 shows an exchange using piggybacking where there is no error. Station A begins the exchange of information with an I-frame numbered 0 followed by another I-frame numbered 1. Station B piggybacks its acknowledgment of both frames onto an I-frame of its own. Station B's first I-frame is also numbered 0 [N(S) field] and contains a 2 in its N(R) field, acknowledging the receipt of A's frames 1 and 0 and indicating that it expects frame 2 to arrive next. Station B transmits its second and third I-frames (numbered 1 and 2) before accepting further frames from station A. Its N(R) information, therefore, has not changed: B's frames 1 and 2 indicate that station B is still expecting A's frame 2 to arrive next.


Figure 11.22: Example 3

Example 4
In Example 3, suppose frame 1 sent from station B to station A has an error. Station A informs station B to resend frames 1 and 2 (the system is using the Go-Back-N mechanism). Station A sends a reject supervisory frame to announce the error in frame 1. Figure 11.23 shows the exchange.


Figure 11.23: Example 4

Note:
Bit stuffing is the process of adding one
extra 0 whenever there are five
consecutive 1s in the data so that the
receiver does not mistake the
data for a flag.


Figure 11.24: Bit stuffing and removal

Figure 11.25: Bit stuffing in HDLC

Chapter 11
Data Link Control


11-1 FRAMING

The data link layer needs to pack bits into frames, so that each frame is distinguishable from another. Our postal system practices a type of framing. The simple act of inserting a letter into an envelope separates one piece of information from another; the envelope serves as the delimiter.

Topics discussed in this section:
Fixed-Size Framing
Variable-Size Framing

Figure 11.1: A frame in a character-oriented protocol

Figure 11.2: Byte stuffing and unstuffing

Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape character in the text.

Figure 11.3: A frame in a bit-oriented protocol

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the pattern 0111110 for a flag.
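A Python sketch of the stuffing rule (this simplified version stuffs after any five consecutive 1s):

    def bit_stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)   # stuffed bit
                run = 0
        return out

    print(bit_stuff([0, 1, 1, 1, 1, 1, 1, 0]))
    # -> [0, 1, 1, 1, 1, 1, 0, 1, 0]; the receiver removes the 0 after five 1s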


Figure 11.4: Bit stuffing and unstuffing

11-2 FLOW AND ERROR CONTROL

The most important responsibilities of the data link layer are flow control and error control. Collectively, these functions are known as data link control.

Topics discussed in this section:
Flow Control
Error Control

Note
Flow control refers to a set of procedures used to
restrict the amount of data that the sender can
send before waiting for acknowledgment.
Error control in the data link layer is based on
automatic repeat request, which is the
retransmission of data.


11-3 PROTOCOLS

Now let us see how the data link layer can combine framing, flow control, and error control to achieve the delivery of data from one node to another. The protocols are normally implemented in software by using one of the common programming languages. To make our discussions language-free, we have written in pseudocode a version of each protocol that concentrates mostly on the procedure instead of delving into the details of language rules.

Figure 11.5: Taxonomy of protocols discussed in this chapter

11-4 NOISELESS CHANNELS

Let us first assume we have an ideal channel in which no frames are lost, duplicated, or corrupted. We introduce two protocols for this type of channel.

Topics discussed in this section:
Simplest Protocol
Stop-and-Wait Protocol

Figure 11.6: The design of the simplest protocol with no flow or error control

Figure 11.7: Flow diagram for Example 11.1

Figure 11.7 shows an example of communication using this protocol. It is very simple. The sender sends a sequence of frames without even thinking about the receiver. To send three frames, three events occur at the sender site and three events at the receiver site. Note that the data frames are shown by tilted boxes; the height of a box defines the transmission time difference between the first bit and the last bit in the frame.

Figure 11.8: Design of the Stop-and-Wait Protocol

Figure 11.9: Flow diagram for Example 11.2

Figure 11.9 shows an example of communication using this protocol. It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. Note that sending two frames in the protocol involves the sender in four events and the receiver in two events.

11-5 NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its predecessor, noiseless channels are nonexistent. We discuss three protocols in this section that use error control.

Topics discussed in this section:
Stop-and-Wait Automatic Repeat Request
Go-Back-N Automatic Repeat Request
Selective Repeat Automatic Repeat Request

Note
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.
In Stop-and-Wait ARQ:
We use sequence numbers to number the frames; the sequence numbers are based on modulo-2 arithmetic.
The acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.

Figure 11.11: Flow diagram for an example of Stop-and-Wait ARQ

Frame 0 is sent and acknowledged. Frame 1 is lost and resent after the time-out. The resent frame 1 is acknowledged and the timer stops. Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no idea if the frame or the acknowledgment is lost, so after the time-out, it resends frame 0, which is acknowledged.

Example 11.4

Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?

Solution
The bandwidth-delay product is (1 x 10^6) x (20 x 10^-3) = 20,000 bits. The system can send 20,000 bits during the time it takes for the data to go from the sender to the receiver and then back again. However, the system sends only 1000 bits. We can say that the link utilization is only 1000/20,000, or 5 percent. For this reason, for a link with a high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.

Example 11.5

What is the utilization percentage of the link in Example 11.4 if we have a protocol that can send up to 15 frames before stopping and worrying about the acknowledgments?

Solution
The bandwidth-delay product is still 20,000 bits. The system can send up to 15 frames, or 15,000 bits, during a round trip. This means the utilization is 15,000/20,000, or 75 percent. Of course, if there are damaged frames, the utilization percentage is much less because frames have to be resent.

Note
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.

Figure 11.12: Send window for Go-Back-N ARQ

Note
The send window is an abstract concept defining an imaginary box of size 2^m - 1 with three variables: Sf, Sn, and Ssize. The send window can slide one or more slots when a valid acknowledgment arrives.

Figure 11.13: Receive window for Go-Back-N ARQ

Note
The receive window is an abstract concept defining an imaginary box of size 1 with one single variable Rn. The window slides when a correct frame has arrived; sliding occurs one slot at a time.

Figure 11.15: Window size for Go-Back-N ARQ

Note
In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the receiver window is always 1.

Figure 11.16: Flow diagram for Example 11.6

This is an example of a case where the forward channel is reliable but the reverse is not: no data frames are lost, but acknowledgments can be.

Figure 11.17: Flow diagram for Example 11.7

Scenario showing what happens when a frame is lost.

Note
Stop-and-Wait ARQ is a special case of Go-Back-N ARQ
in which the size of the send window is 1.


Figure 11.18: Send window for Selective Repeat ARQ

Figure 11.19: Receive window for Selective Repeat ARQ

Figure 11.21: Selective Repeat ARQ, window size

Note
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.

Figure 11.22: Delivery of data in Selective Repeat ARQ

Figure 11.23: Flow diagram for Example 11.8

Scenario showing how Selective Repeat behaves when a frame is lost.

11-6 HDLC

High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links. It implements the ARQ mechanisms we discussed in this chapter.

Topics discussed in this section:
Configurations and Transfer Modes
Frames
Control Field

Figure 11.25: Normal response mode
Figure 11.26: Asynchronous balanced mode

Figure 11.27: HDLC frames, with the control field format for the different frame types

Table 11.1: U-frame control commands and responses

Figure 11.31: Example of piggybacking with error

Figure 11.31 shows an exchange in which a frame is lost. Node B sends three data frames (0, 1, and 2), but frame 1 is lost. When node A receives frame 2, it discards it and sends a REJ frame for frame 1. Note that the protocol being used is Go-Back-N with the special use of an REJ frame as a NAK frame. The NAK frame does two things here: it confirms the receipt of frame 0 and declares that frame 1 and any following frames must be resent. Node B, after receiving the REJ frame, resends frames 1 and 2. Node A acknowledges the receipt by sending an RR frame (ACK) with acknowledgment number 3.

11-7 POINT-TO-POINT PROTOCOL

Although HDLC is a general protocol that can be used for both point-to-point and multipoint configurations, one of the most common protocols for point-to-point access is the Point-to-Point Protocol (PPP). PPP is a byte-oriented protocol.

Topics discussed in this section:
Framing
Transition Phases
Multiplexing
Multilink PPP

Figure 11.32: PPP frame format

PPP is a byte-oriented protocol using byte stuffing with the escape byte 01111101.

Figure 11.33: Transition phases

Figure 11.35: LCP packet encapsulated in a frame

Table 11.2: LCP packets

Table 11.3: Common options

Figure 11.36: PAP packets encapsulated in a PPP frame

Figure 11.37: CHAP packets encapsulated in a PPP frame

Figure 11.38: IPCP packet encapsulated in a PPP frame, with the code values for IPCP packets

A Survey of Advanced
FEC Systems
Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory/
Radio Communications Laboratory
July 29, 2004
With a lot of material from Bo Xia, CTL/RCL


Outline

What is Forward Error Correction?
The Shannon Capacity formula and what it means
A simple coding tutorial

A Brief History of FEC

Modern Approaches to Advanced FEC:
Concatenated Codes
Turbo Codes
Turbo Product Codes
Low Density Parity Check Codes


Information Theory Refresher

The Shannon Capacity Equation:

C = W log2(1 + P/N)

C: channel capacity (bps); W: channel bandwidth (Hz); P: transmit power; N: noise power.
There are 2 fundamental ways to increase the data rate: more bandwidth, or more signal-to-noise ratio.

C is the highest data rate that can be transmitted error free under the specified conditions of W, P, and N. It is assumed that P is the only signal in the memoryless channel and N is AWGN.
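A two-line Python check of the formula (the numbers are arbitrary):

    from math import log2
    W, snr = 1e6, 1000                 # 1 MHz bandwidth, P/N = 1000 (30 dB)
    print(W * log2(1 + snr))           # ~9.97e6 bps error-free capacity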


A simple example

A system transmits messages of two bits each through a channel that corrupts each bit with probability Pe.

Tx Data = { 00, 01, 10, 11 }    Rx Data = { 00, 01, 10, 11 }

The problem is that it is impossible to tell at the receiver whether the two-bit symbol received was the symbol transmitted, or whether it was corrupted by the channel.

Tx Data = 01    Rx Data = 00

In this case a single bit error has corrupted the received symbol, but it is still a valid symbol in the list of possible symbols. The most fundamental coding trick is just to expand the number of bits transmitted so that the receiver can determine the most likely transmitted symbol just by finding the valid codeword with the minimum Hamming distance to the received symbol.


Continuing the Simple Example

A one-to-one mapping of symbol to codeword is produced:

Symbol : Codeword
00     : 0010
01     : 0101
10     : 1001
11     : 1110

The result is a systematic block code with code rate R = 1/2 and a minimum Hamming distance between codewords of dmin = 2.

A single-bit error can be detected at the receiver by finding the codeword with the closest Hamming distance; the most likely transmitted symbol is always associated with the closest codeword, even in the presence of multiple bit errors. This capability comes at the expense of transmitting more bits, usually referred to as parity, overhead, or redundancy bits.


Coding Gain
The difference in performance between an uncoded and a coded system, considering the additional overhead required by the code, is called the Coding Gain. In order to normalize the power required to transmit a single bit of information (not a coded bit), Eb/No is used as a common metric, where Eb is the energy per information bit, and No is the noise power in a unit-Hertz bandwidth.

The uncoded symbols require a certain amount of energy to transmit, in this case over period Tb. The coded symbols at R = 1/2 can be transmitted within the same period if the transmission rate is doubled. Using No instead of N normalizes the noise, considering the differing signal bandwidths.

Coding Gain and Distance to Channel Capacity Example

[Figure: BER vs Eb/No curves for uncoded QPSK (matched-filter bound), a Viterbi-RS system at R = 3/4, and Turbo Codes at R = 3/4 and R = 9/10, with vertical lines marking capacity for R = 3/4 and R = 9/10. The curves show coding gains of ~5.95 dB and ~6.35 dB, and distances to capacity of ~1.4 dB and ~2.58 dB.]

These curves compare the performance of two Turbo Codes with a concatenated Viterbi-RS system. The TC with R = 9/10 appears to be inferior to the R = 3/4 Vit-RS system, but is actually operating closer to capacity.

FEC Historical Pedigree

1950s-1970s:
Shannon's paper (1948)
Hamming defines basic binary codes
BCH codes proposed
Reed and Solomon define their ECC technique
Gallager's thesis on LDPCs
Viterbi's paper on decoding convolutional codes
Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding
Forney suggests concatenated codes
Early practical implementations of RS codes for tape and disk drives

FEC Historical Pedigree II

1980s-2000s:
Ungerboeck's TCM paper (1982)
RS codes appear in CD players
First integrated Viterbi decoders (late 1980s)
TCM heavily adopted into standards
Berrou's Turbo Code paper (1993)
Turbo Codes adopted into standards (DVB-RCS, 3GPP, etc.)
Renewed interest in LDPCs due to TC research
LDPC beats Turbo Codes for the DVB-S2 standard (2003)

Block Codes

Generally, a block code is any code defined with a finite codeword length.

Systematic block code: Codeword = [ Data Field | Parity ]. If the codeword is constructed by appending redundancy to the payload Data Field, it is called a systematic code.

The parity portion can be actual parity bits, or generated by some other means, like a polynomial function or a generator matrix. The decoding algorithms differ greatly. The Code Rate, R, can be adjusted by shortening the data field (using zero padding) or by puncturing the parity field.

Examples of block codes: BCH, Hamming, Reed-Solomon, Turbo Codes, Turbo Product Codes, LDPCs. Essentially all iteratively-decoded codes are block codes.

Convolutional Codes

Convolutional codes are generated using a shift register to apply a polynomial to a stream of data. The resulting code can be systematic if the data is transmitted in addition to the redundancy, but it often isn't.

This is the convolutional encoder for the p = 133/171 polynomial pair that is in very wide use. This code has a constraint length of k = 7; some low-data-rate systems use k = 9 for a more powerful code. This code is naturally R = 1/2, but deleting selected output bits, or puncturing the code, can be done to increase the code rate. (Diagram from [1])

Convolutional codes are typically decoded using the Viterbi algorithm, which increases in complexity exponentially with the constraint length. Alternatively a sequential decoding algorithm can be used, which requires a much longer constraint length for similar performance.
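A compact software model of such an encoder, as a Python sketch (register/tap bit-ordering conventions vary between references, so treat the exact output bit order as an assumption):

    G1, G2 = 0o133, 0o171      # the widely used generator pair, in octal

    def encode_k7(bits):
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0x7F     # k = 7: current + 6 past bits
            out.append((bin(state & G1).count("1") & 1,   # parity of tapped bits
                        bin(state & G2).count("1") & 1))
        return out   # two coded bits per input bit: R = 1/2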

Convolutional Codes - II

This is the code trellis, or state diagram, of a k = 2 convolutional code. Each end node represents a code state, and the branches represent codewords selected when a one or a zero is shifted into the encoder. The correcting power of the code comes from the sparseness of the trellis: since not all transitions from any one state to any other state are allowed, a state-estimating decoder that looks at the data sequence can estimate the input data bits from the state relationships.

The Viterbi decoder is a Maximum Likelihood Sequence Estimator that estimates the encoder state using the sequence of transmitted codewords. This provides a powerful decoding strategy, but when it makes a mistake it can lose track of the sequence and generate a stream of errors until it reestablishes code lock. (Diagrams from [1])

Concatenated Codes

A very common and effective code is the concatenation of an inner convolutional code with an outer block code, typically a Reed-Solomon code. The convolutional code is well suited for channels with random errors, and the Reed-Solomon code is well suited to correct the bursty output errors common with a Viterbi decoder. An interleaver can be used to spread the Viterbi output error bursts across multiple RS codewords.

Data -> RS Encoder (outer code) -> Interleaver -> Conv. Encoder (inner code) -> Channel -> Viterbi Decoder -> De-Interleaver -> RS Decoder -> Data

Concatenating Convolutional Codes

Parallel and serial concatenation:

Serial concatenation:
Data -> CC Encoder1 -> Interleaver -> CC Encoder2 -> Channel -> Viterbi/APP Decoder -> De-Interleaver -> Viterbi/APP Decoder -> Data

Parallel concatenation: the data feeds CC Encoder1 directly and CC Encoder2 through an interleaver; both encoded streams cross the channel to a pair of Viterbi/APP decoders whose outputs are combined to produce the data estimate.

Iterative Decoding of CCCs

Rx Data -> Viterbi/APP Decoder <-> (Interleaver / De-Interleaver) <-> Viterbi/APP Decoder -> Data

Turbo Codes add coding diversity by encoding the same data twice through concatenation. Soft-output decoders are used, which can provide reliability update information about the data estimates to each other, to be used during a subsequent decoding pass.

The two decoders, each working on a different codeword, can iterate and continue to pass reliability update information to each other in order to improve the probability of converging on the correct solution. Once some stopping criterion has been met, the final data estimate is provided for use.

These Turbo Codes provided the first known means of achieving decoding performance close to the theoretical Shannon capacity.

MAP/APP decoders

Maximum A Posteriori / A Posteriori Probability: two names for the same thing.
Basically runs the Viterbi algorithm across the data sequence in both directions (~doubles the complexity).
Becomes a bit estimator instead of a sequence estimator.
Optimal for convolutional Turbo Codes:
Needs two passes of MAP/APP per iteration;
essentially 4x the computational complexity of a single-pass Viterbi.
The Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a suboptimal simplification compromise.

Turbo Code Performance

[Figure: measured turbo code BER performance.]

Turbo Code Performance II

[Figure: measured BER vs Eb/No for uncoded QPSK, Viterbi-RS at R = 1/2, 3/4 and 7/8, and Turbo Codes at R = 1/2, 3/4 and 7/8; vertical dashed lines mark QPSK capacity for R = 1/2 and R = 7/8.]

The performance curves shown here were end-to-end measured performance in practical modems. The black lines are a PCCC Turbo Code, and the blue lines are for a concatenated Viterbi-RS decoder. The vertical dashed lines show QPSK capacity for R = 1/2 and R = 7/8; the capacity for QPSK at R = 1/2 is 0.2 dB.

The TC system clearly operates much closer to capacity. Much of the observed distance to capacity is due to implementation loss in the modem.

Tricky Turbo Codes

Repeat-Accumulate codes use simple repetition followed by a differential encoder (the accumulator). This enables iterative decoding with extremely simple codes. These types of codes work well in erasure channels.

Repeat section (1:2, the R = 1/2 outer code) -> Interleaver -> Accumulator (R = 1 inner code)

Since the differential encoder has R = 1, the final code rate is determined by the amount of repetition used.
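A minimal repeat-accumulate encoder sketch in Python (the random permutation is an arbitrary stand-in for the interleaver):

    import random

    def ra_encode(data, seed=0):
        repeated = [b for b in data for _ in range(2)]   # 1:2 repeat, R = 1/2
        perm = list(range(len(repeated)))
        random.Random(seed).shuffle(perm)                # interleaver
        acc, out = 0, []
        for i in perm:                                   # accumulator, R = 1
            acc ^= repeated[i]
            out.append(acc)
        return out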


Turbo Product Codes

2-Dimensional
Data Field

Parity

Parity

Vertical Hamming Codes

Horizontal Hamming Codes

Parity
Parity

The so-called product codes are codes


Created on the independent dimensions
Of a matrix. A common implementation
Arranges the data in a 2-dimensional array,
and then applies a hamming code to each
row and column as shown.
The decoder then iterates between decoding
the horizontal and vertical codes.

Since the constituent codes are Hamming codes, which can be decoded simply, the
decoder complexity is much less than Turbo Codes. The performance is close to capacity
for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs
have enjoyed commercial success in streaming satellite applications.

www.intel.com/labs

717

Communication and Interconnect Technology Lab

Low Density Parity Check Codes

Iterative decoding of simple parity check codes.
First developed by Gallager, with iterative decoding, in 1962!
Published examples of good performance with short blocks: Kou, Lin, Fossorier, Trans. IT, Nov. 2001.
Near-capacity performance with long blocks. Very near! - Chung, et al., "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Comm. Lett., Feb. 2001.
Complexity issues, especially in the encoder.
Implementation challenges: encoder, decoder memory.

LDPC Bipartite Graph

[Figure: check nodes connected by edges to variable nodes (the codeword bits).]
This is an example bipartite graph for an irregular LDPC code.


Iteration Processing

1st half iteration: for each edge of a check node (one check node per parity bit), compute the forward metrics, backward metrics, and edge messages r:
a(i+1) = max*(a(i), q(i))
b(i) = max*(b(i+1), q(i))
r(i) = max*(a(i), b(i+1))

2nd half iteration: for each variable node (one per code bit), compute mV and the q's:
mV = mV0 + sum of the r's
q(i) = mV - r(i)


LDPC Performance Example

LDPC performance can be very close to capacity. The closest performance to the theoretical limit ever achieved was with an LDPC, within 0.0045 dB of capacity. The code shown here is a high-rate code and is operating within a few tenths of a dB of capacity.

Turbo Codes tend to work best at low code rates and not so well at high code rates. LDPCs work very well at both high and low code rates.

Figure is from [2].

Current State-of-the-Art

Block Codes:
Reed-Solomon widely used in CD-ROM and communications standards; a fundamental building block of basic ECC.

Convolutional Codes:
K = 7 CC is very widely adopted across many communications standards.
K = 9 appears in some limited low-rate applications (cellular telephones).
Often concatenated with RS for streaming applications (satellite, cable, DTV).

Turbo Codes:
Limited use due to complexity and latency: cellular and DVB-RCS.
TPCs used in satellite applications for reduced complexity.

LDPCs:
Recently adopted in DVB-S2 and ADSL; being considered in 802.11n and 802.16e.
Complexity concerns, especially memory; expect broader consideration.

Cyclic Codes for Error Detection


W. W. Peterson and D. T. Brown
by
Maheshwar R Geereddy


Definition
A code is called cyclic if [x(n-1) x0 x1 ... x(n-2)] is a codeword whenever [x0 x1 ... x(n-1)] is a codeword.

Notations

k = number of binary digits in the message before encoding
n = number of binary digits in the encoded message
n - k = number of check bits


b = length of a burst of errors
G(X) = message polynomial
P(X) = generator polynomial
R(X) = remainder on dividing X^(n-k) G(X) by P(X)
F(X) = encoded message polynomial
E(X) = error polynomial
H(X) = received encoded message polynomial
H(X) = F(X) + E(X)


Polynomial Representation of Binary Information

It is convenient to think of binary digits as coefficients of a polynomial in the dummy variable X.
The polynomial is written low-order-to-high-order.
Polynomials are treated according to the laws of ordinary algebra, with one exception: addition is done modulo two.


Algebraic Description of Cyclic Codes

A cyclic code is defined in terms of a generator polynomial P(X) of degree n-k.
If P(X) has X as a factor, then every code polynomial has X as a factor and therefore a zero-order coefficient equal to zero.
Only codes for which P(X) is not divisible by X are considered.

Encoded Message Polynomial F(X)

Compute X^(n-k) G(X).
R(X) = remainder of X^(n-k) G(X) / P(X).
Add the remainder to X^(n-k) G(X):
F(X) = X^(n-k) G(X) + R(X)
Since X^(n-k) G(X) = Q(X) P(X) + R(X), it follows that F(X) = Q(X) P(X).


Principles of Error Detection and Error Correction

An encoded message containing errors can be represented by H(X) = F(X) + E(X), where
H(X) = received encoded message polynomial
F(X) = encoded message polynomial
E(X) = error polynomial


Principles of Error Detection and Error Correction (contd.)

To detect errors, divide the received, possibly erroneous message H(X) by P(X) and test the remainder.
If the remainder is nonzero, an error has been detected.
If the remainder is zero, either no error or an undetectable error has occurred.


DETECTION OF SINGLE ERRORS

Theorem 1: A cyclic code generated by a polynomial P(X) with more than one term detects all single errors.
Proof:
A single error in the i-th position of an encoded message corresponds to an error polynomial X^i.
For detection of single errors, it is necessary that P(X) does not divide X^i.
Obviously no polynomial with more than one term divides X^i.


DETECTION OF SINGLE ERRORS (contd.)

Theorem 2: Every polynomial divisible by 1 + X has an even number of terms.
Proof: Substituting X = 1 makes 1 + X equal to 0 modulo two, so any multiple of 1 + X evaluates to 0 at X = 1; since each term evaluates to 1, the number of terms must be even.

Also, if P(X) contains a factor 1 + X, any odd number of errors will be detected.


Double and Triple Error Detecting Codes (Hamming Codes)

Theorem 3: A code generated by the polynomial P(X) detects all single and double errors if the length n of the code is no greater than the exponent e to which P(X) belongs.
Detecting double errors requires that P(X) not divide X^i + X^j for any i, j < n.


Double and Triple Error Detecting Codes (contd.)

Theorem 4: A code generated by P(X) = (1 + X) P1(X) detects all single, double, and triple errors if the length n of the code is no greater than the exponent e to which P1(X) belongs.
Single and triple errors are detected by the presence of the factor 1 + X, as proved in Theorem 2.
Double errors are detected because P1(X) belongs to an exponent e >= n, as proved in Theorem 3.
Q.E.D.


Detection of a Burst-Error

A burst error of length b is defined as any pattern of errors for which the number of symbols between the first and last errors, including these errors, is b.

Theorem 5: Any cyclic code generated by a polynomial of degree n-k detects any burst error of length n-k or less.
Any burst polynomial can be factored as E(X) = X^i E1(X), where E1(X) is of degree b-1.
The burst is detected if P(X) does not divide E(X).
Since P(X) is assumed not to have X as a factor, it could divide E(X) only if it could divide E1(X).
But b <= n-k, therefore P(X) is of higher degree than E1(X), which implies that P(X) cannot divide E1(X).
Q.E.D.

Theorem 6: The fraction of bursts of length b > n-k that are undetected is
2^-(n-k)     if b > n-k+1
2^-(n-k-1)   if b = n-k+1


Detection of Two Bursts of Errors (Abramson and Fire Codes)

Theorem 7: The cyclic code generated by P(X) = (1 + X) P1(X) detects any combination of two burst errors of length two or less if the length of the code, n, is no greater than e, the exponent to which P1(X) belongs.
Proof: There are four types of error patterns:
E(X) = X^i + X^j
E(X) = (X^i + X^(i+1)) + X^j
E(X) = X^i + (X^j + X^(j+1))
E(X) = (X^i + X^(i+1)) + (X^j + X^(j+1))

Other Cyclic Codes

There are several important cyclic codes which have not been discussed in this paper:
BCH codes (developed by Bose, Chaudhuri, and Hocquenghem) are a very important type of cyclic code.
Reed-Solomon codes are a special type of BCH code commonly used in compact disc players.

Implementation

Briefly, to encode a message G(X), n-k zeros are annexed (i.e., the multiplication X^(n-k) G(X) is performed) and then X^(n-k) G(X) is divided by the polynomial P(X) of degree n-k. The remainder is then subtracted from X^(n-k) G(X) (it replaces the n-k zeros). This encoded message is divisible by P(X), which is used for checking for errors.


Implementation (contd.)

It can be seen that modulo-2 arithmetic has simplified the division considerably. Here we do not require the quotient, so the division to find the remainder can be described as follows:
1) Align the coefficients of the highest-degree terms of the divisor and dividend and subtract (same as addition).
2) Align the coefficients of the highest-degree terms of the divisor and the difference and subtract again.
3) Repeat the process until the difference has lower degree than the divisor.
The hardware to implement this algorithm is a shift register and a collection of modulo-two adders. The number of shift register positions is equal to the degree of the divisor, P(X), and the dividend is shifted through high order first, left to right.
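The shift-register divider described here can be emulated directly in Python (a sketch; the register cell ordering is my own choice):

    def lfsr_remainder(dividend, poly):
        # Register length = degree of P(X); when a 1 shifts off the end,
        # the divisor (minus its leading term, which cancels) is XORed in.
        degree = len(poly) - 1
        reg = [0] * degree
        for bit in dividend:                    # high-order bit first
            leaving = reg[0]
            reg = reg[1:] + [bit]               # shift one position
            if leaving:
                reg = [r ^ p for r, p in zip(reg, poly[1:])]
        return reg

    # The earlier example: X^4 G(X) with G = 110011 and P(X) = X^4 + X^3 + 1
    print(lfsr_remainder([1,1,0,0,1,1,0,0,0,0], [1,1,0,0,1]))   # -> [1, 0, 0, 1]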


Implementation (contd.)

As the first one (the coefficient of the high-order term of the dividend) shifts off the end, we subtract the divisor by the following procedure:
1. In the subtraction, the high-order terms of the divisor and dividend always cancel. As the high-order term of the dividend is shifted off the end of the register, this part of the subtraction is done automatically.
2. Modulo-two adders are placed so that when a one shifts off the end of the register, the divisor is subtracted from the contents of the register. The register then contains a difference that is shifted until another one comes off the end, and then the process is repeated. This continues until the entire dividend has been shifted into the register.



[Figure: shift-register division trace for input 100010001101011]
0 -> 10 00 1
0 -> 11 10 1
0 -> 11 01 1
1 -> 11 00 0
1 -> 11 10 0
0 -> 11 11 0
1 -> 01 11 1
0 -> 00 01 0
1 -> 00 00 1
1 -> 00 10 1



Implementation (contd.)

To minimize the hardware, it is desirable to use the same register for both encoding and error detection. If the circuit of Fig. 3 is used for error detection, the remainder obtained is that of dividing X^(n-k) H(X) by P(X) instead of the remainder of dividing H(X) by P(X). This makes no difference, because if H(X) is not evenly divisible by P(X), then obviously X^(n-k) H(X) will not be divisible either.

Error correction: it is a much more difficult task than error detection. It can be shown that each different correctable error pattern must give a different remainder after division by P(X); therefore error correction can be done.

Conclusion

Cyclic codes for error detection provide high efficiency and ease of implementation. They also provide standardization, like CRC-8 and CRC-32.

The Viterbi Algorithm

An application of dynamic programming: the principle of optimality. A search of the citation index shows 213 references since 1998.

Applications:
Telecommunications:
Convolutional codes / trellis codes
Inter-symbol interference in digital transmission
Continuous phase transmission
Magnetic recording / partial-response signaling
Diverse others:
Image restoration
Rainfall prediction
Gene sequencing
Character recognition

Milestones

Viterbi (1967): decoding convolutional codes
Omura (1968): VA shown optimal
Kobayashi (1971): magnetic recording
Forney (1973): classic survey recognizing the generality of the VA
Rabiner (1989): influential survey paper on hidden Markov chains

Example - Principle of Optimality

Professor X chooses an optimum path on his trip to lunch, from the EE building past bridges labeled "Publish" and "Perish" to the Faculty Club.
[Figure: a graph of candidate paths with edge costs (1.2, 0.5, 0.7, 0.8, 0.2, 0.3, 1.0, ...); the trick is to find the optimal path to each bridge before extending further.]
Optimal: 6 adds; brute force: 8 adds.
With N bridges - optimal: 4(N+1) adds; brute force: (N-1) 2^N adds.

Digital Transmission with Convolutional Codes

Information Source -> a1, a2, ..., aN -> Convolutional Encoder -> c1, c2, ..., cN -> BSC (crossover probability p) -> b1, b2, ..., bN -> Viterbi Algorithm -> a~1, a~2, ..., a~N -> Information Sink

Maximum a Posteriori (MAP) Estimate

Define D(B^N, A^N) = Hamming distance between the sequences.

Maximum a posteriori probability:
max over a1, ..., aN of P(b1, ..., bN | a1, ..., aN) = max of p^D(A^N, B^N) (1 - p)^(N - D(A^N, B^N))
where p is the bit error probability.

Equivalently, since log(p/(1-p)) < 0 for p < 1/2, this is
min over a1, ..., aN of D(A^N, B^N).

Brute force: exponential growth with N.

Convolutional Codes - Encoding a Sequence

Example: (3,1) code (output, input); efficiency = input/output.
Initial state: s1 = s2 = 0.
Output equations (modulo 2): O1 = i, O2 = i + s1, O3 = i + s1 + s2, where i is the input bit and (s1, s2) the shift-register state; after each bit, s2 <- s1 and s1 <- i.

Input: 1 1 0 1 0 0
Output: 111 100 010 110 011 001 000 (the final word flushes the register with a 0 input)
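The encoder is three XOR gates and a two-bit register; in Python (a direct transcription of the equations above):

    def encode31(bits):
        s1 = s2 = 0
        words = []
        for i in bits:
            words.append((i, i ^ s1, i ^ s1 ^ s2))   # (O1, O2, O3)
            s1, s2 = i, s1                           # shift the register
        return words

    print(encode31([1, 1, 0, 1, 0, 0, 0]))
    # -> 111 100 010 110 011 001 000, matching the slide (final 0 flushes)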

Markov Chain for the Convolutional Code (Fig. 2.14)

State transitions (input - output):
From 00: 0 - 000 (to 00); 1 - 111 (to 10)
From 01: 0 - 001 (to 00); 1 - 110 (to 10)
From 10: 0 - 011 (to 01); 1 - 100 (to 11)
From 11: 0 - 010 (to 01); 1 - 101 (to 11)

Trellis Representation

State s1 s2    0 input: output / next state    1 input: output / next state
00             000 / 00                        111 / 10
01             001 / 00                        110 / 10
10             011 / 01                        100 / 11
11             010 / 01                        101 / 11

Iteration for Optimization

min over a1, ..., aN of D(A^N, B^N) = min over s1, ..., sN of D(A^N, B^N)
(the states s are the shift-register contents)

By the memorylessness of the BSC, D(A^N, B^N) = sum over i of d(ai, bi) = D(A^(N-1), B^(N-1)) + d(aN, bN), so

min over s1, ..., sN of D(A^N, B^N)
  = min over sN of [ min over s1, ..., s(N-1) given sN of ( D(A^(N-1), B^(N-1)) + d(aN, bN) ) ]

Key step!

min over s1, ..., s(N-1) given sN of ( D(A^(N-1), B^(N-1)) + d(aN, bN) )
  = min over s(N-1) given sN of [ d(aN, bN)                                (incremental distance)
      + min over s1, ..., s(N-2) given s(N-1) of D(A^(N-2), B^(N-2)) ]     (accumulated distance)

Conditioning on s(N-1) makes the inner minimization independent of sN (that extra condition is redundant), so the same accumulated distance is reused for every successor state: the computation grows only linearly in N.

Deciding Previous State

min over s1, ..., si given s(i+1) of D(A^i, B^i)
  = min over si given s(i+1) of [ d(ai, bi) + min over s1, ..., s(i-1) given si of D(A^(i-1), B^(i-1)) ]

[Figure: searching the previous states. With received word bi = 010, a branch emitting ai = 000 adds incremental distance 1 to an accumulated distance of 4, while a branch emitting ai = 001 adds 2 to an accumulated distance of 2; the survivor into state i is the smaller total, 4.]

Viterbi Algorithm - Shortest Path to Detect the Sequence

First step: trace through successive states s0, s1, s2, s3, ...
The shortest path minimizes the Hamming distance for convolutional codes (Euclidean distance for trellis codes): optimum sequence detection.
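A small Viterbi decoder for the (3,1) code above, as a Python sketch built on the trellis table (with one corrupted bit it should recover the transmitted sequence):

    def viterbi31(words):
        dist = {(0, 0): 0}               # accumulated Hamming distance per state
        path = {(0, 0): []}
        for w in words:
            nd, npth = {}, {}
            for (s1, s2), d in dist.items():
                for i in (0, 1):         # hypothesized input bit
                    out = (i, i ^ s1, i ^ s1 ^ s2)
                    nxt = (i, s1)
                    total = d + sum(a != b for a, b in zip(out, w))
                    if nxt not in nd or total < nd[nxt]:
                        nd[nxt], npth[nxt] = total, path[(s1, s2)] + [i]
            dist, path = nd, npth
        return path[min(dist, key=dist.get)]

    rx = [(1,1,1), (1,0,0), (0,0,0), (1,1,0), (0,1,1), (0,0,1), (0,0,0)]
    print(viterbi31(rx))   # third word has a 1-bit error; expect 1 1 0 1 0 0 0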

Inter-symbol Interference

Transmitted signal: z(t) = sum over i of a_i p(t - iT); chain: Transmitter -> Channel -> Equalizer -> VA -> Decisions.

Received signal: z(t) = sum over i of a_i h(t - iT) + n(t)
r_(i-j) = integral of h(t - iT) h(t - jT) dt
r_(i-j) = 0 for |i - j| > m: a finite-memory channel.

AWGN Channel - MAP Estimate

min over a1, ..., aN of integral of | z(t) - sum over i of a_i h(t - iT) |^2 dt
(the Euclidean distance between the received signal and the possible signals)

Simplification:
min over a1, ..., aN of [ -2 sum over i of a_i Z_i + sum over i, j of a_i a_j r_(i-j) ]
where Z_i = integral of z(t) h(t - iT) dt is the output of the matched filter.

Viterbi Algorithm for ISI

Define the states s_k = {a_(k-m+1), ..., a_k} (memory m; the state is the set of symbols in memory).

Accumulated distance:
D(Z_1, ..., Z_k; s_(k-m+1), ..., s_k) = -2 sum over i <= k of a_i Z_i + sum over i, j <= k of a_i a_j r_(i-j)

Incremental distance:
d(Z_k; s_(k-1), s_k) = -2 a_k Z_k + 2 a_k sum over i = k-m to k-1 of a_i r_(k-i) + a_k^2 r_0

Recursion:
min over s_1, ..., s_(k-1) given s_k of D(Z_1, ..., Z_k)
  = min over s_(k-1) given s_k of [ d(Z_k; s_(k-1), s_k) + min over s_1, ..., s_(k-2) given s_(k-1) of D(Z_1, ..., Z_(k-1)) ]

Magnetic Recording

Magnetization pattern: m(t) is a staircase of steps u(t - kT) weighted by the data a_k.
The magnetic flux passes over the heads, which differentiate the pulses: controlled ISI. The same model applies to partial-response signaling.

Output: e(t) = (d m(t)/dt) * h(t) = sum over k of 2 x_k h(t - kT), where x_k = a_k - a_(k-1) and h is a Nyquist pulse; the output is then sampled.

Continuous Phase FSK

Digital input sequence: a1, a2, ..., aN.
Transmitted signal: y_k = cos(w(a_k) t + x_k), for kT <= t <= (k+1)T.
Constraint (continuous phase): w(a_(k-1)) t + x_(k-1) = w(a_k) t + x_k, mod 2 pi, at the interval boundary.

Example, binary signaling: one tone completes whole cycles per signaling interval, the other an odd number of half cycles, so
x_k = 0 for an even number of previous ones, pi for an odd number.
[Figure: two waveform segments over a signaling interval.]

Merges and State Reduction

Optimal paths through the trellis: all paths merge.
Force merges to reduce complexity.
Computations are of order (number of states)^2.
Carry only the high-probability states.

Effect of Blurring

Blurring is analogous to ISI. For an optical output signal,
s(i, j) = sum over l, m from -L to L of a(i - l, j - m) h(l, m) + n(i, j)
where a is the input pixel, h the optical channel, n AWGN, and L the optical blur width.

Row Scan

VA for the optimal row sequence.
Known state transitions, and decision feedback, utilized for state reduction.

Hidden Markov Chain

Data suggests a Markovian structure:
Estimate the initial state probabilities.
Estimate the transition probabilities.
The VA is used for estimation of the probabilities; iterate.

Rainfall Prediction

[Figure: hidden-state diagram with states rainy-wet, rainy-dry, showery-wet, showery-dry, and no rain; only the rainfall observations are seen.]

DNA Sequencing

DNA is a double helix: sequences of the four nucleotides A, T, C and G, with pairing between the strands (A-T and C-G bonding).
Genes are made up of codons, i.e. triplets of adjacent nucleotides, and genes can overlap.
[Figure: a nucleotide sequence CGGATTC... with Gene 1, Gene 2 and Gene 3 overlapping, so one codon can belong to three genes.]

Hidden Markov Chain

Tracking genes. States:
S: start (first codon of a gene)
P1-P4: +1, ..., +4 from the start
E: stop
H: gap
M1-M4: -1, ..., -4 from the start
The initial and transition probabilities are known.
[Figure: state diagram linking M1-M4, S, P1-P4, H and E.]

Recognizing Handwritten Chinese Characters

Text-line images:
Estimate the stroke width.
Set up an m x n grid.
Estimate the initial and transition probabilities.
Detect possible segmentation paths by VA (results on the next slide).

Example - Segmenting Handwritten Characters

[Figure, four panels: all possible segmentation paths; eliminating redundant paths; removal of overlapping paths; discarding near paths.]