Chapter 9: Channel Coding in CDMA Systems
Lectured by Assoc. Prof. Thuong Le-Tien
October 2013
Chapter Outline

Error Detection and Correction
  Repetition and Parity-Check Codes
  Interleaving
  Code Vectors and Hamming Distance
  FEC Systems
  ARQ Systems

Linear Block Codes
  Matrix Representation of Block Codes
  Syndrome Decoding
  Cyclic Codes
  M-ary Codes

Convolutional Codes
  Convolutional Encoding
  Free Distance and Coding Gain
  Decoding Methods
  Turbo Codes
1. Error Detection and Correction
Coding for error detection, without correction, is
simpler than error-correction coding.
When a two-way channel exists between source
and destination, the receiver can request
retransmission of information containing detected
errors. This error-control strategy, called Automatic
Repeat Request (ARQ), particularly suits data
communication systems such as computer networks.
However, when retransmission is impossible or
impractical, error control must take the form of
Forward Error Correction (FEC) using an error-
correcting code.
Repetition and Parity-Check Codes

If transmission errors occur randomly and independently with probability P_e = α, then the binomial frequency function gives the probability of i errors in an n-bit codeword as

    P(i, n) = C(n, i) α^i (1 − α)^(n−i)

where C(n, i) is the binomial coefficient.
Consider a triple-repetition code with codewords 000 and 111. All other received words, such as 001 or 101, clearly indicate the presence of errors. For error detection without correction, we say that any word other than 000 or 111 is a detected error. Single and double errors in a word are thereby detected, but triple errors result in an undetected word error with probability

    P_we = P(3, 3) = α³

For error correction, we use majority-rule decoding based on the assumption that at least two of the three bits are correct. Thus, 001 and 101 are decoded as 000 and 111, respectively. This rule corrects words with single errors, but double or triple errors result in a decoding error with probability

    P_we = P(2, 3) + P(3, 3) = 3α² − 2α³

Since P_e = α would be the error probability without coding, the repetition code greatly improves reliability when α << 1 (for example, P_we ≈ 3α² << α), at the cost of sending three bits per message bit.
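These numbers are easy to check numerically. Below is a minimal Python sketch of the word-error probabilities; the per-bit error probability α = 10⁻³ is an assumed illustrative value:

```python
from math import comb

def P(i, n, alpha):
    """Binomial probability of exactly i errors in an n-bit word."""
    return comb(n, i) * alpha**i * (1 - alpha)**(n - i)

alpha = 1e-3  # assumed per-bit error probability for illustration

# Triple-repetition code:
p_undetected = P(3, 3, alpha)                   # detection mode: alpha^3
p_decoding   = P(2, 3, alpha) + P(3, 3, alpha)  # correction mode: 3a^2 - 2a^3

print(p_undetected)  # 1e-09
print(p_decoding)    # ~3e-06, versus alpha = 1e-03 without coding
```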
Figure 1-1: Square array for error correction by parity checking
Code Vectors and Hamming Distance

Notice that the triple-repetition code vectors have greater separation than the parity-code vectors. This separation, measured in terms of the Hamming distance, has direct bearing on the error-control power of a code. The Hamming distance d(X, Y) between two vectors X and Y is defined to equal the number of elements in which they differ. In terms of the minimum distance d_min of a code:

    Detect up to l errors per word:                    d_min ≥ l + 1
    Correct up to t errors per word:                   d_min ≥ 2t + 1
    Correct up to t errors and detect l > t errors:    d_min ≥ t + l + 1
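As a small illustration, the Hamming distance is straightforward to compute; this sketch checks the separations quoted above for the repetition and parity-check vectors:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length binary vectors differ."""
    return sum(a != b for a, b in zip(x, y))

# Repetition code: d_min = 3, so it can correct t = 1 error (3 >= 2t + 1)
print(hamming_distance([0, 0, 0], [1, 1, 1]))  # 3

# Even-parity code vectors: d_min = 2, so it can only detect l = 1 error (2 >= l + 1)
print(hamming_distance([0, 1, 1], [1, 0, 1]))  # 2
```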
FEC system

Message bits come from an information source at rate r_b. The encoder takes blocks of k message bits and constructs an (n, k) block code with code rate R_c = k/n < 1. The bit rate on the channel therefore must be greater than r_b, namely

    r = (n/k) r_b = r_b / R_c

The code has d_min = 2t + 1 ≤ n − k + 1, and the decoder operates strictly in an error-correction mode.
ARQ system
Each codeword constructed by the encoder is stored temporarily
and transmitted to the destination where the decoder looks for errors.
The decoder issues a positive acknowledgement (ACK) if no errors
are detected, or a negative acknowledgement (NAK) if errors are
detected. A negative acknowledgment causes the input controller to retransmit the appropriate word from those stored by the input buffer. A
particular word may be transmitted just once, or it may be transmitted
two or more times, depending on the occurrence of transmission errors.
ARQ schemes: (a) stop-and-wait; (b) go-back; (c) selective repeat
2. Linear Block Codes

Matrix Representation of Block Codes

An (n, k) block code consists of n-bit vectors, each vector corresponding to a unique block of k < n message bits. Since there are 2^k different k-bit message blocks and 2^n possible n-bit vectors, the fundamental strategy of block coding is to choose the 2^k code vectors such that the minimum distance is as large as possible.

A systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining n − k elements being check bits. A code vector then takes the form

    X = (m_1 m_2 ... m_k c_1 c_2 ... c_q)     q = n − k

or, in partitioned notation, X = (M | C), in which M is a k-bit message vector and C is a q-bit check vector. Partitioned notation lends itself to the matrix representation of block codes:

    X = M G
where G is the k × n generator matrix

    G = [I_k | P]

I_k is the k × k identity matrix and P is a k × q submatrix of binary digits,

        | p_11  p_12  ...  p_1q |
    P = | p_21  p_22  ...  p_2q |
        |  ...   ...        ... |
        | p_k1  p_k2  ...  p_kq |

so that C = M P. This binary matrix multiplication follows the usual rules with mod-2 addition instead of conventional addition. Hence, the jth element of C is computed using the jth column of P:

    c_j = m_1 p_1j ⊕ m_2 p_2j ⊕ ... ⊕ m_k p_kj
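A minimal sketch of this encoding rule in Python, using the P submatrix of the (7, 4) Hamming example that follows; the message block is an arbitrary choice:

```python
import numpy as np

def encode(M, G):
    """Block encoding X = M G with mod-2 arithmetic."""
    return np.mod(M @ G, 2)

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])  # G = [I_k | P], here k = 4, q = 3

M = np.array([1, 1, 0, 0])  # arbitrary message block
print(encode(M, G))         # [1 1 0 0 0 1 0]: first 4 bits are M, last 3 are C = M P
```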
Hamming Codes

A Hamming code is an (n, k) linear block code with q ≥ 3 check bits and

    n = 2^q − 1     k = n − q

so the code rate is

    R_c = k/n = 1 − q/(2^q − 1)

and thus R_c ≈ 1 if q >> 1. Independent of q, the minimum distance is fixed at

    d_min = 3

so a Hamming code can be used for single-error correction or double-error detection. To construct a systematic Hamming code, you simply let the k rows of the P submatrix consist of all q-bit words with two or more 1s, arranged in any order.

For example, consider a systematic Hamming code with q = 3, so n = 2^3 − 1 = 7 and k = 7 − 3 = 4. According to the previously stated rule, an appropriate generator matrix is

        | 1 0 0 0 | 1 0 1 |
    G = | 0 1 0 0 | 1 1 1 |
        | 0 0 1 0 | 1 1 0 |
        | 0 0 0 1 | 0 1 1 |
Encoder for (7,4) Hamming code

The last three columns constitute the P submatrix whose rows include all 3-bit words that have two or more 1s. Given a block of message bits M = (m_1 m_2 m_3 m_4), the check bits are determined from the set of equations

    c_1 = m_1 ⊕ m_2 ⊕ m_3
    c_2 = m_2 ⊕ m_3 ⊕ m_4
    c_3 = m_1 ⊕ m_2 ⊕ m_4
Table 13.2-1 lists the resulting 2^4 = 16 codewords and their weights. The smallest nonzero weight equals 3, confirming that d_min = 3.
Syndrome Decoding

Let Y stand for the received vector when a particular code vector X has been transmitted. Any transmission errors will result in Y ≠ X. The decoder detects or corrects errors in Y using stored information about the code.

More practical decoding methods for codes with large k involve parity-check information derived from the code's P submatrix. Associated with any systematic linear (n, k) block code is a q × n matrix H called the parity-check matrix, defined by

    H = [P^T | I_q]

where P^T is the transpose of P and I_q is the q × q identity matrix. Relative to error detection, the parity-check matrix has the crucial property

    X H^T = (0 0 ... 0)     [9]

provided that X belongs to the set of code vectors (H^T denotes the transpose of H). However, when Y is not a code vector, the product Y H^T contains at least one nonzero element. Therefore, given H^T and a received vector Y, error detection can be based on

    S = Y H^T     [10]

a q-bit vector called the syndrome.
Error correction necessarily entails more circuitry, but it, too, can be based on the syndrome. We develop the decoding method by introducing an n-bit error vector E whose nonzero elements mark the positions of transmission errors in Y. For instance, if X = (1 0 1 1 0) and Y = (1 0 0 1 1), then E = (0 0 1 0 1). In general,

    Y = X + E     X = Y + E

with mod-2 addition, so adding E to Y recovers X. Substituting Y = X + E into the syndrome calculation gives

    S = (X + E) H^T = X H^T + E H^T = E H^T

which reveals that the syndrome depends entirely on the error pattern, not the specific transmitted vector.

However, there are only 2^q different syndromes generated by the 2^n possible n-bit error vectors, including the no-error case. Consequently, a given syndrome does not uniquely determine E. Or, putting this another way, we can correct just 2^q − 1 patterns with one or more errors, and the remaining patterns are uncorrectable. We should therefore design the decoder to correct the 2^q − 1 most likely error patterns, namely those patterns with the fewest errors, since single errors are more probable than double errors, and so forth. This strategy, known as maximum-likelihood decoding, is optimum in the sense that it minimizes the word-error probability. Maximum-likelihood decoding corresponds to choosing the code vector that has the smallest Hamming distance from the received vector.
To carry out maximum-likelihood decoding, you must first compute the syndromes generated by the 2^q − 1 most probable error vectors. The table-lookup decoder diagrammed in Fig. 13.2-2 then operates as follows. The decoder calculates S from the received vector Y and looks up the assumed error vector Ê stored in the table. The sum Y + Ê generated by exclusive-OR gates finally constitutes the decoded word. If there are no errors, or if the errors are uncorrectable, then Ê = (0 0 ... 0), so Y + Ê = Y. The check bits in the last q elements of Y + Ê may be omitted if they are of no further interest.

Example:
Let's apply table-lookup decoding to a (7, 4) Hamming code used for single-error correction. From Eq. (8) and the P submatrix given in Example 13.2-1, we obtain the 3 × 7 parity-check matrix

                      | 1 1 1 0 | 1 0 0 |
    H = [P^T | I_q] = | 0 1 1 1 | 0 1 0 |
                      | 1 1 0 1 | 0 0 1 |
There are 2^3 − 1 = 7 correctable single-error patterns, and the corresponding syndromes listed in Table 13.2-2 follow directly from the columns of H. To accommodate this table, the decoder needs to store only (q + n) 2^q = 80 bits.

But suppose a received word happens to have two errors, such that E = (1 0 0 0 0 1 0). The decoder calculates S = Y H^T = E H^T = (1 1 1), and the syndrome table gives the assumed single-error pattern Ê = (0 1 0 0 0 0 0). The decoded output word Y + Ê therefore contains three errors: the two transmission errors plus the erroneous correction added by the decoder.

If multiple transmission errors per word are sufficiently infrequent, we need not be concerned about the occasional extra errors committed by the decoder. If multiple errors are frequent, a more powerful code would be required. For instance, an extended Hamming code has an additional check bit that provides double-error detection along with single-error correction.
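A compact sketch of the whole table-lookup procedure for this (7, 4) code, including the double-error failure mode just described:

```python
import numpy as np

P = np.array([[1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 1, 1]])
H = np.hstack([P.T, np.eye(3, dtype=int)])  # H = [P^T | I_q]

# Build the syndrome table for the 2^q - 1 = 7 correctable single-error patterns
table = {}
for pos in range(7):
    E = np.zeros(7, dtype=int)
    E[pos] = 1
    table[tuple(np.mod(E @ H.T, 2))] = E

def decode(Y):
    """S = Y H^T, then add the assumed error pattern E_hat to Y."""
    S = tuple(np.mod(Y @ H.T, 2))
    E_hat = table.get(S, np.zeros(7, dtype=int))  # zero syndrome -> no correction
    return np.mod(Y + E_hat, 2)

X = np.array([1, 1, 0, 0, 0, 1, 0])                  # transmitted codeword
print(decode(np.mod(X + [0, 0, 0, 0, 0, 1, 0], 2)))  # single error: recovers X
print(decode(np.mod(X + [1, 0, 0, 0, 0, 1, 0], 2)))  # double error: miscorrects
```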
Cyclic Codes

The code for a forward-error-correction system must be capable of correcting t ≥ 1 errors per word. It should also have a reasonably efficient code rate R_c = k/n. These two parameters are related by the inequality

    R_c ≤ 1 − (1/n) log₂ [ C(n,0) + C(n,1) + ... + C(n,t) ]

which follows from Eq. (13) with q = n − k = n(1 − R_c). This inequality underscores the fact that if we want R_c ≈ 1, we must use codewords with n >> 1 and k >> 1. However, the hardware requirements for encoding and decoding long codewords may be prohibitive unless we impose further structural conditions on the code. Cyclic codes are a subclass of linear block codes with a cyclic structure that leads to more practical implementation. Thus, block codes used in FEC systems are almost always cyclic codes.

Write a code vector as X = (x_{n−1} x_{n−2} ... x_1 x_0). A cyclic shift gives X' = (x_{n−2} x_{n−3} ... x_0 x_{n−1}), a second shift produces X'' = (x_{n−3} ... x_1 x_0 x_{n−1} x_{n−2}), and so forth; in a cyclic code, every such shift of a code vector is another code vector. The structure is expressed conveniently by treating the elements of a vector as the coefficients of a polynomial:

    X(p)  = x_{n−1} p^{n−1} + x_{n−2} p^{n−2} + ... + x_1 p + x_0
    pX(p) = x_{n−1} p^n + x_{n−2} p^{n−1} + ... + x_1 p² + x_0 p
    X'(p) = x_{n−2} p^{n−1} + ... + x_0 p + x_{n−1}

Adding the last two lines with mod-2 coefficients, all the middle terms cancel, leaving

    pX(p) + X'(p) = x_{n−1} p^n + x_{n−1}

so that

    X'(p) = pX(p) + x_{n−1}(p^n + 1)

The polynomial p^n + 1 and its factors therefore play major roles in cyclic codes. Specifically, an (n, k) cyclic code is defined by a generator polynomial of the form

    G(p) = p^q + g_{q−1} p^{q−1} + ... + g_1 p + 1     [19]

a factor of p^n + 1 of degree q = n − k, and every code polynomial is a multiple of it: X(p) = Q_M(p) G(p). For systematic encoding, the message and check vectors are likewise written as polynomials

    M(p) = m_{k−1} p^{k−1} + ... + m_1 p + m_0
    C(p) = c_{q−1} p^{q−1} + ... + c_1 p + c_0
The systematic code vector X = (M | C) then corresponds to the polynomial

    X(p) = p^q M(p) + C(p)

Dividing p^q M(p) by G(p) gives

    p^q M(p) / G(p) = Q_M(p) + C(p) / G(p)

so the check polynomial is simply the remainder

    C(p) = rem[ p^q M(p) / G(p) ]

Syndrome calculation at the receiver is equally simple. Given a received vector Y, the syndrome is determined from

    S(p) = rem[ Y(p) / G(p) ]
Example

Consider the cyclic (7, 4) Hamming code generated by G(p) = p³ + 0 + p + 1. We'll use long division to calculate the check-bit polynomial C(p) when M = (1 1 0 0). We first write the message-bit polynomial

    M(p) = p³ + p² + 0 + 0

so p^q M(p) = p³ M(p) = p⁶ + p⁵ + 0 + 0 + 0 + 0 + 0. Next, we divide G(p) into p^q M(p), keeping in mind that subtraction is the same as addition in mod-2 arithmetic. The division leaves the remainder C(p) = p, so

    X(p) = p³ M(p) + C(p) = p⁶ + p⁵ + 0 + 0 + 0 + p + 0
    X = (1 1 0 0 | 0 1 0)
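The long division is easy to mechanize; a minimal sketch that reproduces this example:

```python
def mod2_remainder(dividend, divisor):
    """Remainder of bit-list long division (MSB first) with mod-2 arithmetic."""
    r = list(dividend)
    for i in range(len(r) - len(divisor) + 1):
        if r[i]:
            for j, g in enumerate(divisor):
                r[i + j] ^= g
    return r[-(len(divisor) - 1):]

G = [1, 0, 1, 1]                       # G(p) = p^3 + p + 1
M = [1, 1, 0, 0]
C = mod2_remainder(M + [0, 0, 0], G)   # rem[p^q M(p) / G(p)], q = 3
print(M + C)                           # [1, 1, 0, 0, 0, 1, 0]
```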
3. Convolutional Codes

Convolutional codes have a structure that effectively extends over the entire transmitted bit stream, rather than being limited to codeword blocks. The convolutional structure is especially well suited to space and satellite communication systems that require simple encoders and achieve high performance by sophisticated decoding methods. Our treatment of this important family of codes consists of selected examples that introduce the salient features of convolutional encoding and decoding.

The name reflects the fact that each encoded bit is a mod-2 convolution of the message stream with the encoder's tap gains g_0, g_1, ..., g_L:

    x_j = m_j g_0 ⊕ m_{j−1} g_1 ⊕ ... ⊕ m_{j−L} g_L = Σ_{i=0}^{L} m_{j−i} g_i   (mod 2)
Convolutional encoder with n = 2, k = 1, and L = 2

The two output streams are

    x'_j = m_j ⊕ m_{j−1} ⊕ m_{j−2}     x''_j = m_j ⊕ m_{j−2}

and they are interleaved for transmission:

    X = (x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...)

The output bit rate is therefore 2 r_b and the code rate is R_c = 1/2, like an (n, k) block code with R_c = k/n = 1/2.
Code tree for the (2, 1, 2) encoder
(a) Code trellis; (b) state diagram for the (2, 1, 2) encoder; (c) illustrative sequence
Termination of (2, 1, 2) code trellis
Each branch has been labeled with the number of 1s in the encoded bits
Free Distance and Coding Gain

The free distance of a convolutional code is defined to be

    d_f = min w(X)

where w(X) denotes the weight of an entire transmitted sequence X produced by a nonzero message. The value of d_f serves as a measure of the error-control power of the code.
Each branch of the modified state diagram is labeled with a factor D^w I^v: the exponent of D equals the branch weight (the number of 1s in the encoded bits on that branch), and the exponent of I equals the corresponding number of nonzero message bits. For instance, on the branch where x'_j x''_j = 11 and m_j = 0, the label is D² I⁰ = D².

(a) Modified state diagram for the (2, 1, 2) encoder; (b) equivalent block diagram
Our modified state diagram now looks like a signal-flow graph of the type sometimes used to analyze feedback systems. Specifically, if we treat the nodes as summing junctions and the D, I terms as branch gains, the figure represents the set of algebraic state equations

    W_b = D² I W_a + I W_c     W_c = D W_b + D W_d
    W_d = D I W_b + D I W_d    W_e = D² W_c     [6a]

The encoder's generating function T(D, I) can now be defined by the input-output equation T(D, I) = W_e / W_a. Solving the state equations for this ratio gives

    T(D, I) = D⁵ I / (1 − 2 D I)
            = D⁵ I + 2 D⁶ I² + 4 D⁷ I³ + ...     [7]

so this code has free distance d_f = 5.
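The closed form and its series expansion in Eq. (7) can be checked symbolically, for example with the sympy package:

```python
import sympy as sp

D, I = sp.symbols('D I')
T = D**5 * I / (1 - 2*D*I)

# Expanding around D = 0 reproduces Eq. (7): D^5 I + 2 D^6 I^2 + 4 D^7 I^3 + ...
print(sp.series(T, D, 0, 8))
```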
As a generalization of Eq. (7), the generating function for an arbitrary convolutional code takes the form

    T(D, I) = Σ_{d=d_f}^{∞} Σ_i A(d, i) D^d I^i

Here, A(d, i) denotes the number of different input-output paths through the modified state diagram that have weight d and are generated by messages containing i nonzero bits.
If transmission errors occur with equal and independent probability α per bit, then the probability of a decoded message-bit error is upper-bounded by

    P_be ≤ (1/k) ∂T(D, I)/∂I     evaluated at I = 1, D = 2√(α(1 − α))     [9]

When α is sufficiently small, series expansion of T(D, I) yields the approximation

    P_be ≈ (M(d_f)/k) 2^{d_f} α^{d_f/2}     [10]

where

    M(d_f) = Σ_{i=1}^{∞} i A(d_f, i)

The quantity M(d_f) simply equals the total number of nonzero message bits over all minimum-weight input-output paths in the modified state diagram.
Equation (10) supports our earlier assertion that the error-control power of a convolutional code depends upon its free distance. For a performance comparison with uncoded transmission, we'll make the usual assumption of Gaussian white noise and (S/N)_R = 2 R_c γ_b ≥ 10, so Eq. (10), Sect. 13.1, gives the transmission error probability

    α ≈ e^{−R_c γ_b} / √(4π R_c γ_b)

The decoded error probability then becomes

    P_be ≈ [ M(d_f) 2^{d_f} / k (4π R_c γ_b)^{d_f/4} ] e^{−R_c d_f γ_b / 2}     [11]

whereas uncoded transmission would yield

    P_ube ≈ e^{−γ_b} / √(4π γ_b)     [12]
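As a rough numerical comparison, the sketch below evaluates Eqs. (11) and (12) for the (2, 1, 2) code of the earlier example, where d_f = 5, M(d_f) = 1, k = 1, and R_c = 1/2; the value γ_b = 10 is an assumed operating point:

```python
from math import pi, exp, sqrt

def p_be_coded(gb, Rc=0.5, df=5, M=1, k=1):
    """Decoded bit-error probability, Eq. (11)."""
    return M * 2**df / (k * (4*pi*Rc*gb)**(df/4)) * exp(-Rc*df*gb/2)

def p_ube(gb):
    """Uncoded bit-error probability, Eq. (12)."""
    return exp(-gb) / sqrt(4*pi*gb)

gb = 10.0  # assumed gamma_b operating point (linear, not dB)
print(p_be_coded(gb))  # ~7e-07 with coding
print(p_ube(gb))       # ~4e-06 without coding
```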
DECODING METHOD

Suppose that our (2, 1, 2) encoder is used at the transmitter, and the received sequence starts with Y = 11 01 11. The figure shows the first three branches of the valid paths emanating from the initial node a_0 in the code trellis. The number in parentheses beneath each branch is the branch metric, obtained by counting the differences between the encoded bits and the corresponding bits in Y. The circled number at the right-hand end of each branch is the running path metric, obtained by summing branch metrics from a_0. For instance, the metric of the path a_0 b_1 c_2 d_3 is 0 + 2 + 2 = 4.
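A brute-force sketch of these metrics: it encodes every 3-bit message prefix with the (2, 1, 2) encoder and scores it against Y. The Viterbi algorithm reaches the same answer without exhaustive enumeration, by keeping only the lowest-metric survivor into each trellis state:

```python
from itertools import product

def conv_encode(msg):
    m1 = m2 = 0
    out = []
    for m in msg:
        out += [m ^ m1 ^ m2, m ^ m2]
        m1, m2 = m, m1
    return out

Y = [1, 1, 0, 1, 1, 1]  # received sequence 11 01 11

# Running path metric = Hamming distance between Y and each valid 3-branch path
for bits in product([0, 1], repeat=3):
    X = conv_encode(list(bits))
    print(bits, sum(a != b for a, b in zip(X, Y)))
```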
Illustration of the Viterbi Algorithm for
Maximum-Likelihood Decoding
Turbo Codes

Turbo codes, or parallel concatenated codes (PCC), are a relatively new class of convolutional codes first introduced in 1993 by Berrou et al.; see also Berrou (1996), Hagenauer et al. (1996), and Johannesson and Zigangirov (1999).

They have enabled practical systems to approach the Shannon channel-capacity limit.

Shannon's theorem for channel capacity assumes random coding, with the BER approaching zero as the code's block or constraint length approaches infinity.
Turbo Encoder

Each RSC is a Recursive Systematic Convolutional encoder with rate 1/2. Both RSCs produce parity-check bits, so the overall rate is 1/3. However, it can be reduced to 1/2 by puncturing: eliminating the odd parity-check bits of the first RSC and the even parity-check bits of the second RSC.
For the particular encoder in the figure, the polynomial describing the feedback connections is 1 + D³ + D⁴ = 10011 = 23₈, and the polynomial for the output is 1 + D + D² + D⁴ = 11101 = 35₈.

Hence, the literature often refers to this as G₁ = 23, G₂ = 35, or simply a (23, 35) encoder.

RSC encoder with R = 1/2, G₁ = 23, G₂ = 35, L = 4
Turbo Decoder

The turbo decoder consists of two Maximum a Posteriori (MAP) decoders and a feedback path. The first decoder takes the information from the received signal and calculates the a posteriori probability (APP) value, which is then fed to the second decoder as a priori information.
Instead of using the Viterbi algorithm, the MAP decoder uses a modified form of the BCJR (Bahl, Cocke, Jelinek, and Raviv, 1974) algorithm that takes into account the recursive character of the RSC codes and computes a log-likelihood ratio to estimate the APP for each bit.

The results by Berrou et al. are impressive. When encoding with rate R = 1/2, G₁ = 37, G₂ = 21, 65,537-bit interleaving, and 18 decoding iterations, they were able to achieve a BER of 10⁻⁵ at E_b/N_0 = 0.7 dB.

The main disadvantage of turbo codes, with their relatively large codewords and iterative decoding process, is their long latency. A system with 65,537-bit interleaving and 18 iterations may have too long a latency for voice telephony.
Reed-Solomon Codes (RS Codes)

* RS codes are nonbinary cyclic codes with code symbols from a Galois field. They were discovered in 1960 by I. Reed and G. Solomon, who were then at the MIT Lincoln Laboratory.

* In the decades since their discovery, RS codes have enjoyed countless applications, from compact discs and digital TV in the living room to spacecraft and satellites in outer space.

* The most important RS codes are codes with symbols from GF(2^m). The minimum distance of an (n, k) RS code is n − k + 1; codes of this kind are called maximum-distance-separable (MDS) codes.
RS Codes with Symbols from GF(2^m)

Let α be a primitive element in GF(2^m). For any positive integer t ≤ 2^{m−1} − 1, there exists a t-symbol-error-correcting RS code with symbols from GF(2^m) and the following parameters:

    n = 2^m − 1
    n − k = 2t
    k = 2^m − 1 − 2t
    d_min = 2t + 1 = n − k + 1

A systematic RS code word and some RS code parameters
The generator polynomial is

    g(x) = (x + α)(x + α²) ... (x + α^{2t})
         = g₀ + g₁ x + g₂ x² + ... + g_{2t−1} x^{2t−1} + x^{2t}

where g_i ∈ GF(2^m). Note that g(x) has α, α², ..., α^{2t} as roots.

Example. The following code is a (255, 223) RS code. It is the NASA standard code for satellite and space communication:

    m = 8, t = 16
    n = 255
    k = n − 2t = 223
    d_min = 33
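A one-line check of the parameter relations, applied to the NASA code above:

```python
def rs_params(m, t):
    """(n, k, R_c, d_min) of a t-symbol-error-correcting RS code over GF(2^m)."""
    n = 2**m - 1
    k = n - 2 * t
    return n, k, k / n, 2 * t + 1   # d_min = 2t + 1 = n - k + 1

print(rs_params(8, 16))  # (255, 223, 0.8745..., 33)
```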
Encoding of RS Codes

Let m(x) = m₀ + m₁ x + ... + m_{k−1} x^{k−1} be the message polynomial to be encoded, where m_i ∈ GF(2^m) and k = n − 2t.

Dividing x^{2t} m(x) by g(x), we have

    x^{2t} m(x) = a(x) g(x) + b(x)

where b(x) = b₀ + b₁ x + ... + b_{2t−1} x^{2t−1} is the remainder. Then b(x) + x^{2t} m(x) is the codeword polynomial for the message m(x).

The encoding circuit is shown below (Lin/Costello).
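In software, this systematic construction (message followed by 2t parity symbols over GF(2^8)) can be exercised with the third-party reedsolo package; a minimal sketch, assuming the package is installed (pip install reedsolo):

```python
from reedsolo import RSCodec

rsc = RSCodec(32)                   # 2t = 32 parity bytes: corrects up to t = 16
codeword = rsc.encode(b"hello RS")  # message bytes followed by 32 parity bytes

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF                # inject a single byte error
# Recent reedsolo versions return a tuple; the first element is the message
print(rsc.decode(corrupted)[0])     # bytearray(b'hello RS')
```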
RS Codes for Binary Data

Every element in GF(2^m) can be represented uniquely by a binary m-tuple, called an m-bit byte.

Suppose an (n, k) RS code with symbols from GF(2^m) is used for encoding binary data. A message of km bits is first divided into k m-bit bytes. Each m-bit byte is regarded as a symbol in GF(2^m).

The k-byte message is then encoded into an n-byte codeword based on the RS encoding rule.

By doing this, we actually expand an RS code with symbols from GF(2^m) into a binary (nm, km) linear code, called a binary RS code.

Binary RS codes are very effective in correcting bursts of bit errors as long as no more than t bytes are affected.