Block 3
Channel Coding
[Figure: block diagram — channel encoder → channel → channel error detector / channel error corrector.]
Channel model:
discrete inputs,
discrete (hard, r_n) or continuous (soft) outputs,
memoryless.
Fundamentals of error control
Enabling detection/correction:
Adding redundancy to the information: for every k information bits, transmit n coded bits, n > k.
For any Pb, rates greater than R(Pb) are not achievable.
Fundamentals of error control
Added redundancy is structured redundancy.
This relies on a sound algebraic and geometrical basis.
Our approach:
Algebra over the Galois Field of order 2, GF(2)={0,1}.
GF(2) is a proper field; GF(2)^m is a vector space of dimension m over it.
Product (·): logical AND. Sum (+, same as −): logical XOR.
Scalar product: for b, d ∈ GF(2)^m,
b·d^T = b1·d1 + ... + bm·dm
Product by scalars: for a ∈ GF(2), b ∈ GF(2)^m,
a·b = (a·b1 ... a·bm)
It is also possible to define a matrix algebra over GF(2).
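As a quick sketch of these GF(2) rules (the vectors below are made-up examples, not from the slides), addition is a component-wise XOR and the scalar product is an AND followed by a mod-2 sum:

```python
import numpy as np

# GF(2)^4 vector arithmetic: + is XOR, · is AND, everything reduced mod 2.
b = np.array([0, 1, 1, 0])
d = np.array([1, 0, 1, 0])

vector_sum = (b + d) % 2     # component-wise XOR -> [1 1 0 0]
dot = int(b @ d) % 2         # b·d^T = sum(b_i AND d_i) mod 2 -> 1
print(vector_sum, dot)
```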
Fundamentals of error control
[Figure: example with the GF(2)^4 vectors (0110), (1010) and (1110).]
Fundamentals of error control
A given encoder produces n output bits for every k input bits:
R = k/n < 1 is the rate of the code.
The information rate decreases by the factor R when a code is used:
R'b = R·Rb < Rb (bit/s)
Moreover, if used jointly with a modulation with spectral efficiency η = Rb/B (bit/s/Hz), the efficiency decreases by the same factor:
η' = R·η < η (bit/s/Hz)
In terms of Pb, the achievable Eb/N0 region in AWGN is lower bounded by:
Eb/N0 (dB) ≥ 10·log10( (2^η' − 1) / η' )
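The bound above is easy to evaluate numerically. A minimal sketch (the sample efficiencies are arbitrary choices):

```python
import math

# Minimum Eb/N0 (dB) in AWGN for a target spectral efficiency eta (bit/s/Hz):
# Eb/N0 >= (2**eta - 1) / eta
def ebn0_limit_db(eta: float) -> float:
    return 10 * math.log10((2 ** eta - 1) / eta)

# As eta -> 0 this approaches the ultimate Shannon limit of about -1.59 dB.
print(round(ebn0_limit_db(1e-6), 2))   # close to -1.59
print(round(ebn0_limit_db(2.0), 2))    # about 1.76
```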
Fundamentals of error control
Achievable rates, capacity and limits.
Source: http://www.comtechefdata.com/technologies/fec/ldpc
Fundamentals of error control
How a channel code can improve Pb (the BER, in statistical terms).
Linear block codes
Arranging in matrix form, an LBC C(n,k) can be specified by its generator matrix G = {gij}, i=1,...,k, j=1,...,n, so that c = bG, b ∈ GF(2)^k.
The Singleton bound holds:
d_min(C(n,k)) ≤ n − k + 1
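The encoding c = bG is a single GF(2) matrix product. A sketch, using a systematic (7,4) generator matrix as an assumed example (the slides do not give a specific G):

```python
import numpy as np

# Block encoding c = bG over GF(2). This systematic (7,4) Hamming-style
# generator matrix is an assumed example, not taken from the slides.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

b = np.array([1, 0, 1, 1])   # k = 4 information bits
c = b @ G % 2                # n = 7 coded bits: information + parity
print(c)                     # [1 0 1 1 0 1 0]
```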
Linear block codes
The channel model corresponds to a BSC (binary symmetric channel):
c=(c1...cn) → BSC(p) → r=(r1...rn)
P(ri ≠ ci) = p,  P(ri = ci) = 1 − p
p = P(ci ≠ ri) is the bit error probability of the modulation in AWGN.
Linear block codes
The received word is r = c + e, where P(ei=1) = p.
e is the error vector introduced by the noisy channel.
w(e) is the number of errors in r with respect to the original word c.
The probability of a given error pattern e with w(e) = t is p^t·(1−p)^(n−t), because the channel is memoryless.
r ∈ C(n,k) ⟺ s = rH^T = 0, where s is the syndrome and H the parity-check matrix of the code.
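A sketch of the syndrome check over a simulated BSC(p). The (7,4) Hamming parity-check matrix is an assumed example, not from the slides:

```python
import numpy as np
rng = np.random.default_rng(1)

# Error detection with the syndrome s = rH^T over a BSC(p).
# This (7,4) Hamming parity-check matrix is an assumed example.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

c = np.zeros(7, dtype=int)              # the all-zero word is always a codeword
e = (rng.random(7) < 0.1).astype(int)   # BSC(p=0.1) error pattern
r = (c + e) % 2
s = r @ H.T % 2                         # syndrome: zero iff r is a codeword
print("errors:", e.sum(), "syndrome:", s)
```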
Linear block codes
Two possibilities at the receiver side:
a) Error detection (ARQ schemes):
If s ≠ 0, there are errors, so ask for retransmission.
b) Error correction (FEC schemes):
Decode an estimate ĉ ∈ C(n,k) such that dH(ĉ, r) is the minimum over all codewords in C(n,k) (closest-neighbor decoding).
ĉ is the most probable word under the assumption that p is small (otherwise, the decoding fails).
[Figure: geometric decoding example with the codewords (1011), (0110), (1010) and (1110); r1 = c + e1 decodes correctly (OK) to c = (1110), while r2 = c + e2 does not.]
Linear block codes
Detection and correction capabilities (worst case) of an LBC with dmin(C(n,k)):
a) It can detect error events e with binary weight up to
w(e)|max,det = dmin(C(n,k)) − 1
Linear block codes
The minimum distance dmin(C(n,k)) is a property of the set of codewords in C(n,k), independent of the encoding (G).
Convolutional codes
A binary convolutional code (CC) is another class of linear channel codes.
The encoding can be described in terms of a finite state machine (FSM).
A CC can in principle produce output sequences of infinite length.
A CC encoder has memory. General structure:
[Figure: k input streams feed a memory of ml bits for the l-th input; backward logic (feedback) and a systematic output are optional; forward logic produces the n output streams (coded bits).]
Convolutional codes
The memory is organized as a shift register.
Number of positions for input l: memory ml.
[Figure: the l-th input stream at instant i enters a shift register holding d_i^(l), d_{i−1}^(l), d_{i−2}^(l), ..., d_{i−ml}^(l); the positions feed both the forward logic and the backward logic.]
Convolutional codes
Both forward and backward logic are boolean logic.
Very easy: each operation adds up (XOR) a number of memory positions, from each of the k inputs.
The j-th output at instant i collects inputs from all the k registers:
c_i^(j) = Σ_{l=1}^{k} Σ_{q=0}^{ml} g_{l,q}^(j) · d_{i−q}^(l)
g_{l,p}^(j), p = 0,...,ml, is 1 when the p-th register position for the l-th input is added to get the j-th output.
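The XOR-of-register-positions rule above can be sketched directly. This uses the classic rate-1/2 feedforward code with generator sequences g^(1)=(1,1,1), g^(2)=(1,0,1) as an assumed example (the slides do not fix a particular code):

```python
# Rate-1/2 feedforward CC encoder (k=1, n=2, memory m=2) with the
# assumed generator sequences g(1)=(1,1,1), g(2)=(1,0,1).
def cc_encode(bits):
    d1 = d2 = 0                  # shift register contents d_{i-1}, d_{i-2}
    out = []
    for b in bits:
        out.append(b ^ d1 ^ d2)  # c(1)_i = b_i + d_{i-1} + d_{i-2}
        out.append(b ^ d2)       # c(2)_i = b_i + d_{i-2}
        d1, d2 = b, d1           # shift the register
    return out

print(cc_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```

Feeding an impulse (1, 0, 0) returns the interleaved generator sequences, as the impulse-response view on the next slides suggests.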
Convolutional codes
Parameters of a CC so far:
k input streams
n output streams
k shift registers with length ml each, l = 1,...,k
A CC is denoted as (n,k,ν), where ν is the overall memory.
Its rate is R = k/n, where k and n usually take small values.
Convolutional codes
The backward / forward logic may be specified in the form of generator sequences.
These sequences are the impulse responses of each output j with respect to each input l:
g_l^(j) = ( g_{l,0}^(j), ..., g_{l,ml}^(j) )
Observe that:
g_l^(j) = (1,0,...,0) connects the l-th input directly to the j-th output.
g_l^(j) = (0,...,1 (q-th),...,0) just delays the l-th input to the j-th output by q time steps.
Convolutional codes
Given the presence of the shift register, the generator sequences are better denoted as generator polynomials:
g_l^(j) = ( g_{l,0}^(j), ..., g_{l,ml}^(j) )  →  g_l^(j)(D) = Σ_{q=0}^{ml} g_{l,q}^(j) · D^q
We can write then:
g_l^(j) = (1,0,...,0)  →  g_l^(j)(D) = 1
g_l^(j) = (0,...,1 (q-th),...,0)  →  g_l^(j)(D) = D^q
All the generator polynomials are arranged in the k×n matrix
G(D) = [ g_l^(j)(D) ],  l = 1,...,k (rows),  j = 1,...,n (columns)
Convolutional codes
If each input has a feedback logic given as
g_l^(0)(D) = Σ_{q=0}^{ml} g_{l,q}^(0) · D^q
the code is described by the k×n matrix of rational entries
G(D) = [ g_l^(j)(D) / g_l^(0)(D) ],  l = 1,...,k,  j = 1,...,n
Convolutional codes
We can generalize the concept of parity-check matrix, H(D).
An (n,k,ν) CC is fully specified by G(D) or H(D).
Based on the matrix description, there are plenty of linear tools for the design, analysis and evaluation of a given CC.
A regular CC can be described as a (canonical) all-feedforward CC or through an equivalent feedback (recursive) CC.
Note that a recursive CC is related to an IIR filter.
Even though k and n could be very small, a CC has a very rich algebraic structure.
This is closely related to the constraint length of the CC.
Each output bit is related to the present and past inputs via powerful algebraic methods.
Convolutional codes
Given G(D), a CC can be classified as:
Systematic and feedforward.
Systematic and recursive (RSC).
Non-systematic and feedforward (NSC).
Non-systematic and recursive.
RSC is a popular class of CC, because it provides an infinite output for a finite-weight input (IIR behavior).
Each NSC can be converted straightforwardly to an RSC with similar error correcting properties.
CC encoders are easy to implement with standard hardware: shift registers + combinational logic.
Convolutional codes
We do not need to look into the algebraic details of G(D) and H(D) to study:
Coding
Decoding
Error correcting capabilities
A CC encoder is a FSM!
[Figure: FSM view — starting state s(i−1) ∈ {s1,...,s2^ν}, input bi = (bi,1...bi,k), output ci = (ci,1...ci,n), ending state s(i) ∈ {s1,...,s2^ν}.]
Convolutional codes
The trellis illustrates the encoding process in 2 axes:
X-axis: time / Y-axis: states.
Memory: the same input produces different outputs depending on the state.
[Figure: trellis section for a (2,1,3) CC with states s1,...,s8 at instants i−1, i, i+1; branches for input 0 and input 1, each labeled with its 2-bit output (e.g. 00, 01).]
For a finite-size input data sequence, a CC can be forced to finish at a known state (often 0) by adding terminating (dummy) bits.
Note that one section (e.g. i−1 → i) fully specifies the CC.
Convolutional codes
The trellis description allows us
To build the encoder
To build the decoder
To get the properties of the code
The encoder:
[Figure: FSM encoder — k input bits per clock (CLK) enter the registers; combinational logic implementing G(D) produces the n output bits; state s(i−1) → s(i).]
Convolutional codes
The decoder is far more complicated:
Long sequences.
Memory: dependence with past states.
In fact, CC were already well known before there existed a practical good method to decode them: the Viterbi algorithm.
It is a Maximum Likelihood Sequence Estimation (MLSE) algorithm with many applications.
Problem: for a length N >> n sequence at the receiver side,
there are up to 2^ν · 2^(Nk/n) paths through the trellis to match with the received data.
Even if the coder starting state is known (often 0), there are still 2^(Nk/n) paths to walk through in a brute force approach.
Convolutional codes
Viterbi algorithm setup.
Key facts:
The encoding corresponds to a Markov chain model: P(s(i)) = P(s(i)|s(i−1))·P(s(i−1)).
The total likelihood P(r|b) can be factorized as a product of probabilities.
Given the transition s(i−1) → s(i), P(ri|s(i),s(i−1)) depends only on the channel kind (AWGN, BSC...).
The transition from s(i−1) to s(i) (linked in the trellis) depends on the probability of bi: P(s(i)|s(i−1)) = 2^−k if the source is iid.
P(s(i)|s(i−1)) = 0 if they are not linked in the trellis (finite state machine: deterministic).
[Figure: trellis section between instants i−1 and i; input bi, output ci(s(i−1),bi), received data ri.]
Convolutional codes
The total likelihood can be recursively calculated as:
P(r|b) = Π_{i=1}^{N/n} P(ri|s(i),s(i−1)) · P(s(i)|s(i−1)),
starting from the initial state probability P(s(0)).
In the BSC(p), the observation metric would be:
P(ri|s(i),s(i−1)) = P(ri|ci) = p^{w(ri+ci)} · (1−p)^{n−w(ri+ci)}
Maximum likelihood (ML) criterion:
b̂ = argmax_b { P(r|b) }
Convolutional codes
We know that the brute force approach to the ML criterion is at least O(2^(Nk/n)).
The Viterbi algorithm computes it efficiently through the recursion:
V_j^(0) = P(s(0) = s_j);
V_j^(i) = P(ri|s(i) = s_j, s(i−1) = s_max) · max_l { P(s(i) = s_j|s(i−1) = s_l) · V_l^(i−1) },
where s_max is the maximizing previous state s(i−1) = s_l.
[Figure: trellis between instants i−1, i, i+1.]
Convolutional codes
In the recursive rule, V_l^(i−1) is the probability of the most probable state sequence corresponding to the i−1 previous observations; at each state, only the most probable (MAX) incoming path survives.
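The survivor-path recursion above can be sketched for hard decisions, where the BSC metric reduces to Hamming distance. This reuses the assumed rate-1/2 code with generators g^(1)=(1,1,1), g^(2)=(1,0,1) (an example choice, not fixed by the slides):

```python
# Hard-decision Viterbi decoding (BSC metric = Hamming distance) for the
# assumed rate-1/2 CC with generators g(1)=(1,1,1), g(2)=(1,0,1).
def viterbi_decode(received):
    n_states = 4                             # 2^m states, memory m = 2
    def step(state, b):                      # state packs (d1, d2) as d1*2 + d2
        d1, d2 = state >> 1, state & 1
        out = (b ^ d1 ^ d2, b ^ d2)          # the two output bits
        return (b << 1) | d1, out            # next state, outputs
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)    # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = tuple(received[i:i + 2])
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                 # extend with both input bits
                ns, out = step(s, b)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[ns]:       # keep only the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

coded = [1, 1, 1, 0, 0, 0, 0, 1]             # encoding of b = 1011
coded[2] ^= 1                                # introduce one channel error
print(viterbi_decode(coded))                 # recovers [1, 0, 1, 1]
```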
Convolutional codes
Note that we have considered the algorithm when the demodulator yields hard outputs:
ri is a vector of n estimated bits (BSC(p) equivalent channel).
In AWGN, we can do better to decode a CC:
We can provide soft (probabilistic) estimations for the observation metric.
For an iid source, we can easily get an observation transition metric based on the probability of each bi,l = 0,1, l = 1,...,k, associated to a possible transition.
There is a gain of around 2 dB in Eb/N0.
LBC decoders can also accept soft inputs (non syndrome-based decoders).
We will examine an example of soft decoding of CC in the lab.
Convolutional codes
We are now familiar with the encoder and the decoder:
Encoder: FSM (registers, combinational logic).
Decoder: Viterbi algorithm (for practical reasons, suboptimal adaptations are usually employed).
First...
CC are mainly intended for FEC, not for ARQ schemes:
In a long sequence (= CC codeword), the probability of having at least one error is very high...
And... are we going to retransmit the whole sequence?
Convolutional codes
Given that we truncate the sequence to N bits and the CC is linear:
We may analyze the system as an equivalent (N, Nk/n) LBC.
But... the equivalent matrices G and H would not be practical.
[Figure: trellis sections at instants i, i+1, i+2, i+3 comparing a path b with paths b+e: the error analysis does not depend on the transmitted sequence (uniform error property).]
Convolutional codes
Examining the minimal length loops and taking into account this uniform error property, we can get dmin of a CC.
For a CC forced to end at the 0 state for a finite input data sequence, dmin is called dfree.
Convolutional codes
With a fair amount of algebra, related to FSM, modified encoder state diagrams and so on, it is possible to get an upper bound for optimal MLSE decoding (BPSK in AWGN):
Pb ≤ Σ_d B_d · erfc( √(d·R·Eb/N0) )
Turbo codes
Canonically, turbo codes (TC) are parallel concatenated convolutional codes (PCCC).
[Figure: the k input streams b feed CC1 directly and CC2 through a block marked "?"; the n = n1 + n2 output streams form c = c1c2. Rate R = k/(n1+n2).]
Turbo codes
What's in a SISO?
The SISO (for a CC) takes as inputs the soft demodulated values from the channel, r, and a priori probabilities (APR) for the information bits, initially P(bi=b) = 1/2.
It produces a posteriori probabilities (APP), P(bi=b|r), updated with the channel information.
[Figure: probability density function of bi, uniform over {0,1} before the SISO and sharpened after it.]
Note that the SISO works on a bit by bit basis, but produces a sequence of APP's.
Turbo codes
The algorithm inside the SISO is some suboptimal version of the MAP BCJR algorithm.
BCJR computes the APP values through a forward-backward dynamics; it works over finite length data blocks, not over (potentially) infinite length sequences (like pure CCs).
BCJR works on a trellis: recall transition metrics, transition probabilities and so on.
Assume the block length is N: the trellis starts at s(0) and ends at s(N).
α_i(j) = P( s(i)=s_j, r_1,...,r_i )  — FORWARD term
β_i(j) = P( r_{i+1},...,r_N | s(i)=s_j )  — BACKWARD term
γ_i(j,k) = P( r_i, s(i)=s_j | s(i−1)=s_k )  — TRANSITION
(Remember, r_i has n components for an (n,k,ν) CC.)
Turbo codes
BCJR algorithm in action:
Forward step, i = 1,...,N:
α_0(j) = P(s(0)=s_j);  α_i(j) = Σ_{k=1}^{2^ν} α_{i−1}(k) · γ_i(j,k)
(The backward terms β_i(j) follow the symmetric recursion, initialized at i = N and run backwards.)
Compute the joint probability sequence, i = 1,...,N:
P( s(i−1)=s_j, s(i)=s_k, r ) = α_{i−1}(j) · γ_i(k,j) · β_i(k)
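The forward/backward recursions can be sketched on a toy trellis. Note the index convention assumed here: gamma[i, k, j] is the metric for the transition s_k → s_j, and the gamma values are made-up numbers just to exercise the recursions:

```python
import numpy as np

# BCJR forward/backward recursions on a toy 2-state trellis.
# gamma[i, k, j] plays the role of P(r_i, s(i)=s_j | s(i-1)=s_k);
# the values are random placeholders, not a real channel metric.
N, S = 3, 2
rng = np.random.default_rng(0)
gamma = rng.random((N, S, S))

alpha = np.zeros((N + 1, S))
alpha[0] = [1.0, 0.0]                     # trellis assumed to start at state 0
for i in range(N):
    alpha[i + 1] = alpha[i] @ gamma[i]    # alpha_i(j) = sum_k alpha_{i-1}(k) gamma_i(k,j)

beta = np.zeros((N + 1, S))
beta[N] = [1.0, 1.0]                      # no constraint on the final state here
for i in range(N, 0, -1):
    beta[i - 1] = gamma[i - 1] @ beta[i]  # beta_{i-1}(k) = sum_j gamma_i(k,j) beta_i(j)

# Joint P(s(0)=k, s(1)=j, r) = alpha_0(k) * gamma_1(k,j) * beta_1(j);
# summed over (k, j) it must equal the total P(r) = sum_j alpha_N(j).
joint = alpha[0][:, None] * gamma[0] * beta[1][None, :]
print(joint.sum(), alpha[N].sum())
```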
Turbo codes
Finally, the APP's can be calculated as:
P(bi=b|r) = (1/p(r)) · Σ_{(s(i−1),s(i)): bi=b} P( s(i−1)=s_j, s(i)=s_k, r )
Decision criterion based on these APP's:
log[ P(bi=1|r) / P(bi=0|r) ] = log[ Σ_{(s(i−1),s(i)): bi=1} P( s(i−1)=s_j, s(i)=s_k, r ) / Σ_{(s(i−1),s(i)): bi=0} P( s(i−1)=s_j, s(i)=s_k, r ) ]  ≷ 0
(decide b̂i = 1 if > 0, b̂i = 0 if < 0).
Its module is the reliability of the decision.
Turbo codes
How do we get γ_i(j,k)?
This probability takes into account:
The restrictions of the trellis (CC).
The estimations from the channel.
γ_i(j,k) = P( r_i, s(i)=s_j | s(i−1)=s_k ) = P( s(i)=s_j | s(i−1)=s_k ) · P( r_i | c_i )
where, for a binary trellis, P( s(i)=s_j | s(i−1)=s_k ) = 1/2 if the transition is possible and 0 if it is not, and, in AWGN for unmodulated c_i,l,
P( r_i | c_i ) = exp( −Σ_{l=1}^{n} (r_{i,l} − c_{i,l})² / (2σ²) ) / (2πσ²)^{n/2}
Turbo codes
Idea: what about feeding the APP values as APR values for another decoder whose encoder had the same inputs?
[Figure: the APP's P(bi=b|r1) from the SISO for CC1 enter the SISO for CC2 as APR's, together with the channel values r2, producing updated APP's P(bi=b|r2).]
This will improve the decisions under some conditions.
Turbo codes
APP's from the first SISO used as APR's for the second SISO increase the updated APP's reliability iff:
APR's are uncorrelated with respect to the channel estimations for the second decoder.
This is achieved by permuting the input data for each encoder:
[Figure: b → CC1; b → INTERLEAVER Π (permutor) → d → CC2.]
Turbo codes
The interleaver Π preserves the data (b), but changes its position within the second stream (d): d_Π(i) = b_i.
Note that this compels the TC to work with blocks of N = size(Π) bits.
The decoder has to know the specific interleaver used at the encoder.
Example: b1 b2 b3 b4 ... bN  →  d = (b2, bN, b3, b1, b4, ...)
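A minimal sketch of such a block interleaver, using a fixed pseudo-random permutation of assumed size N = 8 (the slides do not fix a permutation):

```python
import random

# Block interleaver: a fixed permutation pi of size N, shared by
# encoder and decoder. The permutation here is an assumed example.
N = 8
rng = random.Random(42)
pi = list(range(N))
rng.shuffle(pi)

def interleave(b):
    d = [0] * N
    for i, p in enumerate(pi):
        d[p] = b[i]              # d[pi(i)] = b[i]
    return d

def deinterleave(d):
    return [d[p] for p in pi]    # inverts the permutation, recovering b

b = [1, 0, 1, 1, 0, 0, 1, 0]
assert deinterleave(interleave(b)) == b
print(interleave(b))
```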
Turbo codes
The mentioned process is applied iteratively (l = 1,...).
Iterative decoder → this may be a drawback, since it adds latency (delay).
[Figure: iterative turbo decoder — SISO 1 takes r1 from the channel and APR1(l), producing APP1(l); interleaved, APP1(l) acts as APR2(l) for SISO 2, which takes r2 and produces APP2(l); deinterleaved (Π⁻¹), APP2(l) becomes APR1(l+1). The initial APR1(l=0) is taken with P(bi=b) = 1/2.]
Turbo codes
The location of the waterfall region can be analyzed by the so-called density evolution method:
Based on the exchange of mutual information between SISO's.
The error floor is lower bounded as:
Pb|floor ≥ (w_min·M_min / N) · erfc( √(d_min·R·Eb/N0) )
where w_min is the Hamming weight of the error event with minimum distance, M_min is the error multiplicity (a low value!), and the factor 1/N is the interleaver gain (only if recursive CC's!).
Turbo codes
Examples of 3G TC. Note that TC's are intended for FEC...
Low Density Parity Check Codes
LDPC codes are just another kind of channel code derived from less complex ones.
While TC's were initially an extension of CC systems, LDPC codes are an extension of the concept of the binary LBC, but they are not exactly our known LBC.
Low Density Parity Check Codes
Example of a (4,3)-regular LDPC parity check matrix. This 15×20 H defines a (20,7) LBC!
Density of 1's: r = 4/20 = 3/15 = 0.2 → H is sparse!
H =
[ 1111 0000 0000 0000 0000
  0000 1111 0000 0000 0000
  0000 0000 1111 0000 0000
  0000 0000 0000 1111 0000
  0000 0000 0000 0000 1111
  1000 1000 1000 1000 0000
  0100 0100 0100 0000 1000
  0010 0010 0000 0100 0100
  0001 0000 0010 0010 0010
  0000 0001 0001 0001 0001
  1000 0100 0001 0000 0100
  0100 0010 0010 0001 0000
  0010 0001 0000 1000 0010
  0001 0000 1000 0100 1000
  0000 1000 0100 0010 0001 ]
Low Density Parity Check Codes
Note that the J rows of H are not necessarily linearly independent over GF(2).
To determine the dimension k of the code, we have to find the row rank of H, which equals n − k and can be smaller than J.
That's the reason why in the previous example H defined a (20,7) LBC instead of the (20,5) LBC that could be expected!
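This can be checked numerically. The sketch below transcribes the 15×20 matrix from the example above and computes its GF(2) rank by Gaussian elimination mod 2; the rank comes out as 13 = n − k, matching the (20,7) claim:

```python
import numpy as np

# GF(2) row rank of the (4,3)-regular H from the example above.
rows = """
1111 0000 0000 0000 0000
0000 1111 0000 0000 0000
0000 0000 1111 0000 0000
0000 0000 0000 1111 0000
0000 0000 0000 0000 1111
1000 1000 1000 1000 0000
0100 0100 0100 0000 1000
0010 0010 0000 0100 0100
0001 0000 0010 0010 0010
0000 0001 0001 0001 0001
1000 0100 0001 0000 0100
0100 0010 0010 0001 0000
0010 0001 0000 1000 0010
0001 0000 1000 0100 1000
0000 1000 0100 0010 0001
"""
H = np.array([[int(b) for b in r.replace(" ", "")]
              for r in rows.strip().splitlines()])

def gf2_rank(M):
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # bring the pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column mod 2
        rank += 1
    return rank

n, J = H.shape[1], H.shape[0]
print(f"J = {J} rows, rank = {gf2_rank(H)} -> ({n},{n - gf2_rank(H)}) LBC")
```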
The construction of large H for LDPC with high rates and good properties is a complex subject:
Some methods rely on smaller Hi used as building blocks, plus random permutations or combinatorial manipulations; resulting matrices with bad properties are discarded.
Other methods rely on finite geometries and a lot of algebra.
Low Density Parity Check Codes
LDPC codes yield performance equal to or even better than TC's, but without the problem of their relatively high error floor.
Both LDPC codes and TC's are capacity approaching codes.
Low Density Parity Check Codes
Tanner graphs. Example for a (7,3) LBC.
[Figure: variable nodes or code-bit vertices c1,...,c7 on top; check nodes or check-sum vertices s1,...,s7 (⊕) below, joined by edges.]
It is a bipartite graph with interesting properties for decoding.
A variable node is connected to a check node iff the corresponding code bit is checked by the corresponding parity sum equation.
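The connection rule above means the Tanner graph can be read directly off H. A sketch, using a small 3×7 H as an assumed example (the slide's own H is not given):

```python
import numpy as np

# Reading the Tanner graph off a parity-check matrix H:
# variable node c_i connects to check node s_j iff H[j, i] = 1.
# This 3x7 H is an assumed example, not the (7,3) code of the slide.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Neighborhoods N(s_j) and N(c_i), as used by the decoding algorithm.
N_check = {j: [i for i in range(H.shape[1]) if H[j, i]] for j in range(H.shape[0])}
N_var = {i: [j for j in range(H.shape[0]) if H[j, i]] for i in range(H.shape[1])}

print(N_check[0])   # variable nodes in the first parity sum -> [0, 1, 3, 4]
print(N_var[3])     # check nodes watching c_3 -> [0, 1, 2]
```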
Low Density Parity Check Codes
Based on the Tanner graph of an LDPC code, it is possible to make iterative soft decoding (SPA).
SPA is performed by belief propagation (which is an instance of a message passing algorithm).
Messages (soft values) are passed to and from related variable and check nodes.
[Figure: Tanner graph with messages flowing along the edges; N(s7) marks the neighborhood of check node s7.]
Variable-to-check messages at iteration l:
μ_{ci→sj}^(l)(ci=c) = α_{i,j}^(l) · P(ci=c|ri) · Π_{sk ∈ N(ci), sk ≠ sj} μ_{sk→ci}^(l−1)(ci=c)
Check-to-variable messages:
μ_{sj→ci}^(l)(ci=c) = Σ_{c'} P(sj=0 | ci=c, c') · Π_{ck ∈ N(sj), ck ≠ ci} μ_{ck→sj}^(l)(c'k)
where the sum runs over the configurations c' of the other bits in N(sj).
Low Density Parity Check Codes
If we get P(ci|r), we have an estimation ĉ of the codeword sent.
The decoding aims at calculating this through the marginalization
P(ci|r) = Σ_{c': c'i=ci} P(c'|r)
Low Density Parity Check Codes
Note that:
α_{i,j}^(l) is a normalization constant.
P^(l)(ci=c) = α_i^(l) · P(ci=ci|ri) · Π_{sj ∈ N(ci)} μ_{sj→ci}^(l)(ci=c) is the APP value at iteration l.
Low Density Parity Check Codes
LDPC BER performance examples (DVB-S2 standard).
[Figure: BER curves for the short (n = 16200) and long (n = 64800) DVB-S2 frames.]
Coded modulations
We have considered up to this point channel coding and decoding isolated from the modulation process:
Codewords feed any kind of modulator.
Symbols go through a channel (medium).
The info recovered from received modulated symbols is fed to the suitable channel decoder:
As hard decisions.
As soft values (probabilistic estimations).
The abstractions of BSC(p) (hard demodulation) or soft values from AWGN ( ∝ exp[−|ri−sj|²/(2σ²)] ), and the like for other cases, are enough for such an approach.
Note that there are other important channel kinds not considered so far.
Coded modulations
Coded modulations are systems where channel coding and modulation are treated as a whole:
Joint coding/modulation.
Joint decoding/demodulation.
[Figure: trellis of a coded modulation — each branch between the states s1,...,s8 at instants i−1, i, i+1 is labeled with an output modulation symbol (e.g. mj, mk).]
Coded modulations
If the modulation symbol mapper is well matched to the CC trellis, and the decoder is accordingly designed to take advantage of it:
TCM provides high spectral efficiency.
TCM can be robust in AWGN channels, and against fading and multipath effects.
In the 80's, TCM became the standard for telephone line data modems:
No other system could provide better performance over the twisted pair cable before the introduction of DMT and ADSL.
However, the flexibility of providing separate channel coding and modulation subsystems is still preferred nowadays:
Under the concept of Adaptive Modulation & Coding (ACM).
Coded modulations
Another possibility for coded modulation, evolved from TCM and from the concatenated coding & iterative decoding framework, is Bit-Interleaved Coded Modulation (BICM).
What if we provide an interleaver between the channel coder (normally a CC) and the modulation symbol mapper?
[Figure: CC → interleaver → symbol mapper.]
References
S. Lin and D. J. Costello, Error Control Coding, Prentice Hall, 2004.
S. B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice Hall, 1995.