
Mathematical Analysis of an Iterative Algorithm for Low-Density Code Decoding$^1$

Kamil Sh. Zigangirov and Michael Lentmaier

Department of Information Technology
Lund University, Box 118, SE-22100 Lund, Sweden
Email: {kamil, michael}@it.lth.se

A two-phase iterative decoding algorithm for low-density (LD) codes, suggested by the authors of the paper, is analyzed for transmission over the binary symmetric channel (BSC). A lower bound on the maximal error probability $p$ of the BSC, for which the decoding error probability of iterative decoding goes to zero when the code length goes to infinity, is derived.

§1 Introduction

Low-density (LD) parity-check block codes were invented by Gallager in the early 60s [1]. A generalization of Gallager's codes to low-density convolutional codes was developed in the papers [2, 3]. The main merit of these codes is that they are suitable for iterative decoding at low complexity. Even though there exists a large number of publications devoted to the analysis of iterative decoding of low-density codes, as a rule conclusions are essentially based on results of computer simulations. One of the exceptions is the well-known paper by Zyablov and Pinsker [4], who gave a theoretical proof that it is possible to correct all error combinations, provided that the number of errors does not exceed some value that grows linearly with the block length $N$.

$^1$This work was supported in part by the Swedish Research Council for Engineering Sciences (Grant 98-216).
A binary homogeneous low-density block $(J,K)$-code of length $N$ is defined by its $L \times N$ parity-check matrix $H$, where $L = NJ/K$, $L < N$, is an integer, with elements 0 and 1, having $J$ ones in each column and $K$ ones in each row. A vector $v = (v_0, v_1, \ldots, v_{N-1})$, $v_n \in GF(2)$, is called a code word if its product (over the binary field $GF(2)$) with the transposed parity-check matrix $H^T$ is equal to the zero vector $0$ of dimension $L$, i.e. $vH^T = 0$. If the rank of the matrix $H$ equals $r$, $r \le L$, the code rate is $R = 1 - r/N$. Gallager [1] gave a constructive method for building such codes, introduced a statistical ensemble of low-density codes, and developed iterative methods for their decoding. In this paper a new two-phase iterative algorithm to decode low-density codes is developed, which can be analyzed when $N \to \infty$. For simplicity we restrict ourselves to the analysis of transmission over the BSC, although a generalization to an additive white Gaussian noise channel is straightforward.
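The membership condition $vH^T = 0$ can be checked directly. A minimal sketch in Python; the tiny $(J,K) = (1,3)$ matrix below is our own illustration, not a code from the paper:

```python
# Hypothetical toy example: a homogeneous (J,K) = (1,3) parity-check matrix
# with N = 6 and L = NJ/K = 2: one 1 in each column, three 1s in each row.
H = [[1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1]]

def is_codeword(v, H):
    """v is a code word iff v H^T = 0 over GF(2), i.e. every check sums to 0 mod 2."""
    return all(sum(h * x for h, x in zip(row, v)) % 2 == 0 for row in H)

v_good = [1, 1, 0, 1, 0, 1]   # satisfies both parity checks
v_bad  = [1, 0, 0, 1, 0, 1]   # violates the first check
```

Here $H$ has rank $r = 2$, so the rate of this toy code is $R = 1 - 2/6 = 2/3$.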
The decoding of low-density codes is characterized by two parameters: the bit error probability $P_b$ and the decoding complexity $C_b$ per bit. As was shown in [1], for maximum likelihood decoding the block error probability (and, consequently, $P_b$) decreases exponentially with $N$ if the error probability $p$ of the BSC does not exceed some value $p_s$. In this case the decoding complexity $C_b$ grows exponentially with $N$. The value $p_s$ is less than the Shannon limit $p_{sh}$, defined by the Shannon theorem on the channel capacity. For iterative decoding algorithms for low-density codes the bit decoding complexity is a linear function of the number of iterations $I$, i.e. $C_b = O(I)$.
We demand that the bit error probability $P_b$ goes to zero at least exponentially with $I$, when the block length $N$ and the number of iterations $I$ go to infinity, if the error probability $p$ of the channel is less than some positive value. For given $J$ and $K$, this value depends on both the code structure and the decoding algorithm. Its supremum over all $(J,K)$-codes and decoding algorithms we will call the iterative low-density limit and designate as $p_0 = p_0(J,K)$. In this paper we derive a lower bound $\underline{p}_0$ on $p_0$.

Low-density codes can be described both by bipartite graphs due to Tanner [5] and by tree-like Gallager graphs, as introduced in the next section.

§2 Graph representation of low-density codes

For any code symbol $v_n$, $n = 0, 1, \ldots, N-1$, we can build a tree-like graph. To describe this graph we introduce a special terminology. Each node within an even level of the tree represents one of the $N$ symbols of the code word, and each node within an odd level one of the $L$ parity-check equations.

We call the set of nodes in even levels of the tree a "clan". The root node (zero level), corresponding to the symbol from which we started to build the tree, we denote the "clan head". From the root node arise $J$ edges which, together with the nodes they lead to, correspond to the $J$ parity-check equations that include the clan head. The nodes of this first level we will call "families" or "marriages" of the clan head. From each node of the first level arise $K-1$ edges. They correspond, together with the nodes of the second level they lead to, to the $K-1$ symbols (excluding the clan head) that are included in the corresponding parity-check equation. These nodes in the second level we call "children" or "direct descendants" of the clan head from the corresponding marriage. Hence, the clan head has $J$ families and in each family there are $K-1$ children.
Each of the direct descendants of the clan head has $J-1$ families (marriages), i.e. from each node of the second level arise $J-1$ edges, leading to $J-1$ nodes in the third level of the tree. They correspond to the $J-1$ parity-check equations that include this direct descendant of the clan head but not the clan head himself. Each family includes $K-1$ children, i.e. from each node of the third level arise $K-1$ edges leading to $K-1$ nodes of the fourth level, associated with the clan head's descendants of the second generation, and so on.

Applying this tree generation procedure to all code symbols, we can represent any low-density code by $N$ trees, corresponding to the $N$ different symbols of a code word. It is easy to see that the number of nodes on the $2l$th level, i.e. the number of descendants of the $l$th generation, is equal to $J(K-1)\left[(J-1)(K-1)\right]^{l-1}$.
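The growth of a clan can be tabulated with a one-line helper (our own illustration): a (3,6)-code has $3 \cdot 5 = 15$ direct descendants in the first generation and $15 \cdot (2 \cdot 5) = 150$ in the second.

```python
def descendants(J, K, l):
    """Number of nodes in the l-th generation (the 2l-th tree level):
    J(K-1)[(J-1)(K-1)]^(l-1)."""
    return J * (K - 1) * ((J - 1) * (K - 1)) ** (l - 1)
```

The exponential growth in $l$ is exactly why nondegeneration, defined next, can hold only up to a generation $l_0$ of order $\log N$.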
A clan is called "nondegenerated" up to the $l$th generation if all symbol nodes of the tree up to the $l$th generation are different. We call a clan degenerated in the $l$th generation if it is nondegenerated up to the $(l-1)$th generation and there exists at least one node in the $l$th generation which is simultaneously included in two families. A code is called "$l_0$-nondegenerated" if the clans of all code symbols are nondegenerated up to the $l_0$th generation and there exists at least one clan degenerated in the $(l_0+1)$th generation.

Theorem 1 ([1]) Let a low-density $(J,K)$-code of block length $N$ be $l_0$-nondegenerated. Then
\[ l_0 < \frac{\log N}{\log (K-1)(J-1)} , \tag{1} \]
and there exists at least one $l_0$-nondegenerated code for which
\[ l_0 > \frac{\log N}{2 \log (K-1)(J-1)} - c_1 , \tag{2} \]
where the constant $c_1$ does not depend on $N$.

Theorem 2 ([6]) There exists an $l_0$-nondegenerated low-density $(J,K)$-code of block length $N$ for which
\[ l_0 > \frac{2}{3} \, \frac{\log N}{\log (K-1)(J-1)} - c_2 , \tag{3} \]
where the constant $c_2$ does not depend on $N$.

Let us label the symbol nodes of the tree by 0 and 1, depending on the value of the corresponding code symbol. Symbol nodes labeled by 1 we will call "male" nodes and those labeled by 0 "female" nodes. From the structure of low-density codes then follows the family rule: a male can only have an odd number of sons, and a female can only have an even number of sons.
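The family rule is a direct consequence of every parity check summing to zero over $GF(2)$: the $K-1$ sons in a family must sum to the parent's value. A brute-force check over all satisfying assignments (helper name is ours):

```python
from itertools import product

def family_rule_holds(K):
    """In every satisfied parity check, a male parent (1) has an odd number of
    male sons and a female parent (0) an even number of male sons."""
    for parent in (0, 1):
        for sons in product((0, 1), repeat=K - 1):
            if (parent + sum(sons)) % 2 == 0:      # parity check satisfied
                if sum(sons) % 2 != parent:        # parity of male sons
                    return False
    return True
```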

§3 A two-phase iterative decoding algorithm

Consider iterative decoding of an $l_0$-nondegenerated low-density code where the number of iterations $I$ equals $l_0$. We describe a two-phase decoding algorithm. In the first phase (the first $I_1$ iterations) the decoder performs soft decoding, in the second phase hard majority decoding.

Consider iterative decoding of an arbitrary code symbol $v_n$, $n = 0, 1, \ldots, N-1$. Let $V(v_n)$ denote the set of descendants of the clan head $v_n$ up to the $l_0$th generation. In the first decoding phase the decoder calculates likelihood functions $z_m$ for the symbols $v_m \in V(v_n)$, starting from symbols of the $l_0$th generation. By definition, the likelihood function of symbol $v_m$ is equal to the sum of the intrinsic information $\alpha_m$ and the extrinsic information $\beta_m$:
\[ z_m = \alpha_m + \beta_m . \tag{4} \]

The intrinsic information is defined by
\[ \alpha_m = \log_a \frac{P(v_m = 0 \mid r_m)}{P(v_m = 1 \mid r_m)} , \tag{5} \]
where $P(v_m = 0 \mid r_m)$ denotes the (conditional) a posteriori probability that $v_m = 0$ given the received symbol $r_m$, and hence $P(v_m = 1 \mid r_m) = 1 - P(v_m = 0 \mid r_m)$. For distinctness, although it does not play a role, we choose as base of the logarithm the value $a = q/p$, $q = 1 - p$; then $\alpha_m$ takes the values $\pm 1$. Furthermore we suppose that all possible combinations of transmitted symbols satisfying the family rule are equally probable.
Now we describe a recurrent procedure to calculate the extrinsic information $\beta_m$ for symbols of the $l$th generation of the $v_n$-clan, $l = l_0, l_0 - 1, \ldots, l_0 - I_1$. Let $M_l$ denote the set of indices of symbols within the $l$th generation. By definition, $\beta_m = 0$ for $m \in M_{l_0}$ (i.e. $z_m = \alpha_m$). The extrinsic information $\beta_m$ of symbol $v_m$ can then be represented as the sum of $J-1$ partial extrinsic informations $\beta_m^{(j)}$ calculated over its families,
\[ \beta_m = \beta_m^{(1)} + \beta_m^{(2)} + \cdots + \beta_m^{(J-1)} . \tag{6} \]
To define $\beta_m^{(j)}$ we introduce the statistic
\[ \omega_m^{(j)} = \left( \prod_{k=1}^{K-1} \operatorname{sgn}\bigl(z_{m,k}^{(j)}\bigr) \right) \cdot \min\bigl( |z_{m,1}^{(j)}|, |z_{m,2}^{(j)}|, \ldots, |z_{m,K-1}^{(j)}| \bigr) , \tag{7} \]
where $z_{m,k}^{(j)}$ is the log-likelihood ratio for the $k$th direct descendant in the $j$th family of $v_m$.
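Statistic (7) is the product of the signs of the incoming ratios times the smallest magnitude. A direct transcription (helper name is ours; no special case is needed for a zero input, since the minimum is then itself zero):

```python
def omega(z):
    """Statistic (7): product of the signs of the z's times min |z_k|."""
    sign = 1
    for zk in z:
        if zk < 0:
            sign = -sign
    return sign * min(abs(zk) for zk in z)
```

For example, `omega([2.0, -3.0, 1.0])` is negative because exactly one input is negative, and its magnitude is the smallest input magnitude, 1.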
Let $P(\omega_m^{(j)} \mid v_m = 0)$ and $P(\omega_m^{(j)} \mid v_m = 1)$ be the conditional probabilities of values of the random variable $\omega_m^{(j)}$ conditioned on $v_m = 0$ and $v_m = 1$, respectively. Then, by definition,
\[ \beta_m^{(j)} = \log_a \frac{P(\omega_m^{(j)} \mid v_m = 0)}{P(\omega_m^{(j)} \mid v_m = 1)} , \tag{8} \]
and
\[ z_m = \alpha_m + \sum_{j=1}^{J-1} \beta_m^{(j)} . \tag{9} \]
The following investigation (see §5) shows that $\omega_m^{(j)}$ bears sufficient information on symbol $v_m$. In the first iteration $\omega_m^{(j)}$ takes one of the two possible values $\pm 1$, such that
\[ P(\omega_m^{(j)} = 1 \mid v_m = 0) = P(\omega_m^{(j)} = -1 \mid v_m = 1) = q^{K-1} + \binom{K-1}{2} q^{K-3} p^2 + \cdots = \frac{1 + (q-p)^{K-1}}{2} , \tag{10} \]
\[ P(\omega_m^{(j)} = -1 \mid v_m = 0) = P(\omega_m^{(j)} = 1 \mid v_m = 1) = \binom{K-1}{1} q^{K-2} p + \binom{K-1}{3} q^{K-4} p^3 + \cdots = \frac{1 - (q-p)^{K-1}}{2} . \tag{11} \]
The substitution of (10) and (11) into (8) gives the value of the partial extrinsic information $\beta_m^{(j)}$ for $m \in M_{l_0-1}$ as a function of $\{ z_{m,k}^{(j)}, \ k = 1, 2, \ldots, K-1 \}$.
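Formulas (10) and (11) are instances of the parity identity: given $v_m = 0$, a family contains an even number of ones, so $\omega_m^{(j)} = +1$ exactly when an even number of the $K-1$ symbols is received in error. A numerical cross-check of the closed form against brute-force enumeration (helper names are ours):

```python
from itertools import product

def p_omega_plus(K, p):
    """Closed form (10): P(omega = +1 | v = 0) = (1 + (q - p)^(K-1)) / 2,
    using q - p = 1 - 2p."""
    return (1 + (1 - 2 * p) ** (K - 1)) / 2

def p_omega_plus_enum(K, p):
    """Brute force: sum the probabilities of all error patterns on the K-1
    family symbols that contain an even number of errors."""
    q = 1 - p
    total = 0.0
    for errors in product((0, 1), repeat=K - 1):
        if sum(errors) % 2 == 0:
            total += q ** errors.count(0) * p ** errors.count(1)
    return total
```

For $K = 6$ and $p = 0.1$ both give $(1 + 0.8^5)/2 = 0.66384$.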

In the following iteration steps the algorithm operates analogously: the decision statistic $z_m$ for symbol $v_m$, $m \in M_l$, $l = l_0 - 2, l_0 - 3, \ldots, l_0 - I_1$, is calculated according to (9), where the extrinsic information $\beta_m^{(j)}$ is obtained according to (7) and (8) using the values from the set $\{ z_{m,k}^{(j)}, \ k = 1, 2, \ldots, K-1 \}$. Obviously, the statistics $\omega_m^{(j)}$ can, during the iterations, take a large spectrum of values. However, knowing the probabilities $P(\omega_m^{(j)} \mid v_m = 0)$ and $P(\omega_m^{(j)} \mid v_m = 1)$ in one iteration step, we can find the corresponding probabilities for the following step. We will discuss this problem in §4.
Consider now the second phase of the algorithm. First the decoder makes a hard decision $\hat{v}_m$ on the symbols of the $(l_0 - I_1)$th generation using the rule
\[ \hat{v}_m = \begin{cases} 0 , & \text{if } z_m > 0 , \\ 1 , & \text{if } z_m < 0 , \\ 0 \text{ or } 1 \text{, with probability } 1/2 \text{ each} , & \text{if } z_m = 0 . \end{cases} \tag{12} \]

We note that the number of steps $I_1$ in the first decoding phase should be chosen such that the hard decision error probability $\varepsilon_{I_1} = P(\hat{v}_m \ne v_m)$ for symbols of the $(l_0 - I_1)$th generation does not exceed some critical level $\varepsilon_{cr}$, defined in §5.
The majority rule used in the second decoding phase is the following. Let $\hat{v}_{m,k}^{(j)}$ be the hard decision for the $k$th descendant in the $j$th family of symbol $v_m$, $m \in M_l$, $l_0 - I_1 > l \ge 1$, and
\[ \hat{v}_m^{(j)} = \begin{cases} 0 , & \text{if } \sum_{k=1}^{K-1} \hat{v}_{m,k}^{(j)} \text{ is even} , \\ 1 , & \text{otherwise} . \end{cases} \tag{13} \]
Then the decoder makes a decision on symbol $v_m$ according to
\[ \hat{v}_m = \begin{cases} 0 , & \text{if } \sum_{j=1}^{J-1} \hat{v}_m^{(j)} < \frac{J-1}{2} , \\ 1 , & \text{if } \sum_{j=1}^{J-1} \hat{v}_m^{(j)} > \frac{J-1}{2} , \\ r_m , & \text{if } \sum_{j=1}^{J-1} \hat{v}_m^{(j)} = \frac{J-1}{2} . \end{cases} \tag{14} \]
Since the clan head $v_n$ has $J$, not $J-1$, families as his descendants, in the last ($l_0$th) iteration step the value $J-1$ in formula (14) should be replaced by $J$.
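Rules (13) and (14) can be transcribed directly; `n_families` is $J-1$ for interior symbols and $J$ for the clan head in the last step (helper names are ours):

```python
def second_phase_decision(family_decisions, r_m, n_families):
    """Majority rule (13)-(14): each family votes 0 if its K-1 hard decisions
    sum to an even value and 1 otherwise; ties are broken by the received
    symbol r_m."""
    votes = [sum(fam) % 2 for fam in family_decisions]   # rule (13)
    ones = sum(votes)
    if ones > n_families / 2:
        return 1
    if ones < n_families / 2:
        return 0
    return r_m                                           # tie: rule (14)
```

For $J = 4$ (three families), a single odd-parity family is outvoted; for even `n_families` an exact tie falls back on the channel output.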

§4 Finding the probability distribution of the decision statistics in the first phase of the algorithm

In this section we describe a sequential algorithm for finding the probability distribution of the statistics $z_m$, $\omega_m^{(j)}$ and $\beta_m^{(j)}$ in the different steps of the iterative decoding process. We note that in any iteration step we have
\[ P(\alpha_m \mid v_m = 0) = P(-\alpha_m \mid v_m = 1) \stackrel{\mathrm{def}}{=} \phi(\alpha_m) = \begin{cases} q , & \text{for } \alpha_m = 1 , \\ p , & \text{for } \alpha_m = -1 , \end{cases} \tag{15} \]
\[ P(z_m \mid v_m = 0) = P(-z_m \mid v_m = 1) , \tag{16} \]
\[ P(\omega_m^{(j)} \mid v_m = 0) = P(-\omega_m^{(j)} \mid v_m = 1) . \tag{17} \]

Then it is sufficient to find the conditional distributions of the statistics for $v_m = 0$ only. Since the distributions of $z_m$, $\omega_m^{(j)}$ and $\beta_m^{(j)}$ do not depend on $m$, $j$ and $k$, but only on the iteration number $i$, we will skip the indices $m$, $j$ and $k$ and label the probability distributions by $i$.
Let $f_i(z)$ denote the probability that, in the $i$th iteration step, the $z$-statistic of symbol $v_m = 0$ takes the value $z$, and let
\[ F_i(z) = \sum_{x \le z} f_i(x) , \quad z < 0 , \tag{18} \]
\[ Q_i(z) = \sum_{x \ge z} f_i(x) , \quad z > 0 , \tag{19} \]
be the probability distribution function and complementary probability distribution function for the $z$-statistic in the $i$th iteration. Furthermore, let $\psi_i(\omega)$ be the probability that the statistic $\omega_m^{(j)}$ of symbol $v_m = 0$ takes the value $\omega$ in the $i$th iteration, and let
\[ \Phi_i(\omega) = \sum_{\omega' \le \omega} \psi_i(\omega') , \quad \omega < 0 , \tag{20} \]
\[ \bar{\Phi}_i(\omega) = \sum_{\omega' \ge \omega} \psi_i(\omega') , \quad \omega > 0 , \tag{21} \]
be the distribution function and complementary distribution function of the $\omega$-statistic in the $i$th iteration. Finally, let $\gamma_i(\beta)$ denote the probability that in iteration $i$ the value of the extrinsic information $\beta_m^{(j)}$ is $\beta$.
From (8), (10) and (11) it follows that for $i = 1$ the statistic $\beta_m^{(j)}$ takes one of the two values $\pm\hat{\beta}$, $\hat{\beta} = \log_a \frac{1 + (q-p)^{K-1}}{1 - (q-p)^{K-1}}$, with probabilities $\gamma_1(\hat{\beta}) = \frac{1 + (q-p)^{K-1}}{2}$ and $\gamma_1(-\hat{\beta}) = \frac{1 - (q-p)^{K-1}}{2}$, respectively. The distributions of $z$, $\omega$ and $\beta$ in the following iterations can be calculated recursively. From (7), (8), (9), (16) and (17) we obtain the following formulas, connecting the probability functions $f_i(\cdot)$, $\psi_i(\cdot)$ and $\gamma_i(\cdot)$:
\[ f_0(z) = \begin{cases} q , & \text{if } z = 1 , \\ p , & \text{if } z = -1 , \end{cases} \tag{22} \]
\[ \Phi_i(\omega) = \frac{1}{2} \left[ \bigl( Q_{i-1}(-\omega) + F_{i-1}(\omega) \bigr)^{K-1} - \bigl( Q_{i-1}(-\omega) - F_{i-1}(\omega) \bigr)^{K-1} \right] , \quad \omega < 0 , \tag{23} \]
\[ \bar{\Phi}_i(\omega) = \frac{1}{2} \left[ \bigl( Q_{i-1}(\omega) + F_{i-1}(-\omega) \bigr)^{K-1} + \bigl( Q_{i-1}(\omega) - F_{i-1}(-\omega) \bigr)^{K-1} \right] , \quad \omega > 0 , \tag{24} \]
\[ \psi_i(\omega) = \begin{cases} \lim_{\delta \to 0} \bigl[ \Phi_i(\omega) - \Phi_i(\omega - \delta) \bigr] , & \omega < 0 , \ \delta > 0 , \\ \lim_{\delta \to 0} \bigl[ \bar{\Phi}_i(\omega) - \bar{\Phi}_i(\omega + \delta) \bigr] , & \omega > 0 , \ \delta > 0 , \\ \lim_{\delta \to 0} \bigl[ 1 - \Phi_i(-\delta) - \bar{\Phi}_i(\delta) \bigr] , & \omega = 0 , \ \delta > 0 , \end{cases} \tag{25} \]
\[ \gamma_i(\beta) = \sum_{\omega \in \Omega_\beta} \psi_i(\omega) , \tag{26} \]
where $\Omega_\beta$ is the set of values $\omega$ which satisfy
\[ \beta = \log_a \frac{\psi_i(\omega)}{\psi_i(-\omega)} , \tag{27} \]
\[ f_i(z) = \phi(z) \star \underbrace{\gamma_i(z) \star \cdots \star \gamma_i(z)}_{(J-1) \text{ times}} , \quad i = 1, 2, \ldots, I_1 , \tag{28} \]
where $\phi(z)$ is given by (15), and the operator $\star$ denotes the convolution of two functions, i.e.
\[ g(z) \star h(z) = \sum_x g(x) h(z - x) . \tag{29} \]
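Since the statistics are discrete, the operator (29) acts on probability mass functions; storing each pmf as a dict from values to probabilities makes (28) a repeated application of one helper (the representation is our own choice):

```python
def conv(g, h):
    """Discrete convolution (29) of two pmfs stored as {value: probability}."""
    out = {}
    for x, gx in g.items():
        for y, hy in h.items():
            out[x + y] = out.get(x + y, 0.0) + gx * hy
    return out

# Example: f_0 from (22) with p = 0.1, convolved with itself.
f0 = {+1: 0.9, -1: 0.1}
```

Here `conv(f0, f0)` places mass 0.81, 0.18 and 0.01 on the values 2, 0 and $-2$, and `conv(conv(phi, gamma), gamma)` would realize (28) for $J = 3$.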
Equations (22), (26) and (28) use the discreteness of the random variables $\beta$ and $\omega$; the other formulas do not use this property. The hard decision error probability $\varepsilon_{I_1}$ after the last iteration step $I_1$ of the first phase is
\[ \varepsilon_{I_1} = \frac{1}{2} f_{I_1}(0) + \lim_{\delta \to 0} F_{I_1}(-\delta) . \tag{30} \]

The solution of the system of recurrent equations (22)-(30) gives the exact bit error probability after the $I_1$th iteration of the first decoding phase of an $l_0$-nondegenerated low-density code. It is not difficult to find numerical solutions of the system (22)-(30) for different values of the crossover probability $p$ and iterations $I_1$. In Figures 1-3 the probabilities $\varepsilon_{I_1}$ are presented as functions of $I_1$ for different low-density codes and different values of $p$. In the next section we consider mathematical aspects of the analysis of the two-phase decoding algorithm.

§5 Some theoretical results and their discussion

We start the analysis of the two-phase decoding algorithm for low-density $(J,K)$-codes from the second phase. Let $\varepsilon_i$ denote the hard decision error probability in the $i$th iteration step, $i = I_1, I_1+1, \ldots, l_0-1$, and let $\varepsilon_i^{(j)} = P(\hat{v}_m \ne v_m)$, $m \in M_{l_0-i}$, be the decision error probability on symbol $v_m$ based on the $j$th family of the symbol. Then the probability $\varepsilon_i^{(j)}$ is upperbounded by the additive bound $\varepsilon_i^{(j)} < \lambda$, where $\lambda = \lambda(\varepsilon_{i-1}) = (K-1)\varepsilon_{i-1}$. The error probability $\varepsilon_i$ for the hard majority decision on a symbol in the $i$th iteration step, $i = I_1+1, I_1+2, \ldots, l_0-1$, is upperbounded by the function $g(\varepsilon_{i-1})$, where
\[ g(\varepsilon) = \begin{cases} \displaystyle \sum_{j=J/2}^{J-1} \binom{J-1}{j} \lambda^j (1-\lambda)^{J-1-j} , & \text{if } J \text{ even} , \\[2ex] \displaystyle \sum_{j=(J+1)/2}^{J-1} \binom{J-1}{j} \lambda^j (1-\lambda)^{J-1-j} + p \binom{J-1}{\frac{J-1}{2}} \lambda^{\frac{J-1}{2}} (1-\lambda)^{\frac{J-1}{2}} , & \text{if } J \text{ odd} . \end{cases} \tag{31} \]
Here $\lambda = \lambda(\varepsilon)$. Below we limit our considerations to the case $J \ge 3$. Then $g'(\varepsilon)|_{\varepsilon=0} < 1$ if $p < \frac{1}{2(K-1)}$, and except for the root $\varepsilon = 0$ the equation $\varepsilon = g(\varepsilon)$ has another root $\varepsilon_{cr} = \varepsilon_{cr}(J, K, p)$, $0 < \varepsilon_{cr} < 1/2$.
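The critical level $\varepsilon_{cr}$ is the smallest positive fixed point of $g$ and is easy to find numerically. A sketch (function and variable names are ours) that reproduces the values in Table 1:

```python
from math import comb

def g(eps, J, K, p):
    """Upper bound (31) on the majority-decision error probability,
    with lam = (K-1)*eps."""
    lam = (K - 1) * eps
    if J % 2 == 0:
        return sum(comb(J - 1, j) * lam**j * (1 - lam)**(J - 1 - j)
                   for j in range(J // 2, J))
    s = sum(comb(J - 1, j) * lam**j * (1 - lam)**(J - 1 - j)
            for j in range((J + 1) // 2, J))
    return s + p * comb(J - 1, (J - 1) // 2) * (lam * (1 - lam))**((J - 1) // 2)

def eps_cr(J, K, p, grid=200_000):
    """Smallest positive root of eps = g(eps): scan for a sign change of
    g(eps) - eps on (0, 1/(K-1)], then refine by bisection."""
    hi = 1.0 / (K - 1)          # keep lam = (K-1)*eps within [0, 1]
    prev = hi / grid
    for n in range(2, grid + 1):
        cur = n * hi / grid
        if g(cur, J, K, p) - cur >= 0:
            lo_, hi_ = prev, cur
            for _ in range(60):
                mid = (lo_ + hi_) / 2
                if g(mid, J, K, p) - mid < 0:
                    lo_ = mid
                else:
                    hi_ = mid
            return (lo_ + hi_) / 2
        prev = cur
    return None
```

For example, `eps_cr(3, 6, 0.082)` is about 0.0086 and `eps_cr(4, 6, 0.0)` about 0.0139, in agreement with Table 1 (for even $J$ the value of $p$ does not enter $g$).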
In Figure 4 a sketch of the function $g(\varepsilon)$ is given, analogous to Figure 4.3 of Gallager's book [1]. An analysis of the second phase of decoding, analogous to the one given by Gallager, shows that if $l_0 \to \infty$ (and, consequently, $N \to \infty$), the decoding error probability $P_b$ for $l_0$-nondegenerated low-density codes goes to zero if $\varepsilon_{I_1} \le \varepsilon_{cr}(J, K, p)$.

Lemma 1 Consider a sequence of $l_0$-nondegenerated $(J,K)$-codes, $J \ge 3$, of length $N$, where $N \to \infty$, which satisfy condition (3) of Theorem 2 and are used over a BSC with error probability $p$. Let $l_0 > I_1$, where $I_1$ is some positive integer, and let the two-phase algorithm described in §3 be used for decoding, with a total number of iterations $I = l_0$ and $I_1$ iterations in the first phase. Furthermore, let the (hard) decision error probability $\varepsilon_{I_1}$ after the last step of the first decoding phase not exceed $\varepsilon_{cr}(J, K, p)$. Then the bit error probability $P_b$ for the given sequence of codes goes to zero.

  (J,K)    (3,9)   (3,6)   (4,8)   (5,10)  (3,5)   (4,6)   (3,4)   (4,5)
  R         2/3     1/2     1/2     1/2     2/5     1/3     1/4     1/5
  p        0.044   0.082     --    0.066   0.111     --    0.164     --
  eps_cr   0.0050  0.0086  0.0070  0.0155  0.0089  0.0139  0.0026  0.0221

Table 1: Probabilities $\varepsilon_{cr}(J, K, p)$ for some values of $J$, $K$ and $p$.

In Table 1 the probabilities $\varepsilon_{cr}(J, K, p)$ are presented for some values of $J$, $K$ and $p$ (for even $J$ the value $\varepsilon_{cr}$ does not depend on $p$). Now we analyze the first phase of the algorithm. It is described by the system of recurrent equations (22)-(30), defined by the parameters $J$, $K$, $p$ and by the number of iterations $I_1$. The solution of the system is the hard decision error probability $\varepsilon_{I_1}$ after the last step of the first phase of the algorithm.

Lemma 2 Suppose that for given $J$, $K$ and $p$ there exists some value $I_1$ such that the solution $\varepsilon_{I_1}$ of the system (22)-(30) does not exceed $\varepsilon_{cr}(J, K, p)$. Then there exists a sequence of low-density $(J,K)$-codes of block length $N$, used for transmission over a BSC with error probability $p$, such that $P_b \to 0$ when $N \to \infty$.
In fact, the bit error probability for $J \ge 3$ not only goes to zero but decreases with $l_0 = I$ not weaker than exponentially. Indeed, for $J = 3$ from (31) we obtain
\[ \varepsilon_i < 2p(K-1)\varepsilon_{i-1} \bigl( 1 - (K-1)\varepsilon_{i-1} \bigr) + \bigl( (K-1)\varepsilon_{i-1} \bigr)^2 , \tag{32} \]
from which follows
\[ \varepsilon_i < \bigl( 2p(K-1) \bigr)^{i-I_1} \varepsilon_{I_1} + o\Bigl( \bigl( 2p(K-1) \bigr)^{i-I_1} \Bigr) , \tag{33} \]
i.e. $P_b$ decreases exponentially with $l_0$ if $p < \frac{1}{2(K-1)}$.

  (J,K)    (3,9)   (3,6)   (4,8)   (5,10)  (3,5)   (4,6)   (3,4)   (4,5)
  R         2/3     1/2     1/2     1/2     2/5     1/3     1/4     1/5
  p_sh     0.062   0.11    0.11    0.11    0.146   0.174   0.215   0.243
  p_g        --    0.04      --      --    0.061   0.075   0.106     --
  p_0      0.044   0.082   0.075   0.066   0.111   0.114   0.164   0.150
  p_0'       --    0.084   0.076   0.068   0.113   0.116   0.167     --

Table 2: Values of $p_g$, $p^* = \underline{p}_0$ and $\underline{p}_0'$ for some $(J,K)$-codes, and corresponding values of $p_{sh}$.

For $J > 3$ the probability $P_b$ decreases with growing $l_0 = I$ at least doubly exponentially. In fact, for $J = 4$ from (31) we obtain
\[ \varepsilon_i < 3 \bigl( (K-1)\varepsilon_{i-1} \bigr)^2 + \bigl( (K-1)\varepsilon_{i-1} \bigr)^3 , \tag{34} \]
from which follows that for sufficiently small $\varepsilon_{I_1}$
\[ P_b < \bigl( 3(K-1)\varepsilon_{I_1} \bigr)^{2^{\,l_0 - I_1}} . \tag{35} \]
For $J > 4$ the proof is analogous.


By numerical solution of the system (22)-(30) for given $J$ and $K$, together with the analysis in (32)-(35), we can obtain values $p^* = p^*(J,K)$ for which $P_b$ goes to zero not weaker than exponentially with $l_0$ when $N \to \infty$. Some values of $p^*(J,K) = \underline{p}_0$ are given in Table 2. We call an error probability $p = p^*$ of a BSC, for which there exists a low-density $(J,K)$-code and an iterative decoding algorithm such that $P_b$ goes to zero with the number of iterations $I$ not weaker than exponentially, a $(J,K)$-permissible probability.

Lemma 3 If the error probability $p$ of a BSC satisfies the inequality $p < p^*$, where $p^*$ is a $(J,K)$-permissible probability, then $p$ is also $(J,K)$-permissible.

To prove the lemma we consider a decoding algorithm in which the symbols received from the BSC are first inverted with some probability $P$ which satisfies
\[ p^* = p(1-P) + (1-p)P . \tag{36} \]
Then to the resulting "received" sequence an iterative decoding algorithm is applied, constructed for the channel with error probability $p^*$, for which the decoding error probability decreases not weaker than exponentially.
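Solving (36) for the extra flip probability gives $P = (p^* - p)/(1 - 2p)$. For instance, degrading a BSC with $p = 0.05$ to $p^* = 0.082$ requires $P \approx 0.0356$ (helper name is ours):

```python
def flip_probability(p, p_star):
    """Solve (36), p* = p(1-P) + (1-p)P, for the flip probability P that
    degrades a BSC(p) into a BSC(p*); requires p <= p* < 1/2."""
    return (p_star - p) / (1 - 2 * p)
```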
From Theorems 1-2 and Lemmas 1-3 follows

Theorem 3 A permissible probability $p^*$ of a low-density $(J,K)$-code is a lower bound on the iterative low-density limit $p_0 = p_0(J,K)$.

In Table 2 permissible probabilities $p^* = \underline{p}_0$ of low-density codes are given for the two-phase iterative decoding algorithm described in §3. These values are lower bounds on the iterative low-density limits $p_0(J,K)$. The lower bound $p^* = \underline{p}_0$ on $p_0$ could be slightly improved if we would use the original Gallager algorithm [1] in the first decoding phase and apply the analysis of Richardson and Urbanke [7]. The corresponding lower bounds are designated by $\underline{p}_0'$. It is easy to see that the differences between $\underline{p}_0'$ and $\underline{p}_0$ are very small. From this follows that the statistic $z$ contains almost the same amount of information as Gallager's more complicated statistic. For comparison, Table 2 also shows the Shannon limit values $p_{sh}$ and permissible values $p_g$ of the channel error probability for a majority iterative decoding algorithm due to Gallager [1]. Consider again Figure 2 and Figure 3. It is easy to see that for $p = p^*$ the logarithm of the bit error probability $P_b$ decreases with the number of iterations $I_1$ much faster than linearly, even in the first phase of the algorithm. For $J = 2$ (see Figure 1) the decoding error probability decreases exponentially with $I_1$. Up to now we did not find an analytical solution to the system (22)-(30), but, as mentioned earlier, numerical solutions for any $p$ and relatively large $I_1$ (of order 100 and higher) are not difficult to obtain. In particular, for the diagrams in Figure 2 and Figure 3, for $J = 3$ and $J = 4$, we reached bit error probabilities of order $10^{-20}$, which definitely satisfies any practical demands on the decoding error probability. Therefore, the two-phase algorithm described in §3 is of purely theoretical interest, and in practice one-phase algorithms can be used.
In conclusion we note that for considerable values of $l_0$, as follows from inequalities (1)-(3), $l_0$-nondegeneration of low-density codes can be reached only for very large $N$. However, as simulation results show, nondegenerated low-density codes are not necessary for the convergence of iterative algorithms. Practically interesting values of the decoding bit error probability can be reached already for block lengths of order several thousands, even though a strict application of the methods considered in this paper requires block lengths of much higher order. In subsequent publications the authors intend to generalize the results to the case of low-density convolutional codes and to the case of low-density code transmission over the additive white Gaussian noise channel.

References

[1] R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, Massachusetts, 1963.

[2] A. Jimenez and K. Sh. Zigangirov, "Periodic Time-Varying Convolutional Codes with Low-Density Parity-Check Matrices", IEEE Trans. Inform. Theory, vol. IT-45, no. 5, pp. 2181-2190, Sept. 1999.

[3] K. Engdahl and K. Sh. Zigangirov, "On the Theory of Low-Density Convolutional Codes I", Probl. Peredach. Inform., vol. 35, no. 4, pp. 12-27, Oct-Nov-Dec 1999.

[4] V. V. Zyablov and M. S. Pinsker, "Estimation of the error-correction complexity for Gallager low-density codes", Probl. Peredach. Inform., vol. 11, no. 1, pp. 23-36, Jan-Mar 1975.

[5] R. M. Tanner, "A Recursive Approach to Low Complexity Codes", IEEE Trans. Inform. Theory, vol. 27, no. 5, pp. 533-547, Sep. 1981.

[6] G. A. Margulis, "Explicit Group-Theoretic Constructions for Combinatorial Designs with Applications to Expanders and Concentrators", Probl. Peredach. Inform., vol. 24, no. 1, pp. 51-60, 1988.

[7] T. Richardson and R. Urbanke, "The Capacity of Low-Density Parity-Check Codes under Message-Passing Decoding", submitted to IEEE Trans. Inform. Theory.

List of figure captions

Figure 1: The hard decision error probability $\varepsilon_{I_1}$ as a function of $I_1$ for low-density (2,4)-codes.

Figure 2: The hard decision error probability $\varepsilon_{I_1}$ as a function of $I_1$ for low-density (3,6)-codes.

Figure 3: The hard decision error probability $\varepsilon_{I_1}$ as a function of $I_1$ for low-density (4,8)-codes.

Figure 4: Sketch of the function $g(\varepsilon)$.

18
−5 p = 0:0286
10 p = 0:0290
p = 0:0296
p = 0:0400
"I1

PSfrag repla ements


−10
10

−15
10
0 100 200 300 400 500
I1

Figure 1:

19
0
10

p = 0:0800
10
−5 p = 0:0820
p = 0:0824
p = 0:0826

−10
"I1

10
PSfrag repla ements

−15
10

−20
10
0 50 100 150 200
I1

Figure 2:

20
0
10

p = 0:0745
10
−5 p = 0:0749
p = 0:0750
p = 0:0751

−10
"I1

10
PSfrag repla ements

−15
10

−20
10
0 50 100 150 200
I1

Figure 3:

21
PSfrag repla ements
"i+1

"I1 +3 "I1 +2 "I1 +1 "I1 " r


"i

Figure 4:

22
