IEEE TRANSACTIONS ON INFORMATION THEORY, MARCH 2004
I. INTRODUCTION
Low-density parity-check (LDPC) codes and iterative decoding algorithms were proposed by Gallager [12] four decades ago, but were largely ignored until the introduction of turbo codes by Berrou et al. [2] in 1993. In fact, there is a close resemblance between LDPC and turbo codes: both are characterized by a low-density parity-check matrix, and both can be decoded iteratively using similar algorithms. Turbo-like codes have generated a revolution in coding theory by providing capacity-approaching transmission rates with practical iterative decoding algorithms. Yet turbo-like codes are usually restricted to single-user, discrete-memoryless, binary-input symmetric-output channels.
Several methods for the adaptation of turbo-like codes to more general channel models have been suggested. Wachsmann et al. [31] have presented a multilevel coding scheme using turbo codes as components. Robertson and Wörz [25] and Divsalar and Pollara [9] have presented turbo decoding methods that replace binary constituent codes with trellis-coded modulation (TCM) codes. Kavcic, Ma, Mitzenmacher, and Varnica [16], [19], [29] have presented matched information rate (MIR) codes, which we compare with our constructions below.
Manuscript received November 11, 2002; revised November 14, 2003. This
work was supported by the Israel Science Foundation under Grant 22/01-1 and
by a fellowship from The Yitzhak and Chaya Weinstein Research Institute for
Signal Processing at Tel-Aviv University. The material in this paper was presented in part at the IEEE International Symposium on Information Theory,
Yokohama, Japan, June/July 2003.
The authors are with the Department of Electrical Engineering-Systems, Tel-Aviv University, Ramat-Aviv, Tel-Aviv 69978, Israel (e-mail: abn@eng.tau.ac.il; burstyn@eng.tau.ac.il).
Communicated by R. Koetter, Associate Editor for Coding Theory.
Digital Object Identifier 10.1109/TIT.2004.824917
We present tools for the analysis of nonbinary codes and derive upper bounds on the ML decoding error probability of the
three code structures. We then show that these configurations are
good in the sense that for regular ensembles with sufficiently
large connectivity, a typical code in the ensemble enables reliable communication at rates arbitrarily close to channel capacity, on an arbitrary discrete-memoryless channel. Finally, we
discuss the practical iterative decoding algorithms of the various
code structures and demonstrate their effectiveness.
Independently of our work, Erez and Miller [10] have recently examined the performance, under ML decoding, of standard $q$-ary LDPC codes over modulo-additive channels, in the context of lattice codes for additive channels. In this case, due to the inherent symmetry of modulo-additive channels, neither cosets nor quantization mappings are required.
Our work is organized as follows. In Section II, we discuss
BQC-LDPC codes; in Section III, we discuss MQC-LDPC
codes; and in Section IV, we discuss GQC-LDPC codes.
Section V discusses the iterative decoding of these structures. It
also presents simulation results and a comparison with existing
methods for transmission over the additive white Gaussian
noise (AWGN) channel. Section VI concludes the paper.
II. BINARY QUANTIZED COSET (BQC) CODES
We begin with a formal definition of coset-ensembles. Our
definition is slightly different from the one used by Gallager
[13] and is more suitable for our analysis (see [26]).
Definition 1: Let $\mathcal{C}$ be an ensemble of equal block-length binary codes. The ensemble coset-$\mathcal{C}$ (denoted $\bar{\mathcal{C}}$) is defined as the output of the following two steps.

1. Generate the ensemble $\mathcal{C}'$ from $\mathcal{C}$ by including, for each code in $\mathcal{C}$, the codes generated by all possible permutations on the order of bits in the codewords.

2. Generate ensemble $\bar{\mathcal{C}}$ from $\mathcal{C}'$ by including, for each code $C$ in $\mathcal{C}'$, the codes of the form $C \oplus \mathbf{v} = \{\mathbf{c} \oplus \mathbf{v} : \mathbf{c} \in C\}$ ($\oplus$ denotes bitwise modulo-2 addition) for all possible vectors $\mathbf{v}$.

Given a code $C \oplus \mathbf{v}$, we refer to $\mathbf{v}$ as the coset vector and to $C$ as the underlying code.
Note: The above definition is given in a general context. However, step 1 can be dropped when generating the coset-ensemble of LDPC codes, because LDPC code ensembles already contain all codes obtained by permuting the order of bits.
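As a concrete illustration of Definition 1, the following sketch forms the coset codes of a toy underlying code. The helper names and the toy code are ours, not the paper's.

```python
import itertools

def coset_code(code, v):
    """Form the coset code obtained by XORing the coset vector v
    (bitwise, modulo 2) onto every codeword of the underlying code."""
    return {tuple(c ^ b for c, b in zip(cw, v)) for cw in code}

# Toy underlying code: the length-3 even-weight (single parity-check) code.
C = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}

# Step 2 of Definition 1: include the coset code for every possible vector v.
ensemble = {v: coset_code(C, v) for v in itertools.product((0, 1), repeat=3)}
```

Each coset code has the same size as the underlying code, and the coset with the all-zero vector recovers the underlying code itself.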
We proceed to formally define the concept of quantization.
Definition 2: Let $\mathcal{X}$ denote the channel input alphabet. A quantization is a mapping $\delta$ from the code alphabet into $\mathcal{X}$, of the form

(1)

The quantization of a code is the code obtained by applying the quantization to each of the codewords. Likewise, the quantization of an ensemble is the ensemble obtained by applying the quantization to each of the ensemble's codes. A BQC ensemble is obtained from a binary code-ensemble by applying a quantization to the corresponding coset-ensemble.
It is useful to model BQC-LDPC encoding as a sequence of operations, as shown in Fig. 1. An incoming message is encoded into a codeword of the underlying LDPC code. The coset vector is then added, and the quantization mapping is applied. Finally, the resulting codeword is transmitted over the channel.
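The encoding chain of Fig. 1 can be sketched as follows, under illustrative stand-ins: a toy repetition "code" in place of the LDPC encoder, and a hypothetical quantization mapping bit pairs onto 4-PAM points (the paper's quantization is more general).

```python
def bqc_encode(encode, v, quantize, message):
    """BQC encoding as in Fig. 1: encode the message, add the coset
    vector modulo 2, then apply the quantization mapping to bit pairs."""
    codeword = encode(message)
    shifted = [(c + vi) % 2 for c, vi in zip(codeword, v)]
    return [quantize(shifted[i], shifted[i + 1])
            for i in range(0, len(shifted), 2)]

# Hypothetical stand-ins for illustration only.
repetition_encode = lambda msg: [b for b in msg for _ in range(2)]  # toy "code"
pam4 = {(0, 0): -3, (0, 1): -1, (1, 0): 1, (1, 1): 3}
quantize = lambda b0, b1: pam4[(b0, b1)]

v = [1, 0, 1, 0]  # coset vector
symbols = bqc_encode(repetition_encode, v, quantize, [1, 0])  # -> [-1, 1]
```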
We now introduce the following notation, which generalizes similar notation from the literature on binary codes. For convenience, we formulate our results for the discrete-output case; the conversion to continuous output is immediate.
(2)

where $P(y\,|\,x)$ denotes the transition probabilities of a given channel. Using the Cauchy-Schwarz inequality, it is easy to verify that the quantities defined in (2) are at most 1 for all values of their arguments. This last inequality becomes strict for nondegenerate quantizations and channels, as explained in Appendix I-A. We assume these requirements of nondegeneracy throughout the paper.

We now define

(3)
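The quantity bounded via Cauchy-Schwarz above is a Bhattacharyya-style channel parameter. As a sketch of why such sums are at most 1, the following computes the classical Bhattacharyya coefficient for a hypothetical discrete channel; this illustrates the spirit of (2) rather than its exact definition.

```python
import math

def bhattacharyya(P, x1, x2):
    """Classical Bhattacharyya coefficient sum_y sqrt(P(y|x1) P(y|x2)).
    By the Cauchy-Schwarz inequality it is at most 1, with equality iff
    the two output distributions coincide."""
    ys = set(P[x1]) | set(P[x2])
    return sum(math.sqrt(P[x1].get(y, 0.0) * P[x2].get(y, 0.0)) for y in ys)

# A toy binary-input, ternary-output channel (hypothetical numbers).
P = {
    0: {'a': 0.8, 'b': 0.1, 'c': 0.1},
    1: {'a': 0.1, 'b': 0.1, 'c': 0.8},
}
```

For identical inputs the coefficient equals 1; for distinguishable inputs it is strictly smaller, which is the nondegeneracy condition at work.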
The analysis of the ML decoding properties of BQC codes is based on the following theorem, which relates the decoding error to the underlying code's spectrum.
Theorem 1: Let $P(y\,|\,x)$ be the transition probability assignment of a discrete memoryless channel with an input alphabet $\mathcal{X}$. Let $R$ be a rate below channel capacity (the rate being measured in bits per channel use). Then the ensemble average ML decoding error probability satisfies the bound of (4).
(5)

(8)

where the remaining quantities are given by

(6)
Fig. 2. Upper bounds on the minimum required SNR for successful ML decoding of some BQC-LDPC codes over an AWGN channel. The solid line is the
Shannon limit.
Fig. 2 presents our bound on the minimum required signal-to-noise ratio (SNR) of several BQC-LDPC code ensembles over the AWGN channel, compared with the Shannon limit. The quantizations used are
(10)
Given a type $\theta$, we define the normalized ensemble spectrum, where $M$ is the number of codewords in each code and $n$ is the codeword length.
The importance of the random-coding spectrum is given by
the following theorem.
Theorem 3: Let $P(y\,|\,x)$ be the transition probability assignment of a discrete memoryless channel with an input alphabet $\mathcal{X}$. Let $R$ be a rate below channel capacity (the rate being measured in $q$-ary symbols per channel use), let $Q$ be an arbitrary distribution, and let $\delta$ be a quantization associated with $Q$ over the alphabet $\mathcal{X}$. Let $\mathcal{C}$ be an ensemble of linear modulo-$q$ codes of length $n$ and rate at least $R$, and let $\overline{N}(\theta)$ denote the ensemble average of the number of words of type $\theta$ in a code of $\mathcal{C}$. Let $\bar{\mathcal{C}}$ be the MQC ensemble corresponding to $\mathcal{C}$, and let $\Theta$ be a set of types. Then the ensemble average error probability under ML decoding of $\bar{\mathcal{C}}$ satisfies

(12)
where the first quantity is defined as in (11) (not to be confused with the corresponding definition in (2)), given by

(11)

where $P(y\,|\,x)$ denotes the transition probabilities of a given channel and $\delta$ is a quantization. Using the Cauchy-Schwarz inequality, it is easy to verify that these quantities are at most 1 for all types. As in the case of BQC codes, the last inequality becomes strict for nondegenerate quantizations and channels (defined as in Appendix I-A, with the binary definitions replaced by their $q$-ary counterparts). The second quantity is given by

(13)
The proof of this theorem is provided in Appendix II-A.
The normalized ensemble spectrum of $q$-ary codes is defined in a manner similar to that of binary codes

(14)

where $\theta$ denotes a $q$-dimensional vector of rational numbers satisfying $\sum_i \theta_i = 1$. Throughout the rest of this paper, we adopt a fixed convention for the base of the log function.
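The type of a word and the entropy function appearing in the spectrum formulas can be computed directly; the helper names below are ours.

```python
import math
from collections import Counter

def word_type(word, q):
    """Type of a q-ary word: the fraction of positions holding each symbol."""
    n = len(word)
    counts = Counter(word)
    return tuple(counts.get(s, 0) / n for s in range(q))

def entropy(theta, base=math.e):
    """Entropy H(theta) of a type, as appears in the random-coding spectrum."""
    return -sum(t * math.log(t, base) for t in theta if t > 0)

theta = word_type([0, 1, 2, 1, 1, 0], q=3)  # symbol counts 2, 3, 1 out of 6
```

With base-$q$ logarithms, the uniform type attains the maximum entropy of 1, which is why spectra are often plotted against the uniform type.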
The normalized spectrum of the random-coding ensemble is given by the expression below, where $H(\cdot)$ denotes the entropy function and $R$ is the code rate (in $q$-ary symbols per channel use). The normalized spectrum of modulo-$q$ LDPC codes is given by the following theorem.
(16)

(17)

where the quantities appearing above are given by

(19)

and

(21)

Let $\alpha$ be a given rational positive number and let $r$ be the random-coding normalized spectrum corresponding to the rate $R$. Then for any $\epsilon > 0$ there exists a number $n_0$, dependent on $c$ and $d$ alone, such that

(18)

for all $n \geq n_0$ and all types satisfying

(22)

where $d_{\min}$ in (22) is the minimum distance of a randomly selected $(c, d)$-regular modulo-$q$ code of length $n$.
Fig. 4. Comparison of the normalized spectra of the ternary (3, 6)-regular LDPC code ensemble and the ternary rate-1/2 random-coding ensemble.
Theorem 7: Let $P(y\,|\,x)$ be the transition probability assignment of a discrete memoryless channel with an input alphabet $\mathcal{X}$. Assume $q$ is prime. Let $\alpha$ be some rational positive number, let $R$, $Q$, and $\delta$ be defined as in Theorem 3, and let $c$ and $d$ be two arbitrary numbers. Then for $n$ large enough there exists a $(c, d)$-regular expurgated MQC-LDPC ensemble (containing all but a diminishingly small proportion of the $(c, d)$-regular MQC-LDPC codes, as shown in Theorem 6) of length $n$ and design rate $R$, such that the ML decoding error probability over the ensemble satisfies

(24)
The proof of this theorem is provided in Appendix II-G.
Applying the same arguments as with BQC-LDPC codes,
we obtain that MQC-LDPC codes can be designed for reliable
transmission (under ML decoding) at any rate below capacity.
Furthermore, at rates approaching capacity, (24) is dominated
by the random-coding error exponent.
Producing bounds for individual MQC-LDPC code ensembles, similar to those provided in Figs. 2 and 5, is difficult due to the numerical complexity of evaluating (13), for large $n$, over sets given by (17).

Note: In our constructions, we have restricted ourselves to prime values of $q$. In Appendix II-H we show that this restriction is necessary, at least for even values of $q$.
IV. QUANTIZED COSET (GQC) LDPC CODES OVER GF($q$)

In this section, we extend our constructions to a finite field GF($q$), where $q = p^m$ and $p$ is prime. In our definition of modulo-$q$ LDPC codes in Section III, the enlargement of the code alphabet size did not extend to the parity-check matrix. The parity-check matrix was designed to contain binary digits alone (with rare exceptions involving parallel edges). We shall see in Appendix II-H that for nonprime $q$, this construction results in codes that are bounded away from the random-coding spectrum. The ideas presented in Appendix II-H are easily extended to ensembles over GF($q$) for nonprime $q$. We therefore define the GF($q$) parity-check matrix differently, employing elements from the entire GF($q$) field.
Bipartite $(c, d)$-regular graphs for LDPC codes over GF($q$) are constructed in the same way as described in Sections II and III, with the following addition: at each edge, a random, uniformly distributed label from GF($q$) is selected. A word $\mathbf{x} = (x_1, \ldots, x_n)$ is a codeword if at each check node $j$ the following equation holds over GF($q$):

$$\sum_{i \in N(j)} g_{i,j}\, x_i = 0$$

where $N(j)$ is the set of variable nodes adjacent to $j$ and $g_{i,j}$ is the label associated with the corresponding edge. The mapping from the bipartite graph to the parity-check matrix proceeds as follows. The element of the matrix corresponding to the $j$th check node and the $i$th variable node is set to the GF($q$) sum of all labels corresponding to edges connecting the two nodes. As before, the rate of each code is lower-bounded by the design rate, measured in $q$-ary symbols per channel use.
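For prime $q$, GF($q$) arithmetic reduces to integer arithmetic modulo $q$, so the labeled check-node condition above can be sketched directly. The data layout (a list of (variable index, label) pairs per check node) and the toy code are our own illustration, not the paper's notation.

```python
def is_codeword(word, checks, q):
    """Membership test for a labeled LDPC code over GF(q), q prime.
    `checks` lists, per check node, the (variable index, label) pairs of
    its edges; the word is a codeword iff every labeled sum vanishes mod q."""
    return all(sum(label * word[i] for i, label in edges) % q == 0
               for edges in checks)

# Toy example over GF(5): two check nodes acting on four variables.
checks = [
    [(0, 1), (1, 2), (2, 3)],   # x0 + 2*x1 + 3*x2 = 0 (mod 5)
    [(1, 4), (2, 1), (3, 2)],   # 4*x1 + x2 + 2*x3 = 0 (mod 5)
]
```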
Fig. 5. Upper bounds on the minimum required SNR for successful ML decoding of some GQC-LDPC codes over an AWGN channel. The solid line is the
Shannon limit.
GF($q$) coset-ensembles and quantization mappings are defined in the same way as for modulo-$q$ codes. Thus, we obtain GF($q$) quantized coset (GQC) code-ensembles.
As with MQC codes, the analysis of the ML decoding properties of GQC-LDPC codes involves the types rather than the weights of codewords. Nevertheless, the analysis is simplified using the following lemma.
Lemma 2: Let $\mathbf{x}$ and $\mathbf{x}'$ be two equal-weight words. The probabilities of the words belonging to a randomly selected code from a GF($q$), $(c, d)$-regular ensemble satisfy

(26)
The proof of this lemma relies on the following two observations. First, by the symmetry of the construction of the LDPC ensemble, any reordering of a word's symbols has no effect on the probability of the word belonging to a randomly selected code. Second, if we replace a nonzero symbol $a$ by another nonzero symbol $b$, we can match any code $C$ with a unique code $C'$ such that the modified word belongs to $C'$ if and only if the original word belongs to $C$. The code $C'$ is constructed by modifying the labels on the edges adjacent to the corresponding variable node, using multiplication by $a b^{-1}$ (evaluated over GF($q$)).
We have now obtained that the probability of a word belonging to a randomly selected code depends on its weight alone, and not on its type. We use this fact in the following theorem, to produce a convenient expression for the normalized spectrum of LDPC codes over GF($q$).
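The symbol-replacement matching in the proof of Lemma 2 can be made concrete for prime $q$: scaling the labels on the affected variable node's edges by $a b^{-1}$ preserves membership. The layout and the toy code below are our own illustration.

```python
def is_codeword(word, checks, q):
    """Membership test for a labeled LDPC code over GF(q), q prime."""
    return all(sum(label * word[j] for j, label in edges) % q == 0
               for edges in checks)

def modify_labels(checks, i, a, b, q):
    """Lemma 2's matching (a sketch): if position i of a word is changed
    from the nonzero symbol a to the nonzero symbol b, scaling the labels
    on variable i's edges by a * b^{-1} (mod q) yields a code C' such that
    the modified word belongs to C' iff the original word belongs to C."""
    factor = (a * pow(b, q - 2, q)) % q  # b^{-1} via Fermat's little theorem
    return [[(j, (label * factor) % q) if j == i else (j, label)
             for j, label in edges]
            for edges in checks]

# Toy GF(5) code with two check nodes; (0, 1, 1, 0) is a codeword.
checks = [
    [(0, 1), (1, 2), (2, 3)],
    [(1, 4), (2, 1), (3, 2)],
]
new_checks = modify_labels(checks, 1, 1, 3, 5)  # symbol 1 -> 3 at position 1
```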
(25)

where the quantities in (25) are given by

(27)

The proof is similar to the case of MQC-LDPC codes, setting the appropriate term in (25) to upper-bound the spectrum. In Lemma 1, (20) is replaced by its GF($q$) counterpart.
TABLE I
LEFT EDGE DISTRIBUTION FOR A RATE-2 BQC-LDPC CODE WITHIN 1.1 dB OF THE SHANNON LIMIT. THE RIGHT EDGE DISTRIBUTION IS GIVEN BY 0.25, 0.75

TABLE II
LEFT EDGE DISTRIBUTION FOR A RATE-2 MQC-LDPC CODE WITHIN 0.9 dB OF THE SHANNON LIMIT. THE RIGHT EDGE DISTRIBUTION IS GIVEN BY 0.5, 0.5
Applying the notation of (10), we obtain that a simple assignment of values in ascending order renders a good quantization. The MQC equivalent of (30) is the set of values given by (11). As with BQC-LDPC codes, MQC-LDPC codes favor quantizations rendering an irregular set (as indicated by simulation results).
In [24] and [6], methods for the design of edge distributions are presented that use singleton error probabilities produced by density evolution together with iterative application of linear programming. In this work, we have used a similar method, replacing the probabilities produced by density evolution with the results of Monte Carlo simulations. An additional improvement was obtained by replacing the singleton error probabilities with an entropy-based functional ($H$ denoting the $q$-ary entropy).
Table II presents the edge distributions of a rate-2 (measured again in bits per channel use) MQC-LDPC code for the AWGN channel. The code alphabet size is $q$, and PAM signaling was used, with the quantization listed below. The quantization values are listed in ascending order. The quantization values were selected
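One way to realize a shaping quantization of the kind discussed here is to map the $q$ code symbols, in ascending order, onto PAM points with multiplicities approximating a desired (peaked) input distribution. The recipe below is a hypothetical illustration, not the paper's actual selection procedure.

```python
def shaping_quantization(q, pam_points, weights):
    """Build a quantization mapping {0, ..., q-1} onto PAM points, assigning
    the values in ascending order with multiplicities proportional to
    `weights` (a hypothetical recipe).  The fraction of code symbols mapped
    to each point then approximates weights[k] / sum(weights)."""
    total = sum(weights)
    counts = [round(q * w / total) for w in weights]
    counts[-1] = q - sum(counts[:-1])  # force the multiplicities to sum to q
    delta = []
    for point, count in zip(pam_points, counts):
        delta.extend([point] * count)
    return delta

# Quantize an 8-ary code alphabet onto 4-PAM with a peaked (shaped) profile.
delta = shaping_quantization(8, [-3, -1, 1, 3], [1, 3, 3, 1])
```

Because inner points receive more code symbols than outer ones, the induced channel-input distribution is nonuniform, which is precisely how quantization addresses the shaping gap.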
TABLE III
LEFT EDGE DISTRIBUTION FOR A RATE-2 GQC-LDPC CODE WITHIN 0.65 dB OF THE SHANNON LIMIT. THE RIGHT EDGE DISTRIBUTION IS GIVEN BY 0.5, 0.5
that multiple bits (each from a separate binary code) are combined to produce channel symbols. However, the division of the code into independent subcodes and the hard decisions performed during multistage decoding are potentially suboptimal. Robertson and Wörz [25], using turbo codes with TCM constituent codes, report several results, including reliable transmission within 0.8 dB of the Shannon limit at a rate of 1 bit per dimension.
Kavcic, Ma, Mitzenmacher, and Varnica [16], [19], [29] present reliable transmission at 0.74 dB of the Shannon limit at a rate of 2.5 bits per dimension. Their method has several similarities to BQC-LDPC codes. As in multilevel schemes, multiple bits from the outer codes are combined and mapped into channel symbols in a manner similar to BQC quantizations. Quantization mapping can be viewed as a special case of a one-state MIR trellis code, having parallel branches connecting every two subsequent states of the trellis. Furthermore, the irregular LDPC construction method of [19] and [29] is similar to the method of Section V-A. In [19] and [29], the edge distributions for the outer LDPC codes are optimized separately, and the resulting codes are interleaved into one LDPC code. Similarly, the BQC-LDPC construction distinguishes between variable nodes at different symbol positions. However, during the construction of the BQC-LDPC bipartite graph, no distinction is made between variable nodes in the sense that all variable nodes are allowed to connect to the same check nodes (see Section II). This is in contrast to the interleaved LDPC codes of [19] and [29], where variable nodes originating from different subcodes are connected to distinct sets of check nodes.
VI. CONCLUSION
The codes presented in this paper provide a simple approach
to the problem of applying LDPC codes to arbitrary discrete
memoryless channels. Quantization mapping (based on ideas by
Gallager [13] and McEliece [20]) has enabled the adaptation of
LDPC codes to channels where the capacity-achieving source
distribution is nonuniform. It is thus a valuable method of overcoming the shaping gap to capacity. The addition of a random
coset vector (based on ideas by Gallager [13] and Kavcic et al.
[15]) is a crucial ingredient for rigorous analysis.
Our focus in this paper has been on ML decoding. We have shown that all three code configurations presented are capable, under ML decoding, of approaching capacity arbitrarily closely. Our analysis of MQC and GQC codes has relied on generalizations of concepts useful with binary codes, such as the code spectrum and expurgated ensembles.
We have also demonstrated the performance of practical iterative decoding. The simple structure of the codes presented lends
itself to the design of conceptually simple decoders, which are
based entirely on the well-established concepts of iterative belief-propagation (e.g., [20], [22], [12]). We have presented simulation results of promising performance within 0.65 dB of the
Shannon limit at a rate of 2 bits per channel use. We are currently working on an analysis of iterative decoding of quantized
coset LDPC codes [1].
APPENDIX I
PROOFS FOR SECTION II
A. Proof of Strict Inequality for Nondegenerate Quantizations and Channels

B. Proof of Theorem 1

where $N_w$ is the spectrum of code $C$. Clearly, this bound can equally be applied to the ensemble, abandoning the assumption of a fixed coset vector. The average spectrum of this ensemble is equal to that of our original ensemble, and therefore we obtain

(33)
2) The methods used to bound the second element of (31)
are similar to those used in Appendix II-A. The bound
obtained is
(34)
Combining (31), (33), and (34) we obtain our desired result of (4).
C. Proof of Theorem 2
Let $\alpha$ be defined as in (7), and let $\epsilon$ be some arbitrary number smaller than $\alpha$ that will be determined later. Let the underlying expurgated LDPC ensemble be obtained by expurgating codes with minimum distance less than or equal to the expurgation threshold. We use Theorem 1 to bound the ensemble error probability.
APPENDIX II
PROOFS FOR SECTION III
A. Proof of Theorem 3
The proof of this theorem is a generalization of the proofs available in [21] and [26], from binary-input symmetric-output channels to arbitrary channels.
All codes in the coset ensemble have the form $C \oplus \mathbf{v}$ for some code $C$ of the underlying ensemble. We can therefore write
We now examine both elements of the above sum.

1) Given that all codes in the expurgated ensemble have minimum distance greater than the expurgation threshold, the corresponding terms vanish for all low-weight words. We therefore turn to examine the remaining terms. A desirable effect of expurgation is that it also reduces the number of words of large weight. Assume $\mathbf{x}$ and $\mathbf{x}'$ are two codewords of a code in the expurgated ensemble whose weights exceed the complementary threshold. The weight of the codeword $\mathbf{x} \oplus \mathbf{x}'$ would then fall below the expurgation threshold, contradicting the construction of the expurgated ensemble. Hence, a code cannot contain more than one codeword of such weight. Letting $N_w$ denote the spectrum of a code in the ensemble, we have

(37)

Combining (35)-(37) we obtain our desired result of (9).
and 0 otherwise. We define

(38)

where the conditional probability, given the transmitted codeword, is defined as shown in the equation at the bottom of the page. Therefore,
Letting the parameters be chosen accordingly, we obtain

(41)

where the probability is defined, for a fixed type and index, assuming a given transmitted codeword, as the probability that a codeword of the given form should be more likely than the transmitted one. The random space here consists of a random selection of the code from the ensemble. Therefore, letting $s$ be an arbitrary nonnegative number, we have by the union bound (extended as in [13])

(40)

Letting $n \to \infty$, we have
We define the following random variables: for each code, its parent denotes the code from which it was generated, and likewise for the remaining quantities. Let $\mathbf{x}$ be an arbitrary nonzero word. We now examine the codewords of the parent code. Recalling that the code is of rate at least $R$, we obtain a bound on the number of its codewords. If $\mathbf{x}$ is in $\Theta$, we obtain from (13) the desired result of (44), and thus complete the proof of the lemma.

(44)
(44)
C. Proof of Theorem 4
To prove this theorem, we first quote the following notation and theorem of Burshtein and Miller [4] (see also [14]). Given a multinomial, the coefficient of a given monomial is denoted accordingly. The theorem is stated for the two-dimensional case but is equally valid for higher dimensions (we present a truncated version of the theorem actually discussed in [4]).
Theorem 10: Let $\alpha$ be some rational number and let $f$ be a function such that $f$ is a multinomial with nonnegative coefficients. Let the parameters be some rational numbers, and consider the series of all indexes that are integers of the appropriate form. Then

(45)

and

(46)

(50)
We now use this theorem to prove Theorem 4. The proof follows the lines of a similar proof for binary codes in [4]. We first calculate the asymptotic spectrum for types whose components are all positive. Given a type $\theta$ and $n$, let $X_i$ be an indicator random variable equal to 1 if the $i$th word of type $\theta$ (by some arbitrary ordering of type-$\theta$ words) is a codeword of a drawn code, and 0 otherwise. Then

(47)

(51)

Using (49) we have

(52)

Using (50) and Theorem 10 we have

(53)

Combining (48), (51), (52), and (53) we obtain the desired expression, where the remaining quantity is given by the second equation at the bottom of the page. Following the line of previous development, we obtain the asymptotic spectrum.
We now obtain (16). The development follows the lines of a similar development in [12, Theorem 5.1].

(54)

Therefore, we obtain the expression for the bound and subsequently (55) (both at the bottom of the page). Combining (54) with (55) we obtain (16).
D. Proof of Theorem 5
Using the fact that $q$ is prime, we obtain that the relevant quantities have no common divisors. We define

(57)

(58)

Incorporating this notation into (16), we now have

(59)
(65)

(66)

(67)

where the constant, like the previous one, is some positive constant smaller than 1, dependent on $c$ and $d$ but independent of $n$. Recalling (59), and using (60) and (65), we obtain

(68)

Combining (66), (67), and (68) we obtain

(69)

and, therefore, the spectrum approaches the random-coding spectrum uniformly on the relevant set. Using arguments similar to those used to obtain (56), we obtain

(70)

Recalling that $\theta$ is the type of the word and that the weight of the word is the number of its nonzero elements, we now bound the remaining term for types satisfying the stated conditions. To do this, we use methods similar to those used in Appendix II-D. Employing the notation of (57) and (58), we obtain

(71)

Combining (59), (60), and (71), we obtain

(72)

Combining (72) with (70), recalling (21), we obtain our desired result of (20).
2) We now examine the remaining values. From (20) we have

(75)
F. Proof of Theorem 6

The proof of this theorem follows lines similar to those used in the proofs of Theorems 2 and 3 of [21]. Using a union bound, we obtain

(73)

where the parameters are determined later. We now proceed to bound both elements of the above sum.

1) Invoking (19) we obtain

(74)

We now restrict the range and obtain, recalling (21),

(76)

The inner contents of the braces are positive and approach a positive limit as $n \to \infty$. Therefore, the above value is positive, and we have

(77)

Combining (73) with (75) and (77) we obtain our desired result of (22).
G. Proof of Theorem 7
Let $\alpha$ be defined as in Theorem 6, and let $\epsilon$ be some number smaller than $\alpha$ that will be determined later. Let the underlying expurgated LDPC ensemble be obtained by expurgating codes with minimum distance less than or equal to the expurgation threshold. We use Theorem 3 to bound the ensemble error probability. Assigning the parameters accordingly,

(78)

Finally, selecting the remaining parameters so that the exponent is negative, we obtain

(80)
2) The remaining term can be written as a corresponding sum.

Let $t$ be a positive integer, and let $q = 2t$ be a nonprime number. We now show that there exist values of $q$ for which no bound as in Theorem 5 exists. Moreover, expurgation is impossible for those values of $q$. To show this, we examine the subgroup of the modulo-$q$ group formed by $\{0, t\}$. This subgroup is isomorphic to the group modulo-2 (the binary field). Given a modulo-$q$ LDPC code, we denote by its binary subcode the subcode produced by the codewords containing symbols from $\{0, t\}$ alone. We now examine the ensemble of these subcodes. The binary asymptotic normalized spectrum is defined as
APPENDIX III
PROOFS FOR SECTION IV
Using (48) and (51) (with the binary quantities replaced by their GF($q$) counterparts), together with (83) and (84), and applying Theorem 10 as in the proof of Theorem 4, we arrive at (25). We now show that the remaining quantity is given by (26).

We first examine the enumerating function, defined such that the coefficient of the $i$th term is the number of words of type $\theta$ (the index $i$ corresponding to the $i$th element of GF($q$)) whose sum over GF($q$) is zero. A useful expression for this function is

(85)

Elements of GF($p^m$) can be modeled as $m$-dimensional vectors over GF($p$) (see [3]). The sum of two GF($p^m$) elements corresponds to the sum of the corresponding vectors, evaluated as the modulo-$p$ sum of each of the vector components. Thus, adding elements over GF($p^m$) is equivalent to adding $m$-dimensional vectors.
A. Proof of Theorem 8

The proof of this theorem follows the lines of the proof of Theorem 4.
Equation (48) is carried over from the modulo-$q$ case. Let $\mathbf{x}$ be a word of type $\theta$. Recalling Lemma 2, we can assume without loss of generality that $\mathbf{x}$ is a binary word of weight $w$. Each assignment of edges and labels infers a symbol coloring on each of the check node sockets (the $i$th color corresponding to the $i$th element of GF($q$)), according to the adjacent variable node's value multiplied by the label of the connecting edge. Exactly $wc$ of the sockets are assigned colors of the nonzero set. All color assignments satisfying the above requirement are equally probable. The total number of colorings is

(83)

Given an assignment of graph edges, $\mathbf{x}$ is a codeword if at each check node, the GF($q$) sum of the values of its sockets is zero. The number of valid assignments is determined using the enumerating function

(84)

where the enumerating function is the weight-enumerating function of $d$-length words over GF($q$) satisfying the requirement that the sum of all symbols must equal zero over GF($q$).
where the vector components are evaluated modulo-$p$. This last equation is clearly the output of the two-dimensional cyclic convolution of the function with itself, evaluated at zero. In the general case of (85), we have the $m$-dimensional cyclic convolution of the function with itself $d$ times, evaluated at zero. Using the $m$-dimensional DFT of size $p$ to evaluate the convolution, we obtain the IDFT of the $d$th power of the DFT. We now examine the expression in (86) at the top of the following page. Consider the elements
(86)
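For $p = 2$, the $m$-dimensional DFT of size 2 is the Walsh-Hadamard transform, and the $d$-fold convolution evaluated at zero counts $d$-tuples of field elements that XOR to zero. The following sketch (our own naming) illustrates the IDFT-of-powered-DFT identity on GF(4) viewed as 2-bit vectors.

```python
from itertools import product

def wht(a):
    """Walsh-Hadamard transform: the m-dimensional DFT of size 2, matching
    the vector-space view of GF(2^m) addition as bitwise XOR."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def zero_sum_count(weights, d):
    """Number of d-tuples of GF(2^m) elements XOR-ing to zero, where
    weights[s] counts the available choices of element s: computed as the
    zero coordinate of IDFT(DFT(weights)^d)."""
    n = len(weights)
    spec = [x ** d for x in wht(weights)]
    return sum(spec) // n

# Brute-force check over GF(4) (elements 0..3, addition = XOR), d = 3.
brute = sum(1 for t in product(range(4), repeat=3)
            if t[0] ^ t[1] ^ t[2] == 0)
```

The transform-domain route and the brute-force count agree, mirroring the role of (85) in evaluating the enumerating function efficiently.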
[14] I. J. Good, "Saddle point methods for the multinomial distribution," Ann. Math. Statist., vol. 28, pp. 861-881, 1957.
[15] A. Kavcic, X. Ma, and M. Mitzenmacher, "Binary intersymbol interference channels: Gallager codes, density evolution, and code performance bounds," IEEE Trans. Inform. Theory, vol. 49, pp. 1636-1652, July 2003.
[16] A. Kavcic, X. Ma, M. Mitzenmacher, and N. Varnica, "Capacity approaching signal constellations for channels with memory," in Proc. Allerton Conf., Monticello, IL, Oct. 2001, pp. 311-320.
[17] S. Litsyn and V. Shevelev, "On ensembles of low-density parity-check codes: Asymptotic distance distributions," IEEE Trans. Inform. Theory, vol. 48, pp. 887-908, Apr. 2002.
[18] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved low-density parity-check codes using irregular graphs," IEEE Trans. Inform. Theory, vol. 47, pp. 585-598, Feb. 2001.
[19] X. Ma, N. Varnica, and A. Kavcic, "Matched information rate codes for binary ISI channels," in Proc. 2002 IEEE Int. Symp. Information Theory, Lausanne, Switzerland, June/July 2002, p. 269.
[20] R. J. McEliece, "Are turbo-like codes effective on nonstandard channels?," IEEE Inform. Theory Soc. Newsletter, vol. 51, pp. 1-8, Dec. 2001.
[21] G. Miller and D. Burshtein, "Bounds on the maximum-likelihood decoding error probability of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 2696-2710, Nov. 2001.
[22] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco, CA: Morgan Kaufmann, 1988.
[23] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
[24] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, Feb. 2001.
[25] P. Robertson and T. Wörz, "Bandwidth-efficient turbo trellis-coded modulation using punctured component codes," IEEE J. Select. Areas Commun., vol. 16, pp. 206-218, Feb. 1998.
[26] N. Shulman and M. Feder, "Random coding techniques for nonrandom codes," IEEE Trans. Inform. Theory, vol. 45, pp. 2101-2104, Sept. 1999.
[27] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533-547, Sept. 1981.
[28] S. ten Brink, G. Kramer, and A. Ashikhmin, "Design of low-density parity-check codes for multi-antenna modulation and detection," IEEE Trans. Inform. Theory, submitted for publication.
[29] N. Varnica, X. Ma, and A. Kavcic, "Iteratively decodable codes for bridging the shaping gap in communications channels," in Proc. Asilomar Conf. Signals, Systems and Computers, Pacific Grove, CA, Nov. 2002.
[30] ——, "Capacity of power constrained memoryless AWGN channels with fixed input constellations," in Proc. IEEE Global Telecomm. Conf. (GLOBECOM), Taipei, Taiwan, Nov. 2002.
[31] U. Wachsmann, R. F. Fischer, and J. B. Huber, "Multilevel codes: Theoretical concepts and practical design rules," IEEE Trans. Inform. Theory, vol. 45, pp. 1361-1391, July 1999.