
6.897 Algorithmic Introduction to Coding Theory                          October 24, 2001

Lecture 11
Lecturer: Madhu Sudan                                          Scribe: Matt Lepinski

Today we will talk about:


1. Abstraction of the Decoding Algorithm for Reed-Solomon Codes.
2. Decoding Concatenated Codes (specifically, the Forney Codes).

1 Abstraction of the Reed-Solomon Decoding Algorithm

Our first goal today is to give a very abstract view of the Welch-Berlekamp decoding algorithm for Reed-Solomon codes. This abstraction will allow us to see its generality, and thus apply it to other families of error-correcting codes. The algorithm we describe here is from the works of Pellikaan [9], Kotter [6], and Duursma [1].
1.1 Reed-Solomon Decoding Review

Recall the Reed-Solomon decoding algorithm from the last lecture. Here we have distinct points x_1, ..., x_n ∈ F_q given implicitly, and we are explicitly given as input elements y_1, ..., y_n ∈ F_q. The decoding algorithm consists of the following two steps:


1. Find: Polynomials a(x) and b(x) such that:

   • For all i ∈ [n], a(x_i) · y_i = b(x_i).
   • The degree of a is small (at most t).
   • The degree of b is small (less than k + t).

2. Output: b(x)/a(x).
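To make the linear-algebraic view of Step 1 concrete, here is a minimal Python sketch over a small prime field F_p. The function names and the plain Gaussian-elimination nullspace routine are my own choices (not from the lecture), and Step 2, the polynomial division b(x)/a(x), is omitted; the point is only that Step 1 is a homogeneous linear system.

    def nonzero_nullspace_vector(M, p):
        """Return a nonzero v with M v = 0 over F_p (p prime), or None if only v = 0 works."""
        rows, cols = len(M), len(M[0])
        M = [[x % p for x in row] for row in M]
        pivot_cols, r = [], 0
        for c in range(cols):
            piv = next((i for i in range(r, rows) if M[i][c]), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            inv = pow(M[r][c], p - 2, p)              # inverse of the pivot mod p
            M[r] = [x * inv % p for x in M[r]]
            for i in range(rows):
                if i != r and M[i][c]:
                    f = M[i][c]
                    M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
            pivot_cols.append(c)
            r += 1
        free = [c for c in range(cols) if c not in pivot_cols]
        if not free:
            return None
        v = [0] * cols
        v[free[0]] = 1                                # set one free variable to 1
        for row, c in zip(M, pivot_cols):
            v[c] = (-row[free[0]]) % p                # back-substitute the pivot variables
        return v

    def welch_berlekamp_step1(xs, ys, k, t, p):
        """Find coefficients of a (deg <= t) and b (deg < k + t) with a(x_i) y_i = b(x_i)."""
        system = []
        for x, y in zip(xs, ys):
            row = [y * pow(x, j, p) % p for j in range(t + 1)]      # coefficients of a
            row += [-pow(x, j, p) % p for j in range(k + t)]        # coefficients of b
            system.append(row)
        sol = nonzero_nullspace_vector(system, p)
        return (sol[:t + 1], sol[t + 1:]) if sol else None          # (a, b), or None

As in the last lecture's analysis, a nonzero solution exists whenever some codeword lies within distance t of the received word (the error-locator polynomial provides one).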

1.2 Special Structure of Reed-Solomon Codes

The above algorithm and its proof of correctness used the properties of Reed-Solomon codes in several ways. Below we list the different aspects that seem specific to Reed-Solomon codes.

1. We used the fact that the indices of the codewords (i.e., x_1, ..., x_n) are field elements.

2. We used the fact that two low degree polynomials cannot agree in very many places, several times (in the construction of the code, as well as in the analysis of the decoding algorithm).

3. We used the fact that the multiplication of two low degree polynomials is a low degree polynomial.

4. We also used the fact that, under the right conditions, the ratio of two polynomials is a polynomial.

Not very many of the above facts were really critical to the proof; they were just the simplest way to get to the end. Some of the above properties (like (2)) are just facts that hold for any error-correcting code. Others, in particular (3), are somewhat special, but can still be abstracted with care.

1.3 Multiplication of vectors

One of the critical operations in the decoding algorithm is that of multiplying the error-locator polynomial a(x) with the message polynomial p(x) and considering the evaluations of the product at x_1, ..., x_n. This is essentially a coordinatewise product of vectors, defined below.

Definition 1 For u, v ∈ F_q^n, their coordinatewise product, denoted u ⋆ v, is given by

    u ⋆ v = ⟨u_1 v_1, ..., u_n v_n⟩.

If the coordinatewise product is a strange operation in linear algebra, then the following product of sets is even stranger, but it is critical to the working of the Reed-Solomon decoding algorithm.

Definition 2 For U, V ⊆ F_q^n, their product, denoted U ⋆ V, is given by

    U ⋆ V = {u ⋆ v | u ∈ U, v ∈ V}.
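As a quick illustration (mine, not part of the notes), both products are one-liners in code, with vectors represented as tuples over F_q and codes as sets of such tuples:

    # Coordinatewise product of vectors, and of sets of vectors, over F_q
    # (q is assumed prime here so that arithmetic mod q is a field).

    def star(u, v, q):
        """u * v = <u_1 v_1, ..., u_n v_n>."""
        return tuple(a * b % q for a, b in zip(u, v))

    def set_star(U, V, q):
        """U * V = { u * v : u in U, v in V }."""
        return {star(u, v, q) for u in U for v in V}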

Why are these operations interesting? To motivate them, let us look back at the RS decoding algorithm:

• Let U be the set of vectors obtained by evaluations of polynomials of degree at most t.

• Let V be the set of vectors obtained by evaluations of polynomials of degree less than k.

• Then U ⋆ V is the set of evaluations of polynomials of degree less than k + t that factor into a polynomial of degree less than k and a polynomial of degree at most t.

• In particular, U ⋆ V is a subset of the set of evaluations of polynomials of degree less than k + t.

• To see that this is special, note that U is a vector space of dimension t + 1 and V is a vector space of dimension k. What we have noticed is that their product is contained in a vector space of dimension k + t. This is very special. In general, if we take two arbitrary vector spaces of dimension k and t, their product would not be contained in any vector space of dimension less than kt; so polynomials end up being very special!

In what follows we will show that this is the only speciality of polynomial-based codes. We will show that if any code ends up having nice properties with respect to products with some other codes, then it can be decoded.

1.4 Error-Locating Pairs

Let C be an [n, k, ?]_q code and suppose we wish to decode up to e errors with C. The following definition describes a simple combinatorial object whose existence suffices to give a decoding algorithm for C.

Definition 3 A pair of linear codes (A, B), with A, B ⊆ F_q^n, form an e-error-correcting pair for a linear code C ⊆ F_q^n if they satisfy the following conditions:

1. A ⋆ C ⊆ B.
2. The dimension of A is sufficiently large: specifically, dim(A) > e.
3. The minimum distance of B is sufficiently large: specifically, Δ(B) > e.
4. The minimum distance of C is sufficiently large: Δ(C) > n − Δ(A).
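For very small codes the four conditions can simply be checked by brute force from generator matrices. The sketch below is my own illustration (the function names and the enumerate-all-codewords approach are not from the lecture); it is exponential in the dimensions and only meant to make the definition concrete.

    from itertools import product

    def codewords(G, p):
        """All codewords generated by the rows of G over F_p (p prime)."""
        n = len(G[0])
        return {tuple(sum(m * g[j] for m, g in zip(msg, G)) % p for j in range(n))
                for msg in product(range(p), repeat=len(G))}

    def min_distance(words):
        # minimum weight of a nonzero codeword (assumes a nonzero codeword exists)
        return min(sum(x != 0 for x in w) for w in words if any(w))

    def dimension(words, p):
        d = 0
        while p ** d < len(words):        # |code| = p^dim for a linear code
            d += 1
        return d

    def is_error_correcting_pair(GA, GB, GC, e, p):
        A, B, C = codewords(GA, p), codewords(GB, p), codewords(GC, p)
        star = lambda u, v: tuple(a * b % p for a, b in zip(u, v))
        cond1 = all(star(a, c) in B for a in A for c in C)          # A * C contained in B
        cond2 = dimension(A, p) > e                                 # dim(A) > e
        cond3 = min_distance(B) > e                                 # dist(B) > e
        cond4 = min_distance(C) > len(GC[0]) - min_distance(A)      # dist(C) > n - dist(A)
        return cond1 and cond2 and cond3 and cond4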

1.5 The Generalized Algorithm

In this algorithm, we assume that we are given generator matrices for linear codes A, B, and C, where (A, B) form an e-error-correcting pair for C.

Abstract decoding algorithm

Given: Matrices A, B, C generating codes A, B, C ⊆ F_q^n, with (A, B) forming an e-error-correcting pair for C. Received vector y ∈ F_q^n.

Step 1: Find a ∈ A and b ∈ B such that a ⋆ y = b and (a, b) ≠ (0, 0), if such a pair exists.

Step 2: Compute z ∈ (F_q ∪ {?})^n as follows: if a_i = 0, then z_i = ?, else z_i = y_i.

Step 3: Output the result of performing erasure decoding on z for the code C, if this results in a unique codeword.
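Step 1 is again just a homogeneous linear system. Writing a = αG_A and b = βG_B for generator matrices G_A, G_B and unknown coefficient vectors α, β, the constraint a_i y_i = b_i contributes one linear equation per coordinate. A sketch of the setup (my own naming; any F_q nullspace routine, such as the one sketched in Section 1.1, then finishes Step 1):

    def step1_system(GA, GB, y, p):
        """One row per coordinate i: (sum_j alpha_j GA[j][i]) y_i - (sum_j beta_j GB[j][i]) = 0."""
        rows = []
        for i in range(len(y)):
            rows.append([g[i] * y[i] % p for g in GA] + [-g[i] % p for g in GB])
        return rows

    def coefficients_to_vectors(sol, GA, GB, p):
        """Turn a nullspace solution (alpha, beta) back into a = alpha GA and b = beta GB."""
        alpha, beta = sol[:len(GA)], sol[len(GA):]
        n = len(GA[0])
        a = tuple(sum(x * g[j] for x, g in zip(alpha, GA)) % p for j in range(n))
        b = tuple(sum(x * g[j] for x, g in zip(beta, GB)) % p for j in range(n))
        return a, b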
As usual we argue efficiency first and correctness later. Efficiency is obvious for Step 2. For Step 3, recall that we observed in the last lecture that erasure decoding can be done in O(n^3) time for every linear code. So it suffices to argue the efficiency of Step 1. We do so by claiming that this is the task of finding a solution to a linear system. Note that we are searching for unknowns a_1, ..., a_n, b_1, ..., b_n ∈ F_q. The condition that a ∈ A places linear constraints on a_1, ..., a_n. Similarly, the condition b ∈ B turns into linear constraints on b_1, ..., b_n. Also, the constraint a_i y_i = b_i is linear in a_i and b_i (since y_i is fixed). Lastly, the condition (a, b) ≠ (0, 0) is just asking for a non-zero solution to this linear system. So the task at hand is that of finding a non-trivial solution to a homogeneous linear system. Again, this can be done efficiently, in O(n^3) time. It remains to prove the correctness of the above algorithm, and we do so next.


1.6 Correctness

The proof of correctness goes through the usual steps. We assume below that there exists a codeword c ∈ C that is close to (within a Hamming distance of e of) the received vector y. We fix this codeword c and use it to argue correctness.

Below we argue that a solution pair (a, b) to Step 1 does exist. We then argue that any solution pair (a′, b′) to Step 1 satisfies a′ ⋆ c = b′. Next we show that for any pair a′ ∈ A and b′ ∈ B there is at most one c′ ∈ C such that a′ ⋆ c′ = b′. The correctness follows by noticing that the codeword c′ output by the algorithm satisfies a′ ⋆ c′ = b′, if (a′, b′) is the solution found in Step 1. Details below.
Claim 4 There exists a pair (a, b) as required in Step 1. Furthermore, they satisfy a ⋆ c = b.

Proof By peeking back at the analogous claim in the Reed-Solomon decoder, we realize we want a to be the "error-locator", i.e., satisfying a_i = 0 if c_i ≠ y_i. Can we find such a vector that is non-zero? It turns out we can, if we know A has sufficiently large dimension. In particular, the constraints a_i = 0 give e homogeneous linear constraints on a_1, ..., a_n. Since A has dimension e + 1 or larger (Condition (2) of the definition of an error-correcting pair), it contains a non-zero vector that satisfies all these constraints!

Now take a to be any non-zero vector satisfying a_i = 0 if c_i ≠ y_i. Take b to be the vector a ⋆ c. Note that b ∈ B since A ⋆ C ⊆ B. Furthermore, for every i, we have either c_i = y_i, and so b_i = a_i c_i = a_i y_i, or we have a_i = 0, and hence b_i = a_i c_i = 0 = a_i y_i. Furthermore, the pair is non-zero since a ≠ 0. This concludes the proof of the claim.

Claim 5 If a′, b′ are any pair of solutions to Step 1, then a′ ⋆ c = b′.

Proof Since a′, b′ are outputs of Step 1, they satisfy the condition a′ ⋆ y = b′. Let a′ ⋆ c = b. To prove the claim we need to show b′ = b. Since A ⋆ C ⊆ B, we know that b ∈ B. Therefore, b′ and b are two codewords of B that agree on every coordinate i for which c_i = y_i (b′_i = a′_i y_i and b_i = a′_i c_i). But c_i = y_i on at least n − e coordinates, and so Δ(b′, b) ≤ e. But Δ(B) > e (Condition (3) of the definition of an error-correcting pair), implying b′ = b as required.

Claim 6 For any (a′, b′) ∈ A × B \ {(0, 0)} there exists at most one c ∈ C such that a′ ⋆ c = b′.

Proof Let c, c′ ∈ C satisfy a′ ⋆ c = b′ = a′ ⋆ c′. First, let us note that we actually have a′ ≠ 0. (If not, then b′ = a′ ⋆ c′ would also be 0, and this contradicts the condition that together they are non-zero.)

To prove the claim, we wish to show c = c′. Since both are codewords of C, it suffices to show that Δ(c, c′) < Δ(C). But note that c_i = c′_i for every i where a′_i ≠ 0. Further, since a′ ≠ 0, we have a′_i ≠ 0 on at least Δ(A) coordinates. Thus we have Δ(c, c′) ≤ n − Δ(A). But Condition (4) in the definition of an error-correcting pair ensures that n − Δ(A) < Δ(C). Thus we get Δ(c, c′) < Δ(C) as desired.
We can now formally prove the correctness of the decoding algorithm.
Lemma 7 If (A, B) form an e-error-correcting pair for C, and c ∈ C and y ∈ F_q^n satisfy Δ(c, y) ≤ e, then the Abstract decoding algorithm outputs c on input A, B, C, and y.

Proof By Claim 4, we have that there exists a pair (a, b) satisfying the conditions of Step 1. So some such pair (a′, b′) will be found. By Claim 5, c will satisfy a′ ⋆ c = b′. Since a′ ⋆ c = b′ = a′ ⋆ y, we have c_i = y_i whenever a′_i ≠ 0, and thus c is a valid solution to Step 3 of the algorithm. To conclude, we need to ensure that c is the only solution to Step 3 of the algorithm. But this is also clear, since any solution c′ to this step must satisfy a′_i c′_i = a′_i y_i = b′_i for every i, and thus must be a codeword of C satisfying a′ ⋆ c′ = b′; by Claim 6, c is the unique vector with this property.

We conclude with the following theorem:

Theorem 8 Any code C that has an e-error-correcting pair has an efficient (O(n^3) time) algorithm solving the bounded distance decoding problem for up to e errors.

1.7 Applications

Exercise: Verify that the Welch-Berlekamp algorithm from the last lecture is an instantiation of the Abstract decoding algorithm given today.

We now move on to a more interesting application. Recall the construction of algebraic-geometry codes. (To be more precise, recall that we know very little about them, so there is not much to recall.)
Algebraic-geometry codes. These codes were constructed by finding n points in F_q^m and evaluating all polynomials of "order" at most ℓ at all n places. The following properties of order were used in asserting that these gave good codes:

1. There exists an integer g such that for every ℓ, the evaluations of polynomials of order at most ℓ form a subspace of F_q^n of dimension at least ℓ − g + 1.

2. Two distinct polynomials of order at most ℓ can agree on at most ℓ out of the n evaluation points.

3. The product of two polynomials of order ℓ_1 and ℓ_2 has order at most ℓ_1 + ℓ_2.

These properties suffice to prove that the codes obtained by the evaluation of polynomials of order at most n − d give an [n, k, d]_q code for some k ≥ n − d − g + 1. As we see below, the same properties also give us a ⌊(n − g − ℓ − 1)/2⌋-error-correcting pair for these codes.

Lemma 9 If C is an algebraic-geometry code obtained by evaluating polynomials of order at most ℓ, then it has a ⌊(n − g − ℓ − 1)/2⌋-error-correcting pair.

Proof Let e = ⌊(n − g − ℓ − 1)/2⌋. Below all references to the "Conditions" are to the four conditions in the definition of an e-error-correcting pair.

To get an e-error-correcting pair, we need dim(A) > e (to satisfy Condition 2). We will pick A to be the algebraic-geometry code obtained by evaluations of all polynomials of order at most e + g. Since we need A ⋆ C ⊆ B, we pick B to be the algebraic-geometry code obtained by all evaluations of polynomials of order at most e + g + ℓ, and thus satisfy Condition 1. To satisfy Condition 3, we need Δ(B) > e. We know from the properties of algebraic-geometry codes that B has distance at least n − e − g − ℓ. From the choice of e, it follows that n − e − g − ℓ > e. Finally, to get Condition 4, we need to verify that Δ(A) + Δ(C) > n. Since Δ(A) ≥ n − e − g and Δ(C) ≥ n − ℓ, this amounts to verifying that 2n − e − g − ℓ > n, which is equivalent to verifying that e < n − ℓ − g. But in fact e is less than half the RHS. Thus (A, B) as chosen above give an e-error-correcting pair for C.

As a corollary we see that we get a pretty decent decoding algorithm for algebraic-geometry codes, correcting about (n − ℓ − g)/2 errors. However, it does not decode up to half the minimum distance, since the distance is n − ℓ and we are only correcting (n − ℓ − g)/2 errors. Later in the course we will see a better algorithm.

Before concluding this section, we mention one more case where the abstract decoding algorithm provides an inspiration for a decoding algorithm. This is the case of the Chinese Remainder Codes, described next.

Chinese Remainder Codes. The Chinese Remainder Codes are number-theoretic codes defined as follows:
• Fix primes p_1, ..., p_n such that p_1 < p_2 < ··· < p_n.

• A message is an integer m ∈ [0, ..., K − 1], where K = ∏_{i=1}^{k} p_i.

• The encoding of m is the vector ⟨m (mod p_1), ..., m (mod p_n)⟩.
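As a small illustration (mine, not from the notes): the encoding map, and the Chinese Remainder reconstruction showing that k of the residues already determine m.

    from math import prod

    def crt_encode(m, primes):
        """The codeword <m mod p_1, ..., m mod p_n>."""
        return [m % p for p in primes]

    def crt_recover(residues, moduli):
        """Standard CRT reconstruction from residues modulo pairwise coprime moduli."""
        M = prod(moduli)
        x = 0
        for r, p in zip(residues, moduli):
            Mi = M // p
            x = (x + r * Mi * pow(Mi, -1, p)) % M     # pow(Mi, -1, p): inverse of Mi mod p (Python 3.8+)
        return x

    primes = [2, 3, 5, 7, 11]                         # p_1 < ... < p_n with n = 5
    k, m = 3, 23                                      # messages live in [0, 2*3*5 - 1]
    codeword = crt_encode(m, primes)                  # [1, 2, 3, 2, 1]
    assert crt_recover(codeword[:k], primes[:k]) == m # e.g. the first k residues suffice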

By the Chinese Remainder Theorem, the residues of m modulo any k of the n primes suffice to specify m. Thus specifying its residues modulo n primes makes the above a redundant encoding. This is not one of our usual algebraic codes; in fact it is not even linear! However, one can apply our usual notions of distance, error-detection and correction to this code. We note the code has distance n − k + 1 (since specifying the residues modulo k primes specifies the message). So one can ask the question: is it possible to correct (n − k)/2 errors? The first decoding algorithm was given by Mandelbaum [8]. Later work of Goldreich, Ron, and Sudan [2] showed how to interpret this algorithm as a relative of the abstract decoding algorithm given here. It turns out that both algorithms correct slightly less than (n − k)/2 errors in polynomial time; this was fixed later by Guruswami, Sahai, and Sudan [3] using the algorithm of [2] in combination with an algorithm known as the "Generalized Minimum Distance Algorithm" that we will talk about in the next section.
2 Decoding Concatenated Codes

We now move on to an elegant solution for the unambiguous decoding problem for some families of concatenated codes. Let us start by recalling concatenated codes.
Figure 1: Encoding in concatenated codes. The message m is encoded by E1 into ⟨x_1, ..., x_n⟩, and each x_i is then encoded by E2 into y_i.

Figure 2: Decoding concatenated codes. Each received block r_i is decoded by the inner decoder to u_i; the vector ⟨u_1, ..., u_n⟩ is then decoded by the outer decoder to m′.


Concatenated Codes. Given an [n, k, d]_Q outer code C1 with encoding function E1 and an [n2, k2, d2]_q inner code C2 with encoding function E2, where Q = q^{k2}, their concatenation is the [nn2, kk2, dd2]_q code obtained as follows: Start with a message m ∈ F_Q^k and encode it using E1 to get a vector x = ⟨x_1, ..., x_n⟩, where x_i ∈ F_Q. Now, viewing the x_i's as elements of F_q^{k2}, encode them using E2 to get y = ⟨y_1, ..., y_n⟩, where y_i ∈ F_q^{n2}, as the encoding of m. (See also Figure 1.)
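The encoding map is just the wiring shown in Figure 1; here is a sketch in which the two encoders are passed in as functions (the toy repetition encoders in the example are mine, purely for illustration):

    def concat_encode(m, E1, E2):
        """Outer-encode m with E1, then inner-encode each outer symbol with E2."""
        x = E1(m)                        # x = <x_1, ..., x_n>, symbols over F_Q
        return [E2(xi) for xi in x]      # y = <y_1, ..., y_n>, blocks in F_q^{n2}

    # toy example: 3-fold repetition outside, 2-fold repetition inside
    print(concat_encode(5, lambda m: [m, m, m], lambda s: [s, s]))   # [[5, 5], [5, 5], [5, 5]]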
Getting some reasonable algorithms for decoding concatenated codes is not so hard, under some reasonable assumptions. Such an algorithm would take a received vector r = ⟨r_1, ..., r_n⟩, where r_i ∈ F_q^{n2}, and decode it in two steps, inverting the encoding steps. So first it would decode the r_i's individually to their nearest codewords. For i ∈ [n], let E2(u_i) be the codeword of C2 nearest to r_i, where we view u_i ∈ F_Q as an element of F_q^{k2}. Now we treat u = ⟨u_1, ..., u_n⟩ as a corrupted codeword of C1 and decode it using a decoding algorithm for E1. (See Figure 2.)

First, let us note that for the concatenated codes we have considered so far (Forney codes and Justesen codes), the above is actually efficient. Recall that in these applications the outer code is a well-studied code, such as the Reed-Solomon code, with efficient decoding algorithms. On the other hand, the inner code is not well-understood: the only thing we know about it is that it has good minimum distance properties. So it is not reasonable to expect a sophisticated decoding algorithm for the inner code. But then the inner codes are so small that brute-force decoding only takes polynomial time in the length of the concatenated code, so we don't need a sophisticated inner decoding algorithm. So the entire decoding process described above takes time polynomial in the length of the concatenated code, for the typical codes.

However, this doesn't give us an algorithm decoding up to half the minimum distance of the concatenated code! It may only decode dd2/4 errors. We won't prove that it can correct so many errors, or that it can't correct more. But to see a plausibility argument, note that to get a decoding failure, the adversary only has to ensure that d/2 of the symbols u_i ≠ x_i, and to get any such decoding error for the inner decoder it may only need to flip d2/2 symbols of the inner alphabet. Thus a total of dd2/4 errors may lead to a decoding failure. Below we will show a clever algorithm which gets around this problem with relatively little extra information about the outer decoding algorithm. This algorithm is the Generalized Minimum Distance (GMD) Decoding Algorithm, due to Forney [5] (see also [4]).
Exercise: Prove that the decoding algorithm outlined above does indeed correct at least (d − 1)(d2 − 1)/4 errors.

2.1 Decoding errors and erasures

Let us start by looking at the decoding algorithm we have for the outer code. We already seem to be making full use of it: we assume it can correct (d − 1)/2 errors, and it can't possibly correct more errors unambiguously. It turns out there is one additional feature of the decoding algorithm for the outer code that comes in quite handy. This is the feature that it can deal with erasures quite naturally and benefit from them.
Proposition 10 Let C be an [n, n − d + 1, d]_q Reed-Solomon code. Suppose r ∈ (F_q ∪ {?})^n is a vector derived from a codeword c ∈ C by s erasures and t errors, i.e., |{i | r_i = ?}| = s and |{i | r_i ≠ c_i and r_i ≠ ?}| = t. Then c can be computed efficiently given r, s, t, provided s + 2t < d.

Proof The proposition is straightforward, given the observation that the code C′, obtained by puncturing C on the s coordinates where there are erasures, is an [n − s, n − d + 1, d − s] Reed-Solomon code, and thus (d − s − 1)/2 errors can be corrected efficiently.
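In code, the reduction in this proof is simply: drop the erased coordinates and hand what remains to a plain Reed-Solomon errors-only decoder for the punctured parameters. In the sketch below, rs_decode is a placeholder for such a decoder (an assumption of mine, not something defined in the notes), and erasures are marked with None.

    def errors_and_erasures_decode(xs, r, d, rs_decode):
        """xs are the evaluation points; r[i] is None where coordinate i was erased."""
        kept = [(x, ri) for x, ri in zip(xs, r) if ri is not None]
        s = len(r) - len(kept)                   # number of erasures
        xs_punct, r_punct = zip(*kept)
        t = (d - s - 1) // 2                     # errors correctable in the punctured code
        return rs_decode(list(xs_punct), list(r_punct), t)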

How can we use the option to declare erasures? In the process of decoding the inner code, we ignored some obvious information that was available to us. We not only could find out the encoding of which message word u_i was closest to the i-th received block r_i, but we also know how many errors have (potentially) occurred in each block. Somehow we should make use of this information. A natural idea is to declare blocks with a large number of errors to be erasures. This works out to be roughly correct. We will see that a simple probabilistic interpretation of the distances gives a right strategy for declaring erasures.
2.2 A randomized decoding algorithm

Let us fix some notation that we have been introducing as we went along. The message vector is m, its encoding under the outer code is x = ⟨x_1, ..., x_n⟩, where x_i ∈ F_Q ≅ F_q^{k2}. The encoding of x_i under the inner code is y_i, and thus the final codeword is y = ⟨y_1, ..., y_n⟩. The noisy channel corrupts y and a vector r = ⟨r_1, ..., r_n⟩ is received. We now describe the decoding algorithm for this code.
Random-Concat-Decoder

Given: r = ⟨r_1, ..., r_n⟩.

Step 1: For i ∈ [n], compute u_i that minimizes Δ(E2(u_i), r_i).

Step 2: Set e′_i = min{d2/2, Δ(E2(u_i), r_i)}.

Step 3: For every i ∈ [n] repeat the following: with probability 2e′_i/d2, set v_i = ?, else set v_i = u_i.

Step 4: Perform errors and erasures decoding of the vector v = ⟨v_1, ..., v_n⟩.

Step 5: If a vector m′ ∈ F_Q^k is obtained in Step 4, check to see if Δ((E2 ∘ E1)(m′), r) < dd2/2 and if so output m′.
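Steps 1–3 are easy to write out directly, with the inner decoding done by brute force over the (small) inner code. In the sketch below, inner_codewords is a dictionary mapping each inner message to its codeword and erased positions are marked with None; the names, and the omission of the outer decoder of Step 4, are my own simplifications.

    import random

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def randomized_inner_decode(r_blocks, inner_codewords, d2):
        v = []
        for ri in r_blocks:
            # Step 1: brute-force nearest inner codeword
            ui = min(inner_codewords, key=lambda u: hamming(inner_codewords[u], ri))
            # Step 2: the clipped distance e'_i
            ei = min(d2 / 2, hamming(inner_codewords[ui], ri))
            # Step 3: erase with probability 2 e'_i / d2, else keep u_i
            v.append(None if random.random() < 2 * ei / d2 else ui)
        return v      # v is then fed to the outer errors-and-erasures decoder (Step 4)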
The above algorithm clearly works in time that is O(nn2Q + T(n)), where T(n) is the time taken by the errors and erasures decoding algorithm. The main issue is how many errors it corrects. We will claim that it "essentially" solves the unambiguous decoding problem for the concatenated code. We do so via a claim that is weak in that it only shows that the expected number of errors and erasures falls within the decoding capability of the outer code.

Lemma 11 Let e_i = Δ(r_i, y_i) and e = ∑_{i=1}^{n} e_i. When v is picked at random by Random-Concat-Decoder, then we have:

    Exp[(# erasures in v) + 2 · (# errors in v)] ≤ 2e/d2.

Remark: Note that if e < dd2/2, the RHS above is less than d, as one would hope for.

Proof Note that it suffices to argue that for every i,

    Pr[erasure in i-th coordinate] + 2 · Pr[error in i-th coordinate] ≤ 2e_i/d2,        (1)

and then the lemma will follow by linearity of expectations. We prove this in cases.

Case 1: u_i = x_i. In this case, the probability of an error in the i-th coordinate is 0 and the probability of an erasure in the i-th coordinate is 2e′_i/d2. Since u_i = x_i, we also have e′_i ≤ e_i, and so we find that the LHS of (1) is at most 2e_i/d2.

Case 2: u_i ≠ x_i. In this case, we could have an error in the i-th coordinate: this happens with probability 1 − 2e′_i/d2, while with probability 2e′_i/d2 we get an erasure. We need to express these quantities as a function of e_i (rather than e′_i), and so we note that e_i ≥ d2 − e′_i. (If Δ(r_i, E2(u_i)) ≤ d2/2, this follows from Δ(E2(x_i), r_i) ≥ Δ(E2(x_i), E2(u_i)) − Δ(r_i, E2(u_i)) ≥ d2 − e′_i; otherwise e′_i = d2/2 and e_i ≥ Δ(r_i, E2(u_i)) > d2/2, since u_i minimizes this distance.) Hence the LHS of (1) equals 2(1 − 2e′_i/d2) + 2e′_i/d2 = 2 − 2e′_i/d2 ≤ 2 − 2(d2 − e_i)/d2 = 2e_i/d2, so again the LHS is bounded by 2e_i/d2.

The above lemma implies that there is positive probability associated with the event that v has a small number of errors and erasures. However, we did not prove a high probability result. We won't do so; instead we will just derandomize the algorithm above to get a deterministic algorithm for the unambiguous decoding problem.
2.3 Deterministic Decoder for Concatenated Codes

We will develop the deterministic decoder in two (simple) steps. First, note that the random choices in Step 3 of Random-Concat-Decoder do not need to be independent: the expectation bound (Lemma 11) also holds if these events are completely dependent. So, in particular, the following algorithm would work as well.
Modified-Random-Concat-Decoder

Given: r = ⟨r_1, ..., r_n⟩.

Step 1: For i ∈ [n], compute u_i that minimizes Δ(E2(u_i), r_i).

Step 2: Set e′_i = min{d2/2, Δ(E2(u_i), r_i)}.

Step 3.1: Pick p ∈ [0, 1] uniformly at random.

Step 3.2: For every i ∈ [n], if 2e′_i/d2 > p, set v_i = ?, else set v_i = u_i.

Step 4: Perform errors and erasures decoding of the vector v = ⟨v_1, ..., v_n⟩.

Step 5: If a vector m′ ∈ F_Q^k is obtained in Step 4, check to see if Δ((E2 ∘ E1)(m′), r) < dd2/2 and if so output m′.
As in the analysis of Random-Concat-Decoder, we see that the random variable v as defined in Steps 3.1 and 3.2 above satisfies the condition:

    Exp[(# erasures in v) + 2 · (# errors in v)] ≤ 2e/d2.

In particular, there exists a choice of p in Step 3.1 such that, for this choice of p, the vector v obtained in Step 3.2 satisfies the condition:

    (# erasures in v) + 2 · (# errors in v) ≤ 2e/d2.

The only interesting choices of p are from the set S = {0, 1} ∪ {2e′_i/d2 | i ∈ [n]}, since for every other choice of p there is some p′ ∈ S for which the vector v obtained is identical for p′ and p. This gives us the deterministic algorithm below:


Deterministic-Concat-Decoder

Given: r = ⟨r_1, ..., r_n⟩.

Step 1: For i ∈ [n], compute u_i that minimizes Δ(E2(u_i), r_i).

Step 2: Set e′_i = min{d2/2, Δ(E2(u_i), r_i)}.

Step 3.1: Let S = {0, 1} ∪ {2e′_i/d2 | i ∈ [n]}. Repeat Steps 3.2 to 5 for every choice of p ∈ S.

Step 3.2: For every i ∈ [n], if 2e′_i/d2 > p, set v_i = ?, else set v_i = u_i.

Step 4: Perform errors and erasures decoding of the vector v = ⟨v_1, ..., v_n⟩.

Step 5: If a vector m′ ∈ F_Q^k is obtained in Step 4, check to see if Δ((E2 ∘ E1)(m′), r) < dd2/2 and if so output m′.
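The derandomized Steps 3.1–3.2 amount to a loop over the O(n) thresholds in S. A sketch follows; the helper names outer_decode and reencode are placeholders of mine, standing for the outer errors-and-erasures decoder and the full concatenated encoder.

    def deterministic_concat_decode(us, dists, d2, d, r, outer_decode, reencode):
        """us[i] and dists[i] are u_i and e'_i from Steps 1 and 2; r is the received word."""
        S = {0.0, 1.0} | {2 * e / d2 for e in dists}          # Step 3.1: the candidate thresholds
        for p in sorted(S):
            v = [None if 2 * e / d2 > p else u for u, e in zip(us, dists)]   # Step 3.2
            m = outer_decode(v)                               # Step 4: errors-and-erasures decoding
            if m is not None and sum(a != b for a, b in zip(reencode(m), r)) < d * d2 / 2:
                return m                                      # Step 5: final distance check
        return None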
The running time of this algorithm is O(nn2Q + nT(n)), where T(n) is the running time of the errors and erasures decoder of the outer code. Correctness follows from the arguments developed so far. We summarize the discussion with the following theorem:
Theorem 12 Let C be the concatenation of an [n, k, d]_Q outer code C1 and an [n2, k2, d2]_q inner code C2 with Q = q^{k2}. Suppose C1 has an "errors and erasures decoding algorithm" running in time T(n), decoding up to s erasures and t errors provided s + 2t < d. Then C has an unambiguous decoding algorithm running in time O(nn2Q + nT(n)).

Again, the running times can be improved with some effort. In particular, Kotter [7] shows how to cut down the running time to just O(nn2Q + T(n)) for some families of concatenated codes.

References

[1] Iwan M. Duursma. Decoding Codes from Curves and Cyclic Codes. PhD thesis, Eindhoven University of Technology, 1993.

[2] Oded Goldreich, Dana Ron, and Madhu Sudan. Chinese remaindering with errors. IEEE Transactions on Information Theory, 46(5):1330–1338, July 2000. Extended version appears as ECCC Technical Report TR98-062 (Revision 4), http://www.eccc.uni-trier.de/eccc.

[3] Venkatesan Guruswami, Amit Sahai, and Madhu Sudan. Soft-decision decoding of Chinese Remainder codes. In Proceedings of the 41st IEEE Symposium on Foundations of Computer Science, pages 159–168, Redondo Beach, California, 12–14 November 2000.

[4] G. David Forney Jr. Concatenated Codes. MIT Press, Cambridge, MA, 1966.

[5] G. David Forney Jr. Generalized minimum distance decoding. IEEE Transactions on Information Theory, 12(2):125–131, April 1966.

[6] Ralf Kotter. A unified description of an error locating procedure for linear codes. In Proceedings of the International Workshop on Algebraic and Combinatorial Coding Theory, pages 113–117, Voneshta Voda, Bulgaria, 1992.

[7] Ralf Kotter. Fast generalized minimum distance decoding of algebraic geometry and Reed-Solomon codes. IEEE Transactions on Information Theory, 42(3):721–737, May 1996.

[8] David M. Mandelbaum. On a class of arithmetic codes and a decoding algorithm. IEEE Transactions on Information Theory, 21(1):85–88, January 1976.

[9] Ruud Pellikaan. On decoding linear codes by error correcting pairs. Preprint, Eindhoven University of Technology, 1988.

