F. Jelinek
Abstract: In this paper a new sequential decoding algorithm is introduced that uses stack storage at the receiver. It is much simpler to describe and analyze than the Fano algorithm, and is about six times faster than the latter at transmission rates equal to R_comp, the rate below which the average number of decoding steps is bounded by a constant. Practical problems connected with implementing the stack algorithm are discussed and a scheme is described that facilitates satisfactory performance even with limited stack storage capacity. Preliminary simulation results estimating the decoding effort and the needed stack size are presented.
1. Introduction
Sequential decoding is a method of communication through noisy channels that uses tree codes (see Fig. 2). Several decoding algorithms have been suggested,^1-3 the one due to Fano being universally acknowledged as the best (see also Sec. 10.4 of Jelinek^4). The first two algorithms have the common characteristic that, when used with appropriate tree codes signaling over memoryless channels, their probability of error decreases exponentially to zero at all transmission rates less than capacity C, while for all rates less than a value called R_comp, the average amount of decoding effort per decoded information bit is bounded above by a constant. R_comp is a function of the channel transmission probabilities only, and exceeds C/2 for all binary symmetric channels. Figure 1 contains the plot of R_comp/C as a function of the crossover probability p.
In contrast, all widely used methods of algebraic coding
achieve arbitrarily reliable performance only when the
transmission rate is sufficiently reduced toward zero. (It
ought to be added, however, that algebraic schemes are
much simpler than sequential ones.) Furthermore, these
methods work only for symmetrical channels with the
same number of outputs as inputs.
In this paper we will introduce a new sequential algorithm that is faster than all competing ones, and that is
very simple to describe and analyze. To realize its speed
advantage without an increase in error probability, it
is necessary to increase by a considerable amount the
memory of the decoder. However, in a suitable environment (e.g., when a general purpose computer is used
NOVEMBER 1969
Figure 1 The graph of R_comp/C as a function of the crossover probability p of a binary symmetric channel.
[Figure 2: a binary tree code; each branch is labeled with a string of binary digits of the underlying channel.]
of n_0 of these that correspond to the tree branches. Accordingly, it will simplify further discussion if we henceforth restrict our attention to the n_0-product channel [w(y|x)] whose input symbols x and output symbols y are strings of n_0 inputs and outputs of the underlying channel [w_0(η|ξ)]. From this point of view, the product channel input alphabet corresponding to the tree code of Fig. 2 is octal, and the message sequence 0011 causes 0532 to be transmitted. Let x* represent some sequence ξ_1*, ..., ξ_{n_0}*, and let y* represent η_1*, ..., η_{n_0}*. Then

w(y*|x*) = ∏_{i=1}^{n_0} w_0(η_i*|ξ_i*).   (1)
For a probability assignment r(x) on the product-channel inputs, define

w_r(y) = Σ_x w(y|x) r(x).   (2)

The likelihood of a path s^i of length i is

L(s^i) = Σ_{j=1}^{i} λ_j(s^j),   (3)

where the branch likelihood is

λ_i(s^i) = log_2 [ w(y_i | x_i(s^i)) / w_r(y_i) ] − n_0 R.   (4)
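As an illustration (the function name and the specialization to a binary symmetric channel are my own, not the paper's), the branch likelihood of Eq. (4) can be evaluated for a BSC with equiprobable inputs, in which case w_r(y) = 1/2 for every output digit:

```python
import math

def branch_metric(y, x, p, R):
    """Branch likelihood of Eq. (4) for a binary symmetric channel with
    crossover probability p and equiprobable inputs (w_r = 1/2 per digit).

    y, x : tuples of n_0 received/transmitted binary digits
    R    : transmission rate in bits per channel digit
    """
    n0 = len(x)
    metric = 0.0
    for yi, xi in zip(y, x):
        w = 1 - p if yi == xi else p        # w_0(y_i | x_i)
        metric += math.log2(w / 0.5)        # log2 w_0(y|x) - log2 w_r(y)
    return metric - n0 * R                  # subtract n_0 R as in Eq. (4)
```

Agreeing digits raise the metric and disagreeing digits lower it, so correct paths tend to gain likelihood and incorrect paths to lose it.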
(1) Compute L(0) = λ_1(0) and L(1) = λ_1(1), the likelihoods of the two branches leaving the root node (see Fig. 2), and place them into the decoder's memory.
(2) If L(0) ≥ L(1), eliminate L(0) from the decoder's memory and compute L(00) = L(0) + λ_2(00) and L(01) = L(0) + λ_2(01). Otherwise, eliminate L(1) and compute L(10) = L(1) + λ_2(10) and L(11) = L(1) + λ_2(11). Therefore we end with the likelihoods of three paths in the decoder's memory, two paths of length 2 and one of length 1.
(3) Arrange the likelihoods of the three paths in decreasing order. Take the path corresponding to the topmost likelihood, compute the likelihoods of its two possible one-branch extensions, and eliminate from memory the likelihood of the just extended path [e.g., if L(0), L(10), L(11) are in the memory and, say, L(10) ≥ L(0) ≥ L(11), then L(10) is replaced in the memory by the newly computed values of L(100) = L(10) + λ_3(100) and L(101) = L(10) + λ_3(101). In case, say, L(0) ≥ L(11) ≥ L(10), then L(0) is replaced by L(00) and L(01)]. At the end of this step the decoder's memory will contain the likelihoods of four paths, and either two of these will be of length 3 and one each of lengths 1 and 2, or all four paths will be of length 2.
(4) The search pattern is now clear. After the kth step, the decoder's memory will contain exactly k + 1 likelihoods corresponding to paths of various lengths and different end nodes. The (k + 1)th step will consist of finding the largest likelihood in the memory, determining the path to which it corresponds, and replacing that likelihood with the likelihoods corresponding to the two one-branch extensions of that path.
(5) The decoding process terminates when the path to be extended next is of full length, i.e., leads from the root node to the last level of the tree.
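Steps (1)–(5) can be sketched in a few lines; this is my own sketch, not the paper's implementation, with a binary heap standing in for the ordered list of likelihoods and `metric` standing for any branch-likelihood function such as Eq. (4):

```python
import heapq

def stack_decode(metric, depth):
    """Sketch of steps (1)-(5): repeatedly extend the most likely path.

    metric(path) is the branch likelihood of the last branch of `path`
    (a tuple of bits); `depth` is the length of the tree.
    """
    # heapq is a min-heap, so likelihoods are stored negated; the heap
    # plays the role of the decoder's ordered memory of path likelihoods.
    stack = [(-metric((b,)), (b,)) for b in (0, 1)]        # step (1)
    heapq.heapify(stack)
    while True:
        neg_L, path = heapq.heappop(stack)                 # steps (3)-(4)
        if len(path) == depth:                             # step (5)
            return path
        for b in (0, 1):                                   # two extensions
            ext = path + (b,)
            heapq.heappush(stack, (neg_L - metric(ext), ext))
```

Each iteration removes one likelihood and inserts two, so after the kth step the memory holds exactly k + 1 entries, as noted in step (4).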
[Worked example: the successive stack contents (node numbers) during decoding, beginning with node 0 and ending with 12, 13, 14, 7, 4.]
The error probability of the stack algorithm can be bounded as follows. Define, for 0 ≤ γ ≤ 1 and a probability assignment r(x),

E_0(γ, r) = −log_2 Σ_y [ Σ_x w(y|x)^{1/(1+γ)} r(x) ]^{1+γ}.   (5)

The probability P_e(i) that the correct path is eliminated in favor of an incorrect path diverging at level i satisfies

P_e(i) ≤ Pr[ L(s_i') ≥ min_{i<m≤Γ} L(s^m) for some s_i' ∈ D_i ]   (6)

≤ Σ_{s_i'∈D_i} Σ_{m=i+1}^{Γ} Pr[ L(s_i') ≥ L(s^m) ],   (9)

where D_i denotes the set of incorrect paths stemming from level i of the correct path, and each term of (9) is bounded by a Chernoff bound with parameter γ that leads to the exponent E_0(γ, r) of Eq. (5) [Eqs. (7) and (8)]. The overall probability of error then satisfies

P_e ≤ Σ_i P_e(i).   (10)

Finally, the rate R_comp is given by

R_comp = (1/n_0) max_r E_0(1, r).   (11)
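For a binary symmetric channel with equiprobable inputs, evaluating Eq. (11) with n_0 = 1 gives the closed form R_comp = 1 − log_2(1 + 2√(p(1−p))). The short script below (my own, not from the paper) uses this to check the Sec. 1 claim that R_comp exceeds C/2, and could regenerate the curve of Fig. 1:

```python
import math

def r_comp_bsc(p):
    """R_comp of a BSC with crossover probability p, from Eq. (11) with
    n_0 = 1 and equiprobable inputs: E_0(1, r) = 1 - log2(1 + 2 sqrt(p(1-p)))."""
    return 1 - math.log2(1 + 2 * math.sqrt(p * (1 - p)))

def capacity_bsc(p):
    """Channel capacity C = 1 - H(p) of the binary symmetric channel."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

# As stated in the Introduction, R_comp exceeds C/2 for every BSC.
for p in (0.0001, 0.01, 0.1, 0.4):
    assert r_comp_bsc(p) > capacity_bsc(p) / 2
```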
[Simulation plot: average number of decoding steps (1000 to 9000) versus R/R_comp (0.7 to 1.0); one curve is labeled "16 branches per node".]
(1) With each path likelihood L(s^i) also store the order j (by size) of the likelihood λ_j = λ_i(s^i) [see Eq. (4)] of the last branch s_i of the path [where L(s^i) = L(s^{i-1}) + λ_j, s^i = (s^{i-1}, s_i), and λ_1 ≥ λ_2 ≥ ... ≥ λ_d].
(2) If L(s^i) is found at the top of the stack, replace it by the likelihood L(s^{i+1}), where s^{i+1} has the largest likelihood of all branches leaving s^i. If the order j of the likelihood of the branch s_i is less than d, also insert into the stack the likelihood L(s̃^i) = L(s^{i-1}) + λ_{j+1} corresponding to the path through the (j + 1)st-order branch leaving the terminal node of s^{i-1}. If j = d, do not insert any additional path.
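This modification can be sketched as follows (my own names; a heap again stands in for the stack, and `sorted_extensions` is an assumed helper returning the d branch extensions in decreasing order of likelihood increment). Only the best extension is inserted, and the (j + 1)st-order sibling is materialized later from L(s^{i-1}) = L(s^i) − λ_j:

```python
import heapq

def lazy_stack_decode(sorted_extensions, depth, d):
    """Stack decoding with lazy sibling insertion, per steps (1)-(2).

    sorted_extensions(path) -> list of d (likelihood increment, branch)
    pairs for the extensions of `path`, sorted in decreasing order.
    """
    lam, b = sorted_extensions(())[0]
    heap = [(-lam, (b,), 1)]                 # entries: (-L, path, order j)
    while True:
        neg_L, path, j = heapq.heappop(heap)
        if len(path) == depth:
            return path
        if j < d:                            # insert the (j+1)st sibling
            exts = sorted_extensions(path[:-1])
            lam_j, _ = exts[j - 1]
            lam_next, b_next = exts[j]
            # L(sibling) = L(path) - lambda_j + lambda_{j+1}
            heapq.heappush(
                heap, (neg_L + lam_j - lam_next, path[:-1] + (b_next,), j + 1))
        lam_best, b_best = sorted_extensions(path)[0]      # best child
        heapq.heappush(heap, (neg_L - lam_best, path + (b_best,), 1))
```

Each extension now inserts at most two entries instead of d, which is the memory saving the modification is after.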
[Worked example: the successive stack states for the modified algorithm, from state: 0 to state: 8, 11, 13, 4.]
branch (not bit!) had virtually the same value for both
versions of the stack algorithm at all values of the crossover probability p.
7. Handling of stack overflows
Figure 8 Flow chart of the combined stack-Fano decoding algorithm; notation as in Ref. 4.
stack address of the previously entered path whose likelihood has an integral part equal to the integral part of L(g) [if no such path exists, P(g) = 0].
The appropriate parameter values are maintained in the two storages as follows: Suppose the path at location g of the stack is to be extended (and therefore eliminated from the stack) and suppose l equals the integral part of the likelihood L(g). Then the entry G(l) of the auxiliary stack will be set equal to the pointer P(g). Next, suppose the path s' of likelihood L' is to be inserted into location g' of the stack. If l' is the integral part of L', then P(g') is set equal to the current entry G(l') of the auxiliary stack, whereupon G(l') is reset equal to g'. Finally, L(g')
[Table: example stack and auxiliary-stack contents after Steps 4 and 6; each entry lists a path (000, 100, 110, 111, 101, 01, 001) with its likelihood and pointer.]
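The pointer scheme just described amounts to one linked list per integral part of the likelihood, headed by G(l) and chained through P. A sketch with invented names (and floor taken as the "integral part", an assumption on my part):

```python
import math

class BucketedStack:
    """Sketch of the auxiliary-stack pointer scheme: stack locations are
    chained through pointers P into one linked list per integral part l
    of the likelihood, and G(l) heads each list."""

    def __init__(self):
        self.L = {}       # stack location g -> likelihood L(g)
        self.P = {}       # g -> previously entered location with the
                          #      same integral part (0 if none)
        self.G = {}       # integral part l -> most recent location g
        self.next_g = 1   # location 0 is reserved as the null pointer

    def insert(self, likelihood):
        """Insert a path of the given likelihood at a fresh location g':
        P(g') := G(l'), then G(l') := g'."""
        g = self.next_g
        self.next_g += 1
        l = math.floor(likelihood)       # integral part of L'
        self.L[g] = likelihood
        self.P[g] = self.G.get(l, 0)
        self.G[l] = g
        return g

    def extend(self, g):
        """Extend (and therefore eliminate) the path at location g,
        assumed to head its list: G(l) := P(g)."""
        l = math.floor(self.L.pop(g))
        assert self.G.get(l, 0) == g
        self.G[l] = self.P.pop(g)
```

Insertion and removal each touch only a head pointer, so the stack never needs to be kept fully sorted.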
necessary by path purges of the stack, this may be accomplished best by adding another register C*(j) at the various map locations. C*(j) will at all times be equal to the number ν of pointers pointing to the map location j. Suppose the entry g is to be purged from the stack and j_1 = Q(g). Then the value of C*(j_1) is lowered by 1. If the new value is not equal to 0, nothing further is done; if it is equal to 0 and j_2 = Q*(j_1), then C*(j_2) is lowered by 1 and the j_1 map location is made available to a pool for new refilling. The new value of C*(j_2) is similarly compared with 0 and, if it is 0, then C*(j_3) [j_3 = Q*(j_2)] is in turn lowered by 1 and the j_2 map location becomes available, etc.
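The cascading release described above is ordinary reference counting; a sketch with invented names:

```python
def purge(j, refcount, Q_star, free_pool):
    """Cascading purge as described above: refcount plays the role of
    C*, Q_star of Q*.  Decrement C*(j); whenever a count reaches zero,
    release that map location to the pool and continue with its
    predecessor Q*(j).  Names are mine, not the paper's."""
    while j is not None:
        refcount[j] -= 1
        if refcount[j] != 0:
            return               # location still referenced; stop here
        free_pool.append(j)      # map location becomes available again
        j = Q_star.get(j)        # j_2 = Q*(j_1), j_3 = Q*(j_2), ...
```

A purge therefore costs time proportional only to the number of locations actually freed.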
[Table: node-map contents (ν*, Q*, C*) at Steps 1 through 6 of the purge example.]
5. F. Jelinek, "An Upper Bound on Moments of Sequential Decoding Effort," IEEE Trans. Information Theory IT-15, 140 (1969).
6. F. Jelinek, "A Stack Algorithm for Faster Sequential Decoding of Transmitted Information," IBM Research Report RC 2441, April 15, 1969.
7. I. M. Jacobs and E. Berlekamp, "A Lower Bound to the Distribution of Computation for Sequential Decoding," IEEE Trans. Information Theory IT-13, 167 (1967).
8. J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, John Wiley & Sons, Inc., New York, 1965.
9. D. Falconer, "A Hybrid Sequential and Algebraic Decoding Scheme," Sc.D. thesis, Dept. of Elec. Eng., M.I.T., 1966.
10. F. L. Huband and F. Jelinek, "Practical Sequential Decoding and a Simple Hybrid Scheme," Intl. Conf. on System Sciences, Hawaii, January 1968.
11. F. Jelinek and J. Cocke, "Adaptive Hybrid Sequential Decoding," submitted to Information and Control.