
Chapter 4.

Iterative Turbo Code Decoder


This chapter describes the basic turbo code decoder. The turbo code decoder is
based on a modified Viterbi algorithm that incorporates reliability values to improve
decoding performance. First, this chapter introduces the concept of reliability for Viterbi
decoding. Then, the metric that will be used in the modified Viterbi algorithm for turbo
code decoding is described. Finally, the decoding algorithm and implementation
structure for a turbo code are presented.

4.1 Principle of the General Soft-Output Viterbi Decoder


The Viterbi algorithm produces the maximum-likelihood (ML) output sequence for
convolutional codes and provides optimal sequence estimation for single-stage
convolutional codes. For concatenated (multistage) convolutional codes, there are two
main drawbacks to conventional Viterbi decoders. First, the inner Viterbi decoder
produces bursts of bit errors, which degrade the performance of the outer Viterbi
decoders [Hag89]. Second, the inner Viterbi decoder produces hard-decision outputs,
which prevent the outer Viterbi decoders from deriving the benefits of soft decisions
[Hag89]. Both of these drawbacks can be reduced, and the performance of the overall
concatenated decoder significantly improved, if the Viterbi decoders are able to produce
reliability (soft-output) values [Ber93a]. The reliability values are passed on to
subsequent Viterbi decoders as a-priori information to improve decoding performance.
This modified Viterbi decoder is referred to as the soft-output Viterbi algorithm
(SOVA) decoder. Figure 4.1 shows a concatenated SOVA decoder.

[Figure 4.1: y → SOVA 1 (a-priori L = 0) → u1, L1 → SOVA 2 → u2, L2, forming the
overall concatenated SOVA decoder]

Figure 4.1: A concatenated SOVA decoder, where y represents the received channel
values, u represents the hard-decision output values, and L represents
the associated reliability values.

Fu-hua Huang Chapter 4. Iterative Turbo Code Decoder 38


4.2 Reliability of the General SOVA Decoder
The reliability of the SOVA decoder is calculated from the trellis diagram as
shown in Figure 4.2.

[Figure 4.2: 4-state trellis over memorization levels MEM = 5, ..., 0 (times t-5, ..., t);
a solid survivor path and a dashed competing path meet at state S1,t with accumulated
metrics Vs(S1,t) and Vc(S1,t)]

Figure 4.2: Example of survivor and competing paths for reliability estimation at
time t [Ber93a].

In Figure 4.2, a 4-state trellis diagram is shown. The solid line indicates the survivor path
(assumed here to be part of the final ML path) and the dashed line indicates the
competing (concurrent) path at time t for state 1. For the sake of brevity, survivor and
competing paths for other nodes are not shown. The label S1,t represents state 1 at time
t. Also, the labels {0,1} shown on each path indicate the estimated binary decision for
the paths. The survivor path for this node is assigned an accumulated metric Vs(S1,t) and
the competing path for this node is assigned an accumulated metric Vc(S1,t). The
fundamental information for assigning a reliability value L(t) to the survivor path of
node S1,t is the absolute difference between the two accumulated metrics,
L(t) = |Vs(S1,t) - Vc(S1,t)| [Ber93a]. The greater this difference, the more reliable the
survivor path. For this reliability calculation, it is assumed that the survivor
accumulated metric is always better than the competing accumulated metric.
Furthermore, to reduce complexity, the reliability values only need to be calculated for
the ML survivor path (assumed known for now); they are unnecessary for the other
survivor paths, since those paths will be discarded later.



To illustrate the concept of reliability, two examples are given below. In these
examples, the Viterbi algorithm selects the survivor path as the path with the smaller
accumulated metric. In the first example, assume that at node S1,t the accumulated
survivor metric Vs(S1,t)=50 and that the accumulated competing metric Vc(S1,t)=100. The
reliability value associated with the selection of this survivor path is L(t)=|50-100|=50. In
the second example, assume that the accumulated survivor metric does not change,
Vs(S1,t)=50, and that the accumulated competing metric Vc(S1,t)=75. The resulting
reliability value is L(t)=|50-75|=25. Although in both of these examples the survivor path
has the same accumulated metric, the reliability value associated with the survivor path is
different. The reliability value in the first example provides more confidence (twice as
much confidence) in the selection of the survivor path than the value in the second
example.

Figure 4.3 illustrates a problem with the use of the absolute difference between
accumulated survivor and competing metrics as a measure of the reliability of the
decision.

[Figure 4.3: the same 4-state trellis; the survivor and competing paths into S1,t
diverge at time t-5 and are annotated with Vs(S0,t-4) = 10, Vc(S0,t-4) = 20,
L(t-4) = 10; Vs(S0,t-2) = 50, Vc(S0,t-2) = 75, L(t-2) = 25; and
Vs(S1,t) = Vc(S1,t) = 100, L(t) = 0]

Figure 4.3: Example that shows the weakness of reliability assignment using metric
values directly.



In Figure 4.3, the survivor and competing paths at S1,t have diverged at time t-5. The
survivor and competing paths produce opposite estimated binary decisions at times t, t-2,
and t-4 as shown in bold labels. For the purpose of illustration, let us suppose that the
survivor and competing accumulated metrics at S1,t are equal, Vs(S1,t) = Vc(S1,t) = 100.
This means that both the survivor and competing paths have the same probability of being
the ML path. Furthermore, let us assume that the survivor accumulated metric is better
than the competing accumulated metric at time t-2 and t-4 as shown in Figure 4.3. To
reduce the figure complexity, these competing paths for times t-2 and t-4 are not shown.
From this argument, it can be seen that the reliability value assigned to the survivor path
at time t is L(t)=0, which means that there is no reliability associated with the selection of
the survivor path. At times t-2 and t-4, the reliability values assigned to the survivor path
were greater than zero (L(t-2)=25 and L(t-4)=10) as a result of the better accumulated
metrics from the survivor path. However, at time t, the competing path could also have
been the survivor path because they have the same metric. Thus, there could have been
opposite estimated binary decisions at times t, t-2, and t-4 without reducing the associated
reliability values along the survivor path.

To improve the reliability values of the survivor path, a trace-back operation to
update the reliability values has been suggested [Hag89], [Ber93a]. This updating
procedure is integrated into the Viterbi algorithm as follows [Hag89]:
For node Sk,t in the trellis diagram (corresponding to state k at time t),
1. Store L(t) = |Vs(Sk,t) - Vc(Sk,t)|. (This value is also denoted as Δ in other
   papers.) If there is more than one competing path, then multiple reliability
   values must be calculated, and the smallest of them is set to L(t).
2. Initialize the reliability value of Sk,t to +∞ (most reliable).
3. Compare the survivor and competing paths at Sk,t and store the memorization
   levels (MEMs) where the estimated binary decisions of the two paths differ.
4. Update the reliability values at these MEMs with the following procedure:
   a. Find the lowest MEM > 0, denoted MEMlow, whose reliability value has
      not been updated.
   b. Update MEMlow's reliability value L(t-MEMlow) by assigning it the
      lowest reliability value between MEM = 0 and MEM = MEMlow.
Continuing from the example, the opposite bit estimations between the survivor and
competing paths for S1,t are located and stored as MEM = {0, 2, 4}. With this MEM
information, the reliability updating process is accomplished as shown in Figure 4.4 and
Figure 4.5. In Figure 4.4, the first reliability update is shown. The lowest MEM > 0
whose reliability value has not been updated is determined to be MEMlow = 2. The
lowest reliability value between MEM = 0 and MEM = MEMlow = 2 is found to be
L(t) = 0. Thus, the associated reliability value is updated from L(t-2) = 25 to
L(t-2) = L(t) = 0. The next lowest MEM > 0 whose reliability value has not been updated
is determined to be MEMlow = 4. The lowest reliability value between MEM = 0 and
MEM = MEMlow = 4 is found to be L(t) = L(t-2) = 0. Thus, the associated reliability
value is updated from L(t-4) = 10 to L(t-4) = L(t) = L(t-2) = 0. Figure 4.5 shows the
second reliability update.
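The four-step updating procedure can be sketched in a few lines of Python; the function name and list-based representation are illustrative, not from the thesis:

```python
def update_reliabilities(L, diff_mems):
    """Trace-back update of the reliability values along one survivor path.

    L[mem] holds the reliability value at memorization level mem
    (mem = 0 corresponds to the current time t); diff_mems lists the
    levels where the survivor and competing paths decide opposite bits.
    """
    for mem_low in sorted(m for m in diff_mems if m > 0):
        # Step 4b: assign the lowest reliability value seen between
        # MEM = 0 and MEM = mem_low.
        L[mem_low] = min(L[: mem_low + 1])
    return L

# Worked example from Figures 4.4 and 4.5: reliabilities
# [L(t), L(t-1), ..., L(t-5)] with opposite decisions at MEM = {0, 2, 4}.
print(update_reliabilities([0, 300, 25, 200, 10, 100], {0, 2, 4}))
# [0, 300, 0, 200, 0, 100] -- L(t-2) and L(t-4) both drop to 0
```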
[Figure 4.4: the trellis of Figure 4.3 with reliability values along the survivor path
L(t-5) = 100, L(t-4) = 10, L(t-3) = 200, L(t-2) = 25, L(t-1) = 300, L(t) = 0;
L(t-2) is updated to 0]

Figure 4.4: Updating process for time t-2 (MEMlow = 2).



[Figure 4.5: the same trellis after the first update, now with L(t-2) = 0; L(t-4) is
updated from 10 to 0]

Figure 4.5: Updating process for time t-4 (MEMlow = 4).

It has been suggested that the final reliability values should be normalized or
logarithmically compressed before passing to the next concatenated decoder to offset
possible defects of this updating operation [Ber93a].

4.3 Introduction to SOVA for Turbo Codes


The SOVA for turbo codes is implemented with a modified Viterbi metric. A
close examination of log-likelihood algebra and soft channel outputs is required before
attempting to derive this modified Viterbi metric. Figure 4.6 shows the system model
that is used to describe the above concepts.

[Figure 4.6: u → Channel Encoder → x → Channel → y → Channel Decoder → u']

Figure 4.6: System model for SOVA derivation.



4.3.1 Log-Likelihood Algebra [Hag94], [Hag95], [Hag96]
The log-likelihood algebra used for SOVA decoding of turbo codes is based on a
binary random variable u in GF(2) with elements {+1, -1}, where +1 is the logic 0
element (the null element) and -1 is the logic 1 element under ⊕ (modulo-2) addition.
Table 4.1 shows the outcome of adding two binary random variables under these
governing factors.

Table 4.1: Outcome of Adding Two Binary Random Variables u1 and u2


u1 ⊕ u2     u2 = +1    u2 = -1
u1 = +1       +1         -1
u1 = -1       -1         +1

The log-likelihood ratio L(u) for a binary random variable u is defined to be

    L(u) = ln[ P(u = +1) / P(u = -1) ]    (4.1)
L(u) is often denoted as the soft value or L-value of the binary random variable u. The
sign of L(u) is the hard decision of u and the magnitude of L(u) is the reliability of this
decision. Table 4.2 shows the characteristics of the log-likelihood ratio L(u).

Table 4.2: Characteristics of the Log-likelihood Ratio L(u)


P(u=+1)    P(u=-1)    L(u)
1          0          +∞
0.9999     0.0001     9.2102
0.9        0.1        2.1972
0.6        0.4        0.4055
0.5        0.5        0
0.4        0.6        -0.4055
0.1        0.9        -2.1972
0.0001     0.9999     -9.2102
0          1          -∞

Clearly from Table 4.2, as L(u) increases toward +∞, the probability of u = +1 also
increases. Furthermore, as L(u) decreases toward -∞, the probability of u = -1 increases.
As can be seen, L(u) provides a form of reliability for u. This will be exploited for
SOVA decoding as described later in the chapter.
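As a quick check of (4.1) against Table 4.2, the L-value can be computed directly (a small illustrative snippet, not from the thesis):

```python
import math

def llr(p_plus):
    """L(u) = ln(P(u=+1)/P(u=-1)), with P(u=-1) = 1 - P(u=+1)."""
    return math.log(p_plus / (1.0 - p_plus))

# Rows of Table 4.2:
print(round(llr(0.9), 4))     # 2.1972
print(round(llr(0.0001), 4))  # -9.2102

# The sign gives the hard decision, the magnitude the reliability:
L = llr(0.9)
u_hard, reliability = (+1 if L >= 0 else -1), abs(L)
```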

The probability of the random variable u may be conditioned on another random
variable z. This forms the conditioned log-likelihood ratio L(u|z), defined to be

    L(u|z) = ln[ P(u = +1|z) / P(u = -1|z) ]    (4.2)

The probability of the sum of two binary random variables, say P(u1 ⊕ u2 = +1), is
found from

    P(u1 ⊕ u2 = +1) = P(u1 = +1) P(u2 = +1) + P(u1 = -1) P(u2 = -1)    (4.3)

With the relation

    P(u = -1) = 1 - P(u = +1)    (4.4)

the probability P(u1 ⊕ u2 = +1) becomes

    P(u1 ⊕ u2 = +1) = P(u1 = +1) P(u2 = +1) + (1 - P(u1 = +1))(1 - P(u2 = +1))    (4.5)

Using the following relation shown in [Hag96],

    P(u = +1) = e^L(u) / (1 + e^L(u))    (4.6)

it can be shown that

    P(u1 ⊕ u2 = +1) = (1 + e^L(u1) e^L(u2)) / [(1 + e^L(u1))(1 + e^L(u2))]    (4.7)

The probability P(u1 ⊕ u2 = -1) can then be calculated as

    P(u1 ⊕ u2 = -1) = 1 - P(u1 ⊕ u2 = +1)    (4.8)
                    = (e^L(u1) + e^L(u2)) / [(1 + e^L(u1))(1 + e^L(u2))]    (4.9)

From the definition of the log-likelihood ratio (4.1), it follows directly that

    L(u1 ⊕ u2) = ln[ P(u1 ⊕ u2 = +1) / P(u1 ⊕ u2 = -1) ]    (4.10)

Using (4.7) and (4.9), L(u1 ⊕ u2) is found to be

    L(u1 ⊕ u2) = ln[ (1 + e^L(u1) e^L(u2)) / (e^L(u1) + e^L(u2)) ]    (4.11)

This result is approximated in [Hag94] as

    L(u1 ⊕ u2) ≈ sign(L(u1)) · sign(L(u2)) · min(|L(u1)|, |L(u2)|)    (4.12)

Table 4.3 shows the accuracy of this approximation compared to the exact solution.
From Table 4.3, it can be seen that the deviation between the exact and approximated
solutions becomes larger as the reliability of the decision approaches zero for both
variables.



Table 4.3: Comparison of L(u1 ⊕ u2) Between Exact and Approximated Solutions

L(u1)    L(u2)    Exact L(u1 ⊕ u2) (4.11)    Approximated L(u1 ⊕ u2) (4.12)
0 -1000 0 0
0 -100 0 0
0 -10 0 0
0 -1 0 0
0 0 0 0
0 1 0 0
0 10 0 0
0 100 0 0
0 1000 0 0
1 -1000 -1 -1
1 -100 -1 -1
1 -10 -0.9999 -1
1 -1 -0.4338 -1
1 0 0 0
1 1 0.4338 1
1 10 0.9999 1
1 100 1 1
1 1000 1 1
10 -1000 -10 -10
10 -100 -10 -10
10 -10 -9.3069 -10
10 -1 -0.9999 -1
10 0 0 0
10 1 0.9999 1
10 10 9.3069 10
10 100 10 10
10 1000 10 10
100 -1000 -100 -100
100 -100 -99.3069 -100
100 -10 -10 -10
100 -1 -1 -1
100 0 0 0
100 1 1 1
100 10 10 10
100 100 99.3069 100
100 1000 100 100
1000 -1000 -999.3069 -1000
1000 -100 -100 -100
1000 -10 -10 -10
1000 -1 -1 -1
1000 0 0 0
1000 1 1 1
1000 10 10 10
1000 100 100 100
1000 1000 999.3069 1000
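The two columns of Table 4.3 can be reproduced with a few lines of Python (a sketch; note that the exact form (4.11) overflows for very large L-values, where the approximation is essentially exact anyway):

```python
import math

def boxplus_exact(l1, l2):
    """Exact L(u1 [+] u2) from (4.11)."""
    return math.log((1.0 + math.exp(l1 + l2)) / (math.exp(l1) + math.exp(l2)))

def boxplus_approx(l1, l2):
    """Approximation (4.12): product of signs times the smaller magnitude."""
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))

# Two rows of Table 4.3:
print(round(boxplus_exact(1, -1), 4), boxplus_approx(1, -1))    # -0.4338 -1.0
print(round(boxplus_exact(10, 10), 4), boxplus_approx(10, 10))  # 9.3069 10.0
```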

The addition of two soft or L-values is denoted by [+] and is defined as

    L(u1) [+] L(u2) = L(u1 ⊕ u2)    (4.13)

with the following three properties:

    L(u) [+] ∞ = L(u)        (4.14)
    L(u) [+] (-∞) = -L(u)    (4.15)
    L(u) [+] 0 = 0           (4.16)
By induction, it can be shown that

    [+]_{j=1..J} L(u_j) = L( ⊕_{j=1..J} u_j )    (following 4.13)    (4.17)

                        = ln[ P(⊕_{j=1..J} u_j = +1) / P(⊕_{j=1..J} u_j = -1) ]    (following 4.10)    (4.18)

                        = ln[ ( Π_{j=1..J}(e^L(u_j) + 1) + Π_{j=1..J}(e^L(u_j) - 1) )
                            / ( Π_{j=1..J}(e^L(u_j) + 1) - Π_{j=1..J}(e^L(u_j) - 1) ) ]    (4.19)

By using the relation

    tanh(x/2) = (e^x - 1) / (e^x + 1)    (4.20)

the induction can be simplified to

    [+]_{j=1..J} L(u_j) = ln[ (1 + Π_{j=1..J} tanh(L(u_j)/2)) / (1 - Π_{j=1..J} tanh(L(u_j)/2)) ]    (4.21)

                        = 2 tanh^{-1}( Π_{j=1..J} tanh(L(u_j)/2) )    (4.22)

This value is very tedious to compute. Thus, it can be approximated as before to

    [+]_{j=1..J} L(u_j) = L( ⊕_{j=1..J} u_j )    (4.23)

                        ≈ ( Π_{j=1..J} sign(L(u_j)) ) · min_{j=1,...,J} |L(u_j)|    (following 4.12)    (4.24)

It can be seen from (4.24) that the reliability of the sum of soft or L-values is mainly
determined by the smallest soft or L-value of the terms.

4.3.2 Soft Channel Outputs [Hag95], [Hag96]

From the system model in Figure 4.6, the information bit u is mapped to the
encoded bits x. The encoded bits x are transmitted over the channel and received as y.
From this system model, the log-likelihood ratio of x conditioned on y is calculated as

    L(x|y) = ln[ P(x = +1|y) / P(x = -1|y) ]    (4.25)

By using Bayes' theorem, this log-likelihood ratio is equivalent to

    L(x|y) = ln[ p(y|x = +1) P(x = +1) / ( p(y|x = -1) P(x = -1) ) ]    (4.26)
           = ln[ p(y|x = +1) / p(y|x = -1) ] + ln[ P(x = +1) / P(x = -1) ]    (4.27)

The channel model is assumed to be flat fading with Gaussian noise. By using the
Gaussian pdf f(z),

    f(z) = (1 / sqrt(2π σ^2)) e^{-(z - m)^2 / (2σ^2)}    (4.28)

where m is the mean and σ^2 is the variance, it can be shown that

    ln[ p(y|x = +1) / p(y|x = -1) ] = ln[ e^{-(Eb/No)(y - a)^2} / e^{-(Eb/No)(y + a)^2} ]    (4.29)
                                    = ln[ e^{2(Eb/No) a y} / e^{-2(Eb/No) a y} ]    (4.30)
                                    = 4 (Eb/No) a y    (4.31)

where Eb/No is the signal-to-noise ratio per bit (directly related to the noise variance)
and a is the fading amplitude. For a nonfading Gaussian channel, a = 1.

The log-likelihood ratio of x conditioned on y, L(x|y), is equivalent to

    L(x|y) = Lc y + L(x)    (following 4.27 and 4.31)    (4.32)

where Lc is defined to be the channel reliability,

    Lc = 4 (Eb/No) a    (4.33)


Thus, L(x|y) is just the weighted received value (Lcy) summed with the log-likelihood
value of x (L(x)).
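A minimal sketch of (4.32)-(4.33) in Python; the function name and sample numbers are illustrative:

```python
def soft_channel_output(y, ebno, a=1.0, L_x=0.0):
    """L(x|y) = Lc*y + L(x), with channel reliability Lc = 4*(Eb/No)*a.

    ebno is Eb/No as a linear ratio (not in dB); a is the fading
    amplitude, with a = 1 for a nonfading Gaussian channel.
    """
    Lc = 4.0 * ebno * a
    return Lc * y + L_x

# Eb/No = 0.5, received value y = 0.75, no a-priori knowledge of x:
print(soft_channel_output(0.75, 0.5))  # 1.5
```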

4.4 SOVA Component Decoder for a Turbo Code [Hag94], [Hag95], [Hag96]

The SOVA component decoder estimates the information sequence using one of
the two encoded streams produced by the turbo code encoder. Figure 4.7 shows the
inputs and outputs of the SOVA component decoder.

[Figure 4.7: SOVA block with inputs L(u) and Lc·y and outputs u' and L(u')]

Figure 4.7: SOVA component decoder.

The SOVA component decoder processes the (log-likelihood ratio) inputs L(u) and Lc·y,
where L(u) is the a-priori sequence for the information sequence u and Lc·y is the
weighted received sequence. The sequence y is received from the channel, whereas the
sequence L(u) is produced by the preceding SOVA component decoder. If there is no
preceding SOVA component decoder, then there are no a-priori values, and the L(u)
sequence is initialized to the all-zero sequence. A similar concept is shown at the
beginning of the chapter in Figure 4.1. The SOVA component decoder produces u' and
L(u') as outputs, where u' is the estimated information sequence and L(u') is the
associated log-likelihood ratio (soft or L-value) sequence.

The SOVA component decoder operates similarly to the Viterbi decoder except
the ML sequence is found by using a modified metric. This modified metric, which
incorporates the a-priori value, is derived below.

The fundamental Viterbi algorithm searches for the state sequence S(m) or the
information sequence u(m) that maximizes the a-posteriori probability P(S(m)|y). For
binary (k=1) trellises, m can be either 1 or 2 to denote the survivor and the competing
paths, respectively. By using Bayes' theorem, the a-posteriori probability can be
expressed as

    P(S^(m)|y) = p(y|S^(m)) P(S^(m)) / p(y)    (4.34)

Since the received sequence y is fixed for metric computation and does not depend on m,
it can be discarded. Thus, the maximization reduces to

    max_m p(y|S^(m)) P(S^(m))    (4.35)
The probability of a state sequence terminating at time t is P(S_t). This probability can
be calculated as

    P(S_t) = P(S_{t-1}) P(S_{t-1} → S_t)    (4.36)
           = P(S_{t-1}) P(u_t)              (4.37)

where P(S_t) and P(u_t) denote the probability of the state and of the bit at time t,
respectively. The maximization can then be expanded to

    max_m p(y|S^(m)) P(S^(m)) = max_m { Π_{i=0..t} p(y_i | S^(m)_{i-1}, S^(m)_i) } P(S^(m)_t)    (4.38)

where (S^(m)_{i-1}, S^(m)_i) denotes the state transition between time i-1 and time i,
and y_i denotes the associated received channel values for the state transition.
After substituting and rearranging,

    max_m p(y|S^(m)) P(S^(m))
        = max_m { P(S^(m)_{t-1}) Π_{i=0..t-1} p(y_i | S^(m)_{i-1}, S^(m)_i) P(u^(m)_t) p(y_t | S^(m)_{t-1}, S^(m)_t) }    (4.39)

Note that

    p(y_t | S^(m)_{t-1}, S^(m)_t) = Π_{j=1..N} p(y_{t,j} | x^(m)_{t,j})    (4.40)

Thus, the maximization becomes

    max_m { P(S^(m)_{t-1}) Π_{i=0..t-1} p(y_i | S^(m)_{i-1}, S^(m)_i) P(u^(m)_t) Π_{j=1..N} p(y_{t,j} | x^(m)_{t,j}) }    (4.41)

This maximization is not changed if the logarithm is applied to the whole expression,
the result is multiplied by 2, and two constants that are independent of m are added.
This leads to

    max_m {M^(m)_t} = max_m { M^(m)_{t-1} + [2 ln P(u^(m)_t) - C_u] + Σ_{j=1..N} [2 ln p(y_{t,j} | x^(m)_{t,j}) - C_y] }    (4.42)

where

    M^(m)_{t-1} / 2 = ln{ P(S^(m)_{t-1}) Π_{i=0..t-1} p(y_i | S^(m)_{i-1}, S^(m)_i) }    (4.43)

and, for convenience, the two constants are

    C_u = ln P(u_t = +1) + ln P(u_t = -1)                              (4.44)
    C_y = ln p(y_{t,j} | x_{t,j} = +1) + ln p(y_{t,j} | x_{t,j} = -1)  (4.45)

After substitution of these two constants, the SOVA metric is obtained as

    M^(m)_t = M^(m)_{t-1} + Σ_{j=1..N} x^(m)_{t,j} ln[ p(y_{t,j} | x_{t,j} = +1) / p(y_{t,j} | x_{t,j} = -1) ]
              + u^(m)_t ln[ P(u_t = +1) / P(u_t = -1) ]    (4.46)

and is reduced to

    M^(m)_t = M^(m)_{t-1} + Σ_{j=1..N} x^(m)_{t,j} Lc y_{t,j} + u^(m)_t L(u_t)    (4.47)



For systematic codes, this can be modified to become

    M^(m)_t = M^(m)_{t-1} + u^(m)_t Lc y_{t,1} + Σ_{j=2..N} x^(m)_{t,j} Lc y_{t,j} + u^(m)_t L(u_t)    (4.48)

As seen from (4.47) and (4.48), the SOVA metric incorporates values from the past
metric, the channel reliability, and the source reliability (a-priori value).
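One step of the systematic-code metric (4.48) can be sketched as follows (names and sample numbers are illustrative):

```python
def sova_branch_metric(M_prev, u, parity, y, Lc, L_apriori):
    """M_t = M_{t-1} + u*Lc*y[0] + sum_j x_j*Lc*y[j] + u*L(u_t), as in (4.48).

    M_prev    : accumulated metric on this path at time t-1
    u         : systematic bit (+1 or -1) on the branch
    parity    : parity bits (+1 or -1) on the branch, x_{t,2}..x_{t,N}
    y         : received values y_{t,1}..y_{t,N}, systematic value first
    Lc        : channel reliability 4*(Eb/No)*a
    L_apriori : a-priori value L(u_t) from the preceding decoder (0 if none)
    """
    M = M_prev + u * Lc * y[0] + u * L_apriori
    M += sum(x * Lc * yj for x, yj in zip(parity, y[1:]))
    return M

# Rate-1/2 branch (N = 2): u = +1, parity (+1,), y = (0.75, -0.25), Lc = 2:
print(sova_branch_metric(0.0, +1, (+1,), (0.75, -0.25), 2.0, 0.5))  # 1.5
```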

Figure 4.8 shows the source reliability as used in SOVA metric computation.

[Figure 4.8: two trellis states Sa and Sb between time t-1 and time t; a solid
transition (u_t = +1) adds (+1)·L(u_t) to the metric, and a dashed transition
(u_t = -1) adds (-1)·L(u_t)]
Figure 4.8: Source reliability for SOVA metric computation.

Figure 4.8 shows a trellis diagram with two states Sa and Sb and a transition period
between time t-1 and time t. The solid line indicates that the transition will produce an
information bit u_t = +1, and the dashed line indicates that the transition will produce
an information bit u_t = -1. The source reliability L(u_t), which may be either a positive
or a negative value, comes from the preceding SOVA component decoder. The add-on
value is incorporated into the SOVA metric to provide a more reliable decision on the
estimated information bit. For example, if L(u_t) is a large positive number, then it
would be relatively more difficult to change the estimated bit decision from +1 to -1
between decoding stages (based on assigning max_m {M^(m)_t} to the survivor path).
However, if L(u_t) is a small positive number, then it would be relatively easier to
change the estimated bit decision from +1 to -1 between decoding stages. Thus, L(u_t)
acts like a buffer that tries to prevent the decoder from choosing the opposite bit
decision to the preceding decoder.

Figure 4.9 shows the weighting properties of the SOVA metric.



[Figure 4.9: the new metric at time t combines the old metric at time t-1 with the
channel reliability Lc and the source reliability L(u)]
Figure 4.9: Weighting properties of the SOVA metric.

As illustrated in Figure 4.9, the balance between channel and source reliability is very
important for the SOVA metric. This does not mean that the channel and source
reliability values should have the same magnitude but rather that their relative values
should reflect the channel and source conditions. For instance, if the channel is very
good, Lc will be larger than |L(u)| and decoding relies mostly on the received channel
values. However, if the channel is very bad, decoding relies mostly on the a-priori
information L(u). If this balance is not achieved, catastrophic effects may result and
degrade the performance of the channel decoder.

At time t, the reliability value (magnitude of the log-likelihood ratio) assigned to a
node in the trellis is determined from

    Δ^0_t = (1/2) |M^(1)_t - M^(2)_t|    (4.49)

where Δ^MEM_t denotes the reliability value at memorization level MEM relative to
time t. This notation is similar to the notation L(t-MEM) used before and is shown in
Figure 4.10 for discussion.



[Figure 4.10: 4-state trellis with survivor metric M^(1)_t and competing metric M^(2)_t
at state S1,t; reliability values Δ^5_t, ..., Δ^0_t lie along the survivor path, and
updates are required at MEM = 2 and MEM = 4]
Figure 4.10: Example of SOVA survivor and competing paths for reliability
estimation.

The probability of path m at time t and the SOVA metric are stated in [Hag94] to
be related as

    P(path(m)) = P(S^(m)_t)      (4.50)
               = e^{M^(m)_t / 2} (4.51)

At time t, let us suppose that the survivor metric of a node is denoted as M^(1)_t and
the competing metric is denoted as M^(2)_t. Thus, the probability of selecting the
correct survivor path is

    P(correct) = P(path(1)) / ( P(path(1)) + P(path(2)) )            (4.52)
               = e^{M^(1)_t / 2} / ( e^{M^(1)_t / 2} + e^{M^(2)_t / 2} )    (4.53)
               = e^{Δ^0_t} / ( 1 + e^{Δ^0_t} )                       (4.54)

The reliability of this path decision is calculated as

    log[ P(correct) / (1 - P(correct)) ]
        = log[ ( e^{Δ^0_t} / (1 + e^{Δ^0_t}) ) / ( 1 - e^{Δ^0_t} / (1 + e^{Δ^0_t}) ) ]    (4.55)
        = Δ^0_t    (4.56)

The reliability values along the survivor path for a particular node at time t are
denoted as Δ^MEM_t, where MEM = 0, ..., t. For this node at time t, if the bit on the
survivor path at MEM = k (or equivalently at time t-k) is the same as the associated bit
on the competing path, then there would be no bit error if the competing path were
chosen. Thus, the reliability value at this bit position remains unchanged. However, if
the bits differ on the survivor and competing paths at MEM = k, then there is a bit error.
The reliability value at this bit-error position must then be updated using the same
updating procedure as described at the beginning of the chapter. As shown in
Figure 4.10, reliability updates are required for MEM = 2 and MEM = 4.

The reliability updates are performed to improve the soft or L-values. It is
shown in [Hag95] that the soft or L-value of a bit decision is

    L(u'_{t-MEM}) = u'_{t-MEM} · [+]_{k=0..MEM} Δ^k_t    (4.57)

and can be approximated by (4.24) to become

    L(u'_{t-MEM}) ≈ u'_{t-MEM} · min_{k=0,...,MEM} {Δ^k_t}    (4.58)

The soft-output Viterbi algorithm (along with its reliability updating procedure)
can be implemented as follows:
1. (a) Initialize time t = 0.
   (b) Initialize M^(m)_0 = 0 only for the zero state in the trellis diagram and all
       other states to -∞.
2. (a) Set time t = t + 1.
   (b) Compute the metric

           M^(m)_t = M^(m)_{t-1} + u^(m)_t Lc y_{t,1} + Σ_{j=2..N} x^(m)_{t,j} Lc y_{t,j} + u^(m)_t L(u_t)

       for each state in the trellis diagram, where
       m denotes an allowable binary trellis branch/transition to a state (m = 1, 2),
       M^(m)_t is the accumulated metric for time t on branch m,
       u^(m)_t is the systematic bit (1st bit of N bits) for time t on branch m,
       x^(m)_{t,j} is the j-th bit of N bits for time t on branch m (2 ≤ j ≤ N),
       y_{t,j} is the received value from the channel corresponding to x^(m)_{t,j},
       Lc = 4 Eb/No is the channel reliability value, and
       L(u_t) is the a-priori reliability value for time t. This value is from the
       preceding decoder. If there is no preceding decoder, then this value is set
       to zero.
3. Find max_m M^(m)_t for each state. For simplicity, let M^(1)_t denote the survivor
   path metric and M^(2)_t denote the competing path metric.
4. Store M^(1)_t and its associated survivor bit and state paths.
5. Compute Δ^0_t = (1/2) |M^(1)_t - M^(2)_t|.
6. Compare the survivor and competing paths at each state for time t and store the
   MEMs where the estimated binary decisions of the two paths differ.
7. Update Δ^MEM_t ← min_{k=0,...,MEM} {Δ^k_t} for all stored MEMs, from the smallest
   to the largest MEM.
8. Go back to Step 2 until the end of the received sequence.
9. Output the estimated bit sequence u' and its associated soft or L-value sequence
   L(u') = u' ⊗ Δ, where the operator ⊗ denotes element-by-element multiplication
   and Δ is the final updated reliability sequence. L(u') is then processed (to be
   discussed later) and passed on as the a-priori sequence L(u) for the succeeding
   decoder.

4.5 SOVA Implementation


The SOVA decoder can be implemented in various ways. The straightforward
implementation of the SOVA decoder may become computationally intensive for large
constraint length K codes and long frame sizes because of the need to update all of the
survivor paths. Because the update procedure is meaningful only for the ML path, an
implementation of the SOVA decoder that only performs the update procedure for the
ML path is shown in Figure 4.11.



[Figure 4.11: the SOVA decoder is split into a first SOVA without the updating
procedure, which finds the ML state sequence from L(u) and Lc·y, two shift registers
that buffer these inputs meanwhile, and a second SOVA that recomputes the ML path
and outputs u' and L(u')]

Figure 4.11: SOVA decoder implementation.

The SOVA decoder inputs L(u) and Lc·y, the a-priori values and the weighted received
values, respectively, and outputs u' and L(u'), the estimated bit decisions and their
associated soft or L-values, respectively. This implementation of the SOVA decoder is
composed of two separate SOVA decoders. The first SOVA decoder computes the
metrics for the ML path only and does not compute (suppresses) the reliability values.
The shift registers are used to buffer the inputs while the first SOVA decoder is
processing the ML path. The second SOVA decoder (with the knowledge of the ML
path) recomputes the ML path and also calculates and updates the reliability values. As
can be seen, this implementation method reduces the complexity of the updating
process. Instead of keeping track of and updating the survivor path for every state, only
the ML path needs to be processed.

4.6 SOVA Iterative Turbo Code Decoder [Hag94], [Hag94a], [Hag96]


The iterative turbo code decoder is composed of two concatenated SOVA
component decoders. Figure 4.12 shows the turbo code decoder structure.



[Figure 4.12: closed-loop decoder; the received streams y1, y2, y3 are weighted by the
channel reliability 4Eb/No and buffered in circular-shift (CS) registers; SOVA 1 takes
y1, y2, and the fed-back extrinsic values Le2(u) and outputs L1(u); the extrinsic value
Le1(u) is interleaved (I) and passed with I{y1} and y3 to SOVA 2, whose outputs
I{L2(u)} and I{u} are deinterleaved (I^-1) to give Le2(u) and u]
Figure 4.12: SOVA iterative turbo code decoder.

The turbo code decoder processes the received channel bits on a frame basis. As shown
in Figure 4.12, the received channel bits are demultiplexed into the systematic stream y1
and two parity-check streams y2 and y3 from component encoders 1 and 2, respectively.
These bits are weighted by the channel reliability value and loaded onto the CS registers.
The registers shown in the figure are used as buffers to store sequences until they are
needed. The switches are placed in the open position to prevent the bits of the next
frame from being processed until the present frame has been processed.

The SOVA component decoder produces the soft or L-value L(u'_t) for the
estimated bit u'_t (for time t). The soft or L-value L(u'_t) can be decomposed into three
distinct terms, as stated in [Hag94]:

    L(u'_t) = L(u_t) + Lc y_{t,1} + Le(u'_t)    (4.59)

L(u_t) is the a-priori value and is produced by the preceding SOVA component decoder.
Lc y_{t,1} is the weighted received systematic channel value. Le(u'_t) is the extrinsic
value produced by the present SOVA component decoder. The information that is passed
between SOVA component decoders is the extrinsic value

    Le(u'_t) = L(u'_t) - L(u_t) - Lc y_{t,1}    (4.60)



The a-priori value L(u_t) is subtracted from the soft or L-value L(u'_t) to prevent
passing information back to the decoder from which it was produced. Also, the weighted
received systematic channel value Lc y_{t,1} is subtracted out to remove information
common to the two SOVA component decoders.
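The extrinsic extraction (4.60) is a one-liner in code (sample numbers are illustrative):

```python
def extrinsic(L_out, L_apriori, Lc, y_sys):
    """Le(u'_t) = L(u'_t) - L(u_t) - Lc*y_{t,1}, as in (4.60).

    Removing the a-priori term and the weighted systematic channel value
    leaves only the new information generated by this component decoder.
    """
    return L_out - L_apriori - Lc * y_sys

# Soft output 3.0, a-priori 0.5, Lc = 2, systematic value 0.5:
print(extrinsic(3.0, 0.5, 2.0, 0.5))  # 1.5
```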

Figure 4.12 shows that the turbo code decoder is a closed loop serial
concatenation of SOVA component decoders. In this closed loop decoding scheme, each
of the SOVA component decoders estimates the information sequence using a different
weighted parity check stream. The turbo code decoder further implements iterative
decoding to provide more dependable reliability/a-priori estimations from the two
different weighted parity check streams, hoping to achieve better decoding performance.
The iterative turbo code decoding algorithm for the n-th iteration is as follows:
1. The SOVA1 decoder inputs the sequences (4Eb/No)·y1 (systematic), (4Eb/No)·y2
   (parity check), and Le2(u') and outputs the sequence L1(u'). For the first
   iteration, the sequence Le2(u') = 0 because there is no initial a-priori value
   (no extrinsic values from SOVA2).
2. The extrinsic information from SOVA1 is obtained by
   Le1(u') = L1(u') - Le2(u') - Lc·y1, where Lc = 4Eb/No.
3. The sequences (4Eb/No)·y1 and Le1(u') are interleaved and denoted as
   I{(4Eb/No)·y1} and I{Le1(u')}.
4. The SOVA2 decoder inputs the sequences I{(4Eb/No)·y1} (systematic), (4Eb/No)·y3
   (parity check that was already interleaved by the turbo code encoder), and
   I{Le1(u')} (a-priori information) and outputs the sequences I{L2(u')} and I{u'}.
5. The extrinsic information from SOVA2 is obtained by
   I{Le2(u')} = I{L2(u')} - I{Le1(u')} - I{Lc·y1}.
6. The sequences I{Le2(u')} and I{u'} are deinterleaved and denoted as Le2(u') and
   u'. Le2(u') is fed back to SOVA1 as a-priori information for the next iteration,
   and u' is the estimated bit output for the n-th iteration.
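The bookkeeping in these six steps (decode, subtract, interleave, decode, subtract, deinterleave, feed back) can be sketched as follows. The sova_decode function here is a stand-in stub, not a real SOVA implementation: it simply sums its soft inputs so that the surrounding data flow can be checked. With this stub, the extrinsic subtractions leave exactly the parity contribution of each decoder.

```python
def interleave(seq, perm):
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    out = [0] * len(seq)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out

def sova_decode(Lc_ys, Lc_yp, La):
    # Stub standing in for a real SOVA component decoder:
    # returns L-values as the sum of its three soft inputs.
    return [s + p + a for s, p, a in zip(Lc_ys, Lc_yp, La)]

def turbo_iteration(Lc_y1, Lc_y2, Lc_y3, Le2, perm):
    # Step 1: SOVA1 decodes with the a-priori Le2 fed back from SOVA2.
    L1 = sova_decode(Lc_y1, Lc_y2, Le2)
    # Step 2: extrinsic information from SOVA1.
    Le1 = [l - e - s for l, e, s in zip(L1, Le2, Lc_y1)]
    # Step 3: interleave the systematic and extrinsic sequences.
    I_y1, I_Le1 = interleave(Lc_y1, perm), interleave(Le1, perm)
    # Step 4: SOVA2 decodes (y3 was encoded from the interleaved stream).
    I_L2 = sova_decode(I_y1, Lc_y3, I_Le1)
    # Step 5: extrinsic information from SOVA2.
    I_Le2 = [l - e - s for l, e, s in zip(I_L2, I_Le1, I_y1)]
    # Step 6: deinterleave; Le2 feeds the next iteration, the signs give u'.
    return deinterleave(I_Le2, perm), [l > 0 for l in deinterleave(I_L2, perm)]

# Toy numbers (illustrative only); with the stub, the returned Le2 is the
# deinterleaved parity stream Lc_y3.
print(turbo_iteration([1, 2, 3], [4, 5, 6], [7, 8, 9], [0, 0, 0], [2, 1, 0])[0])
# -> [9, 8, 7]
```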

The SOVA component decoder and the SOVA iterative turbo code decoder are
both complicated. An example is shown below to aid in the understanding of these
decoders. Figure 4.13 shows the turbo code encoder structure used in the example.

[Block diagram: the input u feeds shift register 1 (size L) and recursive encoder 1,
producing x2; u also feeds an interleaver (size L), whose output fills shift
register 2 (size L) and feeds recursive encoder 2, producing x3; the systematic
stream x1 is taken from the complete RSC 2.]
Figure 4.13: Turbo code encoder structure for the example.

In Figure 4.13, the systematic bit stream (with its termination bits) is associated with
recursive encoder 2. The input information bit stream u is loaded into shift register 1 to
form a data frame. This data frame is passed to the interleaver and its output is fed into
shift register 2. The two component encoders then encode their respective inputs. The
output encoded bit streams x1, x2, and x3 are multiplexed together to form a single
transmission bit stream. Figure 4.14 shows the RSC component encoder used in the
example.

[Circuit diagram: the input u and the feedback from two delay elements D are summed
modulo-2; a switch selects position A (encoding) or position B (trellis
termination); the parity output is x2 and the systematic output is x1.]
Figure 4.14: RSC component encoder for the example.

In Figure 4.14, the switch is set to position A for encoding the input sequence and
to position B for terminating the trellis. Figure 4.15 shows the state diagram
of the RSC component encoder.

[State diagram over states 00, 01, 10, 11 with branch labels u/x1x2 in the {0,1}
domain: 00 -0/00-> 00, 00 -1/11-> 10, 01 -0/00-> 10, 01 -1/11-> 00,
10 -0/01-> 11, 10 -1/10-> 01, 11 -0/01-> 01, 11 -1/10-> 11.]
Figure 4.15: State diagram of the RSC component encoder for the example.
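The component encoder's behavior can be reproduced in a few lines. The sketch below assumes the generator implied by the figures and by the encoded sequences given later in this section (feedback polynomial 1+D+D², parity taps 1+D²); this assumption reproduces x2={0100010}, x3={1100110}, encoder 2's systematic stream x1={1011010}, and the common (1,0) tail-bit pairs noted in the text:

```python
def rsc_encode(bits):
    """Encode with the assumed RSC code (feedback 1+D+D^2, parity 1+D^2),
    appending two tail bits that drive the state back to 00."""
    s1 = s2 = 0
    sys_out, par_out = [], []
    for t in range(len(bits) + 2):
        # Tail bits (switch position B) force the feedback bit a to 0.
        u = bits[t] if t < len(bits) else s1 ^ s2
        a = u ^ s1 ^ s2          # feedback bit
        p = a ^ s2               # parity bit (taps 1 and D^2)
        sys_out.append(u)
        par_out.append(p)
        s1, s2 = a, s1           # shift the register
    return sys_out, par_out

_, x2 = rsc_encode([0, 1, 1, 0, 1])       # encoder 1 input u
x1, x3 = rsc_encode([1, 0, 1, 1, 0])      # encoder 2 input I{u}
print(x2)  # -> [0, 1, 0, 0, 0, 1, 0]
print(x3)  # -> [1, 1, 0, 0, 1, 1, 0]
print(x1)  # -> [1, 0, 1, 1, 0, 1, 0]
```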

The encoded bits need to be mapped from the {0,1} domain to the {-1,+1} domain for
transmission, and Figure 4.16 shows this modified state diagram.

[The same state diagram with branch labels u/x1x2 mapped to the {-1,+1} domain,
e.g. 00 -(-1)/(-1)(-1)-> 00 and 00 -1/11-> 10.]

Figure 4.16: Transmission state diagram of the RSC component encoder for the
example.

For the example, the input sequence is u={01101}. The interleaver (of size L=5)
reverses the order of its input sequence. For this input sequence, the interleaver
outputs I{u}={10110}. From Figure 4.13 and Figure 4.14, the encoded sequences are
x1={1011010}, x2={0100010}, and x3={1100110}. Coincidentally, the encoded
sequences have the same tail bits (10). The encoded sequences are mapped to the {-1,+1}
domain for transmission as x1={1 -1 1 1 -1 1 -1}, x2={-1 1 -1 -1 -1 1 -1}, and
x3={1 1 -1 -1 1 1 -1}. The corresponding received sequences are y1={1 -1 1 1 -1 1 -1},
y2={0 1 -1 -1 -1 1 -1}, and y3={0 1 -1 -1 1 1 -1}, where the first symbol of y2 and
the first symbol of y3 are received in error. A 0 is received to represent an erasure
(to indicate the reception of a signal whose corresponding symbol value is in doubt)
[Wic95]. Assuming Eb/No=1, the weighted received sequences are
Lcy1={4 -4 4 4 -4 4 -4}, Lcy2={0 4 -4 -4 -4 4 -4}, and Lcy3={0 4 -4 -4 4 4 -4}.
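The {0,1} to {-1,+1} mapping and the channel weighting Lc = 4Eb/No are both one-liners; the sketch below reproduces the weighted sequence Lcy2 above (an erasure enters as 0 and stays 0 after weighting):

```python
def to_bipolar(bits):
    """Map {0,1} -> {-1,+1}."""
    return [2 * b - 1 for b in bits]

def weight(received, EbNo=1.0):
    """Weight received channel values by Lc = 4*Eb/No."""
    Lc = 4 * EbNo
    return [Lc * y for y in received]

print(to_bipolar([0, 1, 0, 0, 0, 1, 0]))  # -> [-1, 1, -1, -1, -1, 1, -1]
y2 = [0, 1, -1, -1, -1, 1, -1]            # first symbol received as an erasure
print(weight(y2))  # -> [0.0, 4.0, -4.0, -4.0, -4.0, 4.0, -4.0]
```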

Turbo code decoding is shown below for the first (initial) decoding iteration.
From the transmission state diagram shown in Figure 4.16, the trellis legend (state
transition diagram) is obtained and is shown in Figure 4.17. The trellis legend is required
for decoding the RSC component codes.

[Trellis legend between times i and i+1, branch labels u/x1x2 in the {-1,+1}
domain: 00 -(-1)/(-1)(-1)-> 00, 00 -1/11-> 10, 01 -(-1)/(-1)(-1)-> 10,
01 -1/11-> 00, 10 -(-1)/(-1)1-> 11, 10 -1/1(-1)-> 01, 11 -(-1)/(-1)1-> 01,
11 -1/1(-1)-> 11.]

Figure 4.17: Trellis legend (state transition diagram) of the RSC component
encoder for the example. The first coded bit is the systematic bit
and the second coded bit is the parity check bit.

Figure 4.18 shows the SOVA iterative turbo code decoder for the example.

[Block diagram: the deinterleaved systematic values I-1{Lcy1} (via a circular-shift
register), the parity values Lcy2, and the fed-back a-priori values Le2(u) enter
SOVA1, which outputs L1(u); subtracting SOVA1's inputs gives Le1(u). The
interleaved systematic values and I{Le1(u)}, together with Lcy3, enter SOVA2,
which outputs I{L2(u)} and I{u}; subtracting SOVA2's inputs gives I{Le2(u)},
which is deinterleaved to Le2(u) and fed back to SOVA1.
CS = circular shift, I = interleaver, I-1 = deinterleaver.]
Figure 4.18: SOVA iterative turbo code decoder for the example.

The SOVA1 component decoder is used to decode the RSC1 code. The SOVA1
component decoder's input sequences are I-1{Lcy1}, Lcy2, and Le2(u). The systematic
sequence Lcy1 is deinterleaved to decode the RSC1 code. The input sequences are
I-1{Lcy1}={-4 4 4 -4 4 4 -4}, Lcy2={0 4 -4 -4 -4 4 -4}, and Le2(u)={0 0 0 0 0 0 0} (no a-
priori knowledge). The SOVA1 component decoder is implemented with two separate
SOVA decoders (Figure 4.11). The first SOVA decoder (of the SOVA1 component
decoder) computes the SOVA metric (4.48) for the ML path. (Note that the notation
Le(u) from the iterative turbo code decoder is equivalent to the notation L(u) from
the SOVA metric.) Figure 4.19 shows the first SOVA decoder's (of the SOVA1
component decoder) ML path.

[Trellis diagram over states 00, 01, 10, 11 for times 0 through 7; the partial
path metrics at each node are those listed in Table 4.4, with the ML path in
bold and metric ties marked at time 3.]

Metric used: Mt(m) = Mt-1(m) + ut(m)·I-1{Lc·yt,1} + xt,2(m)·Lc·yt,2 + ut(m)·Le2(ut')

I-1{Lcy1}: -4  4  4 -4  4  4 -4
Lcy2:       0  4 -4 -4 -4  4 -4
Le2(u):     0  0  0  0  0  0  0

Figure 4.19: The first SOVA decoder's (of the SOVA1 component decoder) ML
path.

In Figure 4.19, the bold partial path metrics correspond to the ML path. Survivor paths
are represented by bold solid lines and competing paths are represented by simple solid
lines. For metric ties, the first branch is always chosen. Table 4.4 shows the survivor
(larger) and competing (smaller) partial path metrics for the trellis diagram in Figure 4.19.

Table 4.4: Survivor and Competing Partial Path Metrics for the Trellis Diagram
in Figure 4.19

          Time 0  Time 1  Time 2  Time 3    Time 4    Time 5    Time 6   Time 7
State 00    0       4      -4     -4 / -4    4 / 12   12 / 4    4 / 44   52 / 12
State 01                   -4     20 / -12  -4 / 4    36 / -4   12 / 20
State 10           -4      12     -4 / -4   -12 / 28  12 / 4
State 11                   -4      4 / 4    -4 / 4    20 / 12
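The forward recursion that produces these partial path metrics can be reproduced directly. The sketch below assumes the RSC transitions of the trellis legend (feedback 1+D+D², parity 1+D², an inference consistent with the encoded sequences of this example) and runs the add-compare-select step of the SOVA metric (4.48) over the full trellis; the final ML metric at state 00 comes out as 52, matching Figure 4.19:

```python
NEG = float("-inf")

def branches():
    """Yield (state, next_state, u, parity) with u, parity in {-1,+1}.
    A state (s1, s2) is encoded as the integer 2*s1 + s2."""
    for s1 in (0, 1):
        for s2 in (0, 1):
            for b in (0, 1):
                a = b ^ s1 ^ s2          # feedback bit
                p = a ^ s2               # parity bit
                yield 2 * s1 + s2, 2 * a + s1, 2 * b - 1, 2 * p - 1

def forward_metrics(L_sys, L_par, L_apriori):
    """Add-compare-select recursion of the SOVA metric (4.48)."""
    M = [0, NEG, NEG, NEG]               # trellis starts in state 00
    history = [M]
    for t in range(len(L_sys)):
        new = [NEG] * 4
        for s, ns, u, p in branches():
            m = M[s] + u * (L_sys[t] + L_apriori[t]) + p * L_par[t]
            new[ns] = max(new[ns], m)   # keep the survivor at each merge
        M = new
        history.append(M)
    return history

# SOVA1 inputs from the example:
H = forward_metrics([-4, 4, 4, -4, 4, 4, -4],    # I-1{Lc*y1}
                    [0, 4, -4, -4, -4, 4, -4],   # Lc*y2
                    [0] * 7)                     # Le2(u)
print(H[7][0])  # -> 52 (final ML path metric at state 00)
```

The survivor chosen by each max() corresponds to a bold branch in Figure 4.19; a full SOVA would additionally retain the competing metric at each merge in order to update the reliability values.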

With the knowledge of the ML path, the second SOVA decoder (of the SOVA1
component decoder) recomputes the SOVA metric and also calculates and updates the
reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.19),
the SOVA1 component decoder produces the estimated bit sequence
u'={-1 1 1 -1 1 1 -1} and state sequence s={00, 10, 01, 10, 01, 00, 00}. Table 4.5,
obtained from Figure 4.19, shows the competing paths (bit and state sequences) for
the reliability updates.

Table 4.5: SOVA1's Competing Paths (Bit and State Sequences) for Reliability
Updates (columns are time indices 0 through 7)
Time 3 1/10 -1/11 -1/01
Time 4 -1/00 -1/00 -1/00 1/10
Time 5 -1/00 1/10 -1/11 1/11 -1/01
Time 6 -1/00 1/10 1/01 1/00 -1/00 -1/00
Time 7 -1/00 1/10 1/01 -1/10 -1/11 -1/01 1/00

Table 4.6, obtained from Figure 4.19, Table 4.4, and Table 4.5, shows the calculated and
updated (in bold) reliability values.

Table 4.6: SOVA1's Calculated and Updated (In Bold) Reliability Values (columns
are time indices 0 through 7)
Time 3 16 16 16
Time 4 16 16 16 20
Time 5 16 16 16 20 20
Time 6 16 16 16 20 20 24
Time 7 16 16 16 20 20 24 32

From Table 4.6, the SOVA1 component decoder produces the final reliability sequence
{16, 16, 16, 20, 20, 24, 32}. The SOVA1 component decoder outputs the soft or L-
value sequence L1(u)={-16, 16, 16, -20, 20, 24, -32}. The extrinsic value sequence,
obtained by subtracting SOVA1's inputs from the soft or L-value sequence, is
Le1(u) = L1(u) - I-1{Lcy1} - Le2(u) = {-12, 12, 12, -16, 16, 20, -28}.
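This subtraction can be checked numerically; the short sketch below reproduces SOVA1's extrinsic sequence from the values above:

```python
L1   = [-16, 16, 16, -20, 20, 24, -32]   # SOVA1 soft outputs L1(u)
Isys = [-4, 4, 4, -4, 4, 4, -4]          # I-1{Lc*y1}, SOVA1's systematic input
Le2  = [0] * 7                           # a-priori values (first iteration)

Le1 = [l - s - a for l, s, a in zip(L1, Isys, Le2)]
print(Le1)  # -> [-12, 12, 12, -16, 16, 20, -28]
```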

The SOVA2 component decoder is used to decode the RSC2 code. The SOVA2
component decoder's input sequences are Lcy1, Lcy3, and I{Le1(u)}. The extrinsic value
sequence Le1(u) is interleaved to decode the RSC2 code. These input sequences are
Lcy1={4 -4 4 4 -4 4 -4}, Lcy3={0 4 -4 -4 4 4 -4}, and
I{Le1(u)}={16, -16, 12, 12, -12, 20, -28}. The SOVA2 component decoder is
implemented with two separate SOVA decoders (Figure 4.11). The first SOVA decoder
(of the SOVA2 component decoder) computes the SOVA metric (4.48) for the ML path.
(Note that the notation Le(u) from the iterative turbo code decoder is equivalent to
the notation L(u) from the SOVA metric.) Figure 4.20 shows the first SOVA decoder's
(of the SOVA2 component decoder) ML path.

[Trellis diagram over states 00, 01, 10, 11 for times 0 through 7; the partial
path metrics at each node are those listed in Table 4.7, with the ML path in
bold.]

Metric used: Mt(m) = Mt-1(m) + ut(m)·Lc·yt,1 + xt,2(m)·Lc·yt,3 + ut(m)·I{Le1(ut')}

Lcy1:       4  -4   4   4  -4   4  -4
Lcy3:       0   4  -4  -4   4   4  -4
I{Le1(u)}: 16 -16  12  12 -12  20 -28

Figure 4.20: The first SOVA decoder's (of the SOVA2 component decoder) ML
path.

In Figure 4.20, the bold partial path metrics correspond to the ML path. Survivor paths
are represented by bold solid lines and competing paths are represented by simple solid
lines. Table 4.7 shows the survivor (larger) and competing (smaller) partial path metrics
for the trellis diagram in Figure 4.20.

Table 4.7: Survivor and Competing Partial Path Metrics for the Trellis Diagram
in Figure 4.20

          Time 0  Time 1  Time 2  Time 3     Time 4     Time 5    Time 6    Time 7
State 00    0      -20     -4     -16 / 8    -4 / 36    48 / 32   20 / 132  168 / 16
State 01                   -4     -16 / 24   28 / 44    0 / 104   52 / 44
State 10            20     -36     8 / -16   20 / 12    24 / 32
State 11                    44    -56 / 64   -12 / 84   40 / 64

With the knowledge of the ML path, the second SOVA decoder (of the SOVA2
component decoder) recomputes the SOVA metric and also calculates and updates the
reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.20),
the SOVA2 component decoder produces the estimated bit sequence
I{u'}={1 -1 1 1 -1 1 -1} and state sequence I{s}={10, 11, 11, 11, 01, 00, 00}. Table 4.8,
obtained from Figure 4.20, shows the competing paths (bit and state sequences) for
the reliability updates.

Table 4.8: SOVA2's Competing Paths (Bit and State Sequences) for Reliability
Updates (columns are time indices 0 through 7)
Time 3 -1/00 1/10 -1/11
Time 4 -1/00 -1/00 1/10 -1/11
Time 5 1/10 1/01 1/00 1/10 1/01
Time 6 1/10 -1/11 -1/01 1/00 -1/00 -1/00
Time 7 1/10 -1/11 1/11 -1/01 -1/10 1/01 1/00

Table 4.9, obtained from Figure 4.20, Table 4.7, and Table 4.8, shows the calculated and
updated (in bold) reliability values.

Table 4.9: SOVA2's Calculated and Updated (In Bold) Reliability Values (columns
are time indices 0 through 7)
Time 3 60 60 60
Time 4 48 60 60 48
Time 5 48 48 60 48 52
Time 6 48 48 48 48 52 56
Time 7 48 48 48 52 52 56 76

From Table 4.9, the SOVA2 component decoder produces the final reliability sequence,
in interleaved order, {48, 48, 48, 52, 52, 56, 76}. The SOVA2 component decoder
outputs the soft or L-value sequence I{L2(u)}={48, -48, 48, 52, -52, 56, -76}. The
extrinsic value sequence, obtained by subtracting SOVA2's inputs from the soft or
L-value sequence, is I{Le2(u)} = I{L2(u)} - Lcy1 - I{Le1(u)} = {28, -28, 32, 36, -36,
32, -44}. The extrinsic value sequence is deinterleaved
(I-1{I{Le2(u)}} = Le2(u) = {-36, 36, 32, -28, 28, 32, -44}) and is used to decode the
RSC1 code in the next decoding iteration. The estimated bit sequence
I{u'}={1 -1 1 1 -1 1 -1} is also deinterleaved (I-1{I{u'}} = u' = {-1 1 1 -1 1}, not
including the tail bits) to produce the estimated information sequence. After mapping
from the {-1,+1} domain to the {0,1} domain, the estimated information sequence is
u'=u={01101}, which equals the transmitted information sequence.
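These last steps (extrinsic subtraction, deinterleaving by reversing the size-5 frame while leaving the tail symbols in place, and the {-1,+1} to {0,1} mapping) can be checked with a short sketch that follows the example's conventions:

```python
def deinterleave(seq, frame=5):
    """Reverse the first `frame` symbols; tail symbols keep their order."""
    return seq[:frame][::-1] + seq[frame:]

I_L2  = [48, -48, 48, 52, -52, 56, -76]   # SOVA2 soft outputs I{L2(u)}
Lcy1  = [4, -4, 4, 4, -4, 4, -4]          # SOVA2's systematic input
I_Le1 = [16, -16, 12, 12, -12, 20, -28]   # SOVA2's a-priori input

I_Le2 = [l - s - a for l, s, a in zip(I_L2, Lcy1, I_Le1)]
Le2 = deinterleave(I_Le2)                 # fed back to SOVA1 next iteration
bits = deinterleave([1 if l > 0 else -1 for l in I_L2])[:5]  # drop tail bits
u_hat = [(b + 1) // 2 for b in bits]      # map {-1,+1} back to {0,1}
print(Le2)    # -> [-36, 36, 32, -28, 28, 32, -44]
print(u_hat)  # -> [0, 1, 1, 0, 1]
```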
