
7th International Symposium on Power-Line Communications and Its Applications

Kyoto, Japan, March 26-28, 2003


Session B2: Coding and Modulation

Viterbi Decoding for Convolutional Code over Class A Noise Channel

Yuki NAKANO† , Daisuke UMEHARA‡ , Makoto KAWAI‡ , and Yoshiteru MORIHIRO‡



† School of Electrical and Electronic Engineering, Kyoto University
‡ Graduate School of Informatics, Kyoto University
Yoshidahonmachi, Sakyo-ku, Kyoto 606-8501, Japan
Phone: +81-75-753-5960, Fax: +81-75-753-5349
E-mail: yuki@forest.kuee.kyoto-u.ac.jp, {umehara,kawaki,morihiro}@i.kyoto-u.ac.jp

Abstract

Power line channels often suffer from impulsive interference generated by electrical appliances, and power line communication (PLC) is therefore degraded by such interference. Middleton's Class A noise model is frequently used to model this kind of impulsive noise environment.

In this paper, we deal with Viterbi decoding of convolutional codes over the additive white Class A noise (AWAN) channel. We propose a Viterbi decoder for convolutional codes optimized for the AWAN channel. In addition, we show the performance of the proposed Viterbi decoder in a Class A noise environment by computer simulation.

1. Introduction

The residential power line is one of the most attractive communication media for home networking, since almost every room in a house has power outlets. However, many electric appliances frequently generate man-made electromagnetic noise on the power line channel. Such man-made noise is impulsive, and it is one of the technical obstacles to realizing high-rate, high-reliability power line communications. We adopt Middleton's Class A noise model as a statistical model for such impulsive noise. The Class A noise channel proposed by Middleton [1] is a non-Gaussian noise channel and is currently applied to the modeling of man-made impulsive noise channels, for example wireless channels and power line channels.

The statistical characteristics of Class A noise differ greatly from those of Gaussian noise. Therefore, conventional decoders optimized for the additive white Gaussian noise (AWGN) channel are, in general, not suitable for a Class A noise environment. A variety of optimum receivers for Class A noise have been discussed [2–10]. On the assumption that there are many independent, identically distributed (iid) Class A noise samples in one symbol duration, i.e. that the noise bandwidth is wider than the signal bandwidth, optimum receivers for single-carrier modulations [2–4] and trellis coded modulation (TCM) [5, 6] have been developed, and it has been shown that their bit error rates (BERs) are lower than those of receivers optimized for Gaussian noise. On the other hand, Häring et al. have proposed an effective iterative detection using an MMSE (minimum mean square error) estimator for complex-number codes including OFDM (orthogonal frequency division multiplexing) and PSS (parallel transmitting spread spectrum) [7, 8]. Also, Umehara et al. have shown an iterative detection using a normalized LOBD (locally optimum Bayes detector) for a PCSS (parallel combinatory spread spectrum) system [9, 10]. These iterative detections improve remarkably on conventional decoding under the assumption that the signal bandwidth is equal to the noise bandwidth. However, Viterbi decoding of convolutional codes optimized for Class A noise has not been discussed under this assumption.

In this paper, we propose an optimum Viterbi decoder for convolutional codes over the AWAN channel. One of the advantages of our proposed decoder is that there is no need to reconfigure the conventional Viterbi decoder: the proposed decoder is obtained simply by placing a preprocessing device at the front end of the conventional Viterbi decoder. We then evaluate the BER performance of our proposed decoder over the AWAN channel, and compare it with that of the conventional Viterbi decoder by computer simulation.

In Section 2, we present Middleton's Class A noise model and a simplified Class A noise model [4], and we review convolutional codes and the Viterbi decoding optimized for the AWGN channel. In Section 3, we propose a Viterbi decoding for convolutional

code over the AWAN channel. In Section 4, we show the performance of the proposed decoder over the AWAN channel by computer simulation. In Section 5, we present the conclusions.

2. Preliminaries

2.1. Impulsive noise model

2.1.1. Class A noise model

The power line channel differs from many other communication channels. One of its characteristics is the impulsive interference generated by electrical appliances connected to the power line. Such impulsive noise is one of the serious factors affecting digital communications over the power line channel, and its occurrence may cause bit or burst errors in data transmission. Middleton's Class A noise model is one of the models for impulsive noise environments. The probability density function (PDF) of Class A noise is given by

    P_{A,\Gamma,\sigma^2}(z) := \sum_{m=0}^{\infty} \frac{e^{-A} A^m}{m!} \cdot \frac{1}{\sqrt{2\pi}\,\sigma_m} \exp\left( -\frac{z^2}{2\sigma_m^2} \right),    (1)

with

    \sigma_m^2 := \frac{m/A + \Gamma}{1 + \Gamma}\,\sigma^2,

where A is the impulsive index, Γ := σ_G²/σ_I² is the GIR (Gaussian-to-impulsive noise power ratio) with Gaussian noise power σ_G² and impulsive noise power σ_I², and σ² = σ_G² + σ_I² is the total noise power. Noise z distributed according to Eq. (1) always includes the background Gaussian noise with power σ_G². The sources of impulsive noise, on the other hand, are governed by the Poisson distribution (e^{-A} A^m)/m!. If one impulsive noise source generates noise, that noise is characterized by a Gaussian PDF with variance σ_I²/A. Consequently, at a given observation time, if the number of active impulsive noise sources is m, which follows a Poisson distribution with mean A, the noise at the receiver is characterized by a Gaussian PDF with variance σ_m² = σ_G² + mσ_I²/A. The larger A is, the more continuous the impulsive noise becomes, and the closer the Class A noise is to Gaussian noise. In particular, if A is nearly equal to 10, the statistical features of Class A noise are almost the same as those of Gaussian noise [1].

2.1.2. Simplified Class A noise model

Since the PDF of Class A noise consists of an infinite series, it is intractable to develop receivers optimized for Class A noise directly. In [4], Kusao et al. simplified the PDF of Class A noise, and we utilize this simplified model. If the impulsive index A is smaller than 0.25, Eq. (1) is well approximated by the sum of the three terms for m = 0, 1, 2:

    \tilde{P}_{A,\Gamma,\sigma^2}(z) = \frac{e^{-A}}{\sqrt{2\pi}} \left[ \frac{1}{\sigma_0} e^{-z^2/2\sigma_0^2} + \frac{A}{\sigma_1} e^{-z^2/2\sigma_1^2} + \frac{A^2}{2\sigma_2} e^{-z^2/2\sigma_2^2} \right],    (2)

with

    \sigma_0^2 = \frac{\Gamma}{1+\Gamma}\,\sigma^2, \quad \sigma_1^2 = \frac{1/A + \Gamma}{1+\Gamma}\,\sigma^2, \quad \sigma_2^2 = \frac{2/A + \Gamma}{1+\Gamma}\,\sigma^2.

Further, Eq. (2) can be approximated by

    \hat{P}_{A,\Gamma,\sigma^2}(z) = \max_{m=0,1,2} \left[ \frac{e^{-A} A^m}{m!} \cdot \frac{1}{\sqrt{2\pi}\,\sigma_m} e^{-z^2/2\sigma_m^2} \right]
        = \begin{cases} e^{-A} \dfrac{1}{\sqrt{2\pi}\,\sigma_0} e^{-z^2/2\sigma_0^2}, & 0 \le |z| < a, \\[4pt] e^{-A} \dfrac{A}{\sqrt{2\pi}\,\sigma_1} e^{-z^2/2\sigma_1^2}, & a \le |z| < b, \\[4pt] e^{-A} \dfrac{A^2}{2\sqrt{2\pi}\,\sigma_2} e^{-z^2/2\sigma_2^2}, & b \le |z|, \end{cases}    (3)

with

    a = \sqrt{ \frac{2\sigma_0^2 \sigma_1^2}{\sigma_0^2 - \sigma_1^2} \ln\left( A\,\frac{\sigma_0}{\sigma_1} \right) }, \qquad
    b = \sqrt{ \frac{2\sigma_1^2 \sigma_2^2}{\sigma_1^2 - \sigma_2^2} \ln\left( A\,\frac{\sigma_1}{2\sigma_2} \right) }.

As the impulsive index A becomes smaller, Eq. (3) approximates Eq. (1) increasingly well. We call Eq. (3) the simplified Class A noise model. Several receivers optimized for the simplified Class A noise model have been proposed [4–6].

2.2. Convolutional code

Error-correcting codes are classified into two kinds: block codes and trellis codes. Convolutional codes belong to the trellis codes. In a convolutional code, each k-bit information block (k-bit string) is encoded into an n-bit code block, where n is larger than k, and the current code block is related to the last L information blocks. In this case, the code rate is defined as k/n. The code rate k/n carries not only the meaning of a rate but also the meaning that the numerator k is the information block length and the denominator n is the code block length; code rate 2/4 is therefore different from code rate 1/2. The L is the number of code blocks to which an information block is directly related in encoding, and is called the constraint length.
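Before turning to the encoder details, the Class A model of Section 2.1 can be illustrated numerically. The following Python sketch evaluates a truncation of the series in Eq. (1) and draws Class A samples via the Poisson-Gaussian mixture interpretation above; all function and variable names are ours, and the parameter values (the ones used later in the paper's simulations) are only illustrative.

```python
import math
import numpy as np

# Illustrative sketch of the Class A model of Section 2.1 (Eqs. (1)-(3)).
# Names and parameter choices are ours, not the paper's code.
A, gamma, sigma2 = 0.01, 0.01, 1.0   # impulsive index, GIR, total noise power

def sigma_m2(m):
    """Per-component variance sigma_m^2 = sigma^2 (m/A + GIR) / (1 + GIR)."""
    return sigma2 * (m / A + gamma) / (1.0 + gamma)

def class_a_pdf(z, terms=40):
    """Eq. (1), the exact Class A PDF, truncated to `terms` Poisson terms."""
    z = np.asarray(z, dtype=float)
    p = np.zeros_like(z)
    for m in range(terms):
        w = math.exp(-A) * A**m / math.factorial(m)   # Poisson weight of m sources
        s2 = sigma_m2(m)
        p += w * np.exp(-z**2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
    return p

def class_a_samples(n, seed=0):
    """Draw Class A noise: a Poisson(A) number of impulse sources per sample,
    then Gaussian noise with the corresponding conditional variance."""
    rng = np.random.default_rng(seed)
    m = rng.poisson(A, size=n)                    # active impulse sources
    return rng.normal(0.0, np.sqrt(sigma_m2(m)))  # Gaussian given m sources
```

With these parameters almost all samples come from the low-variance background component, while roughly one sample in a hundred is an impulse whose variance is larger by a factor of about 1/(AΓ) = 10^4, which is the heavy-tailed behavior the simplified model of Eq. (3) captures with only three components.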
By using the delay operator D, the information series (m_0 m_1 m_2 ⋯) and the code series (w_0 w_1 w_2 ⋯) are expressed as

    M(D) = m_0 + m_1 D + m_2 D^2 + \cdots,
    W(D) = w_0 + w_1 D + w_2 D^2 + \cdots,

where each element of the information series is a k-bit information block and each element of the code series is an n-bit code block. The i-th component M_i(D) (i = 1, 2, …, k) of M(D) = (M_1(D), M_2(D), …, M_k(D)) is given by

    M_i(D) = m_{i,0} + m_{i,1} D + m_{i,2} D^2 + \cdots,

and the i-th component W_i(D) (i = 1, 2, …, n) of W(D) = (W_1(D), W_2(D), …, W_n(D)) is given by

    W_i(D) = w_{i,0} + w_{i,1} D + w_{i,2} D^2 + \cdots.

The code series W(D) is expressed as

    W(D) = M(D)\,G(D), \qquad [W_1(D) \cdots W_n(D)] = [M_1(D) \cdots M_k(D)]\, G(D),

where G(D) is a k × n matrix whose elements are polynomials in D:

    G(D) = \begin{pmatrix} G_{1,1}(D) & G_{1,2}(D) & \cdots & G_{1,n}(D) \\ G_{2,1}(D) & G_{2,2}(D) & \cdots & G_{2,n}(D) \\ \vdots & \vdots & \ddots & \vdots \\ G_{k,1}(D) & G_{k,2}(D) & \cdots & G_{k,n}(D) \end{pmatrix}.

G(D) is called the generator matrix of the convolutional code, and its (i, j)-th element G_{i,j}(D) is called a generator polynomial.

In our computer simulation, we use the convolutional encoder with code rate 1/2 and constraint length 3, whose generator matrix is

    G(D) = \begin{pmatrix} 1 + D^2 & 1 + D + D^2 \end{pmatrix}.

Figure 1 illustrates the block diagram of this convolutional encoder; a block labeled D represents a shift register and a block labeled ⊕ represents an XOR gate.

Figure 1: Convolutional encoder (k/n = 1/2 and L = 3).

2.3. Viterbi decoding

The Viterbi decoding is the most widely used decoding for convolutional codes and is an asymptotically optimum decoding technique. It is an efficient, recursive algorithm that performs maximum likelihood (ML) decoding. A noisy channel causes bit errors at the receiver; the Viterbi algorithm finds the most likely bit series, that is, the one closest to the actual received series. The Viterbi decoder uses the redundancy imposed by the convolutional encoder to decode the bit stream and correct the errors. We first explain several terms related to Viterbi decoding.

• Trellis diagram
The trellis diagram represents the transitions of the state of the shift registers in the convolutional encoder, with time on the horizontal axis.

• Traceback depth
The traceback depth is the number of trellis stages processed before the Viterbi decoder makes a decision on a bit. For blocks of data, the best performance is achieved if decoding decisions are delayed until all input symbols have been processed. For continuous streams this is not possible, and there is no benefit in increasing the traceback depth beyond several times the constraint length; empirically, a traceback depth of about 5 to 6 times the constraint length of the convolutional encoder is known to be adequate over the AWGN channel.

• Hard/soft decision
With hard decision, the receiver delivers a hard symbol, equivalent to a binary ±1, to the Viterbi decoder. With soft decision, the receiver delivers a multilevel soft symbol representing the confidence that the bit is positive or negative.

We assume that the code series of a convolutional code with code rate k/n is transmitted over a memoryless stationary channel and that the series Y = y_0 y_1 y_2 ⋯ y_{N-1} is received, where N is an arbitrarily large positive integer. The t-th block of the received series is y_t = (y_{1,t}, y_{2,t}, …, y_{n,t}), where y_{i,t} is an output symbol of the channel. The ML decoding outputs the code series W = (w_0 w_1 w_2 ⋯ w_{N-1}) whose likelihood function P(Y|W), the conditional probability of Y given W, is maximal over all possible code series. As the channel is memoryless and stationary, this likelihood function can be written as

    P(Y|W) = \prod_{t=0}^{N-1} P(y_t \,|\, w_t).
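The encoder of Figure 1 can be written out directly from its generator polynomials. The sketch below is our own minimal Python rendering of that encoder, not code from the paper.

```python
# Minimal sketch (our own naming) of the rate-1/2, constraint-length-3
# encoder of Figure 1, with generator matrix G(D) = [1 + D^2, 1 + D + D^2].

def conv_encode(bits):
    """Encode a list of information bits; emits two code bits per input bit."""
    s1 = s2 = 0          # shift-register contents: previous bit, bit before that
    out = []
    for b in bits:
        out.append(b ^ s2)        # first output:  1 + D^2
        out.append(b ^ s1 ^ s2)   # second output: 1 + D + D^2
        s1, s2 = b, s1            # shift
    return out
```

For the impulse input (1, 0, 0, 0) this yields the output 11 01 11 00, and, as a linear code, the encoding of an XOR of two inputs is the XOR of their encodings.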
Taking the logarithm,

    \ln P(Y|W) = \sum_{t=0}^{N-1} \ln P(y_t \,|\, w_t).

The ML decoding can thus be carried out by computing the logarithm ln P(y_t|w_t) for every t and every code block w_t, summing ln P(y_t|w_t) over all t, and selecting the code series corresponding to the maximal sum. The quantity ln P(y_t|w_t) for the code block w_t corresponding to a branch of the trellis diagram is called a branch metric. In general, the total number of code series W is 2^{nN}, where nN is the length of the received series Y; if nN is large, the ML decoding may require an enormous amount of computation. However, the number of series searched in the trellis can be reduced by the Viterbi decoding, a sequential trellis-search algorithm for performing the ML decoding.

We now explain the Viterbi decoding over the AWGN channel. The PDF of Gaussian noise is

    P_{\sigma^2}(z) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{z^2}{2\sigma^2} \right),

where z is a noise random variable. Using the transmitted code series w_t = (w_{1,t}, w_{2,t}, …, w_{n,t}), the noise series z_t = (z_{1,t}, z_{2,t}, …, z_{n,t}) is expressed as z_{i,t} = y_{i,t} - w_{i,t} for all i. Therefore the metric for AWGN is

    \ln P(Y|W) = \sum_{t=0}^{N-1} \ln P(y_t \,|\, w_t) = \sum_{t=0}^{N-1} \sum_{i=1}^{n} \ln P_{\sigma^2}(y_{i,t} - w_{i,t}).    (4)

The (i, t)-th term in Eq. (4) is

    \ln P_{\sigma^2}(z_{i,t}) = \ln \frac{1}{\sqrt{2\pi}\,\sigma} - \frac{z_{i,t}^2}{2\sigma^2}
                              = \ln \frac{1}{\sqrt{2\pi}\,\sigma} - \frac{y_{i,t}^2 + w_{i,t}^2}{2\sigma^2} + \frac{y_{i,t} w_{i,t}}{\sigma^2}.    (5)

Let E_b denote the bit energy; then w_{i,t} is equal to \sqrt{E_b} or -\sqrt{E_b}. Since the first and second terms and σ² in Eq. (5) are constant, we obtain the branch metric

    \sum_{i=1}^{n} y_{i,t} w_{i,t} = y_t \cdot w_t.

Therefore, the ML decoding for the AWGN channel can be carried out by selecting the code series W that has the maximum correlation with the received series Y. We call the Viterbi decoder optimized for the AWGN channel the Gaussian Viterbi decoder.

Figure 2 illustrates the BER performance of the Gaussian Viterbi decoders with hard and soft decision over the AWAN channel with impulsive index A = 0.01 and GIR Γ = 0.01. The convolutional encoder is the one defined in Figure 1, and the traceback depth of the Viterbi decoder is 24. Let G_0 and I_0 denote the power spectral densities of the background Gaussian noise and the impulsive noise, respectively.

[Figure 2: plot of Bit Error Rate versus E_b/(G_0 + I_0) [dB], with soft-decision and hard-decision curves.]
Figure 2: The BER performances of Gaussian Viterbi decoders over the AWAN channel (A = 0.01 and Γ = 0.01).

Figure 2 shows that impulsive noise degrades the Gaussian Viterbi decoder with hard decision less than the one with soft decision. This is because the hard decision acts as a limiter on the impulsive noise.

3. Viterbi Decoding Optimized for Class A Noise

In this section, we propose a Viterbi decoder optimized for the AWAN channel, which we call the Class A Viterbi decoder. Under the simplified Class A noise model \hat{P}_{A,\Gamma,\sigma^2}(z), the branch metric is approximated by

    \ln P(y_t \,|\, w_t) \approx \sum_{i=1}^{n} \ln \hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - w_{i,t}).

We derive the Class A Viterbi decoder from the Gaussian Viterbi decoder.
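For concreteness, the Gaussian Viterbi decoder used as the building block here can be sketched in Python with the correlation branch metric y_t · w_t of Section 2.3. The names below are ours, and for simplicity the sketch keeps full survivor paths rather than a finite traceback depth; it is an illustration, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of a soft-decision Viterbi decoder for the (1 + D^2, 1 + D + D^2)
# code of Figure 1, using the correlation branch metric sum_i y_{i,t} w_{i,t}.

def viterbi_decode(y, Eb=1.0):
    """y: soft received symbols, two per information bit
    (BPSK maps bit b to (2b - 1) * sqrt(Eb))."""
    y = np.asarray(y, dtype=float).reshape(-1, 2)
    NEG = float("-inf")
    metric = [0.0, NEG, NEG, NEG]     # state = (s1 << 1) | s2; start in all-zero state
    paths = [[], [], [], []]
    for y1, y2 in y:
        new_metric = [NEG] * 4
        new_paths = [[], [], [], []]
        for s in range(4):
            if metric[s] == NEG:      # state not yet reachable
                continue
            s1, s2 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                w1 = (2 * (b ^ s2) - 1) * np.sqrt(Eb)       # output of 1 + D^2
                w2 = (2 * (b ^ s1 ^ s2) - 1) * np.sqrt(Eb)  # output of 1 + D + D^2
                m = metric[s] + y1 * w1 + y2 * w2           # correlation metric
                ns = (b << 1) | s1                          # next state
                if m > new_metric[ns]:                      # keep the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmax(metric))]
```

Feeding this decoder the per-symbol quantity of Eq. (6) below, instead of the raw received symbols, is exactly what turns it into the proposed Class A Viterbi decoder.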
As w_{i,t} is \sqrt{E_b} or -\sqrt{E_b}, the difference between these two branch metrics is expressed as

    \ln \hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b}) - \ln \hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})
        = \ln \frac{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b})}{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})}.    (6)

We regard this difference as an input to the Gaussian Viterbi decoder. The branch metric of the Gaussian Viterbi decoding is then

    \sum_{i=1}^{n} \ln \frac{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b})}{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})}\, w_{i,t}.

The difference of this branch metric over a single term is

    \ln \frac{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b})}{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})}\, \sqrt{E_b}
        - \ln \frac{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b})}{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})} \left( -\sqrt{E_b} \right)
        = 2\sqrt{E_b}\, \ln \frac{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} - \sqrt{E_b})}{\hat{P}_{A,\Gamma,\sigma^2}(y_{i,t} + \sqrt{E_b})}.    (7)

Eq. (7) is a constant multiple of Eq. (6). Hence, the Gaussian Viterbi decoder whose input is Eq. (6) plays the same role as the Class A Viterbi decoder. Figure 3 illustrates the concept of our proposed Class A Viterbi decoder.

Figure 3: Class A Viterbi decoder.

We analyze the dotted-line part of Figure 3 in detail in order to realize the Class A Viterbi decoder. First, we simplify Eq. (3) to

    \hat{\hat{P}}_{A,\Gamma,\sigma^2}(z) = \sqrt{2\pi}\, e^{A}\, \hat{P}_{A,\Gamma,\sigma^2}(z)
        = \begin{cases} \dfrac{1}{\sigma_0} e^{-z^2/2\sigma_0^2} = \hat{\hat{P}}_0(z), & 0 \le |z| < a, \\[4pt] \dfrac{A}{\sigma_1} e^{-z^2/2\sigma_1^2} = \hat{\hat{P}}_1(z), & a \le |z| < b, \\[4pt] \dfrac{A^2}{2\sigma_2} e^{-z^2/2\sigma_2^2} = \hat{\hat{P}}_2(z), & b \le |z|. \end{cases}

Hence the branch metric is expressed as

    \ln \hat{\hat{P}}_{A,\Gamma,\sigma^2}(z) = \begin{cases} \ln \hat{\hat{P}}_0(z) = -\dfrac{z^2}{2\sigma_0^2} + L_0, & 0 \le |z| < a, \\[4pt] \ln \hat{\hat{P}}_1(z) = -\dfrac{z^2}{2\sigma_1^2} + L_1, & a \le |z| < b, \\[4pt] \ln \hat{\hat{P}}_2(z) = -\dfrac{z^2}{2\sigma_2^2} + L_2, & b \le |z|, \end{cases}    (8)

with

    L_0 = \ln \frac{1}{\sigma_0}, \quad L_1 = \ln \frac{A}{\sigma_1}, \quad L_2 = \ln \frac{A^2}{2\sigma_2}.

Since the logarithm is a monotonically increasing function, Eq. (8) can be written as

    \ln \hat{\hat{P}}_{A,\Gamma,\sigma^2}(z) = \max_{m=0,1,2} \left[ \ln \hat{\hat{P}}_m(z) \right].

Therefore, the dotted-line part of Figure 3 can be realized as shown in Figure 4. We call the block diagram of Figure 4 the Class A preprocessing.

Figure 4: Class A preprocessing.

The Class A Viterbi decoder consists of the Class A preprocessing followed by the Gaussian Viterbi decoder. There is no need to reconfigure the Gaussian Viterbi decoder: the Class A Viterbi decoder can be easily established by placing the Class A preprocessing at the front end of the Gaussian Viterbi decoder.

4. Simulation Results

In this section, we discuss the performance of our proposed Class A Viterbi decoder through simulation results. The simulation model is designed as a low-pass equivalent model. Figure 5 illustrates the block diagram of our simulation system.
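The Class A preprocessing block in this chain implements Eq. (8) as the maximum of three parabolas and then forms the per-symbol difference of Eq. (6). The Python sketch below is our own illustration, with illustrative parameter values and our own names:

```python
import numpy as np

# Sketch of the Class A preprocessing of Figure 4 (names and parameter
# values are ours). Its output is fed to an unmodified soft-decision
# Gaussian Viterbi decoder.
A, gamma, sigma2, Eb = 0.01, 0.01, 1.0, 1.0
s2 = [sigma2 * (m / A + gamma) / (1.0 + gamma) for m in range(3)]  # sigma_0^2 .. sigma_2^2
L = [np.log(1.0 / np.sqrt(s2[0])),           # L0 = ln(1/sigma_0)
     np.log(A / np.sqrt(s2[1])),             # L1 = ln(A/sigma_1)
     np.log(A**2 / (2.0 * np.sqrt(s2[2])))]  # L2 = ln(A^2/(2 sigma_2))

def ln_p_hat(z):
    """Eq. (8): ln P-hat-hat(z) = max_m ( -z^2/(2 sigma_m^2) + L_m )."""
    z = np.asarray(z, dtype=float)
    return np.max([-z**2 / (2.0 * s2[m]) + L[m] for m in range(3)], axis=0)

def preprocess(y):
    """Eq. (6): the per-symbol quantity handed to the Gaussian Viterbi decoder."""
    return ln_p_hat(y - np.sqrt(Eb)) - ln_p_hat(y + np.sqrt(Eb))
```

Near y = ±√E_b the narrow background parabola dominates and the output is large, while a strong impulse (|y| ≫ √E_b) falls on the wide σ_1², σ_2² parabolas and is heavily attenuated, mirroring the limiter effect noted for hard decision in the discussion of Figure 2.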
We compare the performance of our proposed Class A Viterbi decoder to that of the Gaussian Viterbi decoder with soft decision over the AWAN channel with impulsive index A = 0.01 and GIR Γ = 0.01. The convolutional encoder is the one defined in Figure 1, and the traceback depth of both Viterbi decoders is 24.

[Figure 5: block diagram with Random Binary Generator, Convolutional Encoder, BPSK Modulation, AWAN Channel, Re(·), Class A Preprocessing, Viterbi Decoder, and BER Calculation blocks.]
Figure 5: The block diagram of the simulation system.

Figure 6 illustrates the BER performances of the Class A Viterbi decoder and the Gaussian Viterbi decoder over the AWAN channel.

[Figure 6: plot of Bit Error Rate versus E_b/(G_0 + I_0) [dB], with curves for the Gaussian Viterbi decoder and the Class A Viterbi decoder.]
Figure 6: The BER performances of Viterbi decoders over the AWAN channel (A = 0.01 and Γ = 0.01).

As shown in Figure 6, our proposed Class A Viterbi decoder achieves a decoding gain of about 30 dB at BER = 10^{-5} compared to the Gaussian Viterbi decoder. These simulation results therefore show that our proposed Class A Viterbi decoder is useful in a Class A noise environment.

5. Conclusion

In this paper, we have discussed the Viterbi decoder for convolutional codes over the AWAN channel. Our proposed Viterbi decoder consists of a nonlinear preprocessing device followed by the conventional Viterbi decoder, and can therefore be easily established by placing the preprocessing device at the front end of the conventional Viterbi decoder. Furthermore, we have compared the BER performance of our proposed Viterbi decoder with that of the conventional Viterbi decoder by computer simulation. Consequently, we have shown that our proposed Viterbi decoder outperforms the conventional Viterbi decoder over the AWAN channel.

References

[1] D. Middleton, "Statistical-physical model of electromagnetic interference," IEEE Trans. Electromagn. Compat., vol. EMC-19, no. 3, pp. 106–126, Aug. 1977.

[2] A. D. Spaulding and D. Middleton, "Optimum reception in an impulsive interference environment - Part I: Coherent detection," IEEE Trans. Commun., vol. COM-25, no. 9, pp. 910–923, Sep. 1977.

[3] A. D. Spaulding and D. Middleton, "Optimum reception in an impulsive interference environment - Part II: Incoherent detection," IEEE Trans. Commun., vol. COM-25, no. 9, pp. 924–934, Sep. 1977.

[4] H. Kusao, N. Morinaga, and T. Namekawa, "Optimum coherent receiver for impulsive RF noise," IEICE Trans. B, vol. J68-B, no. 6, pp. 684–691, June 1985. (in Japanese)

[5] S. Miyamoto, M. Katayama, and N. Morinaga, "Signal detection characteristics in trellis coded modulation system under impulsive noise environment and its optimum reception," IEICE Trans. B-II, vol. J75-B-II, no. 10, pp. 671–681, Oct. 1992. (in Japanese)

[6] S. Miyamoto, M. Katayama, and N. Morinaga, "Design of TCM signals for class-A impulsive noise environment," IEICE Trans. Commun., vol. E78-B, no. 2, pp. 253–259, Feb. 1995.

[7] J. Häring and A. J. Han Vinck, "Coding for impulsive noise channels," Proc. ISPLC 2001, Malmö, Sweden, pp. 103–108, April 2001.

[8] J. Häring and A. J. Han Vinck, "Iterative decoding of codes over complex numbers," Proc. ISIT 2001, Washington, D.C., USA, p. 25, June 2001.

[9] D. Umehara, M. Kawai, and Y. Morihiro, "An iterative detection for M-ary SS system over impulsive noise channel," Proc. ISPLC 2002, Athens, Greece, pp. 203–207, March 2002.

[10] D. Umehara, M. Kawai, and Y. Morihiro, "A suboptimal receiver for PCSS system over Class A noise channel," Proc. ISITA 2002, Xi'an, PRC, pp. 775–778, Oct. 2002.
