
Equalization in a wideband TDMA system


Three basic equalization methods
Linear equalization (LE)
Decision feedback equalization (DFE)
Sequence estimation (MLSE-VA)
Example of channel estimation circuit

Three basic equalization methods (1)


Linear equalization (LE):
Performance is not very good when the frequency response
of the frequency selective channel contains deep fades.
Zero-forcing algorithm aims to eliminate the intersymbol
interference (ISI) at decision time instants (i.e. at the
center of the bit/symbol interval).
Least-mean-square (LMS) algorithm will be investigated in
greater detail in this presentation.
Recursive least-squares (RLS) algorithm offers faster
convergence, but is computationally more complex than
LMS (since matrix inversion is required).

Three basic equalization methods (2)


Decision feedback equalization (DFE):
Performance better than LE, due to ISI cancellation of tails
of previously received symbols.
Decision feedback equalizer structure:
[Block diagram: the received input feeds a feed-forward filter (FFF); previous symbol decisions feed a feed-back filter (FBF); the two filter outputs are combined in a summing node, followed by the symbol decision giving the output; the error signal is used for adjustment of the filter coefficients.]

Three basic equalization methods (3)


Maximum Likelihood Sequence Estimation using
the Viterbi Algorithm (MLSE-VA):
Best performance. Operation of the Viterbi algorithm can be
visualized by means of a trellis diagram with $m^{K-1}$ states,
where m is the symbol alphabet size and K is the length of
the overall channel impulse response (in samples).
[Figure: state trellis diagram (state vs. sample time instants) showing the allowed transitions between states.]

Linear equalization, zero-forcing algorithm


Basic idea: the cascade of the transmitted symbol spectrum B(f), the channel frequency response H(f) (incl. T & R filters) and the equalizer frequency response E(f) should equal a raised cosine spectrum:

$$Z(f) = B(f)\,H(f)\,E(f)$$

[Figure: the spectra B(f), H(f), E(f) and the resulting Z(f) plotted over the frequency range 0 ... f_s = 1/T.]
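As a rough numerical sketch of this idea (all spectra below are made-up assumptions, not values from the slides): computing E(f) = Z(f)/(B(f)H(f)) on a frequency grid shows that a deep fade in H(f) forces a large equalizer gain, which is the noise-enhancement problem of linear equalization mentioned earlier.

```python
import numpy as np

T = 1.0                                   # symbol interval (assumed)
f = np.linspace(0.01, 0.99, 99) / T       # frequency grid, avoiding f = 0 and f = 1/T

B = np.ones_like(f)                       # transmitted symbol spectrum B(f), assumed flat
beta = 0.5                                # roll-off of the target raised cosine Z(f)
Z = np.where(f < (1 - beta) / (2 * T), 1.0,
             0.5 * (1 + np.cos(np.pi * T / beta * (f - (1 - beta) / (2 * T)))))
Z = np.where(f > (1 + beta) / (2 * T), 0.0, Z)
H = 1.0 - 0.95 * np.exp(-(f - 0.3 / T) ** 2 / (2 * 0.03 ** 2))   # channel with a deep fade

E = Z / (B * H)                           # zero-forcing equalizer: E(f) = Z(f) / (B(f) H(f))
print("max |E(f)| =", np.abs(E).max())    # large gain at the fade -> noise enhancement
```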

Zero-forcing equalizer
[Block diagram: transmitted impulse sequence -> communication channel -> received samples r(k) -> equalizer -> z(k), the input to the decision circuit. The channel impulse response h(k) is modelled as an FIR filter with 2N+1 coefficients, the equalizer impulse response c(k) as an FIR filter with 2M+1 coefficients; their cascade is the overall channel, an equivalent FIR filter with coefficients f(k).]

The coefficients of the equivalent overall FIR filter are

$$f_k = \sum_{m=-M}^{M} c_m h_{k-m}, \qquad -M \le k \le M$$
(in fact the equivalent FIR filter consists of 2M+1+2N coefficients,


but the equalizer can only handle 2M+1 equations)
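A minimal sketch of the cascade (the channel and equalizer taps below are arbitrary example values): the overall response is simply the convolution of the 2N+1 channel coefficients with the 2M+1 equalizer coefficients.

```python
import numpy as np

h = np.array([0.1, 0.9, 0.3])                  # assumed channel taps, 2N+1 = 3  (N = 1)
c = np.array([-0.05, 0.02, 1.1, -0.3, 0.04])   # assumed equalizer taps, 2M+1 = 5  (M = 2)

f = np.convolve(h, c)                          # overall response f_k = sum_m c_m h_(k-m)
print(len(f))                                  # 2M+1+2N = 7 coefficients, of which only
                                               # 2M+1 = 5 can be controlled by the equalizer
```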

Zero-forcing equalizer
We want the overall filter response to be non-zero at the decision time instant k = 0 and zero at all other sampling times k ≠ 0:

$$f_k = \sum_{m=-M}^{M} c_m h_{k-m} = \begin{cases} 1, & k = 0 \\ 0, & k \ne 0 \end{cases}$$

This leads to a set of 2M+1 equations:

$$h_0 c_{-M} + h_{-1} c_{-M+1} + \dots + h_{-2M} c_M = 0 \qquad (k = -M)$$
$$h_1 c_{-M} + h_0 c_{-M+1} + \dots + h_{-2M+1} c_M = 0$$
$$\vdots$$
$$h_M c_{-M} + h_{M-1} c_{-M+1} + \dots + h_{-M} c_M = 1 \qquad (k = 0)$$
$$\vdots$$
$$h_{2M-1} c_{-M} + h_{2M-2} c_{-M+1} + \dots + h_{-1} c_M = 0$$
$$h_{2M} c_{-M} + h_{2M-1} c_{-M+1} + \dots + h_0 c_M = 0 \qquad (k = M)$$
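A hedged sketch of solving this equation set numerically (the channel taps and the helper function name are illustrative assumptions): build the (2M+1) x (2M+1) convolution matrix whose rows correspond to k = -M, ..., M and solve for c_{-M}, ..., c_M.

```python
import numpy as np

def zero_forcing_taps(h, M, center):
    """Solve the 2M+1 equations sum_m c_m h_(k-m) = delta_k for -M <= k <= M.

    h      : channel impulse response taps (1-D array)
    center : index in h that corresponds to h_0 (the main tap)
    """
    A = np.zeros((2 * M + 1, 2 * M + 1))
    for row, k in enumerate(range(-M, M + 1)):
        for col, m in enumerate(range(-M, M + 1)):
            idx = center + k - m                  # position of h_(k-m) inside the array h
            if 0 <= idx < len(h):
                A[row, col] = h[idx]
    d = np.zeros(2 * M + 1)
    d[M] = 1.0                                    # force f_0 = 1 and f_k = 0 elsewhere
    return np.linalg.solve(A, d)

h = np.array([0.1, 1.0, 0.4, -0.2])               # assumed channel, main tap h_0 = h[1]
c = zero_forcing_taps(h, M=3, center=1)
print(np.round(np.convolve(h, c), 3))             # ~[... 0 0 0 1 0 0 0 ...] around the center
```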

Minimum Mean Square Error (MMSE)


The aim is to minimize

$$J = E\left[\,|e_k|^2\,\right], \qquad e_k = z_k - b_k \quad (\text{or } b_k - z_k \text{, depending on the source})$$


[Block diagram: transmitted symbols b_k -> channel -> received samples r(k) -> equalizer -> z_k, the input to the decision circuit -> symbol decision -> b̂_k, the estimate of the k:th symbol. The error e_k is formed by comparing the equalizer output z_k with the (known or estimated) symbol.]

MSE vs. equalizer coefficients


$$J = E\left[\,|e_k|^2\,\right]$$ is a quadratic multi-dimensional function of the equalizer coefficient values.

[Figure: the bowl-shaped error surface J plotted over two coefficients c_1 and c_2; an illustration of the case of two real-valued equalizer coefficients (or one complex-valued coefficient).]

MMSE aim: find minimum value directly (Wiener solution),


or use an algorithm that recursively changes the equalizer
coefficients in the correct direction (towards the minimum
value of J)!

Wiener solution
We start with the Wiener-Hopf equations in matrix form:

$$\mathbf{R}\,\mathbf{c}_{opt} = \mathbf{p}$$

R = correlation matrix (M x M) of the received (sampled) signal values r_k
p = vector (of length M) indicating the cross-correlation between the received signal values r_k and the estimate of the received symbol b̂_k
c_opt = vector (of length M) consisting of the optimal equalizer coefficient values

(We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)

Correlation matrix R & vector p


$$\mathbf{R} = E\left[\,\mathbf{r}(k)\,\mathbf{r}^{*T}(k)\,\right], \qquad \mathbf{p} = E\left[\,\mathbf{r}(k)\,b_k^{*}\,\right]$$

where $\mathbf{r}(k) = \left[\,r_k,\, r_{k-1},\, \dots,\, r_{k-M+1}\,\right]^T$ contains the M most recent samples.

Before we can perform the expectation operation, we must know the stochastic properties of the transmitted signal (and of the channel, if it is changing). Usually we do not have this information => some non-stochastic algorithm like the least-mean-square (LMS) algorithm must be used.

Algorithms
Stochastic information (R and p) is available:
1. Direct solution of the Wiener-Hopf equations:

$$\mathbf{R}\,\mathbf{c}_{opt} = \mathbf{p} \;\Rightarrow\; \mathbf{c}_{opt} = \mathbf{R}^{-1}\mathbf{p}$$

(inverting a large matrix is difficult!)

2. Newton's algorithm (fast iterative algorithm)
3. Method of steepest descent (this iterative algorithm is slow but easier to implement)

R and p are not available:


Use an algorithm that is based on the received signal sequence
directly. One such algorithm is Least-Mean-Square (LMS).
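A small sketch of the direct Wiener solution under assumed conditions (real-valued BPSK training symbols and a made-up channel): R and p are approximated by time averages over a training block and the Wiener-Hopf equations are then solved; this sample-averaging step is exactly what LMS avoids doing explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
Mtaps = 7                                    # equalizer length (M taps, as on this slide)
b = rng.choice([-1.0, 1.0], size=5000)       # assumed BPSK training symbols
h = np.array([0.8, 0.4, -0.2])               # assumed channel impulse response
r = np.convolve(b, h)[:len(b)] + 0.05 * rng.standard_normal(len(b))   # received samples

R = np.zeros((Mtaps, Mtaps))
p = np.zeros(Mtaps)
for k in range(Mtaps, len(b)):
    rk = r[k - Mtaps + 1:k + 1][::-1]        # r(k) = [r_k, r_(k-1), ..., r_(k-M+1)]
    R += np.outer(rk, rk)                    # sample estimate of E[r(k) r^T(k)]
    p += rk * b[k]                           # sample estimate of E[r(k) b_k*] (real case)
R /= len(b) - Mtaps
p /= len(b) - Mtaps

c_opt = np.linalg.solve(R, p)                # direct solution of R c_opt = p
print(np.round(c_opt, 3))
```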

Conventional linear equalizer of LMS type


[Block diagram: the received complex signal samples r_{k+M}, ..., r_{k-M} enter a transversal FIR filter with 2M+1 filter taps and complex-valued tap coefficients c_{-M}, ..., c_M (Widrow). The filter output z_k is compared with b̂_k, the estimate of the k:th symbol after the symbol decision, to form the error e_k, which the LMS algorithm uses for adjustment of the tap coefficients.]

Joint optimization of coefficients and phase


[Block diagram: the received samples r(k) pass through the equalizer filter and a phase rotation (phase synchronization) to give z_k; the error e_k formed against b̂_k drives both the coefficient updating and the phase synchronization (Godard).]

Proakis, Ed.3, Section 11-5-2

Minimize:

$$J = E\left[\,|e_k|^2\,\right], \qquad e_k = z_k - b_k = \sum_{m=-M}^{M} c_m r_{k-m}\, e^{-j\phi} - b_k$$

Least-mean-square (LMS) algorithm


(derived from method of steepest descent)
for convergence towards minimum mean square error (MMSE)

Real part of the n:th coefficient:

$$\mathrm{Re}\,c_n(i+1) = \mathrm{Re}\,c_n(i) - \Delta \frac{\partial \left(e_k e_k^*\right)}{\partial\, \mathrm{Re}\,c_n}$$

Imaginary part of the n:th coefficient:

$$\mathrm{Im}\,c_n(i+1) = \mathrm{Im}\,c_n(i) - \Delta \frac{\partial \left(e_k e_k^*\right)}{\partial\, \mathrm{Im}\,c_n}$$

Phase:

$$\phi(i+1) = \phi(i) - \Delta \frac{\partial \left(e_k e_k^*\right)}{\partial \phi}$$

Here i is the iteration index, Δ is the step size of the iteration, and $|e_k|^2 = e_k e_k^*$. Altogether 2(2M+1)+1 equations.

LMS algorithm (cont.)


After some calculation, the recursion equations are obtained in
the form

$$\mathrm{Re}\,c_n(i+1) = \mathrm{Re}\,c_n(i) - 2\Delta\, \mathrm{Re}\left\{ \left( \sum_{m=-M}^{M} c_m r_{k-m} - b_k e^{j\phi} \right) r_{k-n}^{*} \right\}$$

$$\mathrm{Im}\,c_n(i+1) = \mathrm{Im}\,c_n(i) - 2\Delta\, \mathrm{Im}\left\{ \left( \sum_{m=-M}^{M} c_m r_{k-m} - b_k e^{j\phi} \right) r_{k-n}^{*} \right\}$$

$$\phi(i+1) = \phi(i) - 2\Delta\, \mathrm{Im}\left\{ b_k e^{j\phi} \sum_{m=-M}^{M} c_m^{*} r_{k-m}^{*} \right\}$$

(note that the term in parentheses in the first two recursions is $e_k e^{j\phi}$)
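A hedged sketch of these recursions as an adaptive loop (assumed QPSK training symbols, a made-up static channel and phase offset; the variable names are illustrative). The complex tap update below combines the real-part and imaginary-part recursions into a single line.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3                                         # equalizer has 2M+1 = 7 taps
delta = 0.01                                  # step size of the iteration
c = np.zeros(2 * M + 1, dtype=complex)
c[M] = 1.0                                    # start from a pass-through equalizer
phi = 0.0                                     # carrier phase estimate

# Assumed training data: QPSK through a mildly dispersive channel with a phase offset.
b = (rng.choice([-1, 1], 2000) + 1j * rng.choice([-1, 1], 2000)) / np.sqrt(2)
h = np.array([0.9, 0.3 + 0.2j, -0.1])
r = np.convolve(b, h)[:len(b)] * np.exp(1j * 0.4)
r += 0.03 * (rng.standard_normal(len(b)) + 1j * rng.standard_normal(len(b)))

for k in range(M, len(b) - M):
    rk = r[k - M:k + M + 1][::-1]             # [r_(k+M), ..., r_(k-M)], aligned with c_(-M)..c_M
    s = np.dot(c, rk)                         # filter output before the phase rotation
    z = np.exp(-1j * phi) * s                 # equalizer output z_k
    e = z - b[k]                              # error e_k = z_k - b_k
    c -= 2 * delta * e * np.exp(1j * phi) * np.conj(rk)              # coefficient recursions
    phi -= 2 * delta * np.imag(b[k] * np.exp(1j * phi) * np.conj(s)) # phase recursion

print("final |e_k| =", round(abs(e), 3))
```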

Effect of iteration step size

Smaller Δ: slow acquisition, poor tracking performance.
Larger Δ: poor stability, large variation around the optimum value.

Decision feedback equalizer


[Block diagram: the received samples r_{k+M}, ..., r_{k-M} feed the feed-forward filter (FFF) with taps c_{-M}, ..., c_M; the previous decisions b̂_{k-1}, ..., b̂_{k-Q} feed the feed-back filter (FBF) with taps q_1, ..., q_Q. The two filter outputs are combined to form z_k, which goes to the symbol decision giving b̂_k. The error e_k drives the LMS algorithm for tap coefficient adjustment.]

Decision feedback equalizer (cont.)


The purpose is again to minimize $J = E\left[\,|e_k|^2\,\right]$, where

$$e_k = z_k - b_k = \sum_{m=-M}^{M} c_m r_{k-m} - \sum_{n=1}^{Q} q_n \hat{b}_{k-n} - b_k$$

The feedforward filter (FFF) is similar to the filter in a linear equalizer:
tap spacing smaller than the symbol interval is allowed
=> fractionally spaced equalizer
=> oversampling by a factor of 2 or 4 is common
The feedback filter (FBF) is used for either reducing or canceling (difference: see next slide) the samples of previous symbols at the decision time instants:
tap spacing must be equal to the symbol interval
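A minimal sketch of one DFE decision step under the sign convention used above (FBF output subtracted from the FFF output); all tap values, samples and past decisions are made-up placeholders.

```python
import numpy as np

M, Q = 2, 3
c = np.array([0.02, -0.1, 1.05, -0.2, 0.05])      # assumed FFF taps c_(-M) .. c_M
q = np.array([0.35, -0.12, 0.04])                 # assumed FBF taps q_1 .. q_Q
r_window = np.array([0.1, 1.2, -0.8, 0.3, -0.1])  # r_(k+M), ..., r_(k-M)  (assumed samples)
b_past = np.array([1.0, -1.0, 1.0])               # previous decisions b^_(k-1), ..., b^_(k-Q)

z_k = np.dot(c, r_window) - np.dot(q, b_past)     # FFF output minus FBF output
b_hat = np.sign(z_k)                              # symbol decision (BPSK assumed)
e_k = z_k - b_hat                                 # error (decision-directed) driving LMS
print(z_k, b_hat, e_k)
```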

Decision feedback equalizer (cont.)


The coefficients of the feedback filter (FBF) can be
obtained in either of two ways:
Recursively (using the LMS algorithm) in a similar
fashion as FFF coefficients
Proakis, Ed.3, Section 11-2
By calculation from FFF coefficients and channel
coefficients (we achieve exact ISI cancellation in
this way, but channel estimation is necessary):

$$q_n = \sum_{m=-M}^{M} c_m h_{n-m}, \qquad n = 1, 2, \dots, Q$$

Proakis, Ed.3, Section 10-3-1
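A small sketch of this calculation (assumed tap values): q_n is the n:th postcursor tap of the convolution of the FFF with the estimated channel.

```python
import numpy as np

M, Q = 2, 3
c = np.array([0.02, -0.1, 1.05, -0.2, 0.05])   # FFF taps c_(-M) .. c_M (assumed values)
h = np.array([0.9, 0.4, -0.15, 0.05])          # estimated channel taps h_0 .. h_3 (assumed)

# q_n = sum_m c_m h_(n-m): the postcursor part (n = 1..Q) of the overall response f = c * h.
f = np.convolve(c, h)                          # overall response; index 0 corresponds to k = -M
q = f[M + 1:M + 1 + Q]                         # the taps f_1 .. f_Q
print(np.round(q, 4))
```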

Channel estimation circuit


Proakis, Ed.3, Section 11-3

[Block diagram: the estimated symbols b̂_k enter a transversal filter (tap spacing T, taps c_0, c_1, ..., c_M; filter length = CIR length). The filter output r̂_k is a replica of the k:th sample of the received signal, r_k; the error r_k − r̂_k drives the LMS algorithm that adjusts the taps, and the converged taps are the estimated channel coefficients ĥ_m = c_m.]
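A hedged sketch of the circuit as an adaptive loop (assumed BPSK training symbols and a made-up true channel): the filter taps converge towards the channel coefficients because the error r_k − r̂_k is minimized when c_m = h_m.

```python
import numpy as np

rng = np.random.default_rng(2)
h_true = np.array([0.8, -0.4, 0.25, 0.1])            # assumed true channel (CIR length 4)
L = len(h_true)                                      # filter length = CIR length
b = rng.choice([-1.0, 1.0], 1000)                    # known training symbols b_k
r = np.convolve(b, h_true)[:len(b)] + 0.02 * rng.standard_normal(len(b))

c = np.zeros(L)                                      # tap estimates c_0 .. c_(L-1)
delta = 0.05                                         # LMS step size
for k in range(L - 1, len(b)):
    b_vec = b[k - L + 1:k + 1][::-1]                 # [b_k, b_(k-1), ..., b_(k-L+1)]
    r_hat = np.dot(c, b_vec)                         # replica of the k:th received sample
    err = r[k] - r_hat                               # estimation error
    c += 2 * delta * err * b_vec                     # LMS tap update

print(np.round(c, 3))                                # ~ h_true: estimated channel coefficients
```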

Channel estimation circuit (cont.)


1. Acquisition phase
Uses training sequence
Symbols are known at the receiver, so b̂_k = b_k.
2. Tracking phase
Uses estimated symbols (decision directed mode)
Symbol estimates are obtained from the decision
circuit (note the delay in the feedback loop!)
Since the estimation circuit is adaptive, time-varying
channel coefficients can be tracked to some extent.
Alternatively: blind estimation (no training sequence)

Channel estimation circuit in receiver


Mandatory for MLSE-VA, optional for DFE

[Block diagram: during training, the channel estimation circuit is fed with the training symbols b(k) (no errors); during tracking, it is fed with the symbol estimates b̂(k) (with errors) from the equalizer & decision circuit. From the received signal samples r(k) it produces the estimated channel coefficients ĥ(m), which are used by the equalizer & decision circuit to produce the clean output symbols b̂(k).]

Theoretical ISI cancellation receiver


(extension of DFE, for simulation of matched filter bound)

[Block diagram: the received samples pass through a filter matched to the sampled channel impulse response. Precursor cancellation of future symbols uses the estimates b̂_{k+P}, ..., b̂_{k+1}; postcursor cancellation of previous symbols uses b̂_{k-1}, ..., b̂_{k-Q}. Both cancellation terms are combined with the matched filter output in a summing node to give the decision variable for b̂_k.]

If previous and future symbols can be estimated without error


(impossible in a practical system), matched filter performance
can be achieved.

MLSE-VA receiver structure


[Block diagram: the received signal r(t) passes through a matched filter and a NW filter, producing the samples y(k), which are processed by the MLSE (VA) block to give the symbol estimates b̂(k). A channel estimation circuit, driven by the symbol estimates, provides the overall channel estimate f̂(k) used by the receiver.]

MLSE-VA circuit causes delay of estimated symbol sequence


before it is available for channel estimation
=> channel estimates may be out-of-date
(in a fast time-varying channel)

MLSE-VA receiver structure (cont.)


The probability of receiving the sample sequence y (note: vector form) of length N, conditioned on a certain symbol sequence estimate and overall channel estimate, factorizes since we have AWGN and the noise samples are uncorrelated due to the NW (= noise whitening) filter:

$$p\left(\mathbf{y} \mid \hat{\mathbf{b}}, \hat{\mathbf{f}}\right) = \prod_{k=1}^{N} p\left(y_k \mid \hat{\mathbf{b}}, \hat{\mathbf{f}}\right) = \left(2\pi\sigma^2\right)^{-N/2} \exp\left( -\frac{1}{2\sigma^2} \sum_{k=1}^{N} \left| y_k - \sum_{n=0}^{K-1} \hat{f}_n \hat{b}_{k-n} \right|^2 \right)$$

where K is the length of f̂(k). Objective: find the symbol sequence that maximizes this probability. Equivalently, the sum

$$\sum_{k=1}^{N} \left| y_k - \sum_{n=0}^{K-1} \hat{f}_n \hat{b}_{k-n} \right|^2$$

is the metric to be minimized (select the best b̂ using the VA).
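A compact sketch of this minimization with the VA (binary symbols, a made-up short channel estimate; function and variable names are illustrative, not from the slides): a state is the K−1 previous symbols, and each transition adds the branch metric |y_k − Σ_n f̂_n b_{k-n}|².

```python
import numpy as np
from itertools import product

def mlse_viterbi(y, f_hat, alphabet=(-1.0, 1.0)):
    """Return the symbol sequence minimizing sum_k |y_k - sum_n f_hat[n] b_(k-n)|^2."""
    K = len(f_hat)                                   # overall CIR length (in samples)
    states = list(product(alphabet, repeat=K - 1))   # m^(K-1) states: (b_(k-1), ..., b_(k-K+1))
    cost = {s: 0.0 for s in states}                  # accumulated metrics
    paths = {s: [] for s in states}                  # survivor paths
    for yk in y:
        new_cost, new_paths = {}, {}
        for s in states:
            for bk in alphabet:                      # candidate symbol b_k
                window = (bk,) + s                   # (b_k, b_(k-1), ..., b_(k-K+1))
                branch = abs(yk - np.dot(f_hat, window)) ** 2
                ns = window[:-1]                     # next state drops the oldest symbol
                metric = cost[s] + branch
                if ns not in new_cost or metric < new_cost[ns]:
                    new_cost[ns] = metric            # keep only the best transition (survivor)
                    new_paths[ns] = paths[s] + [bk]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)
    return paths[best]

# Assumed example: K = 3 channel estimate, 2^(K-1) = 4 states, lightly noisy samples.
f_hat = np.array([0.9, 0.4, -0.2])
b_true = np.array([1, -1, -1, 1, 1, -1, 1, -1], dtype=float)
rng = np.random.default_rng(3)
y = np.convolve(b_true, f_hat)[:len(b_true)] + 0.05 * rng.standard_normal(len(b_true))
print(mlse_viterbi(y, f_hat))                        # should recover b_true at this noise level
```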

MLSE-VA receiver structure (cont.)


We want to choose that symbol sequence estimate and overall
channel estimate which maximizes the conditional probability.
Since product of exponentials <=> sum of exponents, the
metric to be minimized is a sum expression.
If the length of the overall channel impulse response in
samples (or channel coefficients) is K, in other words the time
span of the channel is (K-1)T, the next step is to construct a
state trellis where a state is defined as a certain combination
of K-1 previous symbols causing ISI on the k:th symbol.

[Figure: the overall channel impulse response f̂(k), k = 0, ..., K-1. Note: this is the overall CIR, including the response of the matched filter and the NW filter.]

MLSE-VA receiver structure (cont.)


At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m = 2, K = 5):

At time k-3:  1 0 0 1 0
At time k-2:  1 0 0 1 0 0
At time k-1:  1 0 0 1 0 0 1
At time k:    1 0 0 1 0 0 1 1

In each row, the last bit is the one detected at that time instant; the K-1 = 4 bits before it are the bits causing ISI on it, and earlier bits do not cause ISI at that time instant. With m = 2 and K = 5 there are $2^4 = 16$ states.

MLSE-VA receiver structure (cont.)


[Figure: state trellis diagram with $m^{K-1}$ states (m = alphabet size), drawn at the sample time instants k-3, k-2, k-1, k, k+1. The best state sequence is estimated by means of the Viterbi algorithm (VA).]

Of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.

Proakis, Ed.3, Section 10-1-3
