
1. INTRODUCTION

One of the most important advantages of digital transmission systems for voice, data and video communications is their higher reliability in noisy environments in comparison with their analog counterparts. Unfortunately, the digital transmission of information is most often accompanied by a phenomenon known as intersymbol interference (ISI). Briefly, this means that the transmitted pulses are smeared out so that pulses corresponding to different symbols are no longer separable. Depending on the transmission medium, the main causes of ISI are:

cable lines: the fact that they are band limited;
cellular communications: multipath propagation.

Obviously, for a reliable digital transmission system it is crucial to reduce the effects of ISI, and this is where equalizers come on the scene. The need for equalizers arises from the fact that the channel has amplitude and phase dispersion, which results in the transmitted signals interfering with one another. The design of transmitters and receivers depends on the assumption that the channel transfer function is known. In most digital communications applications, however, the channel transfer function is not known well enough to incorporate filters that remove the channel effect at the transmitter and receiver. For example, in circuit-switched communications the channel transfer function is usually constant, but it changes for every different path from the transmitter to the receiver. There are also non-stationary channels, such as wireless channels, whose transfer functions vary with time, so that a fixed optimum filter cannot be used. Equalizers are designed to solve this problem. An equalizer is meant to work in such a way that the BER (Bit Error Rate) is low and the SNR (Signal-to-Noise Ratio) is high. The equalizer applies the inverse of the channel to the received signal, so that the combination of channel and equalizer gives a flat frequency response and linear phase.
The static equalizer is cheap to implement, but its noise performance is not very good. Moreover, as noted above, most of the time the channel and, consequently, the transmission system's transfer function are not known, and the channel's impulse response may vary with time. The result is that a fixed equalizer cannot be designed, so the preferred scheme is to exploit adaptive equalizers. An adaptive equalizer is an equalization filter that automatically adapts to the time-varying properties of the communication channel. It is a filter that self-adjusts its transfer function according to an optimizing algorithm.
Efficient use of the available bandwidth in communication systems requires methods such as crowding of frequency-division multiplexed signals, reusing frequencies in cellular systems, and increasing data rates. Use of these, or other, methods to improve spectral efficiency often leads to more adjacent-channel interference (ACI), co-channel interference (CCI), and intersymbol interference (ISI). ACI is the interference due to signals with different carrier frequencies which are close enough to cause mutual overlaps in the spectra. CCI is the interference due to signals with similar carrier frequencies. ISI is the interference among the data of interest. It is necessary to find efficient techniques to simultaneously reduce the harmful effects of ACI, CCI, and ISI in digital communication systems. The use of equalizers is shown to be such a technique.
This paper is concerned with the situation occurring when the signal carrying the data
of interest (containing ISI) and the signals of the interferers (ACI and CCI) are linearly
modulated by their respective data, where all systems use the same symbol rate, and where
the receivers make decisions at that symbol rate. Therefore some of the analytical subtleties
that arise in other continuous-time contexts do not arise here.

To show the effectiveness of equalizers in suppressing ACI, CCI, and ISI, we attempt to evaluate their performance in a framework which covers all of the possibilities of interference presence, bandwidths, and antenna diversity. The method used to evaluate the effectiveness of the equalizers is the analysis of the number of degrees of freedom and the number of constraints for generalized zero-forcing linear and decision-feedback equalizers employing multiple antennas, which try to suppress all ACI, CCI, and ISI. This analysis is based on the assumption that the values of the frequency responses of the channels give a system of linearly independent equations. The analysis is a generalization of the linear equalizer case of CCI and ISI with one antenna.

Four results are obtained and presented in this paper. The results show some potential and limits of using fractionally spaced equalizers and linear combiners to suppress ISI, ACI, and CCI. First, with one antenna and a linear equalizer, arbitrarily large receiver bandwidths allow only marginal improvements in spectral efficiency through decreased carrier spacing, because the carrier spacing cannot be reduced below the symbol rate without incurring insuppressible interference. Second, large receiver bandwidths assist multiple antennas in improving spectral efficiency, in that carrier spacing values may go below the symbol rate, even in the presence of CCI. Third, the use of equalizers and linear combiners, together with large receiver bandwidths, allows large transmitter bandwidths to be used. This may allow system design flexibility, e.g., constant or near-constant envelope modulation. Fourth, for CCI and ISI, the number of interferers that may be suppressed by a generalized zero-forcing linear equalizer increases linearly with the product of the number of antennas and the truncated integer ratio of the total bandwidth to the symbol rate.


2. EQUALIZER AND ITS APPLICATION
For a reliable digital transmission system it is crucial to reduce the effects of ISI, and this is where equalizers come on the scene. The need for equalizers arises from the fact that the channel has amplitude and phase dispersion, which results in the transmitted signals interfering with one another. The design of transmitters and receivers depends on the assumption that the channel transfer function is known. In most digital communications applications, however, the channel transfer function is not known well enough to incorporate filters that remove the channel effect at the transmitter and receiver. For example, in circuit-switched communications the channel transfer function is usually constant, but it changes for every different path from the transmitter to the receiver. There are also non-stationary channels, such as wireless channels, whose transfer functions vary with time, so that a fixed optimum filter cannot be used. Equalizers are designed to solve this problem. An equalizer is meant to work in such a way that the BER (Bit Error Rate) is low and the SNR (Signal-to-Noise Ratio) is high. The equalizer applies the inverse of the channel to the received signal, so that the combination of channel and equalizer gives a flat frequency response and linear phase.
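As a simple illustration of this idea, the short Python sketch below builds an ideal inverse of a hypothetical, well-behaved FIR channel (an assumed example, not a channel from the text) in the frequency domain and checks that the cascade of channel and equalizer is approximately flat with zero phase.

import numpy as np

# Hypothetical example channel impulse response (assumed known and without spectral nulls)
h = np.array([1.0, 0.5, 0.2])

N = 256                                # FFT length used for the frequency-domain view
H = np.fft.fft(h, N)                   # channel frequency response
W = 1.0 / H                            # ideal inverse (equalizer) response
cascade = H * W                        # combined channel + equalizer response

# The cascade should be flat (magnitude ~1) with ~zero phase at every frequency bin
print(np.allclose(np.abs(cascade), 1.0), np.allclose(np.angle(cascade), 0.0))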



Fig 1. Concept of equalizer
2.1.Interference

Interference is anything which alters, modifies, or disrupts a signal as it travels along a
channel between a source and a receiver.

The term typically refers to the addition of unwanted signals to a useful signal.

Common examples are:
1. Adjacent-channel interference (ACI)
2. Co-channel interference (CCI), also known as crosstalk
3. Intersymbol interference (ISI)

2.1.1 Adjacent-channel interference

Adjacent-channel interference or ACI is interference caused by extraneous power from a
signal in an adjacent channel.

ACI is the interference due to signals with different carrier frequencies which are close
enough to cause mutual overlaps in the spectra.

ACI may be caused by inadequate filtering, improper tuning, or poor frequency control.






2.1.2 Co-channel interference

CCI is the interference due to signals with similar carrier frequencies.
CCI may be caused by the frequency reuse phenomenon.


Cellular Mobile Networks:

In cellular mobile communication, the frequency spectrum is a precious resource which is divided into non-overlapping spectrum bands that are assigned to different cells (in cellular communications, a cell refers to the hexagonal/circular area around the base station antenna). However, after a certain geographical distance, the frequency bands are re-used, i.e. the same spectrum bands are re-assigned to other, distant cells. Co-channel interference arises in cellular mobile networks owing to this phenomenon of frequency reuse. Thus, besides the intended signal from within the cell, signals at the same frequencies (co-channel signals) arrive at the receiver from undesired transmitters located (far away) in other cells and lead to deterioration in receiver performance.
Fig 2. Adjacent-channel interference in 3 FM stations



Fig 3. Co-channel interference in a cellular network


2.1.3.Intersymbol interference

In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. One of the most important advantages of digital transmission systems for voice, data and video communications is their higher reliability in noisy environments in comparison with their analog counterparts. Unfortunately, the digital transmission of information is most often accompanied by a phenomenon known as intersymbol interference (ISI). Briefly, this means that the transmitted pulses are smeared out so that pulses corresponding to different symbols are no longer separable. Depending on the transmission medium, the main causes of ISI are:

Cable lines: the fact that they are band limited

One of the causes of intersymbol interference is the transmission of a signal through a band-limited channel, i.e., one whose frequency response is zero above a certain frequency (the cut-off frequency). Passing a signal through such a channel results in the removal of frequency components above this cut-off frequency; in addition, the amplitude of the frequency components below the cut-off frequency may also be attenuated by the channel. This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. Filtering a rectangular pulse not only changes the shape of the pulse within the first symbol period, but also spreads it out over the subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with the following symbols.

Fig 4.Spreading of pulse in a band-limited channel



Fig 5.Overlapping of pulses in a time variant channel

Cellular communications multipath propagation

Another cause of intersymbol interference is what is known as multipath propagation, in which a wireless signal from a transmitter reaches the receiver via many different paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since all of these paths are of different lengths, and some of these effects also slow the signal down, the different versions of the signal arrive at different times. This delay means that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.


Fig 6. Multipath propagation

When a radio signal is transmitted, it can propagate along many paths to reach the receiver. At the receiver antenna, the received signal can be viewed as a complex sum of vectors with independent gains and phases. Multipath interference is the effect of multiple versions of a transmitted signal arriving at a radio receiver and combining in a way that produces distortion of the original signal. The figure below shows how multipath interference is produced by reflections from buildings or other objects.


Fig 7.Multipath propagation in a moving system


When multiple versions of the same transmitted signal arrive simultaneously, an interference
pattern is formed with the various signals combining according to their amplitudes and
phases. As the mobile receiver moves, the relationship between the amplitudes and phases of
the signals from the various paths changes, causing undulations in both the composite
amplitude and the composite phase.

2.2. INTERSYMBOL INTERFERENCE & ITS EFFECTS

Intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon, as the previous symbols have a similar effect to noise, thus making the communication less reliable. ISI is usually caused by multipath propagation or the inherent non-flat frequency response of a channel, causing successive symbols to "blur" together. The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight intersymbol interference include adaptive equalization and error-correcting codes.

EFFECTS:-

Pulse spreading:-

Assume a polar NRZ line code. The channel outputs are shown as spread pulses (width Tb becomes 2Tb), the spreading being due to the band-limited channel characteristics.











Example 1


Fig 8. Spreading of pulse by channel
Example 2














Fig 9. Superposition of bit


If rectangular multilevel pulses are filtered improperly as they pass through a communications system, they will spread in time, and the pulse for each symbol may be smeared into adjacent time slots and cause intersymbol interference.


Example 3



Fig 10. Interference of symbol in receiver block











The amount of ISI can be seen on an oscilloscope using an Eye Diagram or Eye pattern.




Fig 11. Eye diagram for ISI
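The following Python sketch shows, under assumed parameters (a hypothetical moving-average smearing filter standing in for the band-limited channel, and an arbitrary samples-per-symbol value), how an eye diagram can be formed by stacking two-symbol-long segments of the received waveform on top of each other.

import numpy as np

rng = np.random.default_rng(0)
sps = 8                                       # samples per symbol (assumed)
symbols = rng.choice([-1.0, 1.0], size=200)   # random polar NRZ data
tx = np.repeat(symbols, sps)                  # rectangular NRZ waveform

# Hypothetical band-limiting channel: a simple moving average smears each pulse
h = np.ones(int(1.5 * sps)) / (1.5 * sps)
rx = np.convolve(tx, h, mode="same")

# Eye diagram: stack the waveform in consecutive 2-symbol windows
win = 2 * sps
traces = rx[: (len(rx) // win) * win].reshape(-1, win)
print(traces.shape)   # each row is one trace; plotting all rows overlaid gives the eye pattern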


3. EQUALIZATION TECHNIQUE

Channel equalization can be done using different types of equalization techniques. Using these techniques, the weights of the filter are adjusted so that the characteristics of the equalizer are the inverse of those of the channel. If the transfer function or characteristics of the channel are known, then the equalizer design is easy. In practice, however, the channel characteristics are unknown and time varying, and in this case it is difficult to design an equalizer. So different adaptive equalization techniques are used to design a practical equalizer.





3.1. TYPES OF EQUALIZER

3.1.1. Static equalizer:-

The type of equalizer in which the transfer function of the system is known and time invariant is called a static equalizer.

The static equalizer is cheap to implement, but its noise performance is not very good. Moreover, most of the time the channel and, consequently, the transmission system's transfer function are not known. Also, the channel's impulse response may vary with time. The result is that a fixed equalizer cannot be designed, so the preferred scheme is to exploit adaptive equalizers.



3.1.2. Adaptive equalizer:-

An adaptive equalizer is an equalization filter that automatically adapts to time-varying
properties of the communication channel. It is a filter that self-adjusts its transfer function
according to an optimizing algorithm.

Types of adaptive equalizer:-

a. Linear equalizer
b. Non linear equalizer
c. Blind equalizer

a. Linear equalizer:-
A linear equalizer is a linear filter w_k that is designed to reduce the noise and ISI according to some criterion of optimality:

x_k = w_k * y_k

or, in the z-domain,

X(z) = W(z) Y(z)

where x_k is the input signal to the channel (which the equalizer output approximates), y_k is the output signal from the channel, and * denotes convolution.


Types of linear equalizer:-

1. Zero forcing equalizer:-

The Zero-Forcing (ZF) equalizer is designed using the peak-distortion criterion and ideally eliminates all ISI. The name zero forcing corresponds to forcing the intersymbol interference down to zero in the noise-free case. This is useful when ISI is significant compared to noise. Zero-forcing equalizers ignore the additive noise and may significantly amplify noise for channels with spectral nulls. The limitations of the ZFE are:
1. Even though the channel impulse response has finite length, the impulse response of the equalizer needs to be infinitely long.
2. At some frequencies the received signal may be weak. To compensate, the magnitude of the zero-forcing equalizer grows very large. As a consequence, any noise added after the channel gets boosted by a large factor and destroys the overall SNR. Furthermore, the channel may have zeros in its frequency response that cannot be inverted at all.
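A minimal frequency-domain sketch of the zero-forcing idea, assuming a hypothetical short FIR channel without exact spectral nulls: the equalizer response is simply 1/H(f), so frequencies where the channel is weak are boosted, which is exactly the noise-enhancement limitation described above.

import numpy as np

h = np.array([0.8, 1.0, 0.3])          # hypothetical channel impulse response
N = 128
H = np.fft.fft(h, N)

W_zf = 1.0 / H                         # zero-forcing equalizer: exact channel inverse

# Noise gain of the ZF equalizer: |W(f)|^2 averaged over frequency.
# Frequencies where |H(f)| is small are boosted strongly, amplifying the noise.
noise_gain = np.mean(np.abs(W_zf) ** 2)
print("worst-case boost:", round(np.max(np.abs(W_zf)), 2), "average noise gain:", round(noise_gain, 2))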

2. MMSE equalizer:-

The Minimum Mean-Square Error (MMSE) equalizer minimizes the mean of the square of the error (noise + ISI) after the equalizer. That is, MMSE equalizers minimize the mean-square error between the output of the equalizer and the transmitted symbol. They require knowledge of certain auto- and cross-correlation functions, which in practice can be estimated by transmitting a known signal over the channel.
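The sketch below (hypothetical channel, noise level, equalizer length and delay; a standard linear-MMSE construction rather than anything quoted from the text) estimates the needed correlations from a known training signal and solves the resulting normal equations for the MMSE tap weights.

import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.4, 0.2])          # hypothetical unknown channel
sigma_v = 0.1                          # noise standard deviation (assumed)
L = 11                                 # equalizer length (assumed)
delay = 5                              # decision delay (assumed)

s = rng.choice([-1.0, 1.0], size=5000)                              # known training symbols
x = np.convolve(s, h)[: len(s)] + sigma_v * rng.standard_normal(len(s))

# Data matrix whose rows are equalizer input vectors [x(n) ... x(n-L+1)]
X = np.array([x[n - L + 1 : n + 1][::-1] for n in range(L - 1, len(s))])
d = s[L - 1 - delay : len(s) - delay]                               # desired (delayed) symbols

R = X.T @ X / len(d)                   # estimated autocorrelation matrix
p = X.T @ d / len(d)                   # estimated cross-correlation vector
w_mmse = np.linalg.solve(R, p)         # MMSE (Wiener) tap weights
print(np.round(w_mmse, 3))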



3. Symbol spaced equalizer:-

A symbol-spaced linear equalizer consists of a tapped delay line that stores samples of the input signal. Once per symbol period, the equalizer outputs a weighted sum of the values in the delay line and updates the weights to prepare for the next symbol period. This class of equalizer is called symbol-spaced because the sample rates of the input and output are equal. In this configuration the equalizer attempts to synthesize the inverse of the folded channel spectrum which arises due to the symbol-rate sampling (aliasing) of the input. In a typical application, the equalizer begins in training mode to gather information about the channel, and later switches to decision-directed mode. Here, the new set of weights depends on these quantities:

The current set of weights
The input signal
The output signal
For adaptive algorithms other than CMA, a reference signal, d, whose characteristics depend on the operation mode of the equalizer.



Fig 12. Symbol spaced equalizer
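A minimal sketch of the tapped-delay-line structure just described (tap count and starting weights are arbitrary assumptions): once per symbol the newest sample is shifted in and the output is the weighted sum of the delay-line contents.

import numpy as np

class SymbolSpacedEqualizer:
    """Tapped delay line that produces one output per symbol-rate input sample."""
    def __init__(self, num_taps):
        self.w = np.zeros(num_taps); self.w[0] = 1.0   # start as a pass-through
        self.delay_line = np.zeros(num_taps)

    def step(self, x_new):
        self.delay_line = np.roll(self.delay_line, 1)  # shift older samples down the line
        self.delay_line[0] = x_new                     # newest sample at the front
        return float(np.dot(self.w, self.delay_line))  # weighted sum of the stored samples

eq = SymbolSpacedEqualizer(num_taps=5)                 # tap count is assumed
print([round(eq.step(x), 2) for x in (0.9, -1.1, 1.05, -0.95)])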

b. Non-linear equalizer:-

Decision feedback equalizer:-

The basic limitation of a linear equalizer, such as the transversal filter, is its poor performance on channels having spectral nulls. A decision feedback equalizer (DFE) is a nonlinear equalizer that uses previous detector decisions to eliminate the ISI on pulses that are currently being demodulated. In other words, the distortion on a current pulse that was caused by previous pulses is subtracted. Figure 13 shows a simplified block diagram of a DFE, where the forward filter and the feedback filter can each be a linear filter, such as a transversal filter. The nonlinearity of the DFE stems from the nonlinear characteristic of the detector that provides the input to the feedback filter. The basic idea of a DFE is that if the values of the symbols previously detected are known, then the ISI contributed by these symbols can be cancelled out exactly at the output of the forward filter by subtracting past symbol values with appropriate weighting. The forward and feedback tap weights can be adjusted simultaneously to fulfil a criterion such as minimizing the MSE. The DFE structure is particularly useful for the equalization of channels with severe amplitude distortion, and is also less sensitive to sampling phase offset. The improved performance comes about because the addition of the feedback filter allows more freedom in the selection of the feedforward coefficients. The exact inverse of the channel response need not be synthesized in the feedforward filter; therefore excessive noise enhancement is avoided and sensitivity to sampler phase is decreased.





Fig 13. Decision feedback equalizer

The advantage of a DFE implementation is that the feedback filter, which additionally works to remove ISI, operates on noiseless quantized levels, and thus its output is free of channel noise. One drawback of the DFE structure surfaces when an incorrect decision is applied to the feedback filter. The DFE output reflects this error during the next few symbols as the incorrect decision propagates through the feedback filter. Under this condition, there is a greater likelihood of more incorrect decisions following the first one, producing a condition known as error propagation. On most channels of interest the error rate is low enough that the overall performance degradation is slight.
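A minimal sketch of the DFE decision loop described above (hypothetical feedforward and feedback weights; a hard sign detector is assumed for binary symbols): the ISI estimate formed from past decisions is subtracted before the detector, and each new decision is pushed into the feedback delay line.

import numpy as np

def dfe(received, w_ff, w_fb):
    """Very small decision-feedback equalizer for +/-1 symbols."""
    ff_line = np.zeros(len(w_ff))            # feedforward delay line (received samples)
    fb_line = np.zeros(len(w_fb))            # feedback delay line (past decisions)
    decisions = []
    for x in received:
        ff_line = np.roll(ff_line, 1); ff_line[0] = x
        z = np.dot(w_ff, ff_line) - np.dot(w_fb, fb_line)   # subtract ISI estimated from past decisions
        d = 1.0 if z >= 0 else -1.0                         # hard detector decision
        fb_line = np.roll(fb_line, 1); fb_line[0] = d       # feed the decision back
        decisions.append(d)
    return decisions

w_ff = np.array([1.0, 0.0])                  # assumed feedforward taps
w_fb = np.array([0.4])                       # assumed feedback tap (cancels one postcursor)
rx = [1.1, -0.6, 1.3, -1.4, 0.7]             # example received samples containing ISI
print(dfe(rx, w_ff, w_fb))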

c. Blind equalizer:-

Blind equalization of the transmission channel has drawn attention because it achieves equalization without the need to transmit a training signal. Blind equalization algorithms that have been proposed include the constant modulus algorithm (CMA) and the multimodulus algorithm (MMA). These reduce the mean square error (MSE) to acceptable levels. Without the aid of training sequences, blind equalization is used as an adaptive equalization in communication systems. In a blind equalization algorithm, the output of the equalizer is quantized and the quantized output is used to update the coefficients of the equalizer. For the complex signal case, an advanced algorithm was later presented in the literature; however, the convergence property of this algorithm is relatively poor. In an effort to overcome the limitations of decision-directed equalization, the desired signal is estimated at the receiver using a statistical measure based on a priori symbol properties; this technique is referred to as blind equalization. In blind mode, the Blind Error is selected as the basis for the filter coefficient update. In general, blind equalization directs the coefficient adaptation process towards the optimal filter parameters even when the initial error rate is large. For best results the error calculation is switched to the decision-directed method after an initial period of equalization; we call this the shift-blind method. In decision-directed operation, the Reference Selector selects the Decision Device Output as the input to the error calculation and the Error Selector selects the Standard Error as the basis for the filter coefficient update.



Fig 14. Blind equalizer


Fig 15. A typical blind equalizer

The constant modulus algorithm (CMA) is the most popular adaptive blind equalizer used
today because of its relative simplicity and its good global convergence properties. It is very
effective in equalizing signals that are of constant modulus. However, for non-constant
modulus signals, CMA equalization can suffer from a considerable amount of residual ISI,
and produce non-minimal convergence.

Barbarossa and Scaglione propose an alphabet-matched algorithm (AMA), which is able to
provide better equalization of non-constant modulus signals, and they introduce a scheme that
uses CMA to initialize the alphabet-matched algorithm (AMA). The AMA cost function tries
to force the equalizer output to belong to the signal constellation of interest, and it is shown
that this method is able to lower the residual ISI and improve the convergence rate over that
of CMA alone. However, the AMA equalizer works well only when the initialization is good.
An issue here is how to automatically switch from the initialization step using CMA, to the
AMA. Using too few iterations of CMA will lead to ineffective equalization overall, while
using too many iterations increases the runtime without improving performance.
A new adaptive channel equalization scheme is therefore obtained by combining the two well-defined cost functions of the constant modulus algorithm (CMA) and the alphabet-matched algorithm (AMA) into a single cost function.

3.2. ADAPTATION ALGORITHMS

The main adaptation algorithms used in equalizers are described below.

3.2.1. Least Mean Square (LMS)

The Least Mean Square (LMS) algorithm, introduced by Widrow and Hoff in 1959, is an adaptive algorithm which uses a gradient-based method of steepest descent. The LMS algorithm uses estimates of the gradient vector from the available data. LMS incorporates an iterative procedure that makes successive corrections to the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum mean square error. Compared to other algorithms the LMS algorithm is relatively simple; it requires neither correlation function calculations nor matrix inversions.

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is only adapted based on the error at the current time. The LMS filter is built around a transversal (i.e. tapped delay line) structure. Two practical features, being simple to design yet highly effective in performance, have made it highly popular in various applications. Small-step-size statistical theory provides a fairly accurate description of the transient behaviour of the LMS filter, while H-infinity theory provides the mathematical basis for its deterministic robustness. As mentioned before, the LMS algorithm is built around a transversal filter, which is responsible for performing the filtering process, together with a weight-control mechanism responsible for performing the adaptive control process on the tap weights of the transversal filter.



Fig.16. Block diagram of adaptive transversal filter employing LMS algorithm

The LMS algorithm, in general, consists of two basic procedures:

Filtering process - this involves computing the output of a linear filter in response to the input signal and generating an estimation error by comparing this output with a desired response, as follows:

e(n) = d(n) - y(n)

where y(n) is the filter output and d(n) is the desired response at time n.

Adaptive process - this involves the automatic adjustment of the parameters of the filter in accordance with the estimation error:

w(n + 1) = w(n) + μ x(n) e(n)

where μ is the step size, x(n) is the input vector, and w(n + 1) is the estimate of the tap-weight vector at time n + 1. If prior knowledge of the tap-weight vector is not available, set w(0) = 0.

The combination of these two processes working together constitutes a feedback loop, as illustrated in the block diagram. First, we have a transversal filter, around which the LMS algorithm is built; this component is responsible for performing the filtering process. Second, we have a mechanism for performing the adaptive control process on the tap weights of the transversal filter, hence the designation adaptive weight-control mechanism in the figure.

Characteristics of LMS algorithm:-

i. LMS is the most well-known adaptive algorithm; it adjusts each weight by a value that is proportional to the product of the input to the equalizer and the output error.
ii. LMS algorithms execute quickly but converge slowly, and their complexity grows linearly with the number of weights.
iii. Computational simplicity.
iv. It is suitable when the channel parameters do not vary very rapidly.

3.2.1.1.LMS algorithm formulation

From the method of steepest descent, the weight vector update equation is

w(n + 1) = w(n) + (1/2) μ [ -∇(E{e^2(n)}) ]

where μ is the step-size parameter and controls the convergence characteristics of the LMS algorithm, and e^2(n) is the squared error between the filter output y(n) and the reference signal d(n), given by

e^2(n) = [ d(n) - w^H(n) x(n) ]^2

The gradient vector in the above weight-update equation can be computed as

∇(E{e^2(n)}) = -2 r + 2 R w(n)

where R = E{x(n) x^H(n)} is the input autocorrelation matrix and r = E{d*(n) x(n)} is the cross-correlation vector between the input signal and the desired response.
In the method of steepest descent the biggest problem is the computation involved in finding the values of the r and R matrices in real time. The LMS algorithm, on the other hand, simplifies this by using instantaneous estimates of R and r instead of their actual values, i.e.

R(n) = x(n) x^H(n),    r(n) = d*(n) x(n)

Therefore the weight update can be given by the following equation:

w(n + 1) = w(n) + μ x(n) [ d*(n) - x^H(n) w(n) ] = w(n) + μ x(n) e*(n)

The LMS algorithm is initiated with an arbitrary value w(0) for the weight vector at n = 0. The successive corrections of the weight vector eventually lead to the minimum value of the mean squared error.

Therefore the LMS algorithm can be summarized in the following equations:

Output:  y(n) = w^H(n) x(n)
Error:   e(n) = d(n) - y(n)
Weight:  w(n + 1) = w(n) + μ x(n) e*(n)
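A minimal Python sketch of these three update equations for complex-valued signals (the channel, constellation, filter length and step size are all assumed toy values, not taken from the text):

import numpy as np

rng = np.random.default_rng(2)
num_taps, mu = 7, 0.01                       # filter length and step size (assumed)
h = np.array([1.0, 0.45 + 0.2j])             # hypothetical complex channel

d_full = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=3000)   # QPSK training symbols
x_full = np.convolve(d_full, h)[: len(d_full)]                       # channel output

w = np.zeros(num_taps, dtype=complex)
for n in range(num_taps, len(d_full)):
    x_vec = x_full[n - num_taps + 1 : n + 1][::-1]   # tapped delay line [x(n) ... x(n-N+1)]
    y = np.vdot(w, x_vec)                            # output: y(n) = w^H x(n)
    e = d_full[n] - y                                # error:  e(n) = d(n) - y(n)
    w = w + mu * x_vec * np.conj(e)                  # weight: w(n+1) = w(n) + mu x(n) e*(n)

print(np.round(np.abs(w), 3))                        # tap magnitudes after adaptation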











Description


Fig 17. N-tap transversal adaptive filter

We assume that the signals involved are real-valued.

The LMS algorithm changes (adapts) the filter tap weights so that e(n) is minimized
in the mean-square sense. When the processes x(n) & d(n) are jointly stationary, this
algorithm converges to a set of tap-weights which, on average, are equal to the
Wiener-Hopf solution.

The LMS algorithm is a practical scheme for realizing Wiener filters, without
explicitly solving the Wiener-Hopf equation.

The conventional LMS algorithm is a stochastic implementation of the steepest descent algorithm. It simply replaces the cost function ξ(n) = E[e^2(n)] by its instantaneous coarse estimate e^2(n).




Substituting the estimate e^2(n) for ξ(n) in the steepest descent recursion, we obtain

w(n + 1) = w(n) - (μ/2) ∇e^2(n)

where ∇ denotes the gradient with respect to the tap-weight vector, ∇ = [∂/∂w_0, ∂/∂w_1, ..., ∂/∂w_{N-1}]^T.

Note that the i-th element of the gradient vector ∇e^2(n) is

∂e^2(n)/∂w_i = 2 e(n) ∂e(n)/∂w_i = -2 e(n) x(n - i)

Then

∇e^2(n) = -2 e(n) x(n)

where x(n) = [x(n) x(n-1) x(n-2) ... x(n - N + 1)]^T.

Finally we obtain

w(n + 1) = w(n) + μ e(n) x(n)

3.2.1.2.Summary of the LMS algorithm
Input :-
Tap-weight vector, w(n)
Input vector, x(n)
Desired output, d(n)
Output :-
Filter output, y(n)
Tap-weight vector update, w(n + 1)


3.2.1.3.Convergence and Stability of the LMS algorithm
The LMS algorithm, initiated with some arbitrary value for the weight vector, is seen to converge and stay stable for

0 < μ < 1/λmax

where λmax is the largest eigenvalue of the correlation matrix R. The convergence of the algorithm is inversely proportional to the eigenvalue spread of the correlation matrix R. When the eigenvalues of R are widespread, convergence may be slow. The eigenvalue spread of the correlation matrix is estimated by computing the ratio of the largest eigenvalue to the smallest eigenvalue of the matrix. If μ is chosen to be very small then the algorithm converges very slowly. A large value of μ may lead to faster convergence but may be less stable around the minimum value. The literature also provides an upper bound for μ based on several approximations as

μ <= 1/(3 trace(R))
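A small numeric illustration of these bounds (the correlation matrix R is a hypothetical example; λmax is obtained by eigendecomposition and the more conservative trace-based bound is computed alongside):

import numpy as np

# Hypothetical input correlation matrix R (symmetric, positive definite)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])

lam = np.linalg.eigvalsh(R)
mu_eig_bound = 1.0 / lam.max()               # 0 < mu < 1/lambda_max
mu_trace_bound = 1.0 / (3 * np.trace(R))     # mu <= 1/(3 trace(R))

print("eigenvalue spread:", round(lam.max() / lam.min(), 2))
print("1/lambda_max =", round(mu_eig_bound, 3), " 1/(3 tr R) =", round(mu_trace_bound, 3))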




3.2.1.4.Normalised least mean squares filter (NLMS)

The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate μ that guarantees stability of the algorithm (Haykin 2002). The Normalised least mean squares (NLMS) filter is a variant of the LMS algorithm that solves this problem by normalising with the power of the input. The NLMS algorithm can be summarised as:

Parameters: p = filter order, μ = step size
Initialization: w(0) = 0
Computation: for n = 0, 1, 2, ...

x(n) = [x(n) x(n-1) ... x(n - p + 1)]^T
e(n) = d(n) - w^H(n) x(n)
w(n + 1) = w(n) + μ e*(n) x(n) / (x^H(n) x(n))
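A brief sketch of one NLMS update step in Python (illustrative values; the small regularisation constant eps added to the denominator is a common practical safeguard and is an assumption here, not something stated in the text):

import numpy as np

def nlms_step(w, x_vec, d, mu=0.5, eps=1e-8):
    """One normalised-LMS update; returns (new weights, a-priori error)."""
    y = np.vdot(w, x_vec)                          # filter output w^H x(n)
    e = d - y                                      # a-priori error e(n)
    w_new = w + mu * np.conj(e) * x_vec / (np.vdot(x_vec, x_vec).real + eps)
    return w_new, e

w = np.zeros(4, dtype=complex)                     # p = 4 taps (assumed)
x_vec = np.array([1.0, -0.5, 0.25, 0.1], dtype=complex)
d = 0.8 + 0.0j                                     # desired sample (example value)
w, e = nlms_step(w, x_vec, d)
print(np.round(w, 3), round(abs(e), 3))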




3.2.1.5.Advantages & disadvantages of LMS algorithm:

(1) Simplicity in implementation
(2) Stable and robust performance against different signal conditions
(3) Slow convergence (due to eigenvalue spread)

3.2.2. Blind Adaptive algorithm

The scheme that we propose combines the conventional CMA with a constellation matched error (CME) term, similar to the modified CMA (MCMA) scheme presented by He et al. The authors show that for QAM signals this new scheme improves the convergence rate of the adaptive equalizer and is more robust than the conventional CMA at low signal-to-noise ratios (SNRs). The cost function presented is a weighted sum of two individual cost functions: one is CMA and the other is a constellation matched error (CME) term. The authors state that the CME term should satisfy certain desirable properties, namely uniformity, symmetry, and zero/maximum penalties at the zero/maximum distance from the QAM constellation points. Uniformity implies that the CME function does not favour any one data symbol in the alphabet over another, while symmetry implies that the function is symmetric around each constellation symbol. It is also desirable that the CME assigns the zero/maximum penalty at the zero/maximum distance from a constellation point. A zero penalty occurs when an equalized symbol lies directly on a constellation point, and the maximum penalty occurs midway between two adjacent constellation points. In this work we propose the use of AMA, presented by Barbarossa and Scaglione, as the CME term. AMA satisfies both the uniformity and symmetry properties while satisfying a minimum/maximum penalty assignment at the zero/maximum deviation points.



3.2.2.1.Constant Modulus Algorithm (CMA)

For both the CMA and the AMA, the cost function J(w) to be minimized is a function of the difference between the equalizer output y(n) and the known constellation points. The cost function for CMA is given by

J_CMA(w) = E{ ( |y(n)|^2 - R_2 )^2 }        (10)

where R_2 = E{|c(i)|^4} / E{|c(i)|^2}, and c(i), i = 1, 2, ..., M are the known constellation points, M being the number of constellation points. The notation E{ } denotes the expected value, taken over the entire block of equalizer output symbols. This cost function attempts to restore the shape of the constellation by taking into account the distance between the equalizer output symbols and the computed constant modulus of the known constellation symbols. CMA does not take the phase of the constellation points into consideration. CMA is well studied and is the most popular algorithm used for blind adaptive equalization.
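A compact sketch of a sample-by-sample stochastic-gradient CMA equalizer (assumed QPSK constellation, hypothetical channel, and arbitrary equalizer length and step size; the block averaging in the expectation is dropped here for brevity):

import numpy as np

rng = np.random.default_rng(3)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # unit-modulus QPSK points
R2 = np.mean(np.abs(const) ** 4) / np.mean(np.abs(const) ** 2)      # dispersion constant R_2

s = rng.choice(const, size=8000)
h = np.array([1.0, 0.25 + 0.25j])                    # hypothetical channel
x = np.convolve(s, h)[: len(s)]

L, mu = 9, 5e-3                                      # equalizer length and step size (assumed)
w = np.zeros(L, dtype=complex); w[L // 2] = 1.0      # centre-spike initialisation
disp_err = []
for n in range(L, len(x)):
    x_vec = x[n - L + 1 : n + 1][::-1]
    y = np.dot(w, x_vec)                             # equalizer output
    # stochastic gradient step for the cost (|y|^2 - R_2)^2
    w = w - mu * 4 * (np.abs(y) ** 2 - R2) * y * np.conj(x_vec)
    disp_err.append(abs(np.abs(y) ** 2 - R2))

print(round(np.mean(disp_err[:500]), 3), round(np.mean(disp_err[-500:]), 3))  # dispersion at start vs end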

3.2.2.2. Alphabet matched algorithm(AMA)

The AMA cost function, proposed by Barbarossa and Scaglione, is given by

J_AMA(w) = E{ Σ_{i=1}^{M} [ 1 - exp( -|y(n) - c(i)|^2 / (2σ^2) ) ] }        (11)

The AMA cost function uses the same notation as above, and a parameter σ is used to control the width of the nulls that the function places around each constellation point. Its value is chosen so that these nulls do not overlap for adjacent constellation points. The AMA cost function attempts to restore the shape of the constellation by considering the distance between the equalizer output symbol and each of the known constellation symbols. It assigns an appropriate penalty based on this distance. When an equalized symbol is sufficiently close to a constellation point the cost function is minimized. The opposite is true when an equalized symbol is far from the constellation points. AMA is known to depend on good initialization for optimization of the cost function.
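The following sketch evaluates this cost for a block of equalized symbols (hypothetical σ and example symbol values; the expectation is replaced by a sample average over the block), showing that on-grid symbols incur a lower penalty than off-grid ones.

import numpy as np

def ama_cost(y_block, constellation, sigma):
    """Sample-average AMA cost: sum over constellation points of 1 - exp(-|y - c|^2 / (2 sigma^2))."""
    diffs = np.abs(y_block[:, None] - constellation[None, :]) ** 2     # |y(n) - c(i)|^2
    return np.mean(np.sum(1.0 - np.exp(-diffs / (2 * sigma ** 2)), axis=1))

const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
sigma = 0.2                                            # null-width parameter (assumed)

on_points = const[np.random.default_rng(4).integers(0, 4, 100)]       # symbols exactly on the grid
off_points = on_points + 0.3 * (1 + 1j)                               # symbols pushed off the grid
print(round(ama_cost(on_points, const, sigma), 3), round(ama_cost(off_points, const, sigma), 3))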

In our equalization scheme we combine the two cost functions described above into a single cost function, modified as given below:

J(w) = J_CMA(w) + α J_AMA(w)        (12)

This combined cost function attempts to restore the constellation shape by taking into account both the amplitude and phase of the equalized symbol. The parameter α is a fixed scaling factor between the CMA and AMA terms. Making the substitution of the CMA and AMA expressions into equation (12), we obtain

J(w) = E{ ( |y(n)|^2 - R_2 )^2 } + α E{ Σ_{i=1}^{M} [ 1 - exp( -|y(n) - c(i)|^2 / (2σ^2) ) ] }        (13)



The update for the equalizer weights in this proposed scheme is based on the following stochastic block gradient descent rule:

w_{n+1} = w_n - μ ∇_w J(w)        (14)

where μ is the algorithm step size. The equalizer output y(n) is formed from the equalizer weight vector w and a received data vector x_n, which is a function of time n. The gradient of the cost function is given in equation (17) as the linear combination of the gradients of the CMA and AMA cost functions shown in equations (15) and (16) respectively:

∇_w J_CMA(w) = E{ 4 ( |y(n)|^2 - R_2 ) y(n) x_n* }        (15)

∇_w J_AMA(w) = E{ (1/σ^2) x_n* Σ_{i=1}^{M} ( y(n) - c(i) ) exp( -|y(n) - c(i)|^2 / (2σ^2) ) }        (16)

Note that the * operator denotes the complex conjugate. Thus the gradient expression for the combined scheme is:

∇_w J(w) = ∇_w J_CMA(w) + α ∇_w J_AMA(w)        (17)


and it should be noted that the averaged block gradient is computed at each iteration.

The cost function given in (12) combines the previously described CMA and AMA equalizers into a single equalizer. Note that if α is zero, we have the pure CMA equalizer, and if α is very large, (12) is essentially controlled by the AMA equalizer. For a fixed α, it should also be noted that when the equalizer is far from convergence, the AMA cost function remains fairly flat, and the evolution of the equalizer is controlled by CMA. Once CMA has converged, the evolution can then be controlled by the AMA. Thus the combined cost function eliminates the need to monitor CMA for convergence. Further, it provides some additional gains in the region where CMA has not yet converged and the AMA initialization is poor.
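A minimal per-sample sketch of the combined update (14)-(17) (hypothetical α, σ, step size, tap count and data vector; the block average in the expectation is dropped for brevity):

import numpy as np

def combined_cma_ama_step(w, x_vec, const, R2, mu=1e-3, alpha=0.5, sigma=0.2):
    """One stochastic-gradient step of the combined CMA + alpha*AMA cost."""
    y = np.dot(w, x_vec)                                   # equalizer output for this sample
    grad_cma = 4 * (np.abs(y) ** 2 - R2) * y * np.conj(x_vec)
    weights = np.exp(-np.abs(y - const) ** 2 / (2 * sigma ** 2))
    grad_ama = (1.0 / sigma ** 2) * np.conj(x_vec) * np.sum((y - const) * weights)
    return w - mu * (grad_cma + alpha * grad_ama)          # w_{n+1} = w_n - mu * grad J(w)

const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
R2 = np.mean(np.abs(const) ** 4) / np.mean(np.abs(const) ** 2)
w = np.zeros(5, dtype=complex); w[2] = 1.0                 # centre-spike start (length 5 assumed)
x_vec = np.array([0.7 - 0.1j, 0.9 + 0.2j, -0.6 + 0.8j, 0.1j, -0.7 - 0.7j])
print(np.round(combined_cma_ama_step(w, x_vec, const, R2), 3))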


4. CHANNEL EQUALIZATION & SYSTEM MODEL

As mentioned in the introduction, intersymbol interference poses the main obstacle to achieving increased digital transmission rates with the required accuracy. The ISI problem is resolved by channel equalization, in which the aim is to construct an equalizer such that the impulse response of the channel/equalizer combination is as close to z^(-Δ) as possible, where Δ is a delay. Frequently the channel parameters are not known in advance, and moreover they may vary with time, in some applications significantly. Hence it is necessary to use adaptive equalizers, which provide the means of tracking the channel characteristics. The following figure shows a diagram of a channel equalization system.



Fig 18. Digital transmission system using channel equalization

In the previous figure, s(n) is the signal that you transmit through the communication channel, and x(n) is the distorted output signal. To compensate for the signal distortion, the adaptive channel equalization system operates in the following two modes:

Training mode - This mode helps you determine the appropriate coefficients of the adaptive filter. When you transmit the signal s(n) to the communication channel, you also apply a delayed version of the same signal to the adaptive filter. In the previous figure, z^(-Δ) is a delay of Δ samples, d(n) is the delayed signal, y(n) is the output signal from the adaptive filter, and e(n) is the error signal between d(n) and y(n). The adaptive filter iteratively adjusts the coefficients to minimize e(n). After the power of e(n) converges, y(n) is almost identical to d(n), which means that you can use the resulting adaptive filter coefficients to compensate for the signal distortion.

Decision-directed mode - After you determine the appropriate coefficients of the adaptive filter, you can switch the adaptive channel equalization system to decision-directed mode. In this mode, the adaptive channel equalization system decodes the signal y(n) and produces a new signal, which is an estimate of the signal s(n) except for a delay of Δ taps.
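A condensed sketch of the training mode described above (the channel, delay Δ, filter length and LMS step size are assumed illustrative values; the known transmitted signal, delayed by Δ, serves as d(n)):

import numpy as np

rng = np.random.default_rng(5)
h = np.array([1.0, -0.5, 0.25])              # hypothetical channel
s = rng.choice([-1.0, 1.0], size=4000)       # transmitted training signal s(n)
x = np.convolve(s, h)[: len(s)] + 0.05 * rng.standard_normal(len(s))   # distorted x(n)

L, delta, mu = 15, 7, 0.01                   # filter length, delay, step size (assumed)
w = np.zeros(L)
for n in range(L, len(s)):
    x_vec = x[n - L + 1 : n + 1][::-1]       # adaptive filter input
    d = s[n - delta]                         # d(n): delayed copy of the known signal
    e = d - np.dot(w, x_vec)                 # e(n) = d(n) - y(n)
    w = w + mu * e * x_vec                   # LMS coefficient update

# After training, hard decisions on the equalizer output should recover the delayed symbols.
y_last = np.dot(w, x[-L:][::-1])
print(round(y_last, 2), s[len(s) - 1 - delta])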


Here, the adaptive filter plays an important role. The structure of the adaptive filter is shown in Fig. 19: a variable filter W_n whose coefficients are adjusted by an update algorithm driven by the error e(n) between the desired signal d(n) and the filter output.
Fig 19. Adaptive filter

Fig 20. Function of equalizer


SYSTEM MODEL

We consider a communications environment, which is coherent and synchronous with
single-tone signaling. Carrier timing and waveform recovery have also been achieved. The
received data block is of length N symbols and the equalizer attempts to remove the
distortions introduced by the channel. Both the channel and equalizer are constrained to be

linear, time-invariant FIR filters. The time-invariance assumption on the channel is
reasonable because we consider short blocks of data. Noise in the channel is modeled as
zero-mean additive white Gaussian noise (AWGN). Figure 21 provides a block diagram of the communication system.


Fig 21. Communication system

The channel is a single-input single-output (SISO) system with an input/output relationship characterized by

x(n) = Σ_{k=0}^{T-1} h(k) s(n - k) + v(n)

where T is the number of channel zeros, {h(0), ..., h(T-1)} is the channel impulse response, s(n) is the transmitted symbol sequence, and v(n) is the AWGN noise in a data block of length N. We assume that the symbol sequence in the block of data is independently and identically distributed (i.i.d.) and non-Gaussian. The equalizer output y(n) at time n is defined as

y(n) = w^H x_n

where w is the equalizer weight vector and x_n is the vector of the most recent received samples held in the equalizer delay line.
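The sketch below simulates this SISO block model (hypothetical impulse response, block length, noise level and equalizer weights) and applies a fixed weight vector to produce y(n) over the block.

import numpy as np

rng = np.random.default_rng(6)
h = np.array([1.0, 0.5, -0.2])                       # {h(0), ..., h(T-1)}, T = 3 (assumed)
N = 1000                                             # block length (assumed)
s = rng.choice([-1.0, 1.0], size=N)                  # i.i.d. non-Gaussian (binary) symbols
v = 0.05 * rng.standard_normal(N)                    # zero-mean AWGN

# x(n) = sum_k h(k) s(n-k) + v(n)
x = np.convolve(s, h)[:N] + v

w = np.array([0.9, -0.4, 0.1, 0.05])                 # example fixed equalizer weights (illustrative, not optimized)
x_mat = np.array([x[n - len(w) + 1 : n + 1][::-1] for n in range(len(w) - 1, N)])
y = x_mat @ w                                        # y(n) = w^H x_n (real-valued here, so conjugation is immaterial)
print(y.shape, round(float(np.mean(np.sign(y) == s[len(w) - 1 : N])), 2))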


The discussion develops results for linear equalizers and decision-feedback equalizers impaired by ISI, ACI, and CCI, which state that relatively wider bandwidths (with respect to the symbol rate) in the combined channel, combined interferers, and equalizer may provide the flexibility for an equalizer to suppress larger numbers of interferers. This analysis also includes the effect of receiver antenna diversity. If the number of receiver antennas is A_r, it means that there are A_r times the number of combined channels and combined interferers, and that the receiver inputs are put together using a continuous-time infinite-length optimal combiner. In other words, the output of each receiver antenna would be fed to a continuous-time infinite-length linear equalizer with all equalizer outputs added together, and the adder output would be fed to the input of the quantizer. Note that the description of this result emphasized the word may; this is because wide relative bandwidths do not guarantee improved interference suppression capability under all conditions. Pathological conditions exist where wide relative bandwidths do not guarantee improved interference suppression, such as when the signals of interest and the interferers have identical impulse responses. However, under conditions of linear independence, wide relative bandwidths allow for substantial equalizer/combiner performance improvements over the case where narrow relative bandwidths are used, and the improvements are compounded by the use of multiple antennas.
In this generalization, the number of interferers L contains two components:

L = Na + Nc        (1)

where Na is the number of ACI signals and Nc is the number of CCI signals present at the receiver input. Fig. 22(a) and 22(b) introduce three parameters: the transmitter bandwidth Bt, the equalizer/combiner (receiver) bandwidth Br, and the carrier spacing C, all of which are measured relative to the symbol rate 1/T. The figures also show the effect of the equalizer/combiner/receiver bandwidth. All signals are zero outside the frequency band (-Br/T, Br/T). All the CCI signals have a carrier frequency of zero. However, although it is not shown in this paper, it would be a slight generalization to include multiple CCI signals at each carrier frequency. All the ACI carrier frequencies are separated by C/T, with one ACI signal per carrier frequency. Depending on Bt, Br and C, a number of ACI signals may be present after receiver filtering.














Fig 22. (a) Magnitude response of the signals after receiver filtering: separated adjacent channels. (b) Magnitude response of the signals after receiver filtering: unseparated adjacent channels

Condition for zero interference

Fig. 22(a) shows a case where the adjacent bands are separated from the signal carrying the data of interest. In this particular case, the problem of ACI suppression is trivial; the receiver filter frequency response is set to zero in the adjacent interferers' bands. Of more interest in this paper is the situation shown in Fig. 22(b), where substantial spectral overlap occurs.
Define the equalized combined channel as

p_0(t) = Σ_{m=1}^{Ar} r_m(t) * h_m(t)        (2)

where the symbol * denotes convolution, r_m(t) denotes the impulse response of the m-th equalizer, Ar ∈ {1, 2, 3, ...} is the number of antennas, and h_m(t) denotes the impulse response between the desired signal transmitter and the m-th antenna element. Define the equalized combined interferers to be

p_l(t) = Σ_{m=1}^{Ar} r_m(t) * g_{l,m}(t),    l = 1, ..., L        (3)

where g_{l,m}(t) denotes the impulse response between the l-th interferer and the m-th antenna element.
The time-domain condition for zero intersymbol interference at the sampling times can be written as

p_0(kT) = 0 for every integer k ≠ 0 (with p_0(0) ≠ 0)        (4)

and the time-domain condition for zero interference at the sampling times can be written as

p_l(kT) = 0 for every integer k and every l = 1, ..., L        (5)

The concept of suppressing interference by putting T-spaced zero crossings in the equalized combined interference responses is similar to the idea of avoiding interference by generating nulls in an antenna pattern. A linear equalizer/combiner which causes (4) and (5) to be satisfied is called a generalized zero-forcing linear equalizer.
Conditions (4) and (5) can be expressed in the frequency domain as

Σ_k Σ_{m=1}^{Ar} R_m(f + k/T) H_m(f + k/T) = constant ≠ 0,  for all f
Σ_k Σ_{m=1}^{Ar} R_m(f + k/T) G_{l,m}(f + k/T) = 0,  l = 1, ..., L,  for all f        (6)

where H_m(f) and G_{l,m}(f) are the Fourier transforms of h_m(t) and g_{l,m}(t), respectively, and R_m(f) is the frequency response of the equalizer at the m-th antenna output. These equations can only be satisfied (suppressing all ISI, ACI, and CCI) if, for any frequency f, the number of unknowns R_m(f + k/T) equals or exceeds the number of equations, L + 1.
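A small counting sketch of this degrees-of-freedom argument. The counting rule used here (Ar times the number of spectral translates that fall inside the receiver band, compared against the L + 1 zero-forcing constraints) is an illustrative reading of the text, not a formula quoted from it.

# Degrees-of-freedom check for a generalized zero-forcing equalizer/combiner (illustrative).
# unknowns per frequency  ~ Ar * (number of k/T translates that fall inside the receiver band)
# equations per frequency = L + 1  (one desired signal plus L interferers)

def suppressible(Ar, Br_over_T, L):
    """Return True if the unknowns can cover the L + 1 constraints (illustrative counting only)."""
    translates = 2 * int(Br_over_T) + 1          # translates of f by multiples of 1/T within (-Br/T, Br/T)
    return Ar * translates >= L + 1

print(suppressible(Ar=1, Br_over_T=1, L=2))      # one antenna, modest relative bandwidth
print(suppressible(Ar=2, Br_over_T=2, L=8))      # more antennas and bandwidth handle more interferers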


TABLE 1: FOUR BASIC CLASSES OF ADAPTIVE FILTERING APPLICATIONS

Class of adaptive filtering: Identification
Application: System identification - Given an unknown dynamical system, the purpose of system identification is to design an adaptive filter that provides an approximation to the system.
Application: Layered earth modelling - In exploration seismology, a layered model of the earth is developed to unravel the complexities of the earth's surface.

Class of adaptive filtering: Inverse modelling
Application: Equalization - Given a channel of unknown impulse response, the purpose of an adaptive equalizer is to operate on the channel output such that the cascade connection of the channel and the equalizer provides an approximation to an ideal transmission medium.

Class of adaptive filtering: Prediction
Application: Predictive coding - Adaptive prediction is used to develop a model of a signal of interest (e.g., a speech signal); rather than encoding the signal directly, predictive coding encodes the prediction error for transmission or storage. Typically, the prediction error has a smaller variance than the original signal, hence the basis for improved encoding.
Application: Spectrum analysis - In this application, predictive modelling is used to estimate the power spectrum of a signal of interest.

Class of adaptive filtering: Interference cancellation
Application: Noise cancellation - The purpose of an adaptive noise canceller is to subtract noise from a received signal in an adaptively controlled manner so as to improve the signal-to-noise ratio. Echo cancellation, experienced on telephone circuits, is a special form of noise cancellation. Noise cancellation is also used in electrocardiography.
Application: Beamforming - A beamformer is a spatial filter consisting of an array of antenna elements with adjustable weights (coefficients). The twin purposes of an adaptive beamformer are to adaptively control the weights so as to cancel interfering signals impinging on the array from unknown directions and, at the same time, provide protection to a target signal of interest.


5. FUTURE WORK
Recursive Least Square Algorithm (RLS)

The Recursive Least Squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. This is in contrast to other algorithms, such as least mean squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. However, this benefit comes at the cost of high computational complexity, and potentially poor tracking performance when the filter to be estimated changes.

As illustrated in Fig. 23, the RLS algorithm has the same two procedures as the LMS algorithm, except that it provides a tracking rate sufficient for fast fading channels. Moreover, the RLS algorithm is known to have stability issues due to the covariance update formula for P(n), which is used for the automatic adjustment in accordance with the estimation error, as follows:

P(0) = δ^(-1) I

where P is the inverse correlation matrix and δ is a regularization parameter, a small positive constant for high SNR and a large positive constant for low SNR.

For each instant of time n = 1, 2, 3, ..., the time-varying gain vector is

k(n) = P(n - 1) x(n) / ( λ + x^H(n) P(n - 1) x(n) )

where λ is the exponential forgetting factor. Then the a priori estimation error is

ξ(n) = d(n) - w^H(n - 1) x(n)





Fig 23. Block diagram of adaptive transversal filter using the RLS algorithm
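A compact sketch of one RLS iteration built from these quantities. The tap-weight and inverse-correlation updates, w(n) = w(n-1) + k(n) ξ*(n) and P(n) = λ^(-1) [P(n-1) - k(n) x^H(n) P(n-1)], follow the standard textbook recursion and are stated here as assumptions; δ, λ and the filter length are illustrative.

import numpy as np

def rls_step(w, P, x_vec, d, lam=0.99):
    """One RLS update; returns (new weights, new inverse correlation matrix, a-priori error)."""
    Px = P @ x_vec
    k = Px / (lam + np.vdot(x_vec, Px))            # time-varying gain vector k(n)
    xi = d - np.vdot(w, x_vec)                     # a priori estimation error xi(n)
    w_new = w + k * np.conj(xi)                    # w(n) = w(n-1) + k(n) xi*(n)   (assumed standard form)
    P_new = (P - np.outer(k, np.conj(x_vec)) @ P) / lam   # P(n) = (P(n-1) - k x^H P(n-1)) / lambda
    return w_new, P_new, xi

M, delta = 4, 0.01                                 # filter length and regularization (assumed)
w = np.zeros(M, dtype=complex)
P = np.eye(M, dtype=complex) / delta               # P(0) = delta^(-1) I
x_vec = np.array([1.0, 0.5, -0.2, 0.1], dtype=complex)
d = 0.7 + 0.0j
w, P, xi = rls_step(w, P, x_vec, d)
print(np.round(w, 3), round(abs(xi), 3))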

REFERENCES

[1] Garima Malik, Amandeep Singh Sappal, Adaptive Equalization Algorithms:An
Overview (IJACSA) International Journal of Advanced Computer Science and
Applications,Vol. 2, No.3, March 2011.
[2] Adaptive Signal Processing, B. Widrow, S. D. Stearns, 2001
[3] Analog and Digital Communications, S.Haykins, Prentice Hall, 1996.
[4] Modern Digital and analog Communications system, B.P.Lathi
[5] S. U. H. Qureshi, Adaptive equalization, Proceedings of the IEEE 1985, vol. 73, no. 9,
pp. 1349-1387.
[6] Theory and design of adaptive filters, John R. Treichler, Richard Johnson .Jr, Michael G.
Larimore, Prentice-hall of India private ltd.
[7] B. Petersen, D. Falconer, Suppression of adjacent-channel, cochannel and intersymbol
interference by equalizers and linear combiners, IEEE Trans on Communication, vol. 42, pp.
3109-3118, Dec. 1994
[8] Wang Junfeng, Zhang Bo, Design of Adaptive Equalizer Based on Variable Step LMS
Algorithm, Proceedings of the Third International Symposium on Computer Science and
Computational Technology(ISCSCT 10) Jiaozuo, P. R. China, 14-15, pp. 256-258,
August 2010.
[9] Antoinette Beasley and Arlene Cole-Rhodes, Performance of Adaptive Equalizer for
QAM signals, IEEE, Military communications Conference, 2005. MILCOM 2005, Vol.4,
pp. 2373 - 2377.
[10] En-Fang Sang, Hen-Geul Yeh, The use of transform domain LMS algorithm to
adaptive equalization, International Conference on Industrial Electronics, Control, and
Instrumentation, 1993. Proceedings of the IECON '93, Vol.3, pp.2061-2064, 1993.
[11] Kevin Banovic, Raymond Lee, Esam Abdel-Raheem, and Mohammed A. S. Khalid,
Computationally-Efficient Methods for Blind Adaptive Equalization, 48th Midwest
Symposium on Circuits and Systems, Vol.1, pp.341-344, 2005.
[12] En-Fang Sang, Hen-Geul Yeh, The use of transform domain LMS algorithm to
adaptive equalization, International Conference on Industrial Electronics, Control, and
Instrumentation, 1993. Proceedings of the IECON '93, Vol.3, pp.2061-2064, 1993.

[13] Mahmood Farhan Mosleh, Aseel Hameed AL-Nakkash, Combination of LMS and RLS
Adaptive Equalizer for Selective Fading Channel, European Journal of Scientific Research
ISSN 1450-216X Vol.43, No.1, pp.127-137, 2010.
[14] G. Iliev and N. Kasabov, Channel Equalization using Adaptive Filtering with Averaging, 5th Joint Conference on Information Science (JCIS), Atlantic City, USA, 2000.
[15] Sharma, O., Janyani, V and Sancheti, S. 2009. Recursive Least Squares Adaptive Filter
a better ISI Compensator, International Journal of Electronics, Circuits and Systems.
[16] Antoinette Beasley and Arlene Cole-Rhodes, Performance of Adaptive Equalizer for
QAM signals, IEEE, Military Communications Conference, 2005. MILCOM 2005, Vol.4,
pp. 2373 - 2377.
