
LEAST MEAN SQUARE ALGORITHM:

The earliest work on adaptive filters may be traced back to the late 1950s, during which time a number of researchers were working independently on theories and applications of such filters. From this early work, the least-mean-square algorithm emerged as a simple, yet effective, algorithm for the design of adaptive transversal (tapped-delay-line) filters. The LMS algorithm was devised by Widrow and Hoff in 1959 in their study of a pattern recognition machine known as the adaptive linear element, commonly referred to as the Adaline [1, 2]. The LMS algorithm is a stochastic gradient algorithm in that it iterates each tap weight of the transversal filter in the direction of the instantaneous gradient of the squared error signal with respect to the tap weight in question. Let \hat{w}(n) denote the tap-weight vector of the LMS filter, computed at iteration (time step) n. The adaptive operation of the filter is completely described by the recursive equation (assuming complex data)

\hat{w}(n+1) = \hat{w}(n) + \mu\,u(n)\left[d^{*}(n) - u^{H}(n)\,\hat{w}(n)\right]    (1)
where u(n) is the tap-input vector, d(n) is the desired response, and μ is the step-size parameter. The quantity enclosed in square brackets is the error signal. The asterisk denotes complex conjugation, and the superscript H denotes Hermitian transposition (i.e., ordinary transposition combined with complex conjugation). Equation (1) is testimony to the simplicity of the LMS filter. This simplicity, coupled with desirable properties of the LMS filter (discussed in the chapters of this book) and practical applications [3, 4], has made the LMS filter and its variants an important part of the adaptive signal processing toolkit, not just for the past 40 years but for many years to come. Simply put, the LMS filter has withstood the test of time. Although the LMS filter is very simple in computational terms, its mathematical analysis is profoundly complicated because of its stochastic and nonlinear nature. The stochastic nature of the LMS filter manifests itself in the fact that in a stationary environment, and under the assumption of a small step-size parameter, the filter executes a form of Brownian motion. Specifically, the small step-size theory of the LMS filter is almost exactly described by the discrete-time version of the Langevin equation

\Delta v_k(n) = v_k(n+1) - v_k(n) = -\mu\lambda_k\,v_k(n) + \phi_k(n), \qquad k = 1, 2, \ldots, M    (2)
which is naturally split into two parts: a damping force -\mu\lambda_k v_k(n) and a stochastic force \phi_k(n). The terms used herein are defined as follows: v_k(n) is the kth natural mode of the filter (the kth component of the weight-error vector expressed in the coordinate system defined by the eigenvectors of the correlation matrix of the tap inputs), \lambda_k is the corresponding eigenvalue of that correlation matrix, μ is the step-size parameter, and \phi_k(n) is the stochastic driving force.

To illustrate the validity of Eq. (2) as the description of small step-size theory of the LMS filter, we present the results of a computer experiment on a classic example of adaptive equalization. The example involves an unknown linear channel whose impulse response is described by the raised cosine [3]

h_n = \frac{1}{2}\left[1 + \cos\!\left(\frac{2\pi}{W}(n-2)\right)\right], \quad n = 1, 2, 3; \qquad h_n = 0 \text{ otherwise}    (3)
where the parameter W controls the amount of amplitude distortion produced by the channel, with the distortion increasing with W. Equivalently, the parameter W controls the eigenvalue spread (i.e., the ratio of the largest eigenvalue to the smallest eigenvalue) of the correlation matrix of the tap inputs of the equalizer, with the eigenvalue spread increasing with W. The equalizer has M = 11 taps. Figure 1 presents the learning curves of the equalizer trained using the LMS algorithm with the step-size parameter μ = 0.0075 and varying W. Each learning curve was obtained by averaging the squared value of the error signal e(n) versus the number of iterations n over an ensemble of 100 independent trials of the experiment.
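As a rough illustration of this experiment, the sketch below implements a real-valued LMS equalizer for the raised-cosine channel of Eq. (3), using the step size μ = 0.0075, M = 11 taps, and 100-trial ensemble averaging quoted above. The ±1 data sequence, the additive-noise variance, the decision delay, and the number of iterations are illustrative assumptions rather than values taken from the text; with these settings the curves for larger W should converge more slowly, consistent with the discussion of eigenvalue spread.

import numpy as np

def raised_cosine_channel(W):
    """Channel impulse response h_n = 0.5*[1 + cos((2*pi/W)*(n-2))], n = 1, 2, 3."""
    n = np.arange(1, 4)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi / W * (n - 2)))

def lms_equalizer_learning_curve(W=2.9, mu=0.0075, M=11, n_iter=500,
                                 n_trials=100, delay=7, noise_var=0.001):
    """Ensemble-averaged squared error e^2(n) of an LMS equalizer (real-valued data)."""
    rng = np.random.default_rng(0)
    h = raised_cosine_channel(W)
    mse = np.zeros(n_iter)
    for _ in range(n_trials):
        a = rng.choice([-1.0, 1.0], size=n_iter + M + delay)       # transmitted symbols
        u = np.convolve(a, h)[: len(a)]                            # channel output
        u = u + np.sqrt(noise_var) * rng.standard_normal(len(u))   # additive noise
        w = np.zeros(M)                                            # equalizer tap weights
        for k in range(n_iter):
            n = k + M - 1
            x = u[n - M + 1 : n + 1][::-1]    # tap-input vector [u(n), ..., u(n-M+1)]
            d = a[n - delay]                  # desired response: delayed transmitted symbol
            e = d - w @ x                     # error signal
            w = w + mu * x * e                # LMS tap-weight update (real-valued form)
            mse[k] += e * e
    return mse / n_trials

# Learning curves for increasing eigenvalue spread (larger W)
for W in (2.9, 3.1, 3.3, 3.5):
    curve = lms_equalizer_learning_curve(W=W)
    print(W, curve[-1])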

LEAST MEAN SQUARE ADAPTIVE FILTER


Introduction

Adaptive algorithms are a mainstay of Digital Signal Processing (DSP). They are used in a variety of applications, including acoustic echo cancellation, radar guidance systems, and wireless channel estimation, among many others. An adaptive algorithm is used to estimate a time-varying signal. There are many adaptive algorithms, such as the Recursive Least Squares (RLS) and Kalman filters, but the most commonly used is the Least Mean Square (LMS) algorithm. It is a simple but powerful algorithm that can be implemented to take advantage of Lattice FPGA architectures. Developed by Widrow and Hoff, the algorithm uses gradient descent to estimate a time-varying signal. The gradient-descent method finds a minimum, if it exists, by taking steps in the direction negative of the gradient. It does so by adjusting the filter coefficients to minimize the error. The LMS reference design consists of two main functional blocks: a FIR filter and the LMS algorithm. The FIR filter is implemented serially using a multiplier and an adder with feedback. The FIR result is normalized to minimize saturation. The LMS algorithm iteratively updates the coefficients and feeds them to the FIR filter. The FIR filter then uses the coefficients along with the input reference signal x(n) to generate the output y(n). The output y(n) is then subtracted from the desired signal d(n) to generate the error e(n), which is used by the LMS algorithm to compute the next set of coefficients. Figure 2.2 is a block diagram of system identification using adaptive filtering. The objective is to change (adapt) the coefficients of an FIR filter, W, to match as closely as possible the response of an unknown system, H. The unknown system and the adapting filter process the same input signal x[n] and have outputs d[n] (also referred to as the desired signal) and y[n].

Figure 2.2: Least Mean Square adaptive filter
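As a sketch of the system-identification setup in Figure 2.2, the following Python loop drives an assumed unknown FIR system H and an adaptive filter W with the same input x[n], forms the error e[n] = d[n] - y[n], and updates the coefficients with the LMS rule. The particular coefficients of H, the filter length, and the step size are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

H = np.array([0.3, -0.5, 0.8, 0.2])   # unknown system (assumed for illustration)
N = len(H)                            # adaptive filter has the same number of taps
w = np.zeros(N)                       # adaptive filter coefficients, W
mu = 0.05                             # step size (illustrative)

x = rng.standard_normal(5000)         # common input signal x[n]
x_buf = np.zeros(N)                   # most recent N input samples: x[n], ..., x[n-N+1]

for n in range(len(x)):
    x_buf = np.roll(x_buf, 1)         # shift the delay line
    x_buf[0] = x[n]
    d = H @ x_buf                     # output of the unknown system (desired signal)
    y = w @ x_buf                     # output of the adaptive filter
    e = d - y                         # error signal e[n] = d[n] - y[n]
    w = w + 2 * mu * e * x_buf        # LMS coefficient update

print("identified coefficients:", np.round(w, 3))   # should approach H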

GRADIENT-DESCENT ADAPTATION [6]

The adaptive filter, W, is adapted using the least-mean-square algorithm, which is the most widely used adaptive filtering algorithm. First the error signal, e[n], is computed as e[n] = d[n] - y[n], which measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter will

change its coefficients in an attempt to reduce the error. The coefficient update relation is a function of the error signal squared and is given by

h_{n+1}[i] = h_n[i] + \mu\left(-\frac{\partial (e)^2}{\partial h_n[i]}\right)    (2.7)

The term inside the parentheses represents the gradient of the squared error with respect to the ith coefficient. The gradient is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the error signal. Because the goal is to minimize the error, however, Equation (2.7) updates the filter coefficients in the direction opposite the gradient; that is why the gradient term is negated. The constant μ is a step size, which controls the amount of gradient information used to update each coefficient. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge; that is, the difference between the unknown and adaptive systems should get smaller and smaller. To express the gradient-descent coefficient update equation in a more usable manner, we can rewrite the derivative of the squared-error term as
\frac{\partial (e)^2}{\partial h[i]} = 2e\,\frac{\partial (e)}{\partial h[i]}    (2.8)

or,

\frac{\partial (e)^2}{\partial h[i]} = 2e\,\frac{\partial (d - y)}{\partial h[i]}    (2.9)

\frac{\partial (e)^2}{\partial h[i]} = 2e\,\frac{\partial}{\partial h[i]}\left(d - \sum_{i=0}^{N-1} h[i]\,x[n-i]\right)    (2.10)

\frac{\partial (e)^2}{\partial h[i]} = 2e\left(-x[n-i]\right)    (2.11)

which in turn gives us the final LMS coefficient update,

h_{n+1}[i] = h_n[i] + 2\mu\,e\,x[n-i]    (2.12)

The step size μ directly affects how quickly the adaptive filter converges toward the unknown system. If μ is very small, the coefficients change only a small amount at each update, and the filter converges slowly. With a larger step size, more gradient information is included in each update, and the filter converges more quickly; however, when the step size is too large, the coefficients may change too quickly and the filter will diverge. (It is possible in some cases to determine analytically the largest value of μ ensuring convergence.)
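A minimal sketch of the resulting coefficient update, Eq. (2.12), follows: the function applies h_{n+1}[i] = h_n[i] + 2*mu*e*x[n-i] to every coefficient. The function name and argument layout are just one possible arrangement, not part of the text.

def lms_update(h, x_recent, e, mu):
    """One LMS step per Eq. (2.12): h_{n+1}[i] = h_n[i] + 2*mu*e*x[n-i].

    h        -- current coefficients h_n[0..N-1]
    x_recent -- samples x[n], x[n-1], ..., x[n-N+1] (x[n-i] stored at index i)
    e        -- current error e = d[n] - y[n]
    mu       -- step size
    """
    return [h_i + 2.0 * mu * e * x_ni for h_i, x_ni in zip(h, x_recent)]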

CONVERGENCE AND STABILITY [6]

Assume that the true filter H(n) = H is constant, and that the input signal x(n) is wide-sense stationary. Then E{W(n)} converges to H as n → ∞ if and only if

0 < \mu < \frac{2}{\lambda_{\max}}    (2.13)

where \lambda_{\max} is the greatest eigenvalue of the autocorrelation matrix of the input. If this condition is not fulfilled, the algorithm becomes unstable and W(n) diverges. Maximum convergence speed is achieved when

\mu = \frac{2}{\lambda_{\max} + \lambda_{\min}}    (2.14)

where \lambda_{\min} is the smallest eigenvalue of the autocorrelation matrix. Given that μ is less than or equal to this optimum, the convergence speed is determined by \lambda_{\min}, with a larger value yielding faster convergence. This means that faster convergence can be achieved when \lambda_{\min} is close to \lambda_{\max}; that is, the maximum achievable convergence speed depends on the eigenvalue spread of the autocorrelation matrix.
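To make these bounds concrete, the sketch below estimates the autocorrelation matrix of an input signal from regressor outer products (one common estimate, assumed here) and evaluates the stability limit of Eq. (2.13) and the fastest-convergence step size of Eq. (2.14). The filter length and test signal are arbitrary.

import numpy as np

def step_size_bounds(x, N):
    """Estimate the LMS step-size limits of Eqs. (2.13)-(2.14) for an N-tap filter."""
    # Sample autocorrelation matrix built from length-N regressors of x.
    X = np.array([x[n - N + 1 : n + 1][::-1] for n in range(N - 1, len(x))])
    R = (X.T @ X) / len(X)
    eig = np.linalg.eigvalsh(R)                 # eigenvalues in ascending order
    lam_min, lam_max = eig[0], eig[-1]
    mu_stable = 2.0 / lam_max                   # mean stability: 0 < mu < 2/lambda_max
    mu_fastest = 2.0 / (lam_max + lam_min)      # fastest convergence, Eq. (2.14)
    return lam_min, lam_max, mu_stable, mu_fastest

x = np.random.default_rng(2).standard_normal(10000)
print(step_size_bounds(x, N=8))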
A white-noise signal has autocorrelation matrix R = \sigma^2 I, where \sigma^2 is the variance of the signal. In this case all eigenvalues are equal, and the eigenvalue spread is the minimum over all possible matrices. The common interpretation of this result is therefore that the LMS filter converges quickly for white input signals and slowly for colored input signals, such as processes with low-pass or high-pass characteristics.
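A quick numerical check of this claim, under the assumption that the colored signal is first-order low-pass filtered white noise: the eigenvalue spread of the sample autocorrelation matrix is near 1 for the white signal and much larger for the colored one.

import numpy as np

def eigenvalue_spread(x, N=8):
    """lambda_max / lambda_min of the N x N sample autocorrelation matrix of x."""
    X = np.array([x[n - N + 1 : n + 1] for n in range(N - 1, len(x))])
    eig = np.linalg.eigvalsh((X.T @ X) / len(X))
    return eig[-1] / eig[0]

rng = np.random.default_rng(3)
white = rng.standard_normal(20000)

colored = np.zeros_like(white)          # low-pass (colored) signal: x[n] = 0.9 x[n-1] + w[n]
for n in range(1, len(white)):
    colored[n] = 0.9 * colored[n - 1] + white[n]

print("white  :", round(eigenvalue_spread(white), 1))    # close to 1
print("colored:", round(eigenvalue_spread(colored), 1))  # much larger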

It is important to note that the above upper bound on μ only enforces stability in the mean, but the coefficients of W(n) can still grow infinitely large, i.e., divergence of the coefficients is still possible. A more practical bound is

0 < \mu < \frac{2}{\operatorname{tr}[R]}    (2.15)

where tr[R] denotes the trace of the autocorrelation matrix. This bound guarantees that the coefficients of W(n) do not diverge (in practice, the value of μ should not be chosen close to this upper bound, since it is somewhat optimistic due to the approximations and assumptions made in the derivation of the bound).
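Since tr[R] for an N-tap filter equals N times the average input power, the practical bound of Eq. (2.15) can be evaluated without an eigendecomposition. The sketch below does so; the filter length, test signal, and safety margin are chosen arbitrarily.

import numpy as np

def mu_trace_bound(x, N):
    """Practical step-size bound of Eq. (2.15): 0 < mu < 2 / tr[R].

    For an N-tap filter, tr[R] = N * E[x^2], so only the input power is needed.
    """
    input_power = np.mean(x ** 2)     # estimate of E[x^2]
    return 2.0 / (N * input_power)

x = np.random.default_rng(4).standard_normal(10000)
bound = mu_trace_bound(x, N=16)
mu = 0.1 * bound                      # stay well below the (optimistic) bound
print(f"upper bound = {bound:.4f}, chosen mu = {mu:.4f}")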

IMPLEMENTATION OF LMS ADAPTIVE FILTER


INTRODUCTION
In the LMS algorithm the weight vector is updated from sample to sample as follows:

h_{k+1} = h_k - \mu\,\nabla_k    (3.1)

where h_k and \nabla_k are the weight and true gradient vectors, respectively, at the kth sampling instant, and μ controls the stability and rate of convergence. The LMS algorithm for updating the weights from sample to sample is

h_{k+1} = h_k + 2\mu\,e_k\,x_k    (3.2)

where

e_k = y_k - h_k^{T} x_k    (3.3)

IMPLEMENTATION OF LMS ALGORITHM


1) Initially, set each weight h_k(i), for i = 0, 1, 2, ..., N-1, to an arbitrary fixed value such as 0. For each subsequent sampling instant, k = 1, 2, ..., carry out steps (2) to (4) below.

2) Compute the filter output

n_k = \sum_{i=0}^{N-1} h_k(i)\,x_{k-i}    (3.4)

3) Compute the error estimate

e_k = y_k - n_k    (3.5)

4) Update the next filter weights

h_{k+1}(i) = h_k(i) + 2\mu\,e_k\,x_{k-i}    (3.6)

The LMS algorithm requires approximately 2N+1 multiplications and 2N+1 additions for each new set of input and output samples.
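The steps above translate almost directly into a per-sample routine; a sketch follows, using the text's notation (n_k for the filter output, y_k for the desired sample, e_k for the error). The function name, and the choice to return the output and error alongside the new weights, are assumptions made for illustration.

import numpy as np

def lms_step(h, x_recent, y_k, mu):
    """One sampling instant of the LMS algorithm, following steps (2)-(4).

    h        -- current weights h_k(0..N-1)
    x_recent -- input samples x_k, x_{k-1}, ..., x_{k-N+1} (x_{k-i} stored at index i)
    y_k      -- desired output at instant k
    mu       -- step size
    """
    n_k = np.dot(h, x_recent)                             # step 2: filter output, Eq. (3.4)
    e_k = y_k - n_k                                       # step 3: error estimate, Eq. (3.5)
    h_next = h + 2.0 * mu * e_k * np.asarray(x_recent)    # step 4: weight update, Eq. (3.6)
    return h_next, n_k, e_k

# Step 1: initialize the weights to an arbitrary fixed value such as 0.
N = 4
h = np.zeros(N)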

3.3) FLOWCHART FOR THE LMS ADAPTIVE FILTER

IMPLEMENTATION OF DIFFERENT-ORDER LMS ADAPTIVE FILTERS


Introduction

The LMS algorithm is a linear adaptive filtering algorithm which, in general, consists of two basic processes:

1) A filtering process, which involves (a) computing the output of a linear filter in response to an input signal and (b) generating an estimation error by comparing this output with a desired response.

2) An adaptive process, which involves the automatic adjustment of the parameters of the filter in accordance with the estimation error.

The combination of these two processes working together constitutes a feedback loop. First, we have a transversal filter, around which the LMS algorithm is built; this component is responsible for performing the filtering process. Second, we have a mechanism for performing the adaptive control process on the tap weights of the transversal filter; hence it is called the adaptive weight-control mechanism.

Figure 3.1: LMS filter

1st order LMS adaptive filter


Introduction

Figure 3.2: 1st order LMS adaptive filter

dout(n) is the output of the transversal filter, y(n) is the desired signal, and e(n) is the estimation error, given as

e(n) = dout(n) - y(n)    (3.7)

w(n+1) = w(n) + 2\mu\,e(n)\,xin(n)    (3.8)

where w(n+1) is the updated weight and w(n) is the previous weight.

The components required for designing a 1st order LMS adaptive filter are:

Number of delay elements required = 1
Number of multipliers in transversal filter = 2
Number of multipliers in adaptive weight-control mechanism = 3
Number of adders in transversal filter = 1
Number of adders in adaptive weight-control mechanism = 3

Here the total number of multipliers is 5 and the total number of adders is 4. The delay of the QSD adder is 13.931 ns and the delay of the QSD multiplier is 11.348 ns, so the total delay of the 1st order LMS adaptive filter is 112.464 ns.

2nd order LMS adaptive filter

Figure 3.10: 2nd order LMS adaptive filter

dout(n) is the output of the transversal filter, y(n) is the desired output, and e(n) is the estimation error, given as

e(n) = dout(n) - y(n)    (3.9)

w(n+1) = w(n) + 2\mu\,e(n)\,xin(n)    (3.10)

where w(n+1) is the updated weight and w(n) is the previous weight.

The components required for designing a 2nd order LMS adaptive filter are:

Number of delay elements required = 2
Number of multipliers in transversal filter = 3
Number of multipliers in adaptive weight-control mechanism = 4
Number of adders in transversal filter = 2
Number of adders in adaptive weight-control mechanism = 4

Here the total number of multipliers is 7 and the total number of adders is 6. The delay of the QSD adder is 13.931 ns and the delay of the QSD multiplier is 11.348 ns, so the total delay of the 2nd order LMS adaptive filter is 163.022 ns.
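The two component counts and delays above follow a simple pattern; the helper below reproduces both quoted totals and, as an assumption, generalizes that pattern to an arbitrary filter order p using the quoted QSD adder and multiplier delays.

def lms_filter_cost(order, t_add=13.931, t_mul=11.348):
    """Component counts and total delay (ns) for a p-th order LMS adaptive filter.

    The counts generalize the 1st and 2nd order cases given in the text (assumed
    pattern): multipliers = (order + 1) + (order + 2), adders = order + (order + 2).
    Delays are the quoted QSD adder and multiplier delays.
    """
    multipliers = (order + 1) + (order + 2)   # transversal filter + weight-control mechanism
    adders = order + (order + 2)
    delay = adders * t_add + multipliers * t_mul
    return multipliers, adders, round(delay, 3)

print(lms_filter_cost(1))   # (5, 4, 112.464)
print(lms_filter_cost(2))   # (7, 6, 163.022)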
