
Adaptive Noise Cancellation Using LMS, NLMS and RLS Algorithms


Student of MS EE, I682@hotmail.com

Abstract— In statistical signal processing, adaptive filters play a vital role in noise cancellation. In adaptive filtering, the filter coefficients are changed automatically by an adaptive algorithm, in synchronization with the input signal and taking the error into account, so that the filter adapts to changing signal characteristics. In this research paper we have chosen the noise cancellation application of adaptive filters. First we give a brief overview of adaptive filters; then we familiarize the reader with the LMS, NLMS and RLS algorithms. LMS, NLMS and RLS are implemented in MATLAB to remove noise from an information-carrying signal and return useful information as output. Finally we compare the results: the graphs produced by these implementations are used to analyze the performance of each algorithm.

Keywords— Recursive Least Square (RLS), Least Mean Square (LMS), MATrix LABoratory (MATLAB) and Normalised Least Mean Square (NLMS).

I. INTRODUCTION

Adaptive filters are appropriate for systems whose statistical parameters are indefinite. The most frequently used adaptation algorithms are RLS and LMS: the RLS algorithm has a higher convergence speed than the LMS algorithm, but at the expense of computational complexity, while the LMS algorithm has less computational complexity than RLS. After reading this research paper one can easily understand the theory behind adaptive filters. Possible algorithm solutions and their performance results are presented here.

Figure 1: Hierarchy of our Research Work

II. ADAPTIVE FILTERS

An adaptive filter self-adjusts its coefficients according to an adaptive algorithm, iteratively modeling the relationship between the input and output signals. Figure 2 shows a general adaptive filter consisting of a shift-varying filter wn(z) and an adaptive algorithm for updating the filter coefficients wn(k).

Figure 2: Typical adaptive filter

Adaptive filters are capable of adapting to an unknown setting. They are frequently used because of their adaptability and low cost: they can operate in an unknown environment and can track time variations of the input statistics. Undeniably, adaptive filters have been utilized frequently and effectively over the years. Their applications can be classified as follows: identification, inverse modeling, prediction, and noise cancelling.
The following characteristic is common to all of the above-mentioned applications: the input fed to the adaptive filter is compared with the required output, which yields an error. This error then updates the weights (the coefficients of the filter). This is done to lessen the error so that we approach the optimum. In the best possible case the error is zero, but in practice it always remains somewhat greater than zero.

III. APPLICATIONS OF ADAPTIVE FILTERS

To familiarize the reader with the variety of adaptive filter applications, the four main applications are described below.

A. Identification or Modeling:
Figure 3 illustrates the identification problem. Here the same input x(n) is fed to the adaptive filter and to the system. y(n) is the adaptive filter's output, which is compared with the preferred response d(n). This comparison generates what is known as the error, which is then used to amend the weights w(n) so as to reduce the error and thereby identify the system.

Figure 3: System Identification or Modeling

B. Inverse Modeling:
Figure 4 illustrates inverse modeling, which is also known as deconvolution. Its goal is to find and track the inverse transfer function of the system. The system receives an input x(n) and its output u(n) is fed to the adaptive filter. When a delay is given to the input x(n) we get the preferred output d(n). We then compare the filter output y(n) with the desired response d(n), and the error is used to update, or correct, the weights of the filter.

Figure 4: Inverse Modeling

C. Prediction:
Figure 5 explains the logic of the predictor adaptive filter. This application gives the best estimate of an arbitrary signal: the filter uses the previous values of the arbitrary signal x(n), obtained by providing a delay to the signal fed to the adaptive filter input, and compares its output y(n) with the desired response d(n), which is the actual random signal x(n). When the output of the filter is used in this way to adjust the weights of the filter, the filter is said to be a predictor.

Figure 5: Predictor Adaptive Filter

D. Noise Cancellation:
Noise cancellation is the main focal point of this research paper. The idea behind this application is the following: let d(n) be a preferred output tarnished by a noise n2(n), called the main signal,

Main signal = s(n) + n2(n)

d(n) is then compared with y(n). The input fed to the adaptive filter is a reference signal n1(n), coming from the same noise source that tarnishes the main signal. e(n) is the system output, which is actually the comparison of the filter output with the preferred output. In the best case, this error equals the original signal s(n).
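
To make this signal model concrete, the following MATLAB sketch builds the signals named above. The sinusoidal source, the channel h and all variable names are illustrative assumptions, not taken from the paper.

    % Sketch of the noise-cancellation signal model (assumed setup).
    % The reference noise n1(n) reaches the primary sensor through an
    % unknown channel, producing the corrupting noise n2(n).
    N  = 30000;                     % number of samples
    s  = sin(2*pi*0.01*(1:N)');     % clean information-carrying signal (assumed sinusoid)
    n1 = randn(N,1);                % reference noise at the reference sensor
    h  = [0.8; -0.3; 0.1];          % assumed channel from noise source to primary sensor
    n2 = filter(h, 1, n1);          % correlated noise that tarnishes the main signal
    d  = s + n2;                    % main (primary) signal: d(n) = s(n) + n2(n)

An adaptive filter driven by n1(n) then estimates n2(n), so that the error e(n) = d(n) − y(n) approximates s(n).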

IV. ACTIVE NOISE CANCELLING

Adaptive noise cancellation is also known as active noise cancellation (ANC) and can simply be regarded as a class of noise cancellation. Our main focus is to lessen the noise intrusion or eliminate the disturbance. The approach chosen in the ANC algorithm is to try to mimic the original signal s(n). The central goal is to use an active noise cancelling algorithm to stop speech noise intrusion and to cope with several types of tarnished signal.

Interference cancellation is explained in figure 6. The intrusion signal at the reference sensor is a noise and is given to the system as the reference signal, while the primary sensor picks up the preferred signal. The adaptive filter produces a first response, with which the preferred signal is compared; this comparison results in an error. The error produced is then used as feedback to the filter, correcting the weights of the filter and the response of the system. Diverse approaches can be used to compute the best active noise canceller algorithm, and noise canceller algorithms can be designed for a preferred usage with diverse outputs. The specific features of each algorithm give it its unique applications, with its own merits and demerits.

Figure 6: Interference Cancellation

V. LEAST MEAN SQUARES (LMS)

In LMS algorithms we approach the desired filter by finding the filter coefficients that produce the least mean square of the error signal, where the error signal is the difference between the desired and the actual signal. The stochastic gradient descent method is used in this algorithm.

The basic idea behind the LMS filter is to approach the most favorable filter weights (R⁻¹P) by updating the filter weights in such a way that they converge to this optimum. First a small weight is assumed, most commonly zero. At each step the gradient of the mean square error is determined and the weights are updated accordingly. If the MSE gradient is positive, the error would keep increasing if the same weight were used repeatedly, so the weights need to be reduced to come closer to the required weight. In the same way, a negative gradient reflects the need to increase the weights to get closer to the desired weight. So the basic weight update equation is

w_{n+1} = w_n − μ∇ε[n]

Here ε represents the mean square error, and the minus sign shows that the weight must be changed opposite to the gradient slope: if the slope is positive, reduce the weight; if it is negative, increase the weight.

A. LMS Algorithm Summary
The LMS algorithm for a pth-order filter can be described as:

P = filter order
μ = step size
Start with: ĥ(0) = 0
Computation: for n = 0, 1, 2, ...
X(n) = [x(n), x(n − 1), …, x(n − p + 1)]^T
e(n) = d(n) − ĥ^H(n) X(n)
ĥ(n+1) = ĥ(n) + μ e*(n) X(n)
B. Convergence and Stability in the Mean of LMS:

One drawback of the LMS algorithm is that we cannot obtain the optimal weights in an absolute sense, because the exact values of the expectations are not used. Convergence in the mean is possible, however, and is helpful in achieving the desired results: the weights then vary around the optimal weights by only a small amount. Convergence in the mean can be misleading if the variance with which the weights change is high, a problem that arises when the value of the step size μ is not selected correctly. Thus an upper bound on μ is needed, given as

0 < μ < 2/λmax

where λmax is the largest eigenvalue of the autocorrelation matrix R; since R is positive semi-definite, its eigenvalues can never be negative. The stability of the algorithm depends on this value. If the selected value of μ is very small, the algorithm converges very slowly; a large value of μ results in faster convergence, but at the expense of stability around the minimum value. Maximum convergence speed is achieved by

μ = 2 / (λmax + λmin)

where λmin is the smallest eigenvalue of R. In order to converge faster the value of μ should be large, which is achievable when λmax is close to λmin; the eigenvalue spread of R also determines the speed of convergence.
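
As a small worked example of these bounds, both step-size limits can be estimated from data in MATLAB; the correlated test input below is an assumption for illustration.

    % Estimate the p x p input autocorrelation matrix R and the
    % step-size limits quoted above.
    p = 5;
    x = filter(1, [1 -0.7], randn(50000, 1));     % assumed correlated input
    r = zeros(p, 1);
    for k = 0:p-1
        r(k+1) = mean(x(1:end-k) .* x(1+k:end));  % autocorrelation lags
    end
    R = toeplitz(r);                              % autocorrelation matrix
    lam = eig(R);
    mu_max  = 2 / max(lam);                       % stability bound: 0 < mu < 2/lambda_max
    mu_fast = 2 / (max(lam) + min(lam));          % fastest convergence in the mean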
VI. NORMALIZED LEAST MEAN SQUARE (NLMS) ALGORITHM

The learning rate μ determines the stability of the algorithm and how effective it is. Sensitivity to the scaling of its inputs makes pure LMS less effective, because it makes the selection of the learning rate μ difficult, which is a drawback. The Normalized Least Mean Squares (NLMS) filter is a modification of the LMS algorithm that normalizes the update with the power of the input.

NLMS algorithm outline:
Parameters: P = filter order
μ = step size
Initialization: ĥ(0) = 0
Computation: for n = 0, 1, 2, ...
X(n) = [x(n), x(n − 1), …, x(n − p + 1)]^T
e(n) = d(n) − ĥ^H(n) X(n)
ĥ(n+1) = ĥ(n) + μ e*(n) X(n) / (X^H(n) X(n))

Optimal Learning Rate:
If there is no intrusion (v(n) = 0), then the optimal learning rate for the NLMS algorithm is

μ_opt = 1

and is independent of the input X(n) and of the real (unknown) impulse response h(n). In the general case, with intrusion v(n) ≠ 0, the most favorable learning rate is

μ_opt = E[|y(n) − ŷ(n)|²] / E[|e(n)|²]

where we assume that the signals v(n) and X(n) are uncorrelated with each other.
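
A corresponding MATLAB sketch of the NLMS update (our naming, real-valued case) is given below. The small constant delta regularizes the division by X^T(n)X(n); whether this corresponds to the δ = 3.2 quoted with the NLMS figures later is our assumption.

    function [e, y, w] = nlms_filter(x, d, p, mu, delta)
    % NLMS adaptive filter (sketch of the outline above, real-valued case).
    N = length(x);
    w = zeros(p, 1);
    y = zeros(N, 1); e = zeros(N, 1);
    xbuf = zeros(p, 1);
    for n = 1:N
        xbuf = [x(n); xbuf(1:p-1)];
        y(n) = w' * xbuf;
        e(n) = d(n) - y(n);
        % update normalized by the input power in the current window
        w = w + mu * e(n) * xbuf / (delta + xbuf' * xbuf);
    end
    end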

VII. RECURSIVE LEAST SQUARE (RLS) ALGORITHM

LMS may not provide a fast rate of convergence and a small mean square error. Moreover, the LMS algorithm requires knowledge of the autocorrelation of the input signal and the cross-correlation between the input and the desired output. Therefore, we consider error measures that do not involve expectations and may be computed directly from the data:

ℰ(n) = Σ_{i=0}^{n} |e(i)|²

No statistical information about x(n) or d(n) is required, since this measure may be evaluated directly from x(n) and d(n). In the RLS algorithm we introduce a forgetting factor λ:

ℰ(n) = Σ_{i=0}^{n} λ^{n−i} |e(i)|²

The RLS algorithm thus minimizes a weighted linear least squares error, and it gives outstanding performance in non-stationary situations.
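
A standard recursive solution of this weighted least-squares problem updates an inverse-correlation matrix P(n) together with the weights. The sketch below (our naming, real-valued case) is one common form; delta here is an initialization constant of this sketch, not a value taken from the paper.

    function [e, y, w] = rls_filter(x, d, p, lambda, delta)
    % RLS adaptive filter minimizing the weighted cost above (sketch).
    N = length(x);
    w = zeros(p, 1);
    y = zeros(N, 1); e = zeros(N, 1);
    P = (1/delta) * eye(p);           % inverse weighted autocorrelation estimate
    xbuf = zeros(p, 1);
    for n = 1:N
        xbuf = [x(n); xbuf(1:p-1)];
        k = (P * xbuf) / (lambda + xbuf' * P * xbuf);  % gain vector
        y(n) = w' * xbuf;
        e(n) = d(n) - y(n);           % a priori error
        w = w + k * e(n);             % weight update
        P = (P - k * (xbuf' * P)) / lambda;  % propagate the inverse matrix
    end
    end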
VIII. MATLAB EXPERIMENTS

A. LMS Algorithm

Some alterations have been made to the setup of figure 6 and to the LMS summary to attain interference cancelling. Figure 7 shows the result of applying the LMS algorithm to the intrusion problem, where s(n) is the input, the preferred signal is x(n) = s(n) + n2(n), and the error signal should be equal to the input signal s(n). In figure 7 the step-size parameter μ is set equal to 0.0002 and the length of the adaptive filter is kept at 5. The input signal s(n) is shown in blue, the input signal after the noise corruption s(n) + n2(n) is shown in green, and the error signal e(n) is shown in red.

Figure 7 (L = 5, μ = 0.0002) - Output of noise cancellation using the LMS algorithm.

Analyzing figure 7, we can see that the LMS algorithm has not performed excellently: it does not entirely remove the error signal e(n) and hence does not yield the original signal s(n) entirely free of the noise intrusion.

Figure 8 (L = 5, μ = 0.0002) - Mean-squared error of noise cancellation using the LMS algorithm.

Figure 8 shows the mean-squared error of the LMS algorithm for noise cancellation. This error is not the error signal e(n), but the difference between that signal and the input signal s(n). Even after 30000 iterations, convergence has not occurred. Practically, the algorithm should take only a reasonable number of iterations (a few seconds) for the error to reach a value close to its optimum in the mean-square sense.
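
A hypothetical driver reproducing a figure-7/figure-8-style experiment, reusing the signal model and the lms_filter sketched earlier (both assumed helper code, not the authors' script), is shown below.

    % Run the LMS noise canceller with the parameters quoted in figures 7-8.
    L = 5; mu = 2e-4;
    [e, y, w] = lms_filter(n1, d, L, mu);  % reference n1 in, primary d as desired
    figure; hold on;
    plot(s, 'b');                          % clean input signal (blue)
    plot(d, 'g');                          % corrupted signal s + n2 (green)
    plot(e, 'r');                          % error signal, ideally close to s (red)
    legend('s(n)', 's(n)+n2(n)', 'e(n)');
    mse = (e - s).^2;                      % figure-8-style curve: distance from s(n)
    figure; plot(10*log10(mse + eps));     % squared error in dB
    xlabel('iteration'); ylabel('MSE (dB)');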
B. NLMS Algorithm

Figures 9 and 10 show the output of noise cancellation using the NLMS algorithm. The input signal s(n) is tarnished by noise, resulting in the tarnished signal x(n) = s(n) + n2(n). The error signal e(n), which is expected to reproduce the input signal s(n), is shown in red.

Figure 9 (L = 5, δ = 3.2, μ = 0.005) - Output of noise cancellation using the NLMS algorithm.

Looking at figure 9, it can be seen that the algorithm has done a good job, removing the noise interfering with the original signal.

Figure 10 illustrates the MSE, which is the difference between the error signal and the input signal. In this figure the algorithm converges after roughly 10000 iterations; beyond that point it shows some variance, but not at the expense of convergence.

Figure 10 (L = 5, δ = 3.2, μ = 0.005) - Mean-squared error of noise cancellation using the NLMS algorithm.
C. RLS Algorithm

Figures 11 and 12 show the output of noise cancellation using the RLS algorithm. The algorithm has done a good job of removing the noise and proves to be a good active noise canceller. The original input signal s(n) is shown as the blue line, the green line shows the same signal after the noise corruption x(n) = s(n) + n2(n), and the red line shows the error signal, which must be close to the original input signal s(n).

Figure 11 (L = 5, λ = 1) - Output of noise cancellation using the RLS algorithm.

Figure 12 shows the cost function of the error, that is, the difference between the original input signal s(n) and the error signal e(n). By analyzing the figures it can be seen that the algorithm converges very fast compared with LMS and NLMS, within only the first few thousand iterations, and so works very efficiently in removing noise.

Figure 12 (L = 5, λ = 1) - Mean-squared error of noise cancellation using the RLS algorithm.
"Simulation and Performance Analyasis of
IX. RESULTS COMPARISON Adaptive Filter in Noise Cancellation."
International Journal of Engineering Science
Analyzing above figures , we came to the and Technology, 2010: Vol. 2(9),4373-4378.
conclusion that the LMS algorithm takes time [7] Wallace and R.B. Goubran. "Improved tracking
and has a very slow convergence for the given adaptive noise canceler for nonstationary
environments" IEEE Transactions on Signal
number of iterations making noise cancellation
Processing, mar 1992: vol.40, no.3, pp.700-703.
less effective as compare to NLMS and RLS. [8] Raj Kumar Thenua and S.K. AGARWAL”
Moreover in case of LMS even convergence Simulation and Performance Analyasis of
occurs but it incorporates a high error value, Adaptive Filter in Noise Cancellation”
about 30dB per 20dB as compare to other International Journal of Engineering Science and
algorithms. In a situation where we are Technology Vol. 2(9), 2010, 4373-4378.
concerned about the convergence speed LMS Functions (Universitext), New York: Springer,
algorithm is not favorable here we should select 1986.
the NLMS or RLS algorithm depending upon
speed and other requirements. RLS algorithm [9] Monson H. Hayes: Statistical Digital Signal
Processing and Modeling, Wiley, 1996, ISBN 0-
works three times more efficiently and faster
471-59431-8.
than the NLMS algorithm, So, in our research
span the most efficient is the RLS algorithm [10] Bouchard, M.; Quednau, S., "Multichannel
having faster speed of convergence, lesser error recursive-least-square algorithms and fast-
and efficient output. It has its own transversal-filter algorithms for active noise
computational cost higher than other two. control and sound reproduction systems," IEEE
Transactions on Speech and Audio Processing,
vol.8, no.5, pp.606-618, Sep 2000

X. CONCLUSIONS

In a nutshell, this research paper gives a brief idea of our achievements in the field of adaptive filter usage for noise cancellation: a comprehensive investigation and evaluation of the LMS, NLMS and RLS algorithms for noise cancellation.
