
Matt Watson

Project Report: One-Dimensional Adaptive Noise Cancellation Using LMS


I. Introduction
There are many situations where a desired signal must be obtained, but all that can be measured
is that signal along with additive noise. For some applications, where the characteristics of the
noise are known, the solution is to implement a fixed filter that reduces the noise power and
leaves the signal (mostly) unchanged.
However, if the characteristics of the noise are not known or change with time, a fixed filter will
be less effective. Adaptive filters continually adjust their parameters to perform the highest-quality
filtering possible in this situation, and they do not require prior knowledge of the noise
characteristics.
The diagram below can be used to conceptually explain an adaptive noise-cancelling filter. The
parameters can be described as follows:

s(n) is the desired signal to be recovered.

v(n) is the additive noise, measured at a location different from the location where the
signal is measured.
h(n) is the impulse response of the system that transmits the noise from the measurement
location of v(n) to the measurement location of s(n).
h'(n) is the impulse response of the adaptive filter. The goal of the algorithm is to match
h'(n) as closely to h(n) as possible.
y(n) is the additive noise after it has been filtered through h(n).
y'(n) is the additive noise after it has been filtered through h'(n).
d(n) is the desired signal plus the filtered noise.
e(n) is the final output of the adaptive noise-cancelling system, and should match s(n) as
closely as possible.

Figure 1: Adaptive Noise-Cancelling System

It is assumed that v(n), d(n), and e(n) may be directly measured, but s(n), h(n) and y(n) cannot.
As stated above, the goal of any adaptive noise-cancelling algorithm is to match the adaptive
filter, h'(n), to the actual impulse response of the system in question, h(n). If h'(n) is identical to
h(n), the output of the system, e(n), will simply be the desired signal, s(n).
Another way of stating this goal is that the power of e(n) must be minimized. To understand why
this is the case, consider that
d(n) = s(n) + y(n)
Eq. 1
therefore,
e(n) = d(n) - y'(n) = s(n) + y(n) - y'(n)
Eq. 2
Because the desired signal s(n) is uncorrelated with the noise, minimizing the power of e(n) is
equivalent to minimizing the power of y(n) - y'(n). The closer that h'(n) matches h(n), the closer
y'(n) matches y(n). If h'(n) = h(n), then y'(n) = y(n) and equation 2 becomes e(n) = s(n), which is
the desired signal.
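As a quick sanity check of equation 2, the following minimal MATLAB sketch confirms that
when h'(n) exactly equals h(n), the output e(n) reproduces s(n). The signals and the short FIR
response h used here are arbitrary stand-ins chosen for illustration, not the filters used later in
this report:

% Sanity check of equation 2: if h' = h, then e(n) = s(n)
s = randn(1,1000);            % stand-in desired signal
v = randn(1,1000);            % noise reference
h = [0.5 -0.3 0.2];           % example (known) noise path, FIR for simplicity
y = filter(h,1,v);            % noise as it reaches the primary measurement
d = s + y;                    % primary input: signal plus filtered noise
Hprime = h;                   % suppose the adaptive filter has converged exactly
yprime = filter(Hprime,1,v);  % adaptive filter output
e = d - yprime;               % system output
max(abs(e - s))               % zero to machine precision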
There are multiple algorithms for implementing adaptive noise-cancelling filters, but the least
mean squares (LMS) method is the focus of this report. Applications of adaptive noise
cancellation are often multi-dimensional, but for conceptual simplicity one-dimensional
cancellation is used in this report.

II. Applications
An example application of adaptive filters for noise cancellation is a fighter pilot's headset.
When a pilot's voice is picked up by a microphone, there may be additive noise caused by
cockpit noise filtering through the pilot's helmet. The impulse response of the helmet may
not be known, so adaptive filtering may be of use.
The desired signal plus filtered noise, d(n), can be picked up by the microphone inside the
helmet. The noise signal, v(n), may be obtained from a microphone inside the cockpit but
outside the helmet, and e(n) will be the output of the adaptive filter.
Another application of adaptive noise cancellation is in an auditorium. If the desired signal is the
voice of an orator, the orator's microphone may be used as d(n), while a microphone placed in a
different location in the auditorium may be v(n). The characteristics by which the auditorium
transmits unwanted noise to the microphone may change depending on the contents of the
auditorium, so adaptive filtering is a favorable solution.

III. MATLAB LMS Algorithm Explanation and Theoretical Background


In adaptive filters, the output e(n) is considered to be the error signal. The basic concept of the
LMS algorithm is to minimize the mean square error. This is done by estimating the gradient of
the squared error with respect to the filter weights and updating the next iteration's filter weights
in a direction opposite the gradient. Since the gradient, by definition, points in the direction of
steepest ascent of the error surface, moving the filter weights opposite the gradient steps in the
direction of steepest descent. As this is done repeatedly, the mean square error approaches its
minimum, which yields the minimum output noise power.
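Written out, one iteration of this steepest-descent idea (in the notation of [1], where ∇_j denotes
the gradient of the mean square error E{e_j²} with respect to the filter weights h_j) is

h_(j+1) = h_j - μ∇_j

Substituting the instantaneous gradient estimate of equation 5 below into this update rule
produces equation 3.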
A full discussion of the derivation of the LMS algorithm is presented in [1], but the important
result (equation A.15 in [1]) is that the filter weight vector is updated as follows:
h_(j+1) = h_j + 2μe_jX_j
Eq. 3
where X_j is a vector containing the last p samples of the input v(n), and p is the order of the
adaptive filter (the number of filter coefficients). The parameter μ is known as the step size, and its
selection affects how quickly the algorithm converges to the minimum error as well as the value of
the minimum error attained by the algorithm. The considerations for the choice of μ are discussed
later in this report.
The MATLAB implementation of the LMS algorithm can be viewed below. Its operation is
relatively simple. The for loop simulates time progressing. At each time step, the input vector
must be formatted as the last p samples of v(n), the noise input. The practical challenge of
formatting this vector before p samples have been obtained is solved by zero-padding the input
until enough samples have been received. This is necessary so the filter can be applied via vector
multiplication, i.e.
y'(n) = [h'_1 h'_2 ... h'_p][v(n) v(n-1) ... v(n-p+1)]^T
Eq. 4
The output signal (error signal), e(n), is then calculated according to equation 2. As discussed in
[1], the gradient of the squared error with respect to the filter weights can be approximated by
the instantaneous estimate

∇_j = -2e_jX_j

Eq. 5 (Eq. A.14 in [1])


Therefore, in order to update the filter weights in a direction opposite the gradient, the new filter
coefficients are determined according to equation 3. It should be noted that since equation 5 is
only an approximation of the gradient, the LMS algorithm will not perform quite as well in
practice as the ideal steepest-descent analysis predicts.
for n = 1:36000
    %This if statement ensures that the input vector is formatted
    %appropriately for computation. One step of the LMS algorithm
    %multiplies the filter coefficient vector by the input vector, so the
    %input vector must match the size of the filter coefficient vector.
    %If there have not yet been enough previous samples to fill the input
    %vector, it is padded with zeros. (The vector is named x to avoid
    %shadowing MATLAB's built-in input function.)
    if n < p
        x = [zeros(1,p-n),v(n:-1:1)];
    else
        x = v(n:-1:(n-p+1));
    end
    %The new output value e(n) is calculated by applying the most recent
    %iteration of the h'(n) filter; y(n) + s(n) is equivalent to d(n)
    e(n) = y(n) + s(n) - Hprime*(x.');
    %The next filter coefficients are the previous coefficients plus
    %(2 * step size * current error * input vector), per equation 3
    Hprime = Hprime + 2*u*e(n)*x;
end

Figure 2: LMS Algorithm Simulation
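For context, the loop above assumes the following setup, taken from the full listing in
appendix 1:

p = 20;                 % filter order (number of adaptive coefficients)
u = 0.01;               % step size
Hprime = zeros(1,p);    % adaptive filter coefficients, initialized to zero
e = zeros(1,36000);     % preallocated output/error signal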

IV. MATLAB Simulation Summary


The full MATLAB code used to implement the adaptive filtering simulation can be viewed in
appendix 1. A .wav file was processed using the wavread function in MATLAB and was used as
the signal source s(n). Colored noise v(n) was produced by first creating a vector of random
numbers and passing it through a coloring filter given by

H(z) = (1 - c)/(1 - cz^(-1)), with c = 0.3
Eq. 6
The error signal e(n) was initialized to zero. The LMS simulation detailed above was then
executed.
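In the full listing (appendix 1), this coloring step is implemented as:

c = 0.3;            % parameter for noise coloring
b = 1-c;
a = [1 -c];
v = filter(b,a,w);  % w is the vector of uniform random numbers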
The content of the onefive.wav file that was used for simulation can be seen below.

[Plot: signal power vs. sample number]

Figure 3: onefive.wav Signal Content


All filter simulations (except for those used in section VII) filtered the noise through an IIR
system defined by

H(z) = (z - 0.7)(z - 0.2)(z + 0.5) / ((z - 0.3)(z + 0.15)(z + 0.75))
Eq. 7
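The coefficient vectors used in appendix 1 follow directly from this pole-zero form; as a check,
MATLAB's poly function reproduces them:

b = poly([0.7 0.2 -0.5])     % numerator:   [1 -0.4 -0.31 0.07]
a = poly([0.3 -0.15 -0.75])  % denominator: [1 0.6 -0.1575 -0.03375]
y = filter(b,a,v);           % colored noise filtered through h(n)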

V. Effect of Filter Order on LMS Noise-Cancelling Results


The results of using the LMS algorithm for noise cancelling with a filter of varying order are
presented in this section. It should be noted that h(n) was IIR, so it would be impossible for any
finite-length h'(n) to perfectly match it.
It is also important to note that in the figures and discussion that follow, error is defined as
the difference between the output of the adaptive filter and the desired signal s(n). Put another
way,
Error² = (e(n) - s(n))²
Eq. 8
The reader should take care to avoid confusing the output of the filter, e(n), with the error
between the desired signal s(n) and the output of the filter.
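This is exactly how the error curves below are generated in the appendix listing:

err = e - s;    % filter output minus desired signal
plot(err.^2);   % squared error vs. sample number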

The graph below shows the Error² of the adaptive filter output using a filter order of p = 2.
[Plot: Error² vs. sample number]

Figure 4: 2-Pt IIR Filter, u = 0.01


It can be seen that the error remains relatively constant after the algorithm converges to the
minimum error, and that this constant value is higher than in either of the two following
simulations. This is because a 2-point filter cannot closely approximate an impulse response that
is infinitely long.

[Plot: Error² vs. sample number]

Figure 5: 5-Pt IIR Filter, u = 0.01


When comparing the error of a 5-point filter to that of the 2-point filter, it can be seen that the
minimum error obtained is much lower. This is because a 5-point filter can better approximate
an infinitely long filter than a 2-point filter can. However, it is also worth noting that there are
larger spikes in the error near where the power content of the onefive.wav file undergoes rapid
increases. This is because a large change in the power of e(n) produces large weight updates,
which can overshoot the desired filter.
It can be seen in the 20-pt filter results shown below that this effect increases with a larger
filter, but the minimum error obtained is lower than with a 5-pt filter, because a longer filter is a
better approximation of an infinite-length filter.
These results suggest that there is an ideal filter order: depending on the application, a filter
order that is too large may yield large increases in output noise power at certain instants,
whereas a filter order that is too small is unable to adequately approximate an IIR filter. A
sketch for exploring this tradeoff follows Figure 6.

[Plot: Error² vs. sample number]

Figure 6: 20-Pt IIR Filter, u = 0.01
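One way to explore the filter-order tradeoff is to sweep the order and compare steady-state
errors. The following sketch is not part of the original simulation; it assumes s, v, and y have
already been generated as in appendix 1, and it treats the final 10,000 samples as
post-convergence:

% Sketch: sweep the adaptive filter order and compare steady-state error
u = 0.01;
for p = [2 5 20]
    Hprime = zeros(1,p);
    e = zeros(1,36000);
    for n = 1:36000
        if n < p
            x = [zeros(1,p-n), v(n:-1:1)];
        else
            x = v(n:-1:(n-p+1));
        end
        e(n) = y(n) + s(n) - Hprime*(x.');
        Hprime = Hprime + 2*u*e(n)*x;
    end
    idx = 26001:36000;   % assumed steady-state window
    fprintf('p = %2d: steady-state mean Error^2 = %.4f\n', ...
        p, mean((e(idx)-s(idx)).^2));
end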

VI. Effect of Step Size on LMS Noise-Cancelling Results


There is a similar tradeoff to be considered when selecting the step size of an adaptive filter. It
can be seen below that with a relatively large step size of 0.1, the LMS algorithm converges to
the minimum error very quickly but must tolerate very large steady-state error.

[Plot: Error² vs. sample number]

Figure 7: 5-pt IIR Filter, u = 0.1


By decreasing the step size to 0.01, little speed of convergence is given up in order to obtain a
much smaller steady-state error.
[Plot: Error² vs. sample number]

Figure 8: 5-pt IIR Filter, u = 0.01
Finally, decreasing the step size even further to 0.001 yields a result with a very slow time of
convergence but a very small steady-state error.
[Plot: Error² vs. sample number]

Figure 9: 5-pt IIR Filter, u = 0.001


It should also be noted that if μ is chosen to be too large, the LMS algorithm will become
unstable and diverge. This is because the step size multiplies the negative gradient when
updating the filter weights; if the update is too drastic, the updated weights yield an even
larger gradient than before, and this process repeats as the filter oscillates with increasing
amplitude.
Further analysis of this concept is performed in [1]. Equation A.16 in [1] indicates that μ must
be greater than 0 but less than 1/λ_max in order for the algorithm to converge, where λ_max is
the largest eigenvalue of the autocorrelation matrix R, with
R = E{X_jX_j^T}
Eq. 9
where E refers to the expected value operator and X_j is a vector containing the last p samples of
v(n) at time n = j. This means that the more power the noise has, the smaller the step size must
be to maintain convergence of the LMS algorithm.
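As an illustration, the bound can be estimated numerically. This sketch is not part of the
original code; the sample-autocorrelation estimate of R is an assumption made for
demonstration:

% Sketch: estimate the stability bound 1/lambda_max from the noise v(n)
p = 5;                 % filter order used in this section
r = zeros(1,p);
for k = 0:p-1
    r(k+1) = mean(v(1:end-k).*v(1+k:end));  % autocorrelation at lag k
end
R = toeplitz(r);       % p-by-p autocorrelation matrix estimate
u_bound = 1/max(eig(R))  % step size must satisfy 0 < u < u_bound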

VII. Effect of FIR vs. IIR Filters on LMS Noise-Cancelling Results


Because h'(n) must be FIR, the output error is smaller when emulating another FIR filter than
when emulating an IIR filter. Figure 4 (shown previously) has been repeated here for ease of
comparison.
[Plot: Error² vs. sample number]

Figure 4: 2-Pt IIR Filter, u = 0.01


[Plot: Error² vs. sample number]

Figure 10: 2-pt FIR Filter, u = 0.01

Figure 4 was generated with h'(n) trying to emulate the IIR filter defined by equation 7. Figure
10 was generated using an FIR filter that has the same zeros but has all of its poles at 0. This
filter is defined by

H(z) = (z - 0.7)(z - 0.2)(z + 0.5) / z³
Eq. 10
It can be observed that the error with the FIR filter is less than half of the error with the IIR filter
for most samples.
This effect diminishes, however, as the filter order increases, because as h'(n) increases in
length, it becomes a better approximation of an IIR filter. A sketch of how the FIR case can be
generated follows.
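A minimal way to generate the FIR comparison case, assuming (as the pole-zero description
above implies) that only the denominator of equation 7 is replaced:

b = poly([0.7 0.2 -0.5]);   % same zeros as equation 7: [1 -0.4 -0.31 0.07]
y = filter(b,1,v);          % denominator 1 places all three poles at z = 0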

VIII. Conclusions
It has been shown that there is an ideal filter order for an LMS noise-cancellation
implementation. A filter order that is too large may yield large errors at times when the signal
power increases rapidly, but a filter order that is too small will have difficulty approximating a
long FIR filter or an IIR filter with acceptable error levels.
There is a tradeoff when selecting the step size of the algorithm between speed of convergence
and steady-state error. Additionally, a step size that is too large will cause the LMS algorithm to
become unstable altogether.
Finally, FIR systems are better approximated by an adaptive filter than IIR systems, because
adaptive filters must themselves be FIR.

IX. References
[1] Widrow, B., J. R. Glover, Jr., J. M. McCool, J. Kaunitz, C. S. Williams, R. H. Hearn, J. R.
Zeidler, Eugene Dong, Jr., and R. C. Goodlin. "Adaptive Noise Cancelling: Principles
and Applications." Proceedings of the IEEE 63.12 (1975): 1692-1716. IEEE Xplore. Web.
24 Feb. 2015.
[2] "Least Mean Squares Filter." Wikipedia. Wikimedia Foundation, n.d. Web. 4 May 2015.
[3] "Adaptive Filter." Wikipedia. Wikimedia Foundation, n.d. Web. 4 May 2015.

X. Appendices
Appendix 1: MATLAB Code
%ECE 529 Semester Project: Adaptive Filtering for noise cancellation

%Read in the file to be used as s(n)
%(wavread is the pre-R2012b audio reader; audioread replaces it in newer
%MATLAB releases)
[s,fs,bits] = wavread('onefive.wav');
s = s.';
nmax = length(s)-1; % index for the last sample

%Plot the content of the signal
figure;
plot(s.^2);
title('onefive.wav Signal Content');
xlabel('Sample #');
ylabel('Signal Power');

%Create a vector of random numbers to be used for noise generation
w = rand(1,nmax+1);

%Pass the noise through a coloring filter described by
%H(z) = (1-c)/(1-cz^-1)
c = 0.3; %parameter for noise coloring
b = 1-c;
a = [1 -c];
v = filter(b,a,w);

%Filter the colored noise through the transfer function
%H(z) = (z-0.7)(z-0.2)(z+0.5)/((z-0.3)(z+0.15)(z+0.75))
a = [1 0.6 -.1575 -.03375];
b = [1 -.4 -.31 .07];
y = filter(b,a,v);

%Create d(n) by adding the noise that has been filtered through H(z) to
%the desired signal s(n)
d = y + s;

%Create the adaptive filter array and step size
p = 20; %filter order
u = 0.01; %step size for the adaptive filter
Hprime = zeros(1,p);

%Initialize the e output to zeros (36000 = number of samples processed)
e = zeros(1,36000);

%Run the LMS algorithm for all samples
for n = 1:36000
    %Format the input vector: one step of the LMS algorithm multiplies
    %the filter coefficient vector by the input vector, so the input
    %vector must match the size of the filter coefficient vector. If
    %there have not yet been enough previous samples, it is padded with
    %zeros. The vector is named x to avoid shadowing MATLAB's built-in
    %input function.
    if n < p
        x = [zeros(1,p-n),v(n:-1:1)];
    else
        x = v(n:-1:(n-p+1));
    end
    %The new output value e(n) is calculated by applying the most recent
    %iteration of the h'(n) filter; y(n) + s(n) is equivalent to d(n)
    e(n) = y(n) + s(n) - Hprime*(x.');
    %The next filter coefficients are the previous coefficients plus
    %(2 * step size * current error * input vector), per equation 3
    Hprime = Hprime + 2*u*e(n)*x;
end

%Plot error results (err is used rather than error to avoid shadowing
%MATLAB's built-in error function)
figure;
hold on;
err = e - s;
plot(err.^2);
title('Error w/ 20-pt IIR Filter, u = 0.01');
xlabel('Sample #');
ylabel('Error^2');
