The problem of acoustic echo arises in hands-free telephony and teleconferencing
systems. In early telephony the microphone and loudspeaker were separated and no sound
could propagate between the speaker and the microphone, so no echo was transmitted back.
With a hands-free loudspeaker telephone, however, the sound from the loudspeaker is picked
up by the microphone and transmitted back to the sender, who perceives it as an echo. This
severely reduces conversation quality, even at very small echo delays.
This project presents implementations of acoustic echo cancellation algorithms in
MATLAB and the results of analysis of the broader systems involved. It focuses on the
Normalized Least Mean Square (NLMS) algorithm and the Variable Impulse Response Double
Talk Detector (VIRE DTD).
In the case of acoustic echo in telecommunications, the optimal output is an echoed signal that
accurately emulates the unwanted echo signal. This is then used to negate the echo in the return
signal. The better the adaptive filter emulates this echo, the more successful the cancellation will
be. This project examines various techniques and algorithms of adaptive filtering, employing
discrete signal processing in MATLAB.
CHAPTER - 1
BACKGROUND
1.1 Introduction
This project is entitled Acoustic Echo Cancellation using Digital Signal Processing. Acoustic
echo occurs when an audio signal is reverberated in a real environment, resulting in the original
intended signal plus attenuated, time delayed images of this signal.
This project will focus on the occurrence of acoustic echo in telecommunication systems.
Such a system consists of coupled acoustic input and output devices, both of which are active
concurrently. An example of this is a hands-free telephony system. In this scenario the system
has both an active loudspeaker and microphone input operating simultaneously. The system then
acts as both a receiver and transmitter in full duplex mode. When a signal is received by the
system, it is output through the loudspeaker into an acoustic environment. This signal is
reverberated within the environment and returned to the system via the microphone input. These
reverberated signals contain time delayed images of the original signal, which are then returned
to the original sender (Figure 1.1, ak is the attenuation, tk is time delay). The occurrence of
acoustic echo in speech transmission causes signal interference and reduced quality of
communication.
The method used to cancel the echo signal is known as adaptive filtering. Adaptive filters are
dynamic filters which iteratively alter their characteristics in order to achieve an optimal
desired output. An adaptive filter algorithmically alters its parameters in order to minimise a
function of the difference between the desired output d(n) and its actual output y(n). This
function is known as the cost function of the adaptive algorithm. Figure 1.2 shows a block
diagram of the adaptive echo cancellation system implemented throughout this thesis. Here
the filter H(n) represents the impulse response of the acoustic environment, W(n) represents
the adaptive filter used to cancel the echo signal. The adaptive filter aims to equate its output
y(n) to the desired output d(n) (the signal reverberated within the acoustic environment). At
each iteration the error signal, e(n)=d(n)-y(n), is fed back into the filter, where the filter
characteristics are altered accordingly.
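As a minimal illustration of this feedback structure in MATLAB, the sketch below forms the
desired signal d(n) with an assumed echo-path impulse response, an adaptive filter output
y(n), and the error e(n) = d(n) - y(n). The impulse response values and signal lengths are
illustrative assumptions, not taken from the project code.
x = randn(1, 1000);        % far-end input signal (illustrative)
h = [0.5 0.3 0.1];         % assumed room / echo-path impulse response H(n)
d = filter(h, 1, x);       % desired signal: echo picked up by the microphone
w = zeros(1, 3);           % adaptive filter W(n), initialised to zero
y = filter(w, 1, x);       % adaptive filter output y(n)
e = d - y;                 % error signal e(n) = d(n) - y(n), fed back to adapt W(n)
An adaptive algorithm, such as the LMS and NLMS recursions developed later, would use e
to update w on every sample.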
This project deals with acoustic echo as it applies to telecommunication systems, although
the techniques are applicable to a variety of other disciplines.
Before giving any explanation of adaptive filters we will begin our study with a short
overview of different types of signals and some basic concepts which will help to understand
the adaptive filter.
A discrete-time signal can be represented as a vector of samples, where each element of the
vector represents the instantaneous value of the waveform at integer multiples of the
sampling period. The value of the sequence corresponding to n times the sampling period Ts
is denoted x(n):
x(n) = x(nTs) (eq 1.1)
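As a small illustration of eq 1.1, the snippet below samples a 100 Hz sinusoid at an assumed
rate of 1000 samples/sec; both values are arbitrary choices for the example.
Fs = 1000;                 % sampling rate (samples/sec), illustrative
Ts = 1/Fs;                 % sampling period
n  = 0:99;                 % sample indices
x  = sin(2*pi*100*n*Ts);   % x(n) = x(nTs) for a 100 Hz sinusoid
stem(n, x);                % plot the discrete-time samples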
Discrete-time signals may arise by sampling a continuous-time signal, or they may be
generated by some discrete-time process. Whatever their origin, discrete-time signal
processing systems have many attractive features: they can be realised with devices such as
charge transport devices and general purpose digital computers, and can be used to simulate
analog systems.
A random signal, expressed by a function x(t), does not have a precise description of its
waveform. It may, however, be possible to express such random processes by statistical or
probabilistic models. A single occurrence of a random variable appears to behave
unpredictably, but if we take several occurrences of the variable, each denoted by n, then the
random signal is expressed by two variables, x(t, n).
The main characteristic of a random signal is known as the expectation of the signal. It is
defined as the mean value across all n occurrences of that random variable, denoted by
E[x(t)], where x(t) is the input random variable. It should be noted that the number of input
occurrences into the acoustic echo cancellation system is always 1. Throughout this thesis
the expectation of an input signal is therefore equal to the actual value of that signal.
However, the E[x(n)] notation shall still be used in order to derive the various algorithms
used in adaptive filtering.
Computers perform poorly when faced with raw sensory data. Teaching a computer to send
you a monthly electric bill is easy; teaching the same computer to understand your voice is a
major undertaking. Digital Signal Processing generally approaches the problem of voice
recognition in two steps: feature extraction followed by feature matching. Each word in the
incoming audio signal is isolated and then analyzed to identify the type of excitation and
resonant frequencies. These parameters are then compared with previous examples of spoken
words to identify the closest match. Often, these systems are limited to only a few hundred
words; can only accept speech with distinct pauses between words; and must be retrained for
each individual speaker. While this is adequate for many commercial applications.
If the function is a cross-correlation, the two signals are different; if it is an autocorrelation,
the two signals used in the function are the same.
a. Characterisation of Linear System
A discrete linear system is a digital implementation of a linear time-invariant system. Its
input is a vector representing the sampled input signal (we will use x[]), its output is a vector
of the same size as the input representing the sampled output signal (y[]). The system itself
has to obey the usual conventions of linearity, particularly that it obeys the principle of
superposition, so that f(a+b) = f(a) + f(b), where f() is the processing performed by the
system and a and b are arbitrary signals. Linear time-invariant systems can be characterised
in the frequency domain by their frequency response.
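The superposition property can be checked numerically. The sketch below uses an arbitrary
example FIR filter (the coefficients are illustrative assumptions) and verifies that f(a+b) and
f(a)+f(b) agree to within rounding error.
f = @(s) filter([0.2 0.5 0.3], 1, s);   % an example linear system
a = randn(1, 100);
b = randn(1, 100);
max(abs(f(a + b) - (f(a) + f(b))))      % essentially zero, confirming linearity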
The frequency response of a system tells us how a sinusoidal input is changed in
magnitude and phase as a function of its frequency. Although it is possible to implement
systems in the frequency domain, it is more typical to implement them in the time domain:
sample by sample. To do this we use the time domain equivalent of the frequency response,
the impulse response: each input sample triggers a copy of the impulse response, scaled
according to the amplitude of that sample. The sum of the overlapping triggered, scaled
impulse responses is the output signal. Mathematically this is called convolution, and can be
expressed as:
y(n) = Σ h(k) x(n-k), summed over k
b. Finite impulse response filters
The MATLAB signal processing toolbox provides the function fir1() for the design and
implementation of low-pass, high-pass and band-pass filters. The call h = fir1(n,w) generates
a truncated impulse response in h[] of length n samples, corresponding to a low-pass filter
with a cut-off at a normalised frequency w. In MATLAB filter design functions, frequencies
are normalised to half the sampling rate, i.e. a normalised frequency of 0.5 corresponds to
one quarter of the sampling rate (don't ask). Here we generate 1 second of white noise signal
at 10,000 samples/sec and low-pass filter it at 2500Hz with a filter of size 100 samples.
x = rand(1,10000);      % 1 second of white noise at 10,000 samples/sec
soundsc(x,10000);       % play the unfiltered noise
pause;                  % wait for a keypress
h = fir1(100,0.5);      % low-pass FIR filter: cut-off 0.5 x 5000 Hz = 2500 Hz
y = conv(h,x);          % filter the noise by convolution
soundsc(y,10000);       % play the filtered noise
High-pass filters are designed with
h = fir1(n,w,'high');
and band-pass filters are designed with
h = fir1(n,[w1 w2]);
where w1 and w2 are the normalised frequencies of the band edges.
In general you need truncated impulse responses that are long enough to capture the
majority of the energy in the full impulse response. Filters with 100 coefficients are not
uncommon. Filters with sharp edges need longer impulse responses.
c. Infinite length impulse response filters
Finite impulse response filters have a number of good characteristics: they are simple to
understand, have good phase response, and are always stable. However they are inefficient:
each output sample requires a convolution sum the size of the impulse response. That is, for
an impulse response of length N, the implementation will require N multiply-add operations
per output sample.
Increased efficiency can be obtained by allowing the linear system calculation to use not
only the current and past samples of the input signal, but also past samples from the output
signal. Often the calculation performed by a linear system can be simplified if it can re-use
some of the calculation that it performed on previous samples. This is called recursive system
design, in contrast to finite impulse response filters which can be called non-recursive.
Recursive systems are more complex to build, often have messy phase responses, and can be
unstable. However they can be significantly more efficient and have impulse responses of
infinite length.
A recursive system is specified by two vectors a[] and b[]: the b[] coefficients are
convolved with the current and past input samples, while the a[] coefficients are convolved
with the past output samples.
Here is a diagram of the idea:
In this figure the input signal x is processed sample-by-sample into the output signal y. At
the stage shown, we are processing sample number n. To calculate output sample y(n), the
filter multiplies the current and past input samples x(n), x(n-1), x(n-2), ..., x(n-k) by the set
of b coefficients b(0), b(1), b(2), ..., b(k) and sums them. The filter then multiplies the past
output samples y(n-1), y(n-2), ..., y(n-k) by the a coefficients a(1), a(2), ..., a(k) and sums
them. It then combines these to form y(n), according to this formula:
y(n) = Σ b(i) x(n-i) - Σ a(j) y(n-j), i = 0..k, j = 1..k
In MATLAB, this whole process is performed by the filter() function: y = filter(b,a,x)
where as before, x is the input signal, y the output signal, and where b and a are the
coefficients that define the recursive linear system.
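To see that filter() implements the recursion above, the same output can be computed sample
by sample. This is a sketch with arbitrary example coefficients (note that MATLAB requires
a(1) = 1, i.e. the current output has unit weight).
b = [0.5 0.2];  a = [1 -0.3];      % arbitrary example coefficients, a(1) = 1
x = randn(1, 50);
y = zeros(1, 50);
for n = 1:50
    for k = 0:length(b)-1          % feed-forward part: b convolved with inputs
        if n-k >= 1, y(n) = y(n) + b(k+1)*x(n-k); end
    end
    for k = 1:length(a)-1          % feedback part: a convolved with past outputs
        if n-k >= 1, y(n) = y(n) - a(k+1)*y(n-k); end
    end
end
max(abs(y - filter(b, a, x)))      % agrees with filter() to rounding error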
The MATLAB signal processing toolbox contains a number of different functions for
designing recursive low-pass, high-pass and band-pass filters. We shall only refer to one
simple design, the Butterworth filter. This design generates filters with maximally flat
responses in the pass band, although they may not have as steep a cut-off as other designs.
[b,a] = butter(10,0.5);   % 10th order Butterworth low-pass, cut-off at normalised frequency 0.5
y = filter(b,a,x);        % apply the recursive filter to the noise signal
soundsc(y,10000);         % play the result
d. Frequency Response
To display a graph of the frequency response of a filter, use the function freqz(). For a
recursive system with coefficients [b,a], this can be called with freqz(b,a,n,Fs);
The argument n is the number of points to calculate; 512 is a suitable size. The argument
Fs allows you to scale the graph to any given sample rate. For a non-recursive system, where
only the impulse response is available, use:
freqz(h,1,512,Fs);
By default, freqz() plots both the magnitude response and the phase response. You can plot
just the magnitude response (in decibels) with this code:
[h,f] = freqz(b,a,512,Fs);
plot(f,20*log10(abs(h)));
Here h[] stores the complex response and f[] stores the frequency indices.
CHAPTER - 2
FILTERS
2.1 What is a Filter
The term filter is commonly used to refer to any device or system that takes a mixture of
particles/elements (frequency components) from its input and processes them according to
some specific rules to generate a corresponding set of particles/elements at its output.
OR
A filter can be defined as a piece of software or hardware that takes an input signal and
processes it so as to extract and output certain desired elements of that signal. Filters can be
linear or non-linear.
Here we consider only linear filters. However, we do not usually think of something as a
filter unless it can modify the sound in some way. For example, speaker wire is not
considered a filter, but the speaker is (unfortunately). The different vowel sounds in speech
are produced primarily by changing the shape of the mouth cavity, which changes the
resonances and hence the filtering characteristics of the vocal tract. The tone control circuit
in an ordinary car radio is a filter, as are the bass, midrange, and treble boosts in a stereo
preamplifier. Graphic equalizers, reverberators, echo devices, phase shifters, and
speaker crossover networks are further examples of useful filters in audio. There are also
examples of undesirable filtering, such as the uneven reinforcement of certain frequencies in
a room with bad acoustics. A well-known signal processing wizard is said to have remarked,
"when you think about it, everything is a filter."
A digital filter is just a filter that operates on digital signals, such as sound represented
inside a computer. It is a computation which takes one sequence of numbers (the input
signal) and produces a new sequence of numbers (the filtered output signal). The filters
mentioned in the previous paragraph are not digital only because they operate on signals that
are not digital. It is important to realize that a digital filter can do anything that a real-world
filter can do: all the filters alluded to above can be simulated to an arbitrary degree of
precision digitally. Thus, a digital filter is only a formula for going from one digital signal to
another.
Systems that are designed to pass some frequencies essentially undistorted and significantly
attenuate or eliminate others are referred to as frequency selective filters.
An FIR filter computes its output as a weighted sum of the current and past input samples:
y(n) = Σ h(i) x(n-i), i = 0, 1, ..., N-1 (eqn 2.1)
Different techniques are available for the design of FIR filters, such as a commonly used
technique that utilizes the Fourier series. A very useful feature of an FIR filter is that it can
guarantee linear phase. The linear phase feature can be very useful in applications such as
speech analysis, where phase distortion can be very critical. For example, with linear phase,
all input sinusoidal components are delayed by the same amount. Otherwise, harmonic
distortion can occur.
A recursive equation of this type represents an infinite impulse response (IIR) filter: the
output depends on the inputs as well as past outputs (through feedback).
It is useful to introduce a number of terms describing the characteristics of frequency
selective filters. In particular, while the nature of the frequencies to be passed by a frequency
selective filter varies considerably from application to application, several basic filters are
widely used and have been given names indicative of their function.
Note that each of these filters is symmetric about w = 0, and thus there appear to be two
pass bands for the high-pass and band-pass filters. This is a consequence of our having
adopted the use of complex exponential signals. Note that the characteristics of the
continuous-time and discrete-time ideal filters differ by virtue of the fact that for discrete-time
filters the frequency response must be periodic with period 2π, with low frequencies near
even multiples of π and high frequencies near odd multiples of π.
CHAPTER - 3
ADAPTIVE FILTERS
As we begin our study of adaptive filters it may be important to understand the meaning
of the terms adaptive and filter in a very general sense. The adjective adaptive can be
understood by considering a system which is trying to adjust itself so as to respond to some
phenomenon that is taking place in its surroundings. In other words, the system tries to adjust
its parameters with the aim of meeting some well-defined goal or target which depends upon
the state of the system as well as its surroundings. This is what adaptation means. Moreover,
there is a need for a set of steps or a certain procedure by which this process of adaptation is
carried out. Finally, the system that carries out or undergoes the process of adaptation is
called by the more technical name filter.
Depending upon the time required to meet the final target of the adaptation process, which
we will call the convergence time, we can have a variety of adaptation algorithms and filter
structures.
The output of the adaptive transversal filter is given by
y(n) = Σ wi(n) x(n-i), i = 0, 1, ..., N-1
where the wi(n) are the filter tap weights (coefficients) and N is the filter length. We refer to
the input samples x(n-i), for i = 0, 1, ..., N-1, as the filter tap inputs. The tap weights, which
may vary in time, are controlled by the adaptive algorithm.
Designing the filter does not require any other frequency response information or
specification. To define the self-learning process the filter uses, you select the adaptive
algorithm used to reduce the error between the output signal y(k) and the desired signal d(k).
When the LMS performance criterion for e(k) has reached its minimum value through the
iterations of the adaptive algorithm, the adaptive filter has converged to a solution. The
output from the adaptive filter then closely matches the desired signal d(k). When you
change the input data characteristics, sometimes called the filter environment, the filter
adapts to the new environment by generating a new set of coefficients for the new data.
Notice that when e(k) goes to zero and remains there, you have achieved perfect adaptation:
the ideal result, but not likely in the real world.
So the system has six main components to be defined:
Input signal
Desired signal
Output signal
Error signal
FILTER - Filtering process
Adaptive process - Some kind of algorithm
The coefficients of an adaptive filter are adjusted to compensate for changes in input
signal, output signal, or system parameters. Instead of being rigid, an adaptive system can
learn the signal characteristics and track slow changes. An adaptive filter can be very useful
when there is uncertainty about the characteristics of a signal or when these characteristics
change.
Adaptive filtering has enabled advancements such as noise filtering, system identification,
and voice prediction. Standard DSP techniques, however, are not enough to solve these
problems quickly and obtain acceptable results. Adaptive filtering techniques must be
implemented to promote accurate solutions and a timely convergence to those solutions. A
number of adaptive structures have been used for different applications in adaptive filtering.
All of the above systems are similar in the implementation of the algorithm, but differ in
system configuration. All four systems have the same general parts: an input x(n), a desired
result d(n), an output y(n), an adaptive transfer function w(n), and an error signal e(n) which
is the difference between the desired output d(n) and the actual output y(n). In addition to
these parts, the system identification and inverse system configurations have an unknown
linear system u(n) that can receive an input and give a linear output to the given input.
3.3.1 System Identification
The adaptive system identification configuration is primarily responsible for determining a
discrete estimation of the transfer function for an unknown digital or analog system. The
same input x(n) is applied to both the adaptive filter and the unknown system, and their
outputs are compared (see figure). The output of the adaptive filter, y(n), is subtracted from
the output of the unknown system, which serves as the desired signal d(n). The resulting
difference is the error signal e(n), used to manipulate the filter coefficients of the adaptive
system, trending towards an error signal of zero. After a number of iterations of this process,
and if the system is designed correctly, the adaptive filter's transfer function will converge
to, or near to, the unknown system's transfer function. For this configuration, the error signal
does not have to go to zero for the filter to closely approximate the given system, although
convergence to zero is the ideal situation. If the error is nonzero there will be a difference
between the adaptive filter transfer function and the unknown system transfer function, and
the magnitude of that difference will be directly related to the magnitude of the error signal.
Additionally, the order of the adaptive system will affect the smallest error that the system
can obtain. If there are insufficient coefficients in the adaptive system to model the unknown
system, it is said to be under-specified. This condition may cause the error to converge to a
nonzero constant instead of zero. In contrast, if the adaptive filter is over-specified, meaning
that there are more coefficients than needed to model the unknown system, the error will
converge to zero, but the extra coefficients will increase the time it takes for the filter to
converge.
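A compact MATLAB sketch of this configuration is given below; the "unknown" system, the
filter length, and the step size are illustrative assumptions. Because the adaptive filter here
has more coefficients than the unknown system, it is over-specified and its leading weights
converge toward the unknown impulse response.
h  = [0.8 -0.4 0.2 0.1];     % the "unknown" system to be identified (assumed)
N  = 8;                      % adaptive filter length (over-specified on purpose)
mu = 0.05;                   % step size (illustrative)
w  = zeros(N, 1);            % adaptive filter coefficients
x  = randn(1, 5000);         % common input applied to both systems
d  = filter(h, 1, x);        % unknown system output = desired signal d(n)
for n = N:length(x)
    xn = x(n:-1:n-N+1)';     % current tap-input vector
    e  = d(n) - w' * xn;     % error between unknown system and adaptive filter
    w  = w + 2*mu*e*xn;      % LMS tap-weight update
end
w(1:4)'                      % approaches h = [0.8 -0.4 0.2 0.1]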
The basic principle of noise cancellation is to cancel a sound wave by generating another
sound wave exactly out of phase with the first one; the superposition of the two waves then
results in silence. A simple example is the adaptive notch structure, which can be used to
notch out or cancel/reduce a sinusoidal noise signal. This structure has only two weights or
coefficients.
E[e²(n)] = E[(d(n) - y(n))²] (eq 3.4)
b. Convergence Rate
The convergence rate determines the rate at which the filter converges to its resultant
state. Usually a faster convergence rate is a desired characteristic of an adaptive system.
Convergence rate is not, however, independent of the other performance characteristics:
there is a tradeoff between an improved convergence rate and the other performance criteria.
For example, if the convergence rate is increased, the stability characteristics will decrease,
making the system more likely to diverge instead of converging to the proper solution.
Likewise, a decrease in convergence rate can make the system more stable. This shows that
the convergence rate can only be considered in relation to the other performance metrics, not
by itself with no regard to the rest of the system.
c. Minimum Mean Square Error
The minimum mean square error (MSE) is a metric indicating how well a system can
adapt to a given solution. A small minimum MSE is an indication that the adaptive system
has accurately modeled, predicted, adapted and/or converged to a solution for the system. A
very large MSE usually indicates that the adaptive filter cannot accurately model the given
system or the initial state of the adaptive filter is an inadequate starting point to cause the
adaptive filter to converge. There are a number of factors which will help to determine the
minimum MSE including, but not limited to: quantization noise, the order of the adaptive
system, measurement noise, and the error of the gradient due to the finite step size.
d. Computational Complexity
Computational complexity is particularly important in real time adaptive filter
applications.
When a real time system is being implemented, there are hardware limitations that may affect
the performance of the system. A highly complex algorithm will require much greater
hardware resources than a simplistic algorithm.
e. Stability
Stability is probably the most important performance measure for the adaptive system. By
the nature of the adaptive system, there are very few completely asymptotically stable
systems that can be realized. In most cases the systems that are implemented are marginally
stable, with the stability determined by the initial conditions, transfer function of the system
and the step size of the input.
f. Robustness
The robustness of a system is directly related to the stability of a system. Robustness is a
measure of how well the system can resist both input and quantization noise.
g. Filter Length
The filter length of the adaptive system is inherently tied to many of the other performance
measures. The length of the filter specifies how accurately a given system can be modeled by
the adaptive filter. In addition, the filter length affects the convergence rate by increasing or
decreasing computation time; it can affect the stability of the system at certain step sizes; and
it affects the minimum MSE. If the filter length of the system is increased, the number of
computations will increase, decreasing the maximum convergence rate. Conversely, if the
filter length is decreased, the number of computations will decrease, increasing the maximum
convergence rate. For stability, increasing the length of the filter for a given system may add
additional poles or zeroes that may be smaller than those that already exist. In this case the
maximum step size, or maximum convergence rate, will have to be decreased to maintain
stability. Finally, if the system is under-specified, meaning there are not enough poles and/or
zeroes to model the system, the mean square error will converge to a nonzero constant. If the
system is over-specified, meaning it has too many poles and/or zeroes for the system model,
it will have the potential to converge to zero, but the increased calculations will affect the
maximum convergence rate possible.
CHAPTER - 4
4.2 Telecommunications
Telecommunications is about transferring information from one location to another. This
includes many forms of information: telephone conversations, television signals, computer
files, and other types of data. To transfer the information, you need a channel between the
two locations. This may be a wire pair, radio signal, optical fiber, etc. Telecommunications
companies receive payment for transferring their customer's information, while they must
pay to establish and maintain the channel. The financial bottom line is simple: the more
information they can pass through a single channel, the more money they make. DSP has
revolutionized the telecommunications industry in many areas:
Signaling tone generation and detection, frequency band shifting, filtering to remove power
line hum, etc. Three specific examples from the telephone network will be discussed here:
multiplexing, compression, and echo control.
4.2.1 Echo control
Echoes are a serious problem in long distance telephone connections. When you speak
into a telephone, a signal representing your voice travels to the connecting receiver, where a
portion of it returns as an echo. If the connection is within a few hundred miles, the elapsed
time for receiving the echo is only a few milliseconds. The human ear is accustomed to
hearing echoes with these small time delays, and the connection sounds quite normal. As the
distance becomes larger, the echo becomes increasingly noticeable and irritating. The delay
can be several hundred milliseconds for intercontinental communications and is particularly
objectionable. Digital Signal Processing attacks this type of problem by measuring the
returned signal and generating an appropriate anti-signal to cancel the offending echo. This
same technique allows speakerphone users to hear and speak at the same time without
fighting audio feedback (squealing). It can also be used to reduce environmental noise by
canceling it with digitally generated anti-noise.
4.2.2 Echo Location
A common method of obtaining information about a remote object is to bounce a wave off
of it. For example, radar operates by transmitting pulses of radio waves and examining the
received signal for echoes from aircraft. In sonar, sound waves are transmitted through the
water to detect submarines and other submerged objects. Geophysicists have long probed the
earth by setting off explosions and listening for the echoes from deeply buried layers of rock.
While these applications have a common thread, each has its own specific problems and
needs. Digital Signal Processing has produced revolutionary changes in all three areas.
Radar is an acronym for Radio Detection And Ranging. In the simplest radar system, a
radio transmitter produces a pulse of radio frequency energy a few microseconds long. This
pulse is fed into a highly directional antenna, where the resulting radio wave propagates
away at the speed of light. Aircraft in the path of this wave will reflect a small portion of the
energy back toward a receiving antenna, situated near the transmission site. The distance to
the object is calculated from the elapsed time between the transmitted pulse and the received
echo. The direction to the object is found more simply; you know where you pointed the
directional antenna when the echo was received. The operating range of a radar system is
determined by two parameters: how much energy is in the initial pulse, and the noise level of
the radio receiver.
Unfortunately, increasing the energy in the pulse usually requires making the pulse
longer. In turn, the longer pulse reduces the accuracy and precision of the elapsed time
measurement. This results in a conflict between two important parameters: the ability to
detect objects at long range, and the ability to accurately determine an object's distance. DSP
has revolutionized radar in three areas, all of which relate to this basic problem. First, DSP
can compress the pulse after it is received, providing better distance determination without
reducing the operating range. Second, DSP can filter the received signal to decrease the
noise. This increases the range without degrading the distance determination. Third, DSP
enables the rapid selection and generation of different pulse shapes and lengths. Among other
things, this allows the pulse to be optimized for a particular detection problem. Now the
impressive part: much of this is done at a sampling rate comparable to the radio frequency
used, as high as several hundred megahertz! When it comes to radar, DSP is as much about
high-speed hardware design as it is about algorithms.
In telephone networks, echoes are caused by the propagation time over long distances and/or
the digital encoding of the transmitted signals.
Fig 4.2: Sources of acoustic echo in a room when using a hands-free telephone
If the transfer function of the echo path were known, the far-end signal could be filtered
through it and subtracted from the microphone signal to obtain the near-end signal alone.
However, the transfer function is unknown in practice, and so it must be identified. The
solution to this problem is to use an adaptive filter.
In a room with no propagation delay and no room impulse response (i.e. a studio with
dampening walls and the microphone placed with no distance from the loudspeaker) the
solution would simply be to subtract the input (far-end talk), which is readily available, from
the output signal picked up by the microphone, which consists of both near-end talk and
far-end talk. After the subtraction the output signal would consist of near-end talk only. This
is, of course, an idealised situation; in a real room the echo path must be identified
adaptively.
Computational requirements:
Here the issues of concern include
(a) The number of operations (i.e., multiplications, divisions, and additions/
subtractions) required to make one complete iteration of the algorithm.
(b) The size of memory locations required to store the data and the program.
(c) The investment required to program the algorithm on a computer.
The cost function used is the mean square error (i.e., the mean square value of the difference
between the desired response and the transversal filter output). This cost function is precisely
a second order function of the tap weights in the transversal filter. To develop a recursive
algorithm for updating the tap weights of the adaptive transversal filter, we proceed in two
stages. First, we use an iterative procedure to solve the Wiener-Hopf equations (i.e., the
matrix equation defining the optimum Wiener solution); the iterative procedure is based on
the method of steepest descent, which is a well known technique in optimization theory. This
method requires the use of a gradient vector, the value of which depends on two parameters:
the correlation matrix of the tap inputs in the transversal filter and the cross-correlation
vector between the desired response and the same tap inputs. Next, we use instantaneous
values for these correlations so as to derive an estimate for the gradient vector, making it
assume a stochastic character in general. The resulting algorithm is widely known as the least
mean square (LMS) algorithm, the essence of which, for the case of a transversal filter
operating on real valued data, may be described as
w(n+1) = w(n) + 2μ e(n) x(n)
where the error signal e(n) is defined as the difference between some desired response and
the actual response of the transversal filter produced by the tap input vector.
The LMS algorithm is a linear adaptive filter algorithm, which in general consists of two
basic processes.
1. A filter process: which involves
a. Computing the output of a linear filter in response to an input signal.
b. Generating an estimation error by comparing this output with a desired response.
2. An adaptive process: which involves the automatic adjustment of the parameters of the
filter in accordance with the estimation error.
The combination of these two processes working together constitutes a feedback loop.
First, we have a transversal filter, around which the LMS algorithm is built; this component
is responsible for performing the filtering process. Second, we have a mechanism for
performing the adaptive control process on the tap weights of the transversal filter. With each
iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated
according to the following formula (Farhang-Boroujeny 1999):
w(n+1) = w(n) + 2μ e(n) x(n)
where x(n) = [x(n), x(n-1), ..., x(n-N+1)] is the input vector of time-delayed input values and
w(n) is the filter tap weight vector at time n. The parameter μ is known as the step size
parameter and is a small positive constant. This step size parameter controls the influence of
the updating factor.
Selection of a suitable value for μ is imperative to the performance of the LMS algorithm: if
the value is too small, the time the adaptive filter takes to converge on the optimal solution
will be too long; if μ is too large, the adaptive filter becomes unstable and its output diverges.
The LMS algorithm is the simplest to implement and is stable when the step size parameter
is selected appropriately. This, however, requires prior knowledge of the input signal, which
is not feasible for the echo cancellation system.
As the negative gradient vector points in the direction of steepest descent for the
N-dimensional quadratic cost function, each recursion shifts the value of the filter coefficients
closer toward their optimum value, which corresponds to the minimum achievable value of
the cost function, ξ(n). The LMS algorithm is a random process implementation of the
steepest descent algorithm. Here the expectation of the error signal is not known, so the
instantaneous value is used as an estimate. The steepest descent algorithm then becomes
w(n+1) = w(n) - μ ∇ξ(n)
The gradient of the cost function, ∇ξ(n), can alternatively be expressed in the following
form:
∇ξ(n) = -2 e(n) x(n)
Substituting this into the steepest descent recursion above, we arrive at the recursion for the
LMS adaptive algorithm:
w(n+1) = w(n) + 2μ e(n) x(n)
Each iteration of the LMS algorithm involves three steps. First, the output of the FIR filter,
y(n), is calculated from the tap inputs. Second, the estimation error e(n) = d(n) - y(n) is
computed. Third, the tap weights of the FIR vector are updated in preparation for the next
iteration, by
w(n+1) = w(n) + 2μ e(n) x(n)
The main reason for the LMS algorithm's popularity in adaptive filtering is its computational
simplicity, making it easier to implement than all other commonly used adaptive algorithms.
For each iteration the LMS algorithm requires 2N additions and 2N+1 multiplications (N for
calculating the output y(n), one for calculating 2μe(n), and an additional N for the scalar-by-
vector multiplication).
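A single iteration can be written directly from this operation count. The sketch below is
illustrative (the tap-input vector, desired sample, and step size are dummy values), and the
comments mark where the 2N+1 multiplications occur, assuming 2μ is precomputed.
N   = 4;  mu = 0.05;          % illustrative filter length and step size
w   = zeros(N, 1);            % current tap weights w(n)
x_n = randn(N, 1);            % current tap-input vector x(n)
d_n = 0.7;                    % current desired sample d(n) (dummy value)
y_n = w' * x_n;               % output y(n): N multiplications
e_n = d_n - y_n;              % error e(n) = d(n) - y(n): no multiplications
w   = w + (2*mu*e_n) * x_n;   % one multiply for 2mu*e(n), N for scalar-by-vector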
One of the primary disadvantages of the LMS algorithm is that it uses a fixed step size
chosen without knowledge of the input signal statistics, and this will affect its performance.
The normalized least mean square (NLMS) algorithm is an extension of the LMS algorithm
which bypasses this issue by selecting a different step size value, μ(n), for each iteration of
the algorithm. This step size is proportional to the inverse of the total expected energy of the
instantaneous values of the coefficients of the input vector x(n). This sum of the expected
energies of the input samples is also equivalent to the dot product of the input vector with
itself, and to the trace of the input vector's autocorrelation matrix.
The NLMS algorithm is simple to implement and computationally efficient; it shows very
good attenuation, and its variable step size allows stable performance with non-stationary
signals. It was therefore the obvious choice for real time implementation.
4.8.1 Derivation of the NLMS algorithm
To derive the NLMS algorithm we consider the standard LMS recursion, for which we select
a variable step size parameter, μ(n). This parameter is selected so that the a posteriori error
value, e+(n), computed using the updated filter tap weights w(n+1) and the current input
vector x(n), will be minimized. This yields a step size inversely proportional to the energy of
the input vector. Substituting this μ(n) into the standard LMS recursion in place of the fixed
step size results in the following:
w(n+1) = w(n) + μ(n) e(n) x(n), with μ(n) = μ / (x'(n) x(n) + ψ)
Here the value of ψ is a small positive constant included in order to avoid division by zero
when the values of the input vector are zero. This was not implemented in the real time
implementation, as in practice the input signal is never allowed to reach zero due to noise
from the microphone and from the AD codec on the Texas Instruments DSK. The parameter
μ is a constant step size value used to alter the convergence rate of the NLMS algorithm; it is
within the range 0 < μ < 2, usually being equal to 1.
Because the step size parameter is chosen based on the current input values, the NLMS
algorithm shows far greater stability with unknown signals. This, combined with good
convergence speed and relative computational simplicity, makes the NLMS algorithm ideal
for the real time adaptive echo cancellation system. The code for both the MATLAB and TI
Code Composer Studio applications follows the steps below.
1. The output of the adaptive filter is calculated: y(n) = w'(n) x(n).
2. An error signal is calculated as the difference between the desired signal and the filter
output: e(n) = d(n) - y(n).
3. The step size value for the current input vector is calculated: μ(n) = μ / (x'(n) x(n) + ψ).
4. The filter tap weights are updated in preparation for the next iteration:
w(n+1) = w(n) + μ(n) e(n) x(n).
Each iteration of the NLMS algorithm requires 3N+1 multiplications, which is only N more
than the standard LMS algorithm. This is an acceptable increase considering the gains in
stability and echo attenuation achieved.
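Putting the four steps together, a minimal MATLAB sketch of an NLMS echo canceller
might look as follows. The echo-path impulse response, filter length, and parameter values
(μ = 1, small ψ) are illustrative assumptions, not the project's exact code.
h   = [0.6 0.3 -0.2 0.1];           % assumed room / echo-path impulse response
x   = randn(1, 8000);               % far-end signal (white-noise stand-in for speech)
d   = filter(h, 1, x);              % echo picked up by the microphone
N   = 16;  mu = 1;  psi = 1e-4;     % filter length, step size, regularisation
w   = zeros(N, 1);                  % adaptive filter tap weights
e   = zeros(1, length(x));          % error (echo-cancelled) signal
for n = N:length(x)
    xn   = x(n:-1:n-N+1)';          % 1. current tap-input vector
    y    = w' * xn;                 %    filter output y(n)
    e(n) = d(n) - y;                % 2. error e(n) = d(n) - y(n)
    mun  = mu / (xn'*xn + psi);     % 3. normalised step size mu(n)
    w    = w + mun * e(n) * xn;     % 4. tap-weight update
end
10*log10(mean(d(end-999:end).^2) / mean(e(end-999:end).^2))   % echo attenuation (dB)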
CHAPTER - 5
INTRODUCTION TO MATLAB
5.1 What Is MATLAB?
The main components of the MATLAB system are:
Programming language: algorithm development and data acquisition.
Graphics: 2-D graphics, 3-D graphics, color and lighting, animation.
Computation: linear algebra, signal processing, quadrature, etc.
Toolboxes: 1. Signal processing, 2. Image processing, 3. Control systems, 4. Neural
Networks, 5. Communications, 6. Robust control, 7. Statistics.
External interface: interfaces with C and FORTRAN programs.
The Command Window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. MATLAB defines the
workspace as the set of variables that the user creates in a work session. The Workspace
Browser shows these variables and some information about them. Double clicking on a
variable in the Workspace Browser launches the Array Editor, which can be used to obtain
information and, in some instances, edit certain properties of the variable.
The Current Directory tab above the workspace tab shows the contents of the current
directory, whose path is shown in the current directory window. For example, in the
Windows operating system the path might be C:\MATLAB\Work, indicating that directory
Work is a subdirectory of the main directory MATLAB, which is installed in drive C.
Clicking on the arrow in the current directory window shows a list of recently used paths.
Clicking on the button to the right of the window allows the user to change the current
directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are
organized in directories in the computer file system. Any file run in MATLAB must reside in
the current directory or in a directory that is on the search path. By default, the files supplied
with MATLAB and MathWorks toolboxes are included in the search path. The easiest way
to see which directories are on the search path, or to add or modify a search path, is to select
Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good
practice to add any commonly used directories to the search path to avoid repeatedly having
to change the current directory.
The Command History Window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands. This action launches a
menu from which to select various options in addition to executing the commands.
5.4 IMPLEMENTATIONS
Arithmetic operations
Entering Matrices
The best way for you to get started with MATLAB is to learn how to handle matrices.
Start MATLAB and follow along with each example.
You can enter matrices into MATLAB in several different ways:
Enter an explicit list of elements.
Load matrices from external data files.
Generate matrices using built-in functions.
Create matrices with your own functions in M-files.
Start by entering Dürer's matrix as a list of its elements. You only have to follow a few basic
conventions:
Separate the elements of a row with blanks or commas.
Use a semicolon ; to indicate the end of each row.
Surround the entire list of elements with square brackets, [ ].
To enter the matrix, simply type in the Command Window:
A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
MATLAB displays the matrix you just entered:
A=
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
This matrix matches the numbers in the engraving. Once you have entered the matrix, it is
automatically remembered in the MATLAB workspace. You can refer to it simply as A. Now
that you have A in the workspace, we can look at some of its properties.
Sum, transpose, and diag
You are probably already aware that the special properties of a magic square have to do
with the various ways of summing its elements. If you take the sum along any row or
column, or along either of the two main diagonals, you will always get the same number, 34.
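For the matrix A entered above, these operations can be tried directly in the Command
Window:
sum(A)         % column sums: [34 34 34 34]
sum(A')'       % row sums, via the transpose: all 34
diag(A)        % the main diagonal: [16 10 7 1]'
sum(diag(A))   % sum of the diagonal elements: 34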