BASIC ALGORITHMS
Analysis of the performance of a given DSP-based controller for different types of source noise and different ANC algorithms is an integral part of a successful and optimal design methodology. In this chapter we discuss some of the algorithms used in active noise control systems.
2.1 LEAST MEAN SQUARE ALGORITHM
The Least Mean Square (LMS) algorithm [18], introduced by Widrow and Hoff in 1959, is an adaptive algorithm that uses a gradient-based method of steepest descent. The LMS algorithm uses estimates of the gradient vector computed from the available data, and incorporates an iterative procedure that makes successive corrections to the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum mean square error. Compared to other algorithms, the LMS algorithm is relatively simple: it requires neither correlation-function calculations nor matrix inversions.
Consider a transversal filter with tap inputs $u(n), u(n-1), \ldots, u(n-M+1)$ and a corresponding set of tap weights $w_0(n), w_1(n), \ldots, w_{M-1}(n)$. The tap inputs represent samples drawn from a wide-sense stationary stochastic process of zero mean and correlation matrix $\mathbf{R}$. In addition to these inputs, the filter is supplied with a desired response $d(n)$ that provides a frame of reference for the optimum filtering action. Figure-2.1 depicts the filtering action described herein.
The vector of tap inputs at time $n$ is denoted by $\mathbf{u}(n)$, and the corresponding estimate of the desired response at the filter output is denoted by $\hat{d}(n\,|\,\mathcal{U}_n)$, where $\mathcal{U}_n$ is the space spanned by the tap inputs $u(n), u(n-1), \ldots, u(n-M+1)$. By comparing this estimate with the desired response $d(n)$, we produce an estimation error denoted by $e(n)$. We may write

$$e(n) = d(n) - \hat{d}(n\,|\,\mathcal{U}_n) = d(n) - \mathbf{w}^H(n)\,\mathbf{u}(n) \qquad (2.1)$$
where the term $\mathbf{w}^H(n)\,\mathbf{u}(n)$ is the inner product of the tap-weight vector $\mathbf{w}(n)$ and the tap-input vector $\mathbf{u}(n)$. The expanded form of the tap-weight vector is described by

$$\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T$$

and that of the tap-input vector is described by

$$\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T.$$
If the tap-input vector $\mathbf{u}(n)$ and the desired response $d(n)$ are jointly stationary, then the mean-square error or cost function $J(n)$ at time $n$ is a quadratic function of the tap-weight vector, so we may write

$$J(n) = \sigma_d^2 - \mathbf{w}^H(n)\,\mathbf{p} - \mathbf{p}^H\,\mathbf{w}(n) + \mathbf{w}^H(n)\,\mathbf{R}\,\mathbf{w}(n) \qquad (2.2)$$

where

$\sigma_d^2$ = variance of the desired response $d(n)$,
$\mathbf{p}$ = cross-correlation vector between the tap-input vector $\mathbf{u}(n)$ and the desired response $d(n)$,
$\mathbf{R}$ = correlation matrix of the tap-input vector $\mathbf{u}(n)$.
[Figure-2.1: Structure of the adaptive transversal filter. The input $u(n)$ feeds a chain of unit-delay elements $z^{-1}$; the taps $u(n), u(n-1), \ldots, u(n-M+1)$ are weighted by $w_0^*(n), w_1^*(n), \ldots, w_{M-1}^*(n)$ and summed, and a weight-control mechanism adapts the weights from the desired response $d(n)$.]

The gradient vector is given by
$$\nabla J(n) = \left[\frac{\partial J(n)}{\partial a_0(n)} + j\frac{\partial J(n)}{\partial b_0(n)},\;\; \frac{\partial J(n)}{\partial a_1(n)} + j\frac{\partial J(n)}{\partial b_1(n)},\;\; \ldots,\;\; \frac{\partial J(n)}{\partial a_{M-1}(n)} + j\frac{\partial J(n)}{\partial b_{M-1}(n)}\right]^T = -2\mathbf{p} + 2\mathbf{R}\,\mathbf{w}(n)$$
where, in the expanded column vector, $\partial J(n)/\partial a_k(n)$ and $\partial J(n)/\partial b_k(n)$ are the partial derivatives of the cost function $J(n)$ with respect to the real part $a_k(n)$ and the imaginary part $b_k(n)$ of the $k$th tap weight $w_k(n)$, respectively, with $k = 0, 1, 2, \ldots, M-1$.
If it were possible to make exact measurements of the gradient vector $\nabla J(n)$ at each iteration $n$, and if the step-size parameter $\mu$ is suitably chosen, then the tap-weight vector computed by using the steepest-descent algorithm would indeed converge to the optimum Wiener solution. In reality, however, exact measurements of the gradient vector are not possible. To estimate the gradient vector $\nabla J(n)$, the most obvious strategy is to substitute estimates of the correlation matrix $\mathbf{R}$ and the cross-correlation vector $\mathbf{p}$ in the formula for the gradient, which is reproduced here for convenience:
$$\nabla J(n) = -2\mathbf{p} + 2\mathbf{R}\,\mathbf{w}(n) \qquad (2.3)$$

The simplest choice of estimators is to use instantaneous estimates for $\mathbf{R}$ and $\mathbf{p}$ that are based on sample values of the tap-input vector and desired response, defined respectively as

$$\hat{\mathbf{R}}(n) = \mathbf{u}(n)\,\mathbf{u}^H(n) \qquad (2.4)$$

and

$$\hat{\mathbf{p}}(n) = \mathbf{u}(n)\,d^*(n) \qquad (2.5)$$
Correspondingly, the instantaneous estimate of the gradient vector is

$$\hat{\nabla} J(n) = -2\,\mathbf{u}(n)\,d^*(n) + 2\,\mathbf{u}(n)\,\mathbf{u}^H(n)\,\hat{\mathbf{w}}(n) \qquad (2.6)$$
Generally this estimate is biased, because the tap-weight estimate vector $\hat{\mathbf{w}}(n)$ is a random vector that depends on the tap-input vector $\mathbf{u}(n)$. Substituting the estimate of eq. (2.6) for the gradient vector $\nabla J(n)$ in the steepest-descent update equation, we get a new recursive relation for updating the tap-weight vector:

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{u}(n)\left[d^*(n) - \mathbf{u}^H(n)\,\hat{\mathbf{w}}(n)\right] \qquad (2.7)$$
Equivalently, the result can be written in the form of three basic relations:

i. Filter output: $y(n) = \hat{\mathbf{w}}^H(n)\,\mathbf{u}(n)$ (2.8)
ii. Estimation error or error signal: $e(n) = d(n) - y(n)$ (2.9)
iii. Tap-weight adaptation: $\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{u}(n)\,e^*(n)$ (2.10)

Equations (2.8) and (2.9) define the estimation error $e(n)$, the computation of which is based on the current estimate of the tap-weight vector $\hat{\mathbf{w}}(n)$. The second term, $\mu\,\mathbf{u}(n)\,e^*(n)$, on the right-hand side of eq. (2.10) represents the adjustment that is applied to the current estimate of the tap-weight vector $\hat{\mathbf{w}}(n)$. The iterative procedure is started with an initial guess $\hat{\mathbf{w}}(0)$.
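To make the update concrete, the following sketch implements the three relations (2.8)-(2.10) for the common real-valued case, where the conjugates drop out. It is an illustrative sketch only: the filter length, step size, and the system being identified below are hypothetical choices, not values taken from this chapter.

```python
import numpy as np

def lms(u, d, M=8, mu=0.01):
    """Basic real-valued LMS filter; returns final weights and error signal."""
    w = np.zeros(M)                        # initial guess w(0) = 0
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]     # tap-input vector [u(n), ..., u(n-M+1)]
        y = w @ u_n                        # filter output, eq. (2.8)
        e[n] = d[n] - y                    # estimation error, eq. (2.9)
        w = w + mu * u_n * e[n]            # tap-weight adaptation, eq. (2.10)
    return w, e

# Hypothetical usage: identify an unknown 3-tap FIR system.
rng = np.random.default_rng(0)
u = rng.standard_normal(10_000)
d = np.convolve(u, [0.8, -0.4, 0.2])[:len(u)]  # desired response
w, e = lms(u, d, M=3, mu=0.05)                 # w should approach [0.8, -0.4, 0.2]
```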
Convergence and Stability of the LMS Algorithm

The LMS algorithm, initiated with some arbitrary value for the weight vector, is seen to converge and stay stable for

$$0 < \mu < \frac{1}{\lambda_{\max}}$$

where $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix $\mathbf{R}$.
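As a quick numeric check, this bound can be estimated from a sample correlation matrix of the input; the signal and filter length below are assumed purely for illustration.

```python
import numpy as np

# Estimate the stability bound 0 < mu < 1/lambda_max from data (illustrative).
M = 8
u = np.random.default_rng(1).standard_normal(10_000)
U = np.lib.stride_tricks.sliding_window_view(u, M)   # rows are tap-input vectors
R = U.T @ U / len(U)                                 # sample correlation matrix
mu_max = 1.0 / np.linalg.eigvalsh(R).max()           # upper limit on the step size
```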
2.2 NORMALIZED LEAST MEAN SQUARE ALGORITHM

The normalized LMS (NLMS) filter may be derived as the solution of a constrained optimization problem: choose the new weight vector $\hat{\mathbf{w}}(n+1)$ so as to minimize the squared Euclidean norm of the change $\delta\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)$, subject to the constraint $\hat{\mathbf{w}}^H(n+1)\,\mathbf{u}(n) = d(n)$ (eq. (2.13)). Applying the method of Lagrange multipliers with multiplier $\lambda$ and differentiating the resulting cost function with respect to $\hat{\mathbf{w}}(n+1)$ gives

$$\frac{\partial J(n)}{\partial \hat{\mathbf{w}}(n+1)} = 2\left[\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\right] - \lambda^*\,\mathbf{u}(n) \qquad (2.16)$$
Setting this result equal to zero and solving for the optimum value $\hat{\mathbf{w}}(n+1)$, we obtain

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{1}{2}\lambda^*\,\mathbf{u}(n) \qquad (2.17)$$
- Solving for the unknown multiplier $\lambda$ by substituting the result of the previous step, i.e. eq. (2.17), into eq. (2.13), we get

$$d(n) = \hat{\mathbf{w}}^H(n+1)\,\mathbf{u}(n) = \left[\hat{\mathbf{w}}(n) + \frac{1}{2}\lambda^*\,\mathbf{u}(n)\right]^H \mathbf{u}(n) = \hat{\mathbf{w}}^H(n)\,\mathbf{u}(n) + \frac{1}{2}\lambda\,\mathbf{u}^H(n)\,\mathbf{u}(n) = \hat{\mathbf{w}}^H(n)\,\mathbf{u}(n) + \frac{1}{2}\lambda\,\|\mathbf{u}(n)\|^2.$$
Then, solving for $\lambda$, we obtain

$$\lambda = \frac{2\,e(n)}{\|\mathbf{u}(n)\|^2} \qquad (2.18)$$

where

$$e(n) = d(n) - \hat{\mathbf{w}}^H(n)\,\mathbf{u}(n) \qquad (2.19)$$

is the error signal.
- By combining the results of the previous two steps, we obtain the optimal value of the incremental change $\delta\hat{\mathbf{w}}(n+1)$. Specifically, from eqs. (2.17) and (2.18), we have

$$\delta\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n) = \frac{1}{\|\mathbf{u}(n)\|^2}\,\mathbf{u}(n)\,e^*(n) \qquad (2.20)$$
In order to exercise control over the change in the tap-weight vector from one iteration to the next without changing the direction of the vector, we introduce a positive real scaling factor denoted by $\tilde{\mu}$. That is, we redefine the change simply as

$$\delta\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n) = \frac{\tilde{\mu}}{\|\mathbf{u}(n)\|^2}\,\mathbf{u}(n)\,e^*(n) \qquad (2.21)$$
Equivalently, we write

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\tilde{\mu}}{\|\mathbf{u}(n)\|^2}\,\mathbf{u}(n)\,e^*(n) \qquad (2.22)$$

Indeed, this is the desired recursion for computing the M-by-1 tap-weight vector in the normalized LMS algorithm. Eq. (2.22) clearly shows the reason for using the term "normalized": the product vector $\mathbf{u}(n)\,e^*(n)$ is normalized with respect to the squared Euclidean norm of the tap-input vector $\mathbf{u}(n)$.
Comparing eq. (2.22) with eq. (2.10), we may make the following observations:

- The adaptation constant $\tilde{\mu}$ for the normalized LMS filter is dimensionless, whereas the adaptation constant $\mu$ for the LMS filter has the dimension of inverse power.
- Setting

$$\mu(n) = \frac{\tilde{\mu}}{\|\mathbf{u}(n)\|^2} \qquad (2.23)$$

we may view the normalized LMS filter as an LMS filter with a time-varying step-size parameter.
- Most importantly, the normalized LMS algorithm exhibits a rate of convergence that is potentially faster than that of the standard LMS algorithm, for both uncorrelated and correlated input data.
An issue of possible concern is that, in overcoming the gradient-noise amplification problem associated with the conventional LMS filter, the normalized LMS filter introduces a problem of its own: when the tap-input vector $\mathbf{u}(n)$ is small, numerical difficulties may arise because we then have to divide by a small value for the squared norm $\|\mathbf{u}(n)\|^2$. To overcome this problem, we modify eq. (2.22) slightly to produce

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\tilde{\mu}}{\delta + \|\mathbf{u}(n)\|^2}\,\mathbf{u}(n)\,e^*(n) \qquad (2.24)$$

where $\delta > 0$. For $\delta = 0$, eq. (2.24) reduces to the form given in eq. (2.22).
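In code, eq. (2.24) changes only the weight-update line of the LMS loop sketched earlier; the regularizer `delta` guards against division by a small squared norm. As before, this is a sketch with assumed parameter values rather than a definitive implementation.

```python
import numpy as np

def nlms(u, d, M=8, mu_tilde=0.5, delta=1e-6):
    """Normalized LMS, eq. (2.24); mu_tilde is dimensionless."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]     # tap-input vector [u(n), ..., u(n-M+1)]
        e[n] = d[n] - w @ u_n              # error signal, eq. (2.19)
        # Time-varying step size mu(n) = mu_tilde / (delta + ||u(n)||^2), eqs. (2.23)-(2.24)
        w = w + (mu_tilde / (delta + u_n @ u_n)) * u_n * e[n]
    return w, e
```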
2.3 FILTERED-X LMS ALGORITHM
The introduction of the secondary-path transfer function into a controller using the standard LMS algorithm, as shown in Fig. 1.4, will generally cause instability [30]. This is because the error signal is not correctly aligned in time with the reference signal, due to the presence of $S(z)$.
There are a number of possible schemes that can be used to compensate for the effect of $S(z)$. The first solution is to place an inverse filter $1/S(z)$ in series with $S(z)$ to remove its effect. The second solution is to place an identical filter in the reference-signal path to the weight update of the LMS algorithm, which realizes the so-called filtered-X LMS (FXLMS) algorithm [9]. Since an inverse does not necessarily exist for $S(z)$, the FXLMS algorithm is generally the most effective approach. The FXLMS algorithm was derived independently by Widrow in the context of adaptive control and by Burgess [18] for ANC applications.
The placement of the secondary-path transfer function following the digital filter $W(z)$ controlled by the LMS algorithm is shown in Fig. 2.2. The residual error signal is expressed as

$$e(n) = d(n) - s(n) * \left[\mathbf{w}^T(n)\,\mathbf{x}(n)\right] \qquad (2.25)$$

where $n$ is the time index, $s(n)$ is the impulse response of the secondary path $S(z)$, $*$ denotes linear convolution, $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{L-1}(n)]^T$ and $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^T$ are the coefficient and signal vectors of $W(z)$, respectively, and $L$ is the filter order. The filter $W(z)$ must be of sufficient order to accurately model the response of the physical system.
Assuming a mean-square cost function $J(n) = E[e^2(n)]$, the adaptive filter minimizes the instantaneous squared error

$$\hat{J}(n) = e^2(n) \qquad (2.26)$$

using the steepest-descent algorithm, which updates the coefficient vector in the negative gradient direction with step size $\mu$:

$$\mathbf{w}(n+1) = \mathbf{w}(n) - \frac{\mu}{2}\,\hat{\nabla} J(n) \qquad (2.27)$$
where $\hat{\nabla} J(n) = \nabla e^2(n) = 2\left[\nabla e(n)\right] e(n)$. From (2.25), we have $\nabla e(n) = -s(n) * \mathbf{x}(n) = -\mathbf{x}'(n)$, where $\mathbf{x}'(n) = [x'(n), x'(n-1), \ldots, x'(n-L+1)]^T$ and $x'(n) = s(n) * x(n)$. Therefore, the gradient estimate becomes

$$\hat{\nabla} J(n) = -2\,\mathbf{x}'(n)\,e(n) \qquad (2.28)$$
Substituting (2.28) into (2.27), we have the FXLMS algorithm:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{x}'(n)\,e(n) \qquad (2.29)$$

In practical ANC applications, $S(z)$ is unknown and must be estimated by an additional filter $\hat{S}(z)$.
Therefore, the filtered reference signal is generated by passing the reference signal through this estimate of the secondary path:

$$x'(n) = \hat{s}(n) * x(n) \qquad (2.30)$$

where $\hat{s}(n)$ is the impulse response of $\hat{S}(z)$. Within the limit of slow adaptation, the algorithm will converge with nearly $90^\circ$ of phase error between $\hat{S}(z)$ and $S(z)$.
[Figure-2.2: Block diagram of ANC using the FXLMS algorithm, with primary path $P(z)$, control filter $W(z)$, secondary path $S(z)$, secondary-path estimate $\hat{s}(n)$ filtering the reference $x(n)$, and the LMS weight update driven by $e(n)$.]

[Figure-2.3: Equivalent diagram for slow adaptation and $\hat{S}(z) = S(z)$.]
Consider the case in which the control filter $W(z)$ is changing slowly, so that the order of $W(z)$ and $S(z)$ in Fig. 2.2 can be commuted [9]. If $\hat{S}(z) = S(z)$ is replaced by a pure delay $\Delta$, this special case of the FXLMS algorithm is known as the delayed LMS algorithm. The upper bound for the step size depends on the delay $\Delta$. Therefore, efforts should be made to keep the delay small, for example by decreasing the distance between the error sensor and the secondary source and by reducing the delay in electrical components.
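Putting eqs. (2.25), (2.29), and (2.30) together, a single-channel FXLMS loop might look like the sketch below. The path impulse responses and all parameter values are hypothetical placeholders; in practice $\hat{S}(z)$ would come from an offline or online identification stage.

```python
import numpy as np

def fxlms(x, d, s, s_hat, L=32, mu=1e-3):
    """Single-channel FXLMS ANC sketch.

    x     : reference signal
    d     : disturbance at the error sensor (primary-path output)
    s     : impulse response of the true secondary path S(z)
    s_hat : FIR estimate s_hat(n) of S(z)
    """
    s = np.asarray(s, dtype=float)
    s_hat = np.asarray(s_hat, dtype=float)
    w = np.zeros(L)                        # control filter W(z)
    y_hist = np.zeros(len(s))              # recent anti-noise samples [y(n), y(n-1), ...]
    xp = np.convolve(x, s_hat)[:len(x)]    # filtered reference x'(n), eq. (2.30)
    e = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        x_n = x[n - L + 1:n + 1][::-1]
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = w @ x_n                # anti-noise output y(n)
        e[n] = d[n] - s @ y_hist           # residual error, eq. (2.25)
        xp_n = xp[n - L + 1:n + 1][::-1]
        w = w + mu * xp_n * e[n]           # FXLMS update, eq. (2.29)
    return w, e
```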
2.4 FUNCTIONAL LINK ARTIFICIAL NEURAL NETWORK (FLANN)
Because of their nonlinear signal-processing and learning capability, artificial neural networks (ANNs) have become a powerful tool for many complex applications, including functional approximation, nonlinear system identification and control, pattern recognition and classification, and optimization. ANNs are capable of generating complex mappings between the input and the output space; thus, arbitrarily complex nonlinear decision boundaries can be formed by these networks.
In contrast to static systems, which are described by algebraic equations, dynamic systems are described by difference or differential equations. It has been reported that, even if only the outputs are available for measurement, under certain assumptions it is possible to identify a dynamic system from the delayed inputs and outputs using a multilayer perceptron (MLP) structure [5].
The functional link ANN (FLANN) may be conveniently used for function approximation and pattern classification, with a faster convergence rate and lower computational load than an MLP structure. The FLANN is basically a flat net: the need for a hidden layer is removed, and hence the BP learning algorithm used in this network becomes very simple. The functional expansion effectively increases the dimensionality of the input vector, and hence the hyperplanes generated by the FLANN provide greater discrimination capability in the input pattern space. The FLANN is thus a useful alternative to the multilayer artificial neural network (MLANN): it has the advantage of lower computational complexity and a simple structure for hardware implementation. The basic principle, the characteristic features, and the mathematical treatment of the FLANN are discussed next.
In the FLANN [1], the functional link concept is used: it acts on an element of a pattern, or on the entire pattern itself, and generates a set of linearly independent functions. By this process, no new ad hoc information is inserted, but the representation is enhanced. It has been reported that the functional link approach simplifies the learning algorithms and unifies the architecture for all types of nets [14][16]. The new net is capable of accommodating three tasks: supervised learning, associative storage and recall, and unsupervised learning.
In the functional link model, an element $x_k$, $1 \le k \le N$, is expanded to $f_l(x_k)$, $1 \le l \le M$. In the case of a trigonometric expansion of order $P$, each input element $x_k$ is mapped to $\{x_k, \sin(i\pi x_k), \cos(i\pi x_k)\}$, $1 \le i \le P$.
Let us consider the problem of learning with a flat net, which is a net with no hidden layers. Let $\mathbf{X}$ contain the $Q$ input patterns, each with $N$ elements. The net configuration has one output. For the $q$th pattern, the input components are $x_i^{(q)}$, $1 \le i \le N$, and the corresponding output is $y^{(q)}$. The connecting weights are $w_i$, $1 \le i \le N$, and the threshold is denoted by $\alpha$. Thus $y^{(q)}$ is given by

$$y^{(q)} = \alpha + \sum_{i=1}^{N} x_i^{(q)}\,w_i, \qquad q = 1, 2, \ldots, Q \qquad (2.32)$$
Arranging all $Q$ patterns, we can write

$$\begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(q)} \\ \vdots \\ y^{(Q)} \end{bmatrix} = \begin{bmatrix} 1 & x_1^{(1)} & \cdots & x_N^{(1)} \\ 1 & x_1^{(2)} & \cdots & x_N^{(2)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(q)} & \cdots & x_N^{(q)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(Q)} & \cdots & x_N^{(Q)} \end{bmatrix} \begin{bmatrix} \alpha \\ w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} \qquad (2.33)$$
or, in matrix form,

$$\mathbf{y} = \mathbf{X}\mathbf{w} \qquad (2.34)$$

The dimension of $\mathbf{X}$ is $Q \times (N+1)$. If $Q = (N+1)$ and $\mathrm{Det}(\mathbf{X}) \ne 0$, then

$$\mathbf{w} = \mathbf{X}^{-1}\,\mathbf{y} \qquad (2.35)$$

Thus, finding the weights for a flat net consists of solving a system of simultaneous linear equations for the weights $\mathbf{w}$.

If $Q < (N+1)$, $\mathbf{X}$ can be partitioned into a functional matrix $\mathbf{X}_F$ of dimension $Q \times Q$. By setting $\alpha = w_{Q+1} = w_{Q+2} = \cdots = w_N = 0$, and since $\mathrm{Det}(\mathbf{X}_F) \ne 0$, $\mathbf{w}$ may be expressed as

$$\mathbf{w} = \mathbf{X}_F^{-1}\,\mathbf{y} \qquad (2.36)$$

Equation (2.36) yields only one solution. But if the matrix $\mathbf{X}$ is not partitioned explicitly, then (2.34) yields a large number of solutions, all of which satisfy the given constraint.
But if $Q > (N+1)$, then we have

$$\mathbf{X}\mathbf{w} = \mathbf{y} \qquad (2.37)$$

where $\mathbf{X}$, $\mathbf{w}$, and $\mathbf{y}$ are of dimension $Q \times (N+1)$, $(N+1) \times 1$, and $Q \times 1$, respectively. By the functional expansion scheme, each row (pattern) of $\mathbf{X}$ is enhanced from $(N+1)$ elements to $M$, producing a matrix $\mathbf{S}$ such that $M \ge Q$. Under this circumstance, we have

$$\mathbf{S}\,\mathbf{w}_F = \mathbf{y} \qquad (2.38)$$

where $\mathbf{S}$, $\mathbf{w}_F$, and $\mathbf{y}$ are of dimension $Q \times M$, $M \times 1$, and $Q \times 1$, respectively.

If $M = Q$ and $\mathrm{Det}(\mathbf{S}) \ne 0$, then

$$\mathbf{w}_F = \mathbf{S}^{-1}\,\mathbf{y} \qquad (2.39)$$

Equation (2.39) is an exact flat-net solution. However, if $M > Q$, the solution is similar to that of (2.36). This analysis indicates that the functional expansion model always yields a flat-net solution if a sufficient number of additional functions is used in the expansion.
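A small numerical illustration of eqs. (2.34)-(2.39), with made-up data: when $Q = N+1$ the weights follow from a direct solve, and after a (deliberately simple) trigonometric expansion to $M > Q$ columns a solution still exists, here the minimum-norm one returned by a least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, N = 4, 3
X = np.hstack([np.ones((Q, 1)), rng.standard_normal((Q, N))])  # Q x (N+1); 1s column carries alpha
y = rng.standard_normal(Q)

w = np.linalg.solve(X, y)                    # eq. (2.35): Q = N+1 and Det(X) != 0

# Expand each pattern with sin/cos terms, giving M = 10 > Q columns, as in eq. (2.38).
S = np.hstack([X, np.sin(np.pi * X[:, 1:]), np.cos(np.pi * X[:, 1:])])
w_F, *_ = np.linalg.lstsq(S, y, rcond=None)  # one of many solutions satisfying S w_F = y
```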
Figure-2.4 represents a simple FLANN structure, which is essentially a flat net with no hidden layer. In the FLANN, the $N$ inputs are fed to the functional expansion block to generate $M$ functionally expanded signals, which are linearly combined with the $M$-element weight vector to generate a single output.

[Figure-2.4: Structure of the FLANN. The input vector $\mathbf{X} = [x_1, x_2, \ldots, x_N]^T$ passes through a functional expansion block to produce $\mathbf{S} = [s_1, s_2, \ldots, s_M]^T$, which is weighted by $w_1, \ldots, w_M$ and summed to give the output $y(n)$.]
The trigonometric basis function set is given by

$$\mathbf{s} = \{x, \sin(\pi x), \cos(\pi x), \sin(2\pi x), \cos(2\pi x), \ldots, \sin(P\pi x), \cos(P\pi x)\}$$

where $P$ is the order of the expansion, $M = N(2P+1)$, and $M$ and $N$ are the lengths of the functionally expanded vector $\mathbf{S}$ and the input vector $\mathbf{X}$, respectively.
Trigonometric Functional Expansion-Based ANC System

The FLANN employing trigonometric functional expansion, with finite memory $N$ and finite order $P$, can be described by the following input-output relationship:

$$y(n) = \sum_{i=1}^{M} s_i\,w_i \qquad (2.40)$$
where $M = N(2P+1)$,

$$s_i = FE(x_k), \qquad 1 \le i \le M, \quad 1 \le k \le N \qquad (2.41)$$

and $FE(\cdot)$ stands for functional expansion. In vector form, (2.40) is represented as

$$y(n) = \mathbf{w}^T\mathbf{s} \qquad (2.42)$$
where

$$\mathbf{s} = [s_{1,1}, s_{1,2}, s_{1,3}, \ldots, s_{1,2P+1}, s_{2,1}, s_{2,2}, s_{2,3}, \ldots, s_{2,2P+1}, \ldots, s_{N,1}, s_{N,2}, \ldots, s_{N,2P+1}]^T \qquad (2.43)$$

$$\mathbf{w} = [w_1, w_2, w_3, \ldots, w_M]^T \qquad (2.44)$$

and $[\cdot]^T$ denotes the transpose of a matrix.
In general, we may describe the signal elements of $\mathbf{s}$ as

$$s_{i,j} = \begin{cases} x_i, & j = 1 \\ \sin(l\pi x_i), & j \ \text{even} \\ \cos(l\pi x_i), & j \ \text{odd},\ j \ne 1 \end{cases} \qquad (2.45)$$

where $1 \le l \le P$. Denoting $\mathbf{x}$ as a vector of $N$ elements at time instant $n$, we can write

$$\mathbf{x} = [x(n), x(n-1), \ldots, x(n-N+1)]^T \qquad (2.46)$$
We can then express $\mathbf{s}$ in terms of $\mathbf{x}(n)$ as

$$\begin{aligned} \mathbf{s} = \{\, & x(n), \sin[\pi x(n)], \cos[\pi x(n)], \ldots, \sin[P\pi x(n)], \cos[P\pi x(n)], \\ & x(n-1), \sin[\pi x(n-1)], \cos[\pi x(n-1)], \ldots, \sin[P\pi x(n-1)], \cos[P\pi x(n-1)], \\ & \ldots, \\ & x(n-N+1), \sin[\pi x(n-N+1)], \cos[\pi x(n-N+1)], \ldots, \\ & \sin[P\pi x(n-N+1)], \cos[P\pi x(n-N+1)] \,\}^T \end{aligned} \qquad (2.47)$$
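The following sketch implements the expansion of eq. (2.47) and the output of eqs. (2.40)/(2.42). The memory $N$, order $P$, and weights below are assumed values; in an ANC system the weights would be adapted by an LMS-type rule, such as the FXLMS update of eq. (2.29) acting on the expanded vector.

```python
import numpy as np

def trig_expand(x_vec, P):
    """Map [x(n), ..., x(n-N+1)] to the M = N(2P+1)-element vector s of eq. (2.47)."""
    terms = []
    for x in x_vec:
        terms.append(x)                          # s_{i,1} = x_i
        for l in range(1, P + 1):
            terms.append(np.sin(l * np.pi * x))  # even j: sin(l pi x_i)
            terms.append(np.cos(l * np.pi * x))  # odd j:  cos(l pi x_i)
    return np.array(terms)

def flann_output(x_vec, w, P):
    """FLANN output y(n) = w^T s, eqs. (2.40) and (2.42)."""
    return w @ trig_expand(x_vec, P)

# Hypothetical usage: N = 3 taps, order P = 2, so M = 3 * (2*2 + 1) = 15.
x_vec = np.array([0.3, -0.1, 0.5])   # [x(n), x(n-1), x(n-2)]
w = np.zeros(15)                     # weights to be adapted, e.g., by an LMS-type rule
y = flann_output(x_vec, w, P=2)
```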