59-69, 2012
Printed in Romania
1. INTRODUCTION
Adaptive filtering is one of the most important topics in the
field of digital signal processing and has been extensively
applied in areas such as noise cancellation, channel equalization,
linear prediction and system identification. In the latter,
the adaptive filter is used to provide a model that represents
the best fit to an unknown system. Selecting the correct order
and estimating the parameters of the adaptive filter is a
fundamental issue in system identification. In practice, most
of the systems are nonlinear to some degree and recursive in
nature, hence nonlinear system identification has attracted
much attention in the field of science and engineering (Panda
et al., 2007). While theory and design of linear adaptive
filters based on FIR structures is well developed and widely
applied in practice, the same is not true for more general
classes of adaptive systems such as linear infinite impulse
response adaptive filters (IIR) and nonlinear adaptive systems
(Krusienski and Jenkins, 2005). In the last decades,
substantial effort has been done to use IIR adaptive filtering
techniques as an alternative to adaptive FIR filters. The main
advantages of IIR filters are that, firstly, due to the pole-zero
structure they are more suitable to model physical systems
and hence, the problem of system identification can also be
viewed as a problem of adaptive IIR filtering
(Venayagamoorthy, 2010). Secondly, they require fewer
coefficients than FIR filters to achieve the same level of
performance, so they involve a smaller computational burden
than FIR structures (Netto et al., 1995).
Alternatively, with the same number of coefficients, an adaptive
IIR filter performs better than an adaptive FIR filter (Panda et
al., 2011). However, these good characteristics come along
with some possible drawbacks: both linear IIR structures and
nonlinear structures tend to produce multi-modal error
surfaces with respect to their parameters, on which
conventional derivative-based learning algorithms may fail to
converge to the global optimum.
H(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_m z^{-m}}{b_0 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_n z^{-n}}    (2)
y(n) = -\sum_{k=1}^{n} b_k\, y(n-k) + \sum_{k=0}^{m} a_k\, u(n-k)    (3)
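For illustration, the difference equation (3) can be evaluated sample by sample. The sketch below is not the paper's implementation; the coefficient values in the example are arbitrary, and b_0 is assumed normalized to 1:

```python
def iir_output(a, b, u):
    """Evaluate y(n) = -sum_{k=1}^{N} b_k*y(n-k) + sum_{k=0}^{M} a_k*u(n-k).

    a -- feed-forward coefficients a_0..a_M
    b -- feedback coefficients b_1..b_N (b_0 assumed normalized to 1)
    u -- input samples
    """
    y = []
    for n in range(len(u)):
        # feed-forward part: sum over a_k * u(n-k)
        acc = sum(a[k] * u[n - k] for k in range(len(a)) if n - k >= 0)
        # feedback part: subtract sum over b_k * y(n-k), k >= 1
        acc -= sum(b[k - 1] * y[n - k] for k in range(1, len(b) + 1) if n - k >= 0)
        y.append(acc)
    return y

# Example: first-order filter y(n) = 0.5*u(n) - 0.25*y(n-1), impulse input
print(iir_output([0.5], [0.25], [1.0, 0.0, 0.0]))  # [0.5, -0.125, 0.03125]
```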
J = \frac{1}{N} \sum_{n=1}^{N} \big[ d(n) - y(n) \big]^2    (7)
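The cost function (7) translates directly into code; a minimal sketch:

```python
def mse(d, y):
    """Mean squared error J = (1/N) * sum_n (d(n) - y(n))**2, cf. (7)."""
    assert len(d) == len(y)
    return sum((dn - yn) ** 2 for dn, yn in zip(d, y)) / len(d)

# Example with an arbitrary desired signal and filter output
print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # (0 + 0.25 + 1) / 3
```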
3. FIREFLY OPTIMIZATION
The firefly algorithm is a novel meta-heuristic optimization
algorithm first developed by Xin-She Yang at Cambridge
University in 2008. As a nature-inspired algorithm, it imitates
the social behaviour of fireflies and the phenomenon of
bioluminescent communication. Since its emergence (Yang,
2008), it has been successfully applied to many engineering
optimization problems; digital image compression and
multi-objective, non-convex economic dispatch are two
real-world applications of the firefly algorithm (Horng and
Jiang, 2010; Apostolopoulos and Vlachos, 2011; Yang et al.,
2012). Recent studies show that it is a powerful tool for
discrete-time problems. Contrary to most meta-heuristic
algorithms, such as Ant Colony Optimization, the firefly
algorithm can efficiently deal with stochastic test functions
(Yang, 2010).
3.1 Behaviour of fireflies
The flashing light of fireflies, which is produced by a
bioluminescence process, constitutes a signalling system
among them for attracting mating partners or potential prey.
Interestingly, there are about two thousand species of
fireflies around the world, each with its own pattern of
flashing. The light intensity at a particular distance r from
the light source obeys the inverse square law; that is, the
light intensity I decreases as the distance r increases
according to I \propto 1/r^2. Furthermore, light is absorbed
by the medium, so the attractiveness of a firefly decays with
distance:

\beta(r) = \beta_0 e^{-\gamma r^m}, \quad m \ge 1    (8)
r_{ij} = \lVert x_i - x_j \rVert = \sqrt{\sum_{k=1}^{d} (x_{i,k} - x_{j,k})^2}    (10)
x_i = x_i + \beta_0 e^{-\gamma r^2} (x_j - x_i) + \alpha\, \varepsilon_i    (11)

where the second term is due to the attraction, and the third
term is randomization, with \alpha \in [0, 1] being the
randomization parameter and \varepsilon_i a vector of random
numbers drawn from a Gaussian or uniform distribution.
3.4 Modified Firefly Algorithm (MFA)
The basic firefly algorithm is very efficient and is suitable for
parallel implementation, because different fireflies work
almost independently. Furthermore, the fireflies aggregate
closely around each optimum, so it is possible to adjust the
parameters such that the algorithm outperforms basic
PSO (Yang, 2010). However, the simulation results show that
the solutions are still changing as the optima are approached.
To improve the solution quality, we reduce the randomness so
that the algorithm converges to the optimum more quickly. We
define the randomization parameter in the form of (12), in
which \alpha decreases gradually as the optima are approached.
\alpha(t) = \alpha_\infty + (\alpha_0 - \alpha_\infty)\, e^{-t}    (12)

where t \in [0, t_{max}] is the pseudo time of the simulation
and t_{max} is the maximum number of generations; \alpha_0 is
the initial randomization parameter and \alpha_\infty is its
final value. In addition, we add an extra term
\lambda \varepsilon_i (x_i - g_{best}) to the updating formula.
In the simple version of the firefly algorithm (FA), the
current global best g_{best} is only used to decode the final
best solutions. The modified updating formula is shown in
(13).
x_i = x_i + \beta_0 e^{-\gamma r^2} (x_j - x_i) + \alpha\, \varepsilon_i + \lambda\, \varepsilon_i (x_i - g_{best})    (13)
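The modified update can be sketched as below, assuming the exponential-decay schedule of (12) and uniform random vectors ε_i; `alpha_schedule` and `mfa_move` are hypothetical helper names and all parameter defaults are illustrative:

```python
import math
import random

def alpha_schedule(t, alpha0=0.5, alpha_inf=0.01):
    # alpha(t) = alpha_inf + (alpha0 - alpha_inf) * exp(-t), cf. (12)
    return alpha_inf + (alpha0 - alpha_inf) * math.exp(-t)

def mfa_move(xi, xj, g_best, t, beta0=1.0, gamma=1.0, lam=0.05, rng=random):
    """Modified firefly update (13): the basic step of (11) plus a
    PSO-like term lam * eps_i * (x_i - g_best)."""
    alpha = alpha_schedule(t)
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            + lam * (rng.random() - 0.5) * (a - g)
            for a, b, g in zip(xi, xj, g_best)]
```

The decaying α shrinks the random walk as generations pass, while the extra term keeps pulling fireflies relative to the swarm's global best.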
Y = [\, y(1)\; y(2)\; \dots\; y(N) \,]^T, \qquad Y_k = [\, y_k(1)\; y_k(2)\; \dots\; y_k(N) \,]^T, \quad k = 1, 2, \dots, M
5) In this step, each desired output is compared with the
corresponding estimated output and M errors are produced.
The mean square error corresponding to the kth firefly is
determined using the relation:

MSE(k) = \frac{1}{N} (Y - Y_k)^T (Y - Y_k)    (14)
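The fitness evaluation of step 5 can be sketched as follows; `evaluate_swarm` is a hypothetical helper name, and the returned index of the best (lowest-error) firefly is what a g_best update would use:

```python
def evaluate_swarm(Y, Y_hats):
    """MSE(k) = (1/N) * (Y - Y_k)^T (Y - Y_k) for every firefly k, cf. (14).

    Y      -- desired output samples y(1)..y(N)
    Y_hats -- list of M estimated output vectors, one per firefly
    Returns the list of errors and the index of the best firefly.
    """
    N = len(Y)
    errs = [sum((y - yk) ** 2 for y, yk in zip(Y, Yk)) / N for Yk in Y_hats]
    return errs, min(range(len(errs)), key=errs.__getitem__)
```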
X_i(t+1) = X_i(t) + V_i(t+1)    (15)
5. SIMULATION RESULTS
In this section, three benchmark systems are considered to
confirm the efficiency of our proposed method in parameter
identification of IIR and nonlinear systems. For the sake of a
more comprehensive comparison, in addition to FA and MFA
simulations are done using three versions of PSO, and a
standard version of GA, as well. GA and PSO are two of the
most known evolutionary optimization algorithms and have
been an active area of research during the past decade
(Kennedy and Eberhart, 1995; Holland, 1975; Goldberg,
1989). Two performance criteria, the mean square error (MSE)
and the mean square deviation (MSD), are used to compare the
performance of the aforementioned approaches. The MSE is the
mean square error between the desired signal and the adaptive
filter's output, as given in (7) in Section 2, and the MSD
corresponds to the mean square deviation between the actual
coefficients and the estimated coefficients, which is defined
as:
MSD = \frac{1}{P} \sum_{i=0}^{P-1} (\theta_i - \hat{\theta}_i)^2    (17)

where \theta is the desired parameter vector, \hat{\theta} is
the estimated parameter vector and P is the total number of
parameters to be estimated.
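Equation (17) is a one-liner in code; a minimal sketch:

```python
def msd(theta, theta_hat):
    """Mean square deviation between the true and estimated
    parameter vectors, cf. (17)."""
    P = len(theta)
    return sum((t - th) ** 2 for t, th in zip(theta, theta_hat)) / P

# Example with arbitrary parameter vectors
print(msd([1.0, 1.0], [1.0, 0.0]))  # 0.5
```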
For the PSO variants, the inertia weight w is adapted from
w_{max} according to the fitness f of the particle relative to
the average fitness f_{avg} of the swarm:

w = w_{max} - (w_{max} - w_{min}) \frac{f - f_{avg}}{f_{max} - f_{avg}}    (18)
5.1 Example 1
The transfer function of the plant is given by (Shynk, 1989a):

H(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}}    (19)

The plant is identified both with a full-order model (20a) and
with a reduced-order model (20b):

H(z) = \frac{a_0 + a_1 z^{-1}}{b_0 + b_1 z^{-1} + b_2 z^{-2}}    (20a)

H(z) = \frac{a_0}{b_0 + b_1 z^{-1}}    (20b)
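Assuming the sign pattern of the Shynk benchmark in (19), the desired signal for this example can be generated by driving the plant with white noise; `plant_output` is an illustrative helper, not the paper's code:

```python
import random

def plant_output(u):
    """Output of the benchmark plant (19):
    H(z) = (0.05 - 0.4 z^-1) / (1 - 1.1314 z^-1 + 0.25 z^-2),
    i.e. y(n) = 0.05 u(n) - 0.4 u(n-1) + 1.1314 y(n-1) - 0.25 y(n-2)."""
    y = []
    for n in range(len(u)):
        yn = 0.05 * u[n]
        if n >= 1:
            yn += -0.4 * u[n - 1] + 1.1314 * y[n - 1]
        if n >= 2:
            yn += -0.25 * y[n - 2]
        y.append(yn)
    return y

# White-noise excitation, as is typical in these identification benchmarks
u = [random.gauss(0.0, 1.0) for _ in range(100)]
d = plant_output(u)  # desired signal for the adaptive filter
```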
5.2 Example 2
The plant is the nonlinear dynamical system

y(k) = f\big(y(k-1),\, u(k-1)\big)    (21)

f\big(y(k-1), u(k-1)\big) = 0.2\, y(k-1) + 0.5\, y(k-1)\, u(k-1) + u^2(k-1)    (22a)

and one of the adaptive models considered is the linear IIR
filter

H(z) = \frac{Y(z)}{U(z)} = \frac{a_0 + a_1 z^{-1}}{b_0 + b_1 z^{-1} + b_2 z^{-2}}    (22b)
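The exact grouping of terms in (22a) is hard to recover here, so the sketch below assumes the reading y(k) = 0.2 y(k-1) + 0.5 y(k-1) u(k-1) + u²(k-1) with zero initial condition; treat both the reading and the helper name as assumptions:

```python
def nonlinear_plant(u):
    """One reading of plant (22a) (terms and signs are assumptions):
    y(k) = 0.2*y(k-1) + 0.5*y(k-1)*u(k-1) + u(k-1)**2, with y(0) = 0."""
    y = [0.0]
    for k in range(1, len(u)):
        y.append(0.2 * y[k - 1] + 0.5 * y[k - 1] * u[k - 1] + u[k - 1] ** 2)
    return y
```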
5.3 Example 3
There are many applications where audio or video signals are
subjected to nonlinear processing and which require nonlinear
adaptive compensation to achieve proper system identification
and parameter extraction (Lee and El-Sharkawi, 2002). For
example, a generic communication system is shown in Fig. 6.
Many nonlinear systems can be represented by the
Wiener-Hammerstein model of Fig. 6. In this example, an LNL
cascade system taken from (Krusienski and Jenkins, 2005;
Mathews and Sicuranza) is considered, whose linear part is the
cascade

H_c(z) = \frac{(0.2025 + 0.288 z^{-1} + 0.2025 z^{-2})\,(0.2025 + 0.00341 z^{-1} + 0.2025 z^{-2})}{(1 - 1.01 z^{-1} + 0.5861 z^{-2})\,(1 - 0.6591 z^{-1} + 0.01498 z^{-2})}    (24)

Two cases are considered for the LNL (Wiener-Hammerstein)
adaptive filter structure. In case 1, the linear and nonlinear
parts have the same order as the actual system; in case 2, the
reduced-order structure is given as follows:

H_C(z) = \frac{d_0 + d_1 z^{-1} + d_2 z^{-2} + d_3 z^{-3}}{1 + e_1 z^{-1} + e_2 z^{-2} + e_3 z^{-3}}    (25)

H_B(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3}}{1 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}}    (26)

with a fourth-order polynomial nonlinearity with coefficients
c placed between H_C(z) and H_B(z).    (27)
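An LNL (Wiener-Hammerstein) cascade can be sketched with two second-order sections around a static nonlinearity. The sections below use the numerator/denominator values of (24); the denominator signs and the use of tanh as the nonlinearity are assumptions for illustration only:

```python
import math

def biquad(b, a, x):
    """Filter x with H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2),
    where b = [b0, b1, b2] and a = [a1, a2]."""
    y = []
    for n in range(len(x)):
        yn = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        yn -= sum(a[k - 1] * y[n - k] for k in (1, 2) if n - k >= 0)
        y.append(yn)
    return y

def lnl(x, nl=math.tanh):
    """Linear-Nonlinear-Linear cascade: the two linear sections are the
    factors of (24); tanh is only a stand-in static nonlinearity."""
    h1 = biquad([0.2025, 0.288, 0.2025], [-1.01, 0.5861], x)
    mid = [nl(v) for v in h1]
    return biquad([0.2025, 0.00341, 0.2025], [-0.6591, 0.01498], mid)
```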
6. SENSITIVITY ANALYSIS
To study the effect of parameters and population size on our
proposed methods, three experiments are carried out in this
section.
Table 1. MSE

Example           | MSE       | MFA       | FA        | SPSO2011  | PSOW      | PSOLW     | GA
1 (2nd Order IIR) | Best      | 4.7408e-5 | 4.9576e-5 | 0.0031    | 7.1567e-4 | 8.8742e-4 | 9.7564e-4
                  | Average   | 1.3853e-4 | 2.9318e-4 | 0.007     | 8.3594e-4 | 9.5719e-4 | 0.0195
                  | Std. dev. | 3.0581e-5 | 1.1404e-4 | 0.0031    | 3.5959e-4 | 2.1835e-4 | 0.0136
1 (1st Order IIR) | Best      | 0.1635    | 0.1708    | 0.1953    | 0.1930    | 0.1941    | 0.1062
                  | Average   | 0.1776    | 0.1810    | 0.2102    | 0.2010    | 0.2012    | 0.1985
                  | Std. dev. | 0.0082    | 0.0079    | 0.0092    | 0.0045    | 0.0060    | 0.0242
2 (Nonlinear)     | Best      | 7.5040e-4 | 7.5594e-4 | 1.3986e-3 | 8.5867e-4 | 9.4805e-4 | 8.5842e-4
                  | Average   | 7.8023e-4 | 8.3626e-4 | 1.966e-3  | 8.9683e-4 | 9.9582e-4 | 8.6946e-4
                  | Std. dev. | 2.1070e-5 | 2.6030e-5 | 5.8656e-4 | 2.8878e-5 | 1.9684e-5 | 1.1691e-5
2 (2nd Order IIR) | Best      | 0.0258    | 0.0250    | 0.0396    | 0.0280    | 0.0289    | 0.0282
                  | Average   | 0.0272    | 0.0287    | 0.0635    | 0.0291    | 0.0305    | 0.0304
                  | Std. dev. | 0.0011    | 0.0018    | 0.0186    | 8.8168e-4 | 0.0012    | 0.0017
3 (Actual Order)  | Best      | 6.3312e-7 | 6.4407e-7 | 3.0011e-5 | 5.9814e-6 | 1.0064e-6 | 4.2699e-5
                  | Average   | 9.3079e-7 | 1.1744e-6 | 6.6718e-5 | 2.8865e-5 | 8.7322e-6 | 1.8414e-4
                  | Std. dev. | 2.7981e-7 | 5.0960e-7 | 6.3202e-5 | 5.3127e-5 | 8.9497e-6 | 7.4256e-5
3 (Reduced Order) | Best      | 7.1403e-7 | 6.1310e-6 | 3.6668e-5 | 1.1303e-5 | 2.3216e-6 | 1.9479e-4
                  | Average   | 1.1093e-5 | 4.8427e-5 | 9.4331e-5 | 8.0449e-5 | 4.4544e-5 | 6.0501e-4
                  | Std. dev. | 7.0102e-6 | 6.2210e-5 | 3.8596e-5 | 8.4725e-5 | 7.8793e-5 | 3.6791e-4
Table 2. MSD

Example           | MSD       | MFA       | FA        | SPSO2011  | PSOW      | PSOLW     | GA
1 (2nd Order IIR) | Best      | 3.1425e-6 | 1.0250e-6 | 9.7541e-4 | 1.3750e-7 | 1.2500e-8 | 3.1109e-4
                  | Average   | 4.6392e-5 | 6.6524e-5 | 0.0044    | 7.5330e-5 | 7.9323e-5 | 0.0196
                  | Std. dev. | 5.3854e-5 | 7.4581e-5 | 0.0032    | 2.1903e-4 | 2.6667e-4 | 0.0132
2 (Nonlinear)     | Best      | 8.5999e-7 | 2.1800e-6 | 0.0027    | 1.0066e-6 | 4.8133e-6 | 1.9360e-5
                  | Average   | 1.0047e-5 | 1.4573e-5 | 0.0061    | 3.8469e-5 | 7.9675e-5 | 1.6351e-4
                  | Std. dev. | 8.0897e-6 | 1.3455e-5 | 0.0044    | 4.6887e-5 | 1.1677e-4 | 2.2962e-4
3 (Actual Order)  | Best      | 0.0028    | 0.0012    | 0.0341    | 0.0232    | 0.0166    | 0.1171
                  | Average   | 0.0062    | 0.0072    | 0.0585    | 0.0421    | 0.0284    | 0.1647
                  | Std. dev. | 0.0051    | 0.0110    | 0.0112    | 0.0095    | 0.0086    | 0.0315
Table 3. Time

Example           | Time      | MFA       | FA        | SPSO2011  | PSOW      | PSOLW     | GA
1 (2nd Order IIR) | Best      | 9.6921    | 11.3306   | 6.9663    | 7.7660    | 6.0719    | 99.2362
                  | Average   | 10.1982   | 11.9361   | 7.2421    | 7.9313    | 6.2099    | 1.0203e+2
                  | Std. dev. | 0.3404    | 0.3864    | 0.2365    | 0.1038    | 0.0867    | 1.6149
1 (1st Order IIR) | Best      | 3.3583    | 3.2734    | 5.3330    | 6.7254    | 5.0447    | 99.3974
                  | Average   | 3.4574    | 3.3915    | 5.5708    | 6.9379    | 5.2868    | 1.0149e+2
                  | Std. dev. | 0.0665    | 0.0874    | 0.2453    | 0.1328    | 0.1323    | 1.1300
2 (Nonlinear)     | Best      | 17.1116   | 16.4762   | 19.8583   | 20.4498   | 18.6293   | 361.8733
                  | Average   | 17.2752   | 16.9559   | 20.6419   | 21.0125   | 18.8453   | 363.6922
                  | Std. dev. | 0.2369    | 0.51195   | 1.13718   | 0.41903   | 0.17340   | 2.2848
2 (2nd Order IIR) | Best      | 4.0286    | 3.7758    | 6.7679    | 7.4658    | 6.6034    | 3.4089e+2
                  | Average   | 4.1159    | 3.8250    | 6.8628    | 8.4828    | 6.7033    | 3.4914e+2
                  | Std. dev. | 0.0624    | 0.0500    | 0.0698    | 0.4635    | 0.0679    | 7.4158
3 (Full Order)    | Best      | 36.2201   | 34.8042   | 39.4432   | 42.4370   | 40.1864   | 2.3139e+2
                  | Average   | 36.8455   | 35.5898   | 42.8841   | 44.0122   | 41.9984   | 2.3371e+2
                  | Std. dev. | 0.4689    | 0.4706    | 3.5035    | 1.5265    | 1.6044    | 2.0094
3 (Reduced Order) | Best      | 32.3354   | 31.5373   | 33.1299   | 39.9935   | 34.4355   | 2.2778e+2
                  | Average   | 33.1410   | 31.8433   | 34.2243   | 41.5944   | 36.0785   | 2.2895e+2
                  | Std. dev. | 0.7970    | 0.2126    | 1.0661    | 1.1204    | 1.1718    | 1.03104

Table 4. Effect of population size on MSE

Example           | MSE       | Pop.Size=5 | Pop.Size=25 | Pop.Size=50 | Pop.Size=100
1 (2nd Order IIR) | Best      | 5.511e-5   | 6.830e-5    | 4.740e-5    | 8.390e-5
                  | Average   | 1.512e-4   | 1.431e-4    | 1.385e-4    | 1.362e-4
                  | Std. dev. | 8.664e-5   | 6.357e-5    | 3.058e-5    | 4.329e-5
2 (Nonlinear)     | Best      | 7.659e-4   | 7.504e-4    | 7.535e-4    | 7.339e-4
                  | Average   | 7.872e-4   | 7.802e-4    | 7.752e-4    | 7.868e-4
                  | Std. dev. | 2.214e-5   | 2.107e-5    | 1.573e-5    | 2.261e-5
3 (Actual Order)  | Best      | 7.456e-7   | 7.640e-7    | 6.431e-7    | 6.551e-7
                  | Average   | 1.302e-6   | 1.477e-6    | 9.407e-7    | 1.267e-6
                  | Std. dev. | 5.772e-7   | 6.761e-7    | 2.798e-7    | 6.018e-7

Table 5. Effect of parameter value on MSE

Example           | MSE       | 0.1       | 0.3       | 0.5       | 0.9
1 (2nd Order IIR) | Best      | 4.957e-5  | 4.408e-4  | 0.001     | 0.007
                  | Average   | 2.931e-4  | 0.001     | 0.004     | 0.026
                  | Std. dev. | 1.140e-4  | 5.921e-4  | 0.002     | 0.011
2 (Nonlinear)     | Best      | 8.053e-4  | 7.559e-4  | 7.932e-4  | 8.252e-4
                  | Average   | 8.401e-4  | 8.362e-4  | 8.400e-4  | 8.505e-4
                  | Std. dev. | 2.092e-5  | 2.603e-5  | 2.411e-5  | 2.603e-5
2 (2nd Order IIR) | Best      | 0.0267    | 0.025     | 0.028     | 0.028
                  | Average   | 0.0293    | 0.028     | 0.029     | 0.029
                  | Std. dev. | 0.0017    | 0.001     | 9.838e-4  | 0.001
3 (Actual Order)  | Best      | 6.440e-7  | 6.945e-7  | 6.820e-7  | 6.989e-7
                  | Average   | 1.174e-6  | 1.975e-6  | 2.225e-6  | 2.414e-6
                  | Std. dev. | 5.096e-7  | 1.332e-6  | 1.648e-6  | 2.290e-6

Table 6. Effect of parameter value on MSE

Example           | MSE       | 0.3       | 0.5       | 0.9
1 (2nd Order IIR) | Best      | 1.171e-4  | 9.353e-5  | 1.102e-4
                  | Average   | 1.972e-4  | 2.113e-4  | 2.289e-4
                  | Std. dev. | 5.281e-5  | 8.153e-5  | 9.521e-5
2 (2nd Order IIR) | Best      | 0.025     | 0.027     | 0.028
                  | Average   | 0.027     | 0.028     | 0.028
                  | Std. dev. | 0.001     | 7.989e-4  | 5.303e-4
2 (Nonlinear)     | Best      | 7.504e-4  | 7.607e-4  | 7.618e-4
                  | Average   | 7.802e-4  | 7.854e-4  | 7.878e-4
                  | Std. dev. | 2.107e-5  | 1.622e-5  | 2.089e-5
3 (Actual Order)  | Best      | 8.994e-7  | 9.268e-7  | 1.067e-6
                  | Average   | 1.619e-6  | 2.138e-6  | 2.503e-6
                  | Std. dev. | 7.987e-7  | 1.263e-6  | 1.682e-6
7. CONCLUSIONS
The firefly optimization algorithm has the advantages of being
easy to understand and simple to implement, so it can be used
for a wide variety of optimization tasks. To our knowledge,
this is the first report of applying the firefly algorithm to
an adaptive system identification task. In order to enhance
the searching quality of the algorithm, we performed two
modifications: first, reducing the randomness, and second,
hybridizing the algorithm with PSO by adding the global search
component of PSO to the updating formula. The proposed
algorithm was used for identification of three benchmark
systems: a linear IIR, a nonlinear IIR and an LNL system,
respectively.
characteristics of the LMS adaptive filter. Proceedings of
the IEEE, 64, pp. 1151-1162.
Yang, X.-S. (2008). Nature-Inspired Metaheuristic Algorithms.
    Luniver Press.
Yang, X.-S. (2010). Firefly algorithm, stochastic test
    functions and design optimization. Int. J. Bio-Inspired
    Computation, Vol. 2, pp. 78-84.
Yang, X.-S. (2010). Engineering Optimization: An Introduction
    with Metaheuristic Applications. John Wiley & Sons.
Yang, X.-S., Hosseini, S.S., Gandomi, A.H. (2012). Firefly
    algorithm for solving non-convex economic dispatch
    problems with valve loading effect. Applied Soft
    Computing, Vol. 12(3), pp. 1180-1186.