Abstract: Particle swarm optimization (PSO) is a stochastic, evolutionary optimization algorithm inspired by nature. PSO has gained wide appeal due to its ease of implementation, its characteristic use of memory, and its potential for specialization and hybridization, and it has few parameters to adjust. This paper presents an extensive review of the literature on the concept, development and modification of particle swarm optimization. The study comprises a snapshot of PSO from the authors' perspective, including variations of the algorithm, modifications and refinements introduced to prevent swarm stagnation, and hybridization of PSO with other heuristic algorithms. Issues related to parameter tuning are also discussed.
1. Introduction
Conventional computing paradigms often have difficulty dealing with real-world problems, such as those characterized by noisy or incomplete data or multimodality, because of their inflexible construction. Natural systems, in which simple components work together, have inspired several natural computing paradigms that can be used where conventional computing techniques perform unsatisfactorily. In the past few decades several optimization algorithms based on this nature-inspired analogy have been developed. These are mostly population-based meta-heuristics, also called general-purpose algorithms because of their applicability to a wide range of problems.
Global optimization techniques are fast growing tools that can overcome most of the limitations
found in derivative-based techniques. Some popular global optimization algorithms include
Genetic Algorithms (GA) (Holland, 1992), Particle Swarm Optimization (PSO) (Kennedy and
Eberhart, 1995), Differential Evolution (DE) (Price and Storn, 1995), Evolutionary Programming
(EP) (Maxfield and Fogel, 1965), Ant Colony Optimization (ACO) (Dorigo et al., 1991) etc. A
chronological order of the development of some of the popular nature inspired meta-heuristics is
given in Fig. 1. These algorithms have proved their mettle in solving complex and intricate
optimization problems arising in various fields.
PSO is a well known and popular search strategy that has gained widespread appeal amongst
researchers and has been shown to offer good performance in a variety of application domains,
with potential for hybridization and specialization. It is a simple and robust strategy based on the social and cooperative behavior shown by various species, such as flocks of birds, schools of fish, etc.
PSO and its variants have been effectively applied to a wide range of benchmark as well as real
life optimization problems.
As PSO has undergone many structural changes and many parameter-adjustment methods have been tried, a large number of works have been carried out in the past decade. This paper presents a state-of-the-art review of previous studies that have proposed modified versions of particle swarm optimization and applied them to optimization problems in various fields. The review demonstrates that particle swarm optimization and its modified versions have widespread application in complex optimization domains and are currently a major research topic, offering an alternative to the more established evolutionary computation techniques that may be applied in many of the same domains.
Kennedy and Eberhart developed the PSO algorithm by considering the behavior of natural swarms such as bird flocks and fish schools. PSO has particles driven from natural swarms, with communications based on evolutionary computation, and it combines self-experience with social experience. In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each solution, called a particle, flies in the problem search space looking for the optimal position to land. Over the generations, a particle adjusts its position according to its own experience as well as the experience of neighboring particles. The PSO system combines a local search method (through self-experience) with global search methods (through neighboring experience), attempting to balance exploration and exploitation. A particle's status in the search space is characterized by two factors, its position and its velocity, which are updated according to the following equations:
$v_{id}^{t+1} = v_{id}^{t} + c_1 r_1 (pbest_{id} - x_{id}^{t}) + c_2 r_2 (gbest_{d} - x_{id}^{t})$   (1)
$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$   (2)
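As a concrete illustration, the update equations (1) and (2) can be sketched in a minimal global-best PSO loop; the parameter values, bounds and sphere test function below are illustrative choices, not prescriptions from the literature.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimal global-best PSO implementing the velocity and position
    updates of equations (1) and (2); constants are illustrative."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                 # personal best positions
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # eq. (1): inertia + cognitive + social components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # eq. (2): move the particle
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

On a simple convex test function such as the sphere, this loop reliably drives the swarm near the origin within a few hundred iterations.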
2. Modifications in the Basic PSO
As PSO was applied to solve problems in different domains, several considerations have been
taken into account to facilitate the convergence and prevent an explosion of the swarm. A few
important and interesting modifications in the basic structure of PSO focus on limiting the
maximum velocity, adding inertia weight and constriction factor. Details are discussed in the
following:
2.1 Velocity Clamping
The basic model of PSO was found to accelerate particles out of the search space. To control this, Eberhart et al. 1996 put forward a clamping scheme that limited the speed of each particle to a range [-vmax, vmax], with vmax usually being somewhere between 0.1 and 1.0 times the maximum position of the particle.
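A velocity-clamping step of this kind can be sketched as follows (the helper name and the example vmax value are ours):

```python
def clamp_velocity(v, vmax):
    """Clamp each velocity component to [-vmax, vmax] (Eberhart et al., 1996).
    vmax is typically 0.1 to 1.0 times the dynamic range of the position."""
    return [max(-vmax, min(vmax, vd)) for vd in v]
```

For example, `clamp_velocity([3.0, -7.5, 0.2], 2.0)` yields `[2.0, -2.0, 0.2]`.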
However, by far the most problematic characteristic of PSO is its tendency to converge prematurely on early best solutions. Many strategies have been developed to overcome this, but by far the most popular are inertia and constriction. The inertia term w was introduced by Shi and Eberhart 1998b; the inertia weight controls the exploration of the search space. The velocity update function with inertia weight is given in equation (3):
$v_{id}^{t+1} = w \, v_{id}^{t} + c_1 r_1 (pbest_{id} - x_{id}^{t}) + c_2 r_2 (gbest_{d} - x_{id}^{t})$   (3)
With further analysis, other strategies to adjust the inertia weight were introduced. The adaptation of the inertia weight using a fuzzy system (Eberhart and Shi 2000) was reported to significantly improve PSO performance. A widely used strategy is to initially set w to 0.9 and reduce it linearly to 0.4, allowing initial exploration followed by acceleration toward an improved global optimum. Another effective strategy is to use an inertia weight with a random component rather than a time-decreasing one (Eberhart and Shi 2001), successfully using w = U(0.5, 1).
Clerc's analysis of the iterative system led him to propose a strategy for placing constriction coefficients on the terms of the formulas; Clerc and Kennedy (2002) noted that there can be many ways to implement the constriction coefficient. One of the simplest methods of incorporating it is the following:
$v_{id}^{t+1} = \chi \left[ v_{id}^{t} + c_1 r_1 (pbest_{id} - x_{id}^{t}) + c_2 r_2 (gbest_{d} - x_{id}^{t}) \right]$   (4)
$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}, \quad \varphi = c_1 + c_2, \ \varphi > 4$   (5)
When Clerc's constriction method is used, φ is commonly set to 4.1 (c1 = c2 = 2.05), and the constant multiplier χ is approximately 0.7298.
The constricted particles will converge without using any Vmax at all. However, subsequent experiments and applications (Eberhart and Shi 2000) concluded that a more prudent rule of thumb is to limit Vmax to Xmax, the dynamic range of each variable on each dimension, in conjunction with (4) and (5).
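The constriction coefficient of equations (4) and (5) is straightforward to compute; the sketch below assumes the common setting c1 = c2 = 2.05, so that φ = 4.1:

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi of equation (5).
    Requires phi = c1 + c2 > 4 for the square root to be real."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the default coefficients, `constriction()` returns approximately 0.7298, the constant multiplier quoted above.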
3. PSO: A Review
As a member of stochastic search algorithms, PSO has two major drawbacks (Lovbjerg, 2002).
The first drawback is that the PSO has a problem-dependent performance. This dependency is
usually caused by the way parameters are set, i.e. assigning different parameter settings to PSO
will result in high performance variance. In general, no single parameter setting exists which can
be applied to all problems and performs dominantly better than other parameter settings. The
common way to deal with this problem is to use self-adaptive parameters. The second drawback of PSO is its premature character, i.e. it can converge to a local minimum. According to Angeline 1998a&b, although PSO converges to an optimum much faster than other evolutionary algorithms, it usually cannot improve the quality of the solutions as the number of iterations increases. PSO usually suffers from premature convergence when highly multi-modal problems are being optimized.
Several efforts have been made to overcome or reduce the impact of these disadvantages in PSO. Some have already been discussed, including the inertia weight, the constriction factor and so on. Further modifications, as well as hybridizations of PSO with other optimization algorithms, are discussed in this section.
The changes attempted in the PSO methodology can be categorized into:
1. Parameter setting
2. Changes in methodology
3. Hybridization with other methodologies
3.1 Parameter Setting
In solving problems with PSO, its properties affect its performance. The performance of PSO depends on the parameter setting, and hence users need to find apt values for the parameters to optimize performance. The interactions between the parameters are complex, so each parameter has a different effect depending on the values set for the others. Without prior knowledge of the problem, parameter setting is difficult and time consuming. The two major approaches are parameter tuning and parameter control (Eiben and Smith 2003). Parameter tuning, the commonly practiced approach, amounts to finding appropriate values for the parameters before running the algorithm. Parameter control steadily modifies the control parameter values during the run; this can be achieved through deterministic, adaptive or self-adaptive techniques (Boyabalti and Sabuncuoglu 2007). A brief discussion of the adjustment of PSO parameters is summed up from the literature review.
The issue of parameter setting has been relevant since the beginning of EA research and practice. Eiben et al. 2007 emphasized that the parameter values greatly determine the success of an EA in finding an optimal or near-optimal solution to a given problem. Choosing appropriate parameter values is a demanding task. There are many methods to control the parameter setting during an EA run. Parameters are not independent, but trying all combinations is impossible, and the parameter values found are appropriate only for the tested problem instances. Nannen et al. 2008 state that a major problem of parameter tuning is the weak understanding of the effects of EA parameters on algorithm performance.
3.1.1 Inertia Weight Adaptation Mechanisms
The inertia weight parameter was originally introduced by Shi and Eberhart 1998a. They tested a range of constant values for w and showed that large values, i.e. w > 1.2, resulted in weak exploration, while with low values, i.e. w < 0.8, PSO tends to get trapped in local optima. They suggest that with w within the range [0.8, 1.2], PSO finds the global optimum in a reasonable number of iterations. Shi and Eberhart 1998 analyzed the impact of the inertia weight and maximum velocity on the performance of PSO.
A random value of inertia weight was used to enable PSO to track the optima in a dynamic environment in Eberhart and Shi 2001:
$w = 0.5 + \frac{rand()}{2}$   (6)
where rand() is a random number in [0, 1]; w is then a uniform random variable in the range [0.5, 1].
Most PSO variants use time-varying inertia weight strategies in which the value of the inertia weight is determined based on the iteration number. These methods can be linear or non-linear, and increasing or decreasing. A linearly decreasing inertia weight was introduced and shown to be effective in improving the fine-tuning characteristic of PSO. In this method, the value of w is linearly decreased from an initial value (wmax) to a final value (wmin) according to the following equation [5, 8, 20, 24, 31, 32, 56, 58, 67, 92]:
$w = w_{max} - \frac{w_{max} - w_{min}}{Iteration_{max}} \times Iteration_i$   (7)
where Iterationi is the current iteration of the algorithm and Iterationmax is the maximum number
of iterations the PSO is allowed to continue. This strategy is very common and most of the PSO
algorithms adjust the value of inertia weight using this updating scheme.
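This linear schedule can be sketched as a small helper (the defaults w_max = 0.9 and w_min = 0.4 follow the commonly cited range):

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of equation (7):
    w falls from w_max at t = 0 to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max
```

Halfway through a run, the weight sits midway between the bounds: `linear_inertia(50, 100)` gives 0.65.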
Chatterjee and Siarry 2006 proposed a nonlinear decreasing variant of inertia weight in which at
each iteration of the algorithm, w is determined based on the following equation:
$w = \left( \frac{Iteration_{max} - Iteration_i}{Iteration_{max}} \right)^{n} (w_{max} - w_{min}) + w_{min}$   (8)
where n is the nonlinear modulation index. Different values of n result in different variations of
inertia weights all of which start from wmax and end at wmin.
Feng et al. 2007 use a chaotic model of inertia weight in which a chaotic term is added to the linearly decreasing inertia weight. The proposed w is as follows:
$w = (w_{max} - w_{min}) \frac{Iteration_{max} - Iteration_i}{Iteration_{max}} + w_{min} \, z$   (9)
where z = 4z(1 - z) is the logistic map, whose initial value is selected randomly within the range (0, 1). Lei et al. 2006 used a Sugeno function as the inertia weight decline curve:
$w = \frac{1 - \lambda}{1 + s\lambda}$   (10)
where λ = iter/iter_max and s is a constant larger than 1.
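A chaotic inertia weight in the style of equation (9) can be sketched as follows; mixing the logistic-map term z into the linearly decreasing schedule is our reading of the construction, and the default bounds are illustrative:

```python
def chaotic_inertia(t, t_max, z, w_max=0.9, w_min=0.4):
    """Chaotic inertia weight in the style of equation (9): a logistic-map
    term z is mixed into the linearly decreasing schedule.
    Returns the weight and the next chaotic state (w, z)."""
    z = 4.0 * z * (1.0 - z)                  # logistic map, chaotic regime
    w = (w_max - w_min) * (t_max - t) / t_max + w_min * z
    return w, z
```

The caller threads z through the iterations, so successive weights jitter chaotically around the decreasing trend.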
In Alireza and Hamidreza 2011, every particle in the swarm dynamically adjusts its inertia weight according to feedback taken from the particles' best memories, using a measure called the adjacency index (AI), which characterizes the nearness of an individual's fitness to the real optimal solution. Based on this index, every particle can decide how to adjust the value of its inertia weight. In Xiaolei 2011, the inertia weight is adapted according to the aggregation degree of the swarm, which provides the algorithm with dynamic adaptability and enhances its search ability and convergence performance.
An adaptive inertia weight integer programming particle swarm optimization was proposed in Chunguo et al. 2012 to solve integer programming problems. Based on grey relational analysis, two grey-based parameter automation strategies for particle swarm optimization were proposed (Liu and Yeh 2012): one for the inertia weight and the other for the acceleration coefficients. With the proposed approaches, each particle has its own inertia weight and acceleration coefficients, whose values depend on the corresponding grey relational grade.
An adaptive parameter tuning of particle swarm optimization based on velocity information was proposed by Gang Xu 2013. This algorithm introduces the velocity information, defined as the average absolute value of the velocity of all particles, based on which the inertia weight w(t) is dynamically adjusted between the largest value wmax and the smallest value wmin with a step size Δw.
Arumugam and Rao 2008 used the ratio of the global best fitness to the average of the particles' local best fitnesses to determine the inertia weight in each iteration:
$w = 1.1 - \frac{gbest}{\overline{pbest}}$   (14)
In the adaptive particle swarm optimization algorithm proposed by Panigrahi et al. 2008, different inertia weights are assigned to different particles based on their ranks:
$w_i = w_{min} + (w_{max} - w_{min}) \frac{Rank_i}{Total\ population}$   (15)
where Rank_i is the position of the ith particle when the particles are ordered by their personal best fitness, and Total population is the number of particles. The rationale of this approach is that the particles' positions are adjusted so that highly fit particles move more slowly than less fit ones.
Nickabadi et al. 2011 proposed an adaptive inertia weight methodology based on the success rate of the swarm:
$w(t) = (w_{max} - w_{min}) \, P_s(t) + w_{min}$   (16)
where Ps(t) is the success percentage of the swarm, i.e. the success average of the particles in the swarm, the success of a particle being based on its distance from the optima.
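The success-rate rule of equation (16) is simple to implement; in the sketch below, `successes` is the number of particles that improved their personal best in the last iteration (our choice of success criterion, for illustration):

```python
def success_rate_inertia(successes, n_particles, w_max=0.9, w_min=0.4):
    """Adaptive inertia weight of equation (16), after Nickabadi et al. 2011:
    w rises with the fraction of particles judged successful in the
    last iteration (the success percentage Ps)."""
    ps = successes / n_particles
    return (w_max - w_min) * ps + w_min
```

When no particle succeeds the weight drops to w_min, favoring exploitation; when all succeed it rises to w_max, favoring exploration.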
3.1.2 Acceleration Coefficient Adaptation Mechanisms
The acceleration coefficients c1 (cognitive) and c2 (social) in PSO represent the stochastic acceleration terms that pull each particle towards the pBest and gBest positions. Suitable fine-tuning of c1 and c2 may result in faster convergence of the algorithm and alleviate the risk of settling in a local minimum.
Initially the values of c1 and c2 were fixed to a constant value of 2 (Kennedy and Eberhart 1995). Rather than fixing the acceleration factors, Suganthan and Zhao 2009 suggested through empirical studies that the acceleration coefficients should not always equal 2 if better solutions are to be obtained. The literature shows that the majority of works set c1 and c2 to 2; the next most common values are around 1.49, and a few works use 2.05 for each.
Since c1 is the cognitive factor, driving the swarm to search effectively (exploration), and c2 the social factor, driving the swarm toward the optima (convergence), the literature indicates that larger values of c1 with smaller values of c2 in the initial stages help locate the global optimum, while smaller values of c1 with larger values of c2 in later stages ease convergence.
Many adaptive methods have been proposed to alter the values of c1 and c2 during evolution. To overcome premature convergence, the evolution direction of each particle is redirected dynamically by adjusting these two sensitive parameters during the evolution process (Ying et al. 2012).
To improve optimization efficiency and stability, new metropolis coefficients were given by Jie and Deyun 2008, representing a fusion of simulated annealing and particle swarm optimization. The metropolis coefficients Cm1 and Cm2 vary according to the distance between the current and best positions and according to the generation of the particles. This method gives better results in fewer iteration steps and less time.
The acceleration coefficients are adapted dynamically in Ratnaweera et al. 2004, and in Arumugam et al. 2008 by balancing the cognitive and social components. In Zhan et al. 2009 the acceleration coefficients are adjusted based on an evolutionary state estimator derived from the Euclidean distance, to perform a global search over the entire search space with faster convergence. These adaptive methodologies adjust the parameters using strategies independent of the data used in the applications.
Yau-Tarng et al. 2011 proposed an adaptive fuzzy particle swarm optimization algorithm that uses fuzzy set theory to adjust the PSO acceleration coefficients adaptively, thereby improving the accuracy and efficiency of searches. A scheme in which c1 decreases linearly while c2 increases linearly over time was proposed in Jing and David 2012 and Gang et al. 2012. This promotes more exploration in the early stages, while encouraging convergence to a good optimum near the end of the optimization process by attracting particles more towards the global best positions.
$c_1(t) = c_{1,max} - (c_{1,max} - c_{1,min}) \frac{t}{Iter_{max}}$   (23)
$c_2(t) = c_{2,min} + (c_{2,max} - c_{2,min}) \frac{t}{Iter_{max}}$   (24)
where c1,min, c1,max, c2,min and c2,max can be set to 0.5, 2.5, 0.5 and 2.5, respectively, and Itermax is the maximum number of iterations.
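The time-varying coefficients of equations (23) and (24) can be sketched together:

```python
def tvac(t, t_max, c1_max=2.5, c1_min=0.5, c2_max=2.5, c2_min=0.5):
    """Time-varying acceleration coefficients of equations (23)-(24):
    c1 decreases and c2 increases linearly over the run."""
    frac = t / t_max
    c1 = c1_max - (c1_max - c1_min) * frac
    c2 = c2_min + (c2_max - c2_min) * frac
    return c1, c2
```

At the start the cognitive pull dominates (`tvac(0, 100)` gives (2.5, 0.5)); by the end the social pull dominates.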
To improve convergence speed, Xingjuan et al. 2008 proposed a new setting for the social coefficient by introducing an explicit selection pressure, in which each particle decides its search direction toward the personal memory or the swarm memory. The dispersed social coefficient of particle j at time t is set as follows:
$c_{2,j}(t) = c_{low} + (c_{up} - c_{low}) \, Grade_j(t)$   (25)
where cup and clow are two predefined numbers, c2,j(t) represents the social coefficient of particle j at time t, and Grade_j(t) is a new index based on the performance differences (fitness values).
Liu and Yeh 2012 proposed an adaptive method based on grey relational analysis for adjusting the acceleration coefficients during evolution, where Cfinal, the final value of the acceleration coefficient c2, lies between Cmin and Cmax.
3.2 Modification in Methodology
Like other evolutionary computation techniques, particle swarm optimization has the drawback of premature convergence. To reach an optimum value, particle swarm optimization depends on the interaction between particles. If this interaction is restrained, the algorithm's searching capacity will be limited, and it may take a long time to escape local optima.
To overcome these shortcomings, many modifications were made to the methodology, such as altering the velocity update function, the fitness function, etc. The changes proposed in the velocity update function noted in the literature are listed in Table 1.
Table 1. Modifications in the Velocity Update
- Acceleration: the acceleration concept produced by the near-neighborhood attraction and repulsion forces is considered in the velocity update (Liu et al. 2012).
- Diversity: the direction of the particles' movement is controlled by the diversity information of the particles (Lu and Lin 2011).
- Chaotic maps: an improved logistic map, namely a double-bottom map, is applied for local search (Yang et al. 2012).
- Other velocity-update modifications are reported in refs [13, 14, 15, 16, 31].
- Detecting particle swarm: particles search along approximate spiral trajectories created by the new velocity-updating formula in order to find better solutions (Zhu and Pu 2009).
- Euclidean PSO: if the global best fitness has not been updated for K iterations, the velocities of the particles receive an interference factor that makes most particles fly out of the local optimum, while the best one continues its local search (Zang and Teng 2009).
- Modified PSO: better optimization results are achieved by splitting the cognitive component of the general PSO into two components, a good-experience component and a bad-experience component (Deepa and Sugumaran 2011).
- Diversity-maintained QPSO: based on an analysis of quantum-behaved PSO (QPSO), a diversity control strategy is integrated into QPSO to enhance the global search ability of the swarm (Jun Sun et al. 2012).
Ji et al. 2010 proposed a bi-swarm PSO with cooperative coevolution. Here the swarm consists of two parts: the first swarm is generated randomly in the whole search space, and the second swarm is generated periodically, centering towards the largest and smallest bounds of the best and worst particles of the first swarm in all directions. The two swarms share information with each other during each generation. Velocity and position updates follow the SPSO equations (1) and (2). The best particles of swarm one are then compared with those of swarm two, and in this way the overall best particle is found. Experiments show that this method performs better than SPSO regarding convergence, speed and precision.
Multi-Swarm Self-Adaptive and Cooperative Particle Swarm Optimization (MSCPSO), based on four sub-swarms, is employed to avoid falling into local optima, improve diversity and achieve better solutions. Particles in each sub-swarm share a single global historical best optimum to enhance cooperation. Besides, the inertia weight of each particle in each sub-swarm is modified according to the fitness information of all particles, and an adaptive strategy is employed to control the influence of historical information, creating greater potential search ability. To keep an effective balance between global exploration and local exploitation, the particles in each swarm take advantage of the shared information to cooperate with each other and guide their own evolution. In order to increase the diversity of the particles and avoid falling into a local optimum, a diversity operation is adopted to guide the particles out of local optima and toward the global best position smoothly.
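The sharing pattern common to these multi-swarm schemes can be sketched roughly as follows; the function names and the random partition are our illustrative choices, not the exact MSCPSO procedure:

```python
import random

def split_into_subswarms(positions, n_groups):
    """Randomly partition a particle population into n sub-swarms
    (an illustrative sketch of the multi-swarm idea)."""
    shuffled = positions[:]
    random.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

def shared_global_best(subswarms, f):
    """All sub-swarms share the single best position found so far,
    which each sub-swarm then uses in its own velocity updates."""
    flat = [p for swarm in subswarms for p in swarm]
    best = min(flat, key=f)
    return best, f(best)
```

Each sub-swarm runs its own PSO steps, but the shared best acts as the common social attractor across groups.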
Another approach to multi-swarm particle swarm optimization divides the initialized population of particles randomly into n groups; every group is regarded as a new population, which updates its velocities and positions synchronously, thus obtaining n gbests; the groups are then recombined into one population, making it easy to calculate the real optimum value from the n gbests (Li and Xiao, 2008). This helps in effective exploration of the global search area. Particle quality is usually estimated based on fitness value and not on dimensional behaviour, but some particles may have different dimensional behaviour in spite of the same fitness value. Also, in SPSO each velocity update is considered an overall improvement, while it is possible that some particles move away from the solution with an update. To overcome these problems a new parameter called the particle distribution degree, dis(s), was introduced (Zu et al. 2008), where s is the swarm, dim the dimensionality of the problem, N the equal separation size of the particle swarm, i the dimension index and l the separation area. The bigger dis(s) is, the more centrally the particles crowd together.
Another approach, given by Wang et al. 2009 for avoiding local convergence on multimodal and multidimensional problems, is called group decision particle swarm optimization (GDPSO). It takes every particle's information into account when making group decisions in the early stages (mirroring human intelligence, in which everybody's individual talent and intelligence make up for lack of experience); in later stages the original decision making of particle swarm optimization is used. Thus in GDPSO the search space is enlarged and diversity is increased to solve high-dimension functions.
3.3 Hybridization with other Methodologies
Like other evolutionary computation techniques, particle swarm optimization faces problems regarding its local search ability in optimization problems. More specifically, although PSO is capable of quickly detecting the region of attraction of the global optimizer, once there it struggles to perform the refined local search needed to compute the optimum with high accuracy, unless specific procedures are incorporated into its operators. This triggered the development of Memetic Algorithms (MAs), which incorporate local search components. MAs constitute a class of metaheuristics that combine population-based optimization algorithms with local search procedures.
Recently many local search methods have been incorporated into PSO. Table 2 lists the various search methods combined with PSO in the literature.
Table 2. Local Search Methods applied in PSO
- Chaos searching technique: transform the variables of the problem from the solution space to chaos space and then search for the solution by virtue of the randomicity, orderliness and ergodicity of the chaos variable; a logistic map is used (Wan et al. 2012).
- Chaotic local search + roulette wheel mechanism: the well-known logistic equation $x_{n+1} = \mu x_n (1 - x_n)$, n = 0, 1, 2, ..., is employed for the hybrid PSO, where μ is the control parameter and x is a variable. Although the equation is deterministic, it exhibits chaotic dynamics when μ = 4 and x0 ∉ {0, 0.25, 0.5, 0.75, 1} (Wang et al. 2012).
- Adaptive local search (Xia 2012).
- Intelligent multiple search methods (Mengqi et al. 2012).
- Reduced variable neighborhood search (Zulal and Erdogan 2010).
- Expanding neighborhood search (Marinakis and Marinaki 2010).
- Variable neighborhood descent algorithm (Jia et al. 2011).
- Two-stage hill climbing (Goksal et al. 2012).
- Simulated annealing (Ahandani et al. 2012; Safaeia et al. 2012).
- Extremal optimization (Chen et al. 2010).
Hybridization is a growing area of intelligent systems research which aims to combine the desirable properties of different approaches to mitigate their individual weaknesses. A natural evolution of population-based search algorithms such as PSO can be achieved by integrating methods that have already been tested successfully for solving complex problems (Thangaraj et al., 2011). Researchers have enhanced the performance of PSO by incorporating into it the fundamentals of other popular techniques, such as the selection, mutation and crossover operators of GA and DE.
Many developments combining PSO with Genetic Algorithms have been tried out to balance exploration and exploitation. Table 3 lists the works in the literature on combinations of PSO and GA.
Table 3. Hybridization of PSO with GA
- Hybrid operators whose operator-set parameters are perturbed by stochastic variables observing a Cauchy distribution (Vikas et al. 2007).
- GA+PSO: the algorithm is initialized by a set of random particles which are flown through the search space; in order to obtain approximate nondominated solutions PND, an evolution of these particles is performed. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value (Kuo et al. 2012).
- PSO+GA: first, the algorithm is initialized by a set of random particles which travel through the search space; during this travel an evolution of these particles is performed by integrating PSO and GA. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value (Mengqi et al. 2012).
- PSO + mutation + crossover: the specialist recombination operator is employed for crossover, and three types of mutation operator are utilized (Wu 2011).
- Further PSO-GA hybrids are reported in Mousa et al. 2012, Abd-El-Wahed 2011 and Ahandani et al.
In EPSOM the mutation uses a random number uniformly distributed between 0 and 1; EPSOM gives better results than the random inertia weight and the linearly decreasing inertia weight.
The study carried out by Paterlini and Krink 2006 has shown that DE is much better than PSO in
terms of giving accurate solutions to numerical optimization problems. On the other hand, the
study by Angeline 1998 has shown that PSO is much faster in identifying the promising region
of the global minimizer but encounters problems in reaching the global minimizer. The
complementary strengths of the two algorithms have been integrated to give efficient and reliable
PSO hybrid algorithms.
The DE evolution steps, namely mutation, crossover and selection, are performed on the best particles' positions in each iteration of the PSO by Epitropakis et al. 2012. The differential evolution point generation scheme (mutation and crossover rules) is applied to the new particles generated by PSO in each iteration in Ali and Kaelo 2008. When DE is introduced into PSO at every iteration, the computational cost increases sharply and the fast convergence ability of PSO may be weakened. To integrate PSO with DE more effectively, DE was introduced to PSO only at specified intervals of iterations; in these iterations, the PSO swarm serves as the population for the DE algorithm, and DE is executed for a number of generations (Sedki and Ouazar 2012).
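As a rough sketch of the DE side of such a hybrid, one DE/rand/1/bin generation applied to a population (here standing in for the PSO swarm) might look like this; the F and CR values are illustrative:

```python
import random

def de_step(pop, f, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation (a sketch): mutation, crossover and
    greedy selection applied to each member of the population."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        # mutation: three distinct donors, none equal to the target
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # guarantees one mutated component
        # binomial crossover between target and mutant vector
        trial = [a[d] + F * (b[d] - c[d])
                 if (random.random() < CR or d == j_rand) else pop[i][d]
                 for d in range(dim)]
        # selection: keep the better of target and trial
        new_pop.append(trial if f(trial) < f(pop[i]) else pop[i])
    return new_pop
```

Because selection is greedy, the best fitness in the population never worsens across a generation, which is what makes the periodic DE phase safe to splice into a PSO run.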
Shi et al. 2011 proposed a cellular particle swarm optimization, hybridizing cellular automata (CA) and particle swarm optimization for function optimization. In the proposed methodology, a CA mechanism is integrated into the velocity update to modify the trajectories of particles so as to avoid being trapped in local optima. Two versions of cellular particle swarm optimization were devised: CPSO-inner and CPSO-outer. CPSO-inner uses the information inside the particle swarm for interaction, considering every particle as a cell, whereas CPSO-outer enables cells belonging to the particle swarm to communicate with cells outside it, with every potential solution defined as a cell and every particle of the swarm defined as a smart-cell.
A novel hybrid algorithm based on particle swarm optimization and ant colony optimization, called the hybrid ant particle optimization algorithm (HAP), was proposed by Kiran et al. 2012 to find the global minimum. In the proposed method, ACO and PSO work separately at each iteration and produce their solutions. The best solution is selected as the global best of the system, and its parameters are used to select the new positions of particles and ants at the next iteration. Thus, the ants and particles are motivated to generate new solutions using the system solution parameters.
To offset the sensitivity of the Nelder-Mead simplex method (NMSM) to its initial solutions, the
good global search ability of particle swarm optimization is combined with NMSM, giving a
hybrid particle swarm optimization (hPSO). The choice of initial points in the simplex search
method is predetermined, whereas PSO starts from random initial points (particles). PSO
proceeds by decreasing the difference between each particle's current and best positions, whereas
the simplex search method evolves by moving away from the point with the worst performance.
In each iteration, the worst particle is replaced by a new particle generated by one iteration of
NMSM; all particles are then updated by PSO again. PSO and NMSM are thus performed
iteratively (Ouyang et al. 2009).
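A minimal Python sketch of this interplay follows. It is not the hPSO code of Ouyang et al. 2009: it assumes the Rosenbrock objective, a hand-rolled single Nelder-Mead iteration (with the shrink step omitted for brevity), a simplex built around the worst particle, and illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def nm_step(f, simplex, fvals):
    """One reflect/expand/contract iteration of Nelder-Mead (shrink omitted)."""
    order = np.argsort(fvals)
    simplex, fvals = simplex[order], fvals[order]
    centroid = simplex[:-1].mean(axis=0)           # centroid of all but the worst
    xr = centroid + (centroid - simplex[-1])       # reflect the worst point
    fr = f(xr)
    if fr < fvals[0]:                              # best so far: try expanding
        xe = centroid + 2.0 * (centroid - simplex[-1])
        fe = f(xe)
        simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
    elif fr < fvals[-2]:                           # decent: accept the reflection
        simplex[-1], fvals[-1] = xr, fr
    else:                                          # poor: contract toward centroid
        xc = centroid + 0.5 * (simplex[-1] - centroid)
        fc = f(xc)
        if fc < fvals[-1]:
            simplex[-1], fvals[-1] = xc, fc
    return simplex, fvals

def hpso(n=15, d=3, iters=80, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-2, 2, (n, d))
    vel = np.zeros((n, d))
    pbest = pos.copy()
    pfit = np.array([rosenbrock(p) for p in pos])
    g = pbest[np.argmin(pfit)].copy()
    for _ in range(iters):
        vel = (w * vel + c1 * rng.random((n, d)) * (pbest - pos)
                       + c2 * rng.random((n, d)) * (g - pos))
        pos = pos + vel
        fit = np.array([rosenbrock(p) for p in pos])
        # replace the worst particle with the best vertex after one NM iteration
        worst = np.argmax(fit)
        simplex = np.vstack([pos[worst], pos[worst] + 0.5 * np.eye(d)])
        fvals = np.array([rosenbrock(p) for p in simplex])
        simplex, fvals = nm_step(rosenbrock, simplex, fvals)
        b = np.argmin(fvals)
        pos[worst], fit[worst] = simplex[b], fvals[b]
        better = fit < pfit
        pbest[better], pfit[better] = pos[better], fit[better]
        g = pbest[np.argmin(pfit)].copy()
    return g, pfit.min()
```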
For effectively solving multidimensional problems, Hsu and Gao 2008 further suggested a
hybrid approach incorporating NMSM along with a centre particle in PSO. By combining the
global search of PSO with the local refinement of NMSM, the centre particle, which dwells near
the optimum and attracts many particles towards convergence, further improves the accuracy of
PSO.
Pan et al. 2006 combined PSO with simulated annealing, and also proposed a swarm-core
evolutionary particle swarm optimization, to improve the local search ability of PSO. In PSO
with simulated annealing, a particle moves to its next position not directly through a comparison
with the best position, but with a probability controlled by a temperature function. In swarm-core
evolutionary particle swarm optimization, the swarm is divided into three sub-swarms (core,
near and far, according to distance from the swarm core), each assigned a different task. This
approach works better in cases where the optima change frequently.
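The Metropolis-style acceptance rule described above can be sketched as follows. This generic Python illustration is not the method of Pan et al.: the Rastrigin objective, the decision to apply acceptance at the personal-best update, and the geometric cooling schedule are all illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def sa_pso(n=20, d=4, iters=150, w=0.7, c1=1.5, c2=1.5, t0=5.0, cool=0.97):
    pos = rng.uniform(-5.12, 5.12, (n, d))
    vel = np.zeros((n, d))
    pbest = pos.copy()
    pfit = np.array([rastrigin(p) for p in pos])
    g = pbest[np.argmin(pfit)].copy()
    best, bestf = g.copy(), pfit.min()             # best-ever, tracked greedily
    temp = t0
    for _ in range(iters):
        vel = (w * vel + c1 * rng.random((n, d)) * (pbest - pos)
                       + c2 * rng.random((n, d)) * (g - pos))
        pos = pos + vel
        for i in range(n):
            f = rastrigin(pos[i])
            # Metropolis criterion: a worse position can still replace the
            # personal best with probability exp(-delta/T), letting particles
            # escape local optima while the temperature is high
            if f < pfit[i] or rng.random() < math.exp(-(f - pfit[i]) / temp):
                pbest[i], pfit[i] = pos[i].copy(), f
        temp *= cool                               # geometric cooling schedule
        g = pbest[np.argmin(pfit)].copy()
        i_b = np.argmin(pfit)
        if pfit[i_b] < bestf:
            best, bestf = pbest[i_b].copy(), pfit[i_b]
    return best, bestf
```

As the temperature cools, the acceptance probability for worse moves shrinks towards zero and the rule degenerates into the usual greedy personal-best comparison.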
In highly discrete problems PSO converges early and gets trapped in local optima. To improve
PSO in such settings, Lien and Cheng 2012 proposed a hybrid swarm algorithm called the
particle-bee algorithm (PBA), which combines the Bee algorithm with PSO, imitating the
intelligent behaviours of both bird and honey-bee swarms and integrating their advantages.
4. Applications of PSO
PSO is an important optimization algorithm and, owing to its high adaptability, has applications
in diverse fields such as medicine, finance, economics, security and the military, biology, system
identification etc. Research on PSO applications in some fields, such as electrical engineering
and mathematics, is extensive, whereas in other fields, for example chemical and civil
engineering, it is still the exception. In mathematics, PSO finds application in multimodal
function optimization, multi-objective and constrained optimization, the travelling salesman
problem, data mining, modelling etc. To quote examples from engineering fields, PSO can be
used in materials engineering; in electronics (antennas, image and sound analysis, sensors and
communication); in computer science and engineering (visuals, graphics, games, music,
animation); in mechanical engineering (robotics, dynamics, fluids); in industrial engineering (job
and resource allocation, forecasting, planning, scheduling, sequencing, maintenance, supply
chain management); in traffic management in civil engineering; and in chemical processes in
chemical engineering. In electrical engineering, PSO finds use in generation, transmission, state
estimation, unit commitment, fault detection and recovery, economic load dispatch, control
applications, optimal use of electrical motors, structuring and restructuring of networks, neural
networks and fuzzy systems, and Renewable Energy Systems (RES). As a method of finding
optima of complex search processes by iterating each particle of a population, PSO can provide
answers to the planning, design and control of RES (Yang et al. 2007, Wai et al. 2011).
5. Conclusions
Like other evolutionary algorithms, PSO has become an important tool for optimization and
other complex problem solving. It is an interesting and intelligent computational technique with
a high capability for finding the global minima and maxima of multimodal functions in practical
applications. Particle swarm optimization works on a principle of cooperation and competition
between particles. Many applications of PSO are reported in the literature, such as neural fuzzy
networks, optimization of artificial neural networks, computational biology, image processing
and medical imaging, optimization of electricity generation and network routing, financial
forecasting etc.
A few challenges still remain to be overcome, such as dynamic problems, avoiding stagnation,
and handling constraints and multiple objectives; these are important research points apparent
from the literature. The drawbacks to be worked on are the tendency of particles to converge on
local optima, slow convergence speed, and very large search spaces. PSO sometimes cannot
effectively and accurately solve systems of nonlinear equations. Hybridizing with other
algorithms generally demands a higher number of function evaluations. Most of the proposed
approaches, such as the inertia weight method, adaptive variation and hybrid PSO, do address the
premature convergence problem, but at the cost of low convergence speed. Therefore no
generalized solution applicable to all types of problems can be given. Yet PSO remains a
promising method for the simulation and optimization of difficult engineering and other
problems. To overcome the stagnation of particles in the search space, to improve efficiency,
and to achieve better adjustability, adaptability and robustness of parameters, different
researchers are taking it up as an active research topic and coming up with new ideas applicable
to different problems. Further analysis of the comparative strength of PSO, and of the problems
in using a PSO-based system, is needed.
References
1. Abd-El-Wahed W F, A.A. Mousa, M.A. El-Shorbagy, Integrating particle swarm
optimization with genetic algorithms for solving nonlinear optimization problems,
Journal of Computational and Applied Mathematics 235 (2011) 1446-1453
18. Dorigo, M., V. Maniezzo and A. Colorni, Positive feedback as a search strategy.
Technical Report 91-016, Dipartimento di Elettronica, Politecnico di Milano, IT,
1991, pp. 91-106.
19. Eberhart, R. C., & Shi, Y. Comparing inertia weights and constriction factors in particle
swarm optimization. In Proceedings of the IEEE congress on evolutionary computation
(CEC), 2000 (pp. 84-88), San Diego, CA. Piscataway: IEEE.
20. Eberhart, R. C., & Shi, Y. Tracking and optimizing dynamic systems with particle
swarms. In Proceedings of the IEEE congress on evolutionary computation (CEC) (pp.
94-100), Seoul, Korea. Piscataway: IEEE, 2001.
21. Eberhart, R. C., Simpson, P. K., & Dobbins, R. W. (1996). Computational intelligence
PC tools. Boston: Academic Press.
22. Eiben A. E. and Michalewicz Z. and Schoenauer M. and Smith J. E., Parameter Control
in Evolutionary Algorithms , Lobo F.G. and Lima C.F. and Michalewicz Z. (eds.) ,
Parameter Setting in Evolutionary Algorithms , Springer , 2007 , pp. 19-46
23. Eiben A. E., Smith J. E., Introduction to Evolutionary Computing, Springer-Verlag,
Berlin Heidelberg New York, 2003
24. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social
experience in particle swarm optimization through differential evolution, in: IEEE, 2010.
25. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social
experience in Particle Swarm Optimization through Differential Evolution: A hybrid
approach, Information Sciences 216 (2012) 50-92
26. Feng Y, G. Teng, A. Wang, Y.M. Yao, Chaotic inertia weight in particle swarm
optimization, in: Second International Conference on Innovative Computing, Information
and Control (ICICIC 07), 2007, pp. 4751475.
27. Gang Xu, An adaptive parameter tuning of particle swarm optimization algorithm,
Applied Mathematics and Computation 219 (2013) 4560-4569
28. Goksal, F. P., et al. A hybrid discrete particle swarm optimization for vehicle routing
problem with simultaneous pickup and delivery. Computers & Industrial Engineering
(2012), doi:10.1016/j.cie.2012.01.005
29. Guan-Chun Luh, Chun-Yi Lin, Optimal design of truss-structures using particle swarm
optimization, Computers and Structures 89 (2011) 2221-2232
30. He S, Wu QH et al. A particle swarm optimizer with passive congregation. Biosystems
2004; 78(1-3): 135-147.
31. Holland, J.H., Adaptation in natural and artificial systems: An introductory analysis with
applications to biology, control, and artificial intelligence: The MIT Press,1992.
32. Hongbing Zhu, Chengdong Pu, Euclidean Particle Swarm Optimization, 2009 Second
International Conference on Intelligent Networks and Intelligent Systems, 669-672
33. Hongfeng Wang, Ilkyeong Moon, Shengxiang Yang, Dingwei Wang, A memetic particle
swarm optimization algorithm for multimodal optimization problems, Information
Sciences 197 (2012) 38-52
34. Hsu C C, C.H. Gao, Particle swarm optimization incorporating simplex search and
center particle for global optimization, in: Conference on Soft Computing in
Industrial Applications, Muroran, Japan, 2008.
35. Ji H, J. Jie, J. Li, Y. Tan, A bi-swarm particle optimization with cooperative
coevolution, in: International Conference on Computational Aspects of Social Networks,
IEEE, 2010.
36. Jie X, Deyun X, New metropolis coefficients of particle swarm optimization, in: IEEE,
2008.
37. Jing Cai, W. David Pan, On fast and accurate block-based motion estimation algorithms
using particle swarm optimization, Information Sciences 197 (2012) 53-64
38. Jiuzhong Zhang, Xueming Ding, A Multi-Swarm Self-Adaptive and Cooperative
Particle Swarm Optimization, Engineering Applications of Artificial Intelligence 24
(2011) 958-967
39. Jun Sun, Xiaojun Wu, Wei Fang, Yangrui Ding, Haixia Long, Wenbo Xu, Multiple
sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved
particle swarm optimization, Information Sciences 182 (2012) 93-114
40. Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proceedings of the IEEE
International Conference on Neural Networks, Piscataway, pp: 1942-1948.
41. Kuo R J, Y.J. Syu, Zhen-Yao Chen, F.C. Tien, Integration of particle swarm
optimization and genetic algorithm for dynamic clustering, Information Sciences 195
(2012) 124-140
42. Lei K, Y. Qiu, Y. He, A new adaptive well-chosen inertia weight strategy to
automatically harmonize global and local search ability in particle swarm optimization,
in: ISSCAA, 2006.
43. Li J, X. Xiao, Multi swarm and multi best particle swam optimization algorithm, in:
IEEE, 2008.
44. Li-Chuan Lien, Min-Yuan Cheng, A hybrid swarm intelligence based particle-bee
algorithm for construction site layout optimization, Expert Systems with Applications 39
(2012) 9642-9650
45. Lili Liu, Shengxiang Yang, Dingwei Wang, Force-imitated particle swarm optimization
using the near-neighbor effect for locating multiple optima, Information Sciences 182
(2012) 139-155
46. Liu E, Y. Dong, J. Song, X. Hou, N. Li, A modified particle swarm optimization
algorithm, in: International Workshop on Geosciences and Remote Sensing, 2008, pp.
666-669
47. Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang, Chaotic catfish particle swarm
optimization for solving global numerical optimization problems, Applied Mathematics
and Computation 217 (2011) 6900-6916
48. Li-Yeh Chuang, Sheng-Wei Tsai, and Cheng-Hong Yang, Catfish Particle Swarm
Optimization, 2008 IEEE Swarm Intelligence Symposium.
49. Lovbjerg, M., Improving particle swarm optimization by hybridization of stochastic
search heuristics and self-organized criticality. Master's Thesis, Department of Computer
Science, University of Aarhus, 2002.
50. Maeda Y, N. Matsushita, S. Miyoshi, H. Hikawa, On simultaneous perturbation particle
swarm optimization, in: CEC 2009 IEEE, Proceedings on Eleventh Conference on
Congress on Evolutionary Computation, 2009.
51. Maxfield, A.C.M. and L. Fogel, Artificial intelligence through a simulation of evolution.
Biophysics and Cybernetics Systems: Proceedings of the Second Cybernetics Sciences.
Spartan Books, Washington DC, 1965.
52. Mengqi Hu, Teresa Wu, Jeffery D. Weir, An intelligent augmentation of particle swarm
optimization with multiple adaptive methods, Information Sciences 213 (2012) 68-83
53. Min-Rong Chen, Xia Li, Xi Zhang, Yong-Zai Lu, A novel particle swarm optimizer
hybridized with extremal optimization, Applied Soft Computing 10 (2010) 367-373
54. Min-Shyang Leu, Ming-Feng Yeh, Grey particle swarm optimization, Applied Soft
Computing 12 (2012) 2985-2996
55. Moayed Daneshyari, Gary G. Yen, Constrained Multiple-Swarm Particle Swarm
Optimization Within a Cultural Framework, IEEE Transactions on Systems, Man, and
Cybernetics, Part A: Systems and Humans, Vol. 42, No. 2, March 2012, 475-490.
56. Morteza Alinia Ahandani, Mohammad Taghi Vakil Baghmisheh, Mohammad Ali
Badamchi Zadeh, Sehraneh Ghaemi, Hybrid particle swarm optimization transplanted
into a hyper-heuristic structure for solving examination timetabling problem, Swarm and
Evolutionary Computation 7 (2012) 21-34
57. Mousa A A, M.A. El-Shorbagy, W.F. Abd-El-Wahed, Local search based hybrid
particle swarm optimization algorithm for multiobjective optimization, Swarm and
Evolutionary Computation 3 (2012) 1-14
58. Mustafa Servet Kıran, Mesut Gunduz, Omer Kaan Baykan, A novel hybrid algorithm
based on particle swarm and ant colony optimization for finding the global minimum,
Applied Mathematics and Computation 219 (2012) 1515-1521
59. Nannen V, S. K. Smit and A. E. Eiben, Costs and benefits of tuning parameters of
evolutionary algorithms, Proceedings of the 10th International Conference on Parallel
Problem Solving from Nature, PPSN X, Dortmund, Germany, 2008, pp. 528-538.
60. Nima Safaei, Reza Tavakkoli-Moghaddam, Corey Kiassat, Annealing-based particle
swarm optimization to solve the redundant reliability problem with multiple component
choices, Applied Soft Computing 12 (2012) 3462-3471
61. Ouyang A, Y. Zhou, Q. Luo, Hybrid particle swarm optimization algorithm for solving
systems of nonlinear equations, in: IEEE International Conference on Granular
Computing, 2009, pp. 460-465.
62. Pan G, Q. Dou, X. Liu, Performance of two improved particle swarm optimization in
dynamic optimization environments, in: Proceedings of the Sixth International
Conference on Intelligent Systems Design and Applications, 2006
63. Panigrahi B.K, V.R. Pandi, S. Das, Adaptive particle swarm optimization approach for
static and dynamic economic load dispatch, Energy Conversion and Management 49
(2008) 1407-1415.
64. Paterlini S, T. Krink, Differential evolution and particle swarm optimization in partitional
clustering, Computational Statistics and Data Analysis 50 (1) (2006) 1220-1247
65. Price, K. and R. Storn, 1995. Differential Evolution-a simple and efficient adaptive
scheme for global optimization over continuous spaces. International Computer Science
Institute-Publications.
66. Qi Wu, Hybrid forecasting model based on support vector machine and particle swarm
optimization with adaptive and Cauchy mutation, Expert Systems with Applications 38
(2011) 9070-9075
67. Ratnaweera, A., Halgamuge, S.K., Watson, H.C., 2004. Self-organizing hierarchical
particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions
on Evolutionary Computation 8 (3), 240-255.
68. Sedki A, D. Ouazar, Hybrid particle swarm optimization and differential evolution for
optimal design of water distribution systems, Advanced Engineering Informatics 26
(2012) 582-591
69. Shi, Y., & Eberhart, R. C. A modified particle swarm optimizer. In Proceedings of the
IEEE international conference on evolutionary computation (pp. 69-73). Piscataway:
IEEE, 1998.
70. Shing Wa Leung, Shiu Yin Yuen, Chi Kin Chow, Parameter control system of
evolutionary algorithm that is aided by the entire search history, Applied Soft Computing
12 (2012) 3063-3078
71. Shuguang Zhao, Ponnuthurai N. Suganthan: Diversity enhanced particle swarm optimizer
for global optimization of multimodal problems. IEEE Congress on Evolutionary
Computation 2009: 590-597
72. Thangaraj, R., M. Pant, A. Abraham and P. Bouvry, Particle swarm optimization:
Hybridization perspectives and experimental illustrations. Applied Mathematics and
Computation, 217, 2011, 5208-5226
73. Vikas Singh, Deepak Singh, Ritu Tiwari, Discrete Optimization Problem Solving with
three Variants of Hybrid Binary Particle Swarm Optimization, BADS '11, June 14,
2011, ACM, 43-48
74. Voss M S, Principal component particle swarm optimization, in: IEEE Congress on
Evolutionary Computation, vol. 1, 2005, pp. 298-305.
75. Wai R J, S. Cheng, Y.-C. Chen, in: 6th IEEE Conference on Industrial Electronics and
Applications (ICIEA), 2011.
76. Wan Z, Guangmin Wang, Bin Sun, A hybrid intelligent algorithm by combining particle
swarm optimization with chaos searching technique for solving nonlinear bilevel
programming problems, Swarm and Evolutionary Computation (2012), http://dx.doi.org/
10.1016/j.swevo.2012.08.001
77. Wang L, Z. Cui, J. Zeng, Particle swarm optimization with group decision making, in:
Ninth International Conference on Hybrid Intelligent Systems, 2009.
78. Wei J, L. Guangbin, L. Dong, Elite particle swarm optimization with mutation, in: Asia
Simulation Conference 7th Intl. Conf. on Sys. Simulation and Scientific Computing,
IEEE, 2008, pp. 800-803.
79. Wong W K, S.Y.S. Leung, Z.X. Guo, Feedback controlled particle swarm optimization
and its application in time-series prediction, Expert Systems with Applications 39 (2012)
8557-8572
80. Xiaohua Xia, Particle Swarm Optimization Method Based on Chaotic Local Search and
Roulette Wheel Mechanism, Physics Procedia 24 (2012) 269-275
81. Xiaolei Wang, Conflict Resolution in Product Optimization Design based on Adaptive
Particle Swarm Optimization, Procedia Engineering 15 (2011) 4920-4924
82. Xingjuan Cai, Zhihua Cui, Jianchao Zeng, Ying Tan, Dispersed particle swarm
optimization, Information Processing Letters 105 (2008) 231-235
83. Xu JJ, Xin ZH. An extended particle swarm optimizer. Parallel and Distributed
Processing Symposium, 2005. Proceedings of the 19th IEEE International, Denver, CO,
U.S.A., 2005.
84. Yang B, Y. Chen, Z. Zhao, Survey on applications of particle swarm optimization in
electric power systems, in: IEEE International Conference, May 30-June 1, 2007.
85. Yang Shi, Hongcheng Liu, Liang Gao, Guohui Zhang, Cellular particle swarm
optimization, Information Sciences 181 (2011) 4460-4493
86. Yannis Marinakis, Magdalene Marinaki, Georgios Dounias, A hybrid particle swarm
optimization algorithm for the vehicle routing problem, Engineering Applications of
Artificial Intelligence 23 (2010) 463-472
87. Yannis Marinakis, Magdalene Marinaki, A Hybrid Multi-Swarm Particle Swarm
Optimization algorithm for the Probabilistic Traveling Salesman Problem, Computers &
Operations Research 37 (2010) 432-442
88. Yau-Tarng Juang, Shen-Lung Tung, Hung-Chih Chiu, Adaptive fuzzy particle swarm
optimization for global optimization of multimodal functions, Information Sciences 181
(2011) 4539-4549
89. Ying Wang, Jianzhong Zhou, Chao Zhou, Yongqiang Wang, Hui Qin, Youlin Lu, An
improved self-adaptive PSO technique for short-term hydrothermal scheduling, Expert
Systems with Applications, Volume 39, Issue 3, 2012, pp 2288-2295.
90. Ying-Nan Zhang, Hong-Fei Teng, Detecting particle swarm optimization, Concurrency
and Computation: Practice and Experience, 2009; 21:449-473
91. Ying-Nan Zhang, Qing-Ni Hu and Hong-Fei Teng, Active target particle swarm
optimization, Concurrency and Computation: Practice and Experience, 2007; 20:29-40
92. Zu W, Y.L. Hao, H.T. Zeng, W.Z. Tang, Enhancing the particle swarm optimization
based on equilibrium of distribution, in: Control and Decision Conference, China, 2008,
pp. 285-289.