
Review of Particle Swarm Optimization: Basic Concepts, Variants and Applications
Abstract: Particle swarm optimization (PSO) is a stochastic, population-based evolutionary algorithm inspired by nature. PSO has gained wide appeal due to its ease of implementation, its characteristic memory of previous best solutions, and its potential for specialization and hybridization, and it has few parameters to adjust. This paper presents an extensive review of the literature on the concept, development and modification of particle swarm optimization. The study comprises a snapshot of particle swarm optimization from the authors' perspective, including variations of the algorithm, modifications and refinements introduced to prevent swarm stagnation, and hybridization of PSO with other heuristic algorithms. Issues related to parameter tuning are also discussed.
1. Introduction

Conventional computing paradigms often have difficulty in dealing with real world problems, such as those characterized by noisy or incomplete data or multimodality, because of their inflexible construction. Natural computing paradigms appear to be a suitable alternative for solving such problems. Natural systems, in which simple individuals work together, have inspired several computing paradigms that can be used where conventional computing techniques perform unsatisfactorily. In the past few decades several optimization algorithms have been developed that are based on analogies with nature. These are mostly population-based meta-heuristics, also called general purpose algorithms because of their applicability to a wide range of problems.
Global optimization techniques are fast growing tools that can overcome most of the limitations
found in derivative-based techniques. Some popular global optimization algorithms include
Genetic Algorithms (GA) (Holland, 1992), Particle Swarm Optimization (PSO) (Kennedy and
Eberhart, 1995), Differential Evolution (DE) (Price and Storn, 1995), Evolutionary Programming
(EP) (Maxfield and Fogel, 1965), Ant Colony Optimization (ACO) (Dorigo et al., 1991) etc. A
chronological order of the development of some of the popular nature inspired meta-heuristics is
given in Fig. 1. These algorithms have proved their mettle in solving complex and intricate
optimization problems arising in various fields.
PSO is a well known and popular search strategy that has gained widespread appeal amongst
researchers and has been shown to offer good performance in a variety of application domains,
with potential for hybridization and specialization. It is a simple and robust strategy based on the social and cooperative behavior shown by various species, such as flocks of birds and schools of fish. PSO and its variants have been effectively applied to a wide range of benchmark problems as well as real life optimization problems.
Since PSO has undergone many changes in its structure and has been tried with many parameter adjustment methods, a large body of work has been carried out in the past decade. This paper presents a state of the art review of previous studies that have proposed modified versions of particle swarm optimization and applied them to optimization problems in different fields. The review demonstrates that particle swarm optimization and its modified versions have widespread application in complex optimization domains, and that PSO is currently a major research topic, offering an alternative to the more established evolutionary computation techniques that may be applied in many of the same domains.

Figure 1: Popular Nature Inspired meta-heuristics in chronological order


The remainder of this paper is organized as follows. In Section 2, the basic concepts of PSO are explained along with its further evolution. Section 3 reviews the existing literature on PSO. Section 4 presents the wide range of PSO application areas, followed by conclusions in Section 5.
2. Concept of Particle Swarm Optimization and its Evolution

Kennedy and Eberhart developed the PSO algorithm by considering the behavior of swarms in nature, such as flocks of birds and schools of fish. PSO models particles drawn from natural swarms whose communication follows the spirit of evolutionary computation, combining self-experience with social experience. In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each solution, called a particle, flies in the problem search space looking for the optimal position to land. A particle, over the generations, adjusts its position according to its own experience as well as the experience of neighboring particles. PSO thus combines a local search method (through self experience) with a global search method (through neighboring experience), attempting to balance exploration and exploitation. A particle's status in the search space is characterized by two factors: its position and velocity. The new velocity and position of a particle are updated according to the following equations:

v_i(t+1) = v_i(t) + c1·r1·(pbest_i - x_i(t)) + c2·r2·(gbest - x_i(t))        (1)

x_i(t+1) = x_i(t) + v_i(t+1)        (2)

where v_i(t+1) is the new velocity after the update, v_i(t) is the velocity of the particle before the update, r1 and r2 are random numbers generated within the range [0, 1], c1 is the cognitive learning factor, c2 the social learning factor, pbest_i and gbest are the particle's individual best (i.e., local-best position, or its experience) and the global best (the best position among all particles in the population) respectively, and x_i(t) is the particle's current position in the search space.

The algorithm steps of PSO are given as follows:

1. Initialize a population of particles with random velocities and positions in d dimensions of the problem space.
2. For each particle, evaluate the objective fitness function in d variables.
3. Compare the particle's fitness value with its pBest value. If the current value is better than pBest, set pBest equal to the current value and the pBest location equal to the current location in the d-dimensional search space.
4. Compare the fitness evaluation with the population's overall previous best. If the current value is better than gBest, reset gBest to the current particle's array index and value.
5. Change the velocity and position of the particles with equations (1) and (2), respectively.
6. Loop to step 2 until the stopping criterion is met, usually a sufficiently good fitness or a maximum number of iterations.
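To make the update rules in equations (1) and (2) and the steps above concrete, the following is a minimal, illustrative Python sketch of a global-best PSO loop; the sphere objective, population size, iteration count and search bounds are arbitrary choices for demonstration and are not taken from any specific paper.

```python
import numpy as np

def pso(objective, dim=5, n_particles=30, iters=200, c1=2.0, c2=2.0, seed=0):
    """Minimal global-best PSO following equations (1) and (2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))    # positions
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))    # velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)   # personal best fitness
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]   # global best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (1): velocity update from cognitive and social terms
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2): position update
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        # Steps 3-4: update personal and global bests
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# Example: minimize the sphere function
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)))
print(best_x, best_f)
```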

As PSO was applied to solve problems in different domains, several considerations have been
taken into account to facilitate the convergence and prevent an explosion of the swarm. A few
important and interesting modifications in the basic structure of PSO focus on limiting the
maximum velocity, adding inertia weight and constriction factor. Details are discussed in the
following:
2.1 Evolution of PSO models

The basic model of PSO was found to suffer from particles being accelerated out of the search space. To control this, Eberhart et al. (1996) put forward a clamping scheme that limited the speed of each particle to a range [-vmax, vmax], with vmax usually being somewhere between 0.1 and 1.0 times the maximum position value of the particle.
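As an illustration of the clamping scheme described above, the velocity limit can be applied after each velocity update; the bound of 5.0 and the fraction of 0.5 below are illustrative assumptions, not prescribed values.

```python
import numpy as np

def clamp_velocity(v, x_max=5.0, fraction=0.5):
    """Limit each velocity component to [-vmax, vmax], with vmax a fraction
    (typically 0.1-1.0) of the dynamic range x_max of each variable."""
    vmax = fraction * x_max
    return np.clip(v, -vmax, vmax)

# Example: clamp a velocity vector whose components exceed the limit
print(clamp_velocity(np.array([4.0, -7.2, 1.3])))
```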
However, by far the most problematic characteristic of PSO is its tendency to converge prematurely on early best solutions. Many strategies have been developed in attempts to overcome this, but by far the most popular are inertia and constriction. The inertia term w was introduced by Shi and Eberhart (1998b). The inertia weight controls the exploration of the search space. The velocity update function with inertia weight is given in equation (3).
v_i(t+1) = w·v_i(t) + c1·r1·(pbest_i - x_i(t)) + c2·r2·(gbest - x_i(t))        (3)

Further analysis led to other strategies for adjusting the inertia weight. Adapting the inertia weight using a fuzzy system (Eberhart and Shi 2000) was reported to significantly improve PSO performance. A widely used strategy is to initially set w to 0.9 and reduce it linearly to 0.4, allowing initial exploration followed by acceleration toward an improved global optimum. Another effective strategy is to use an inertia weight with a random component rather than a time-decreasing value; Eberhart and Shi (2001) successfully used w = U(0.5, 1).
Clerc's analysis of the iterative system led him to propose a strategy for placing constriction coefficients on the terms of the update formulas; Clerc and Kennedy (2002) noted that there can be many ways to implement the constriction coefficient. One of the simplest methods of incorporating it is the following:

)}

where 1 2 , 4

(4)

(5)

When Clercs constriction method is used, is commonly set to 4.1, 1 = 2, and the constant
multiplier is approximately 0.7298.
The constricted particles will converge without using any Vmax at all. However, subsequent
experiments and applications (Eberhart and Shi 2000) concluded that a better approach to use as
a prudent rule of thumb is to limit Vmax to Xmax, the dynamic range of each variable on each
dimension, in conjunction with (4) and (5).
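The short sketch below computes the constriction coefficient of equation (5) for the commonly cited setting φ = 4.1 and indicates where it enters the velocity update of equation (4); the variable names are chosen for this example only.

```python
import math

def constriction(phi: float) -> float:
    """Constriction coefficient chi from equation (5), valid for phi > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

phi = 4.1                 # phi = c1 + c2, e.g. c1 = c2 = 2.05
chi = constriction(phi)   # approximately 0.7298
print(chi)

# velocity update of equation (4) for one particle (illustrative, scalars):
# v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```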
3. PSO: A Review

As a member of the family of stochastic search algorithms, PSO has two major drawbacks (Lovbjerg, 2002). The first drawback is that PSO's performance is problem-dependent. This dependency is usually caused by the way parameters are set, i.e. assigning different parameter settings to PSO will result in high performance variance. In general, no single parameter setting exists which can be applied to all problems and performs dominantly better than other parameter settings. The common way to deal with this problem is to use self-adaptive parameters. The second drawback of PSO is its premature character, i.e. it may converge to a local minimum. According to Angeline (1998a, b), although PSO converges to an optimum much faster than other evolutionary algorithms, it usually cannot improve the quality of the solutions as the number of iterations is increased. PSO usually suffers from premature convergence when highly multi-modal problems are being optimized.
Several efforts have been made to overcome or reduce the impact of these disadvantages. Some of them have already been discussed, including the inertia weight, the constriction factor and so on. Further modifications as well as hybridization of PSO with other kinds of optimization algorithms are discussed in this section.
The changes attempted in the PSO methodology can be categorized into
1. Parameter setting
2. Changes in methodology
3. Hybridization with other methodologies
3.1 Parameter Setting
In solving problems with PSO, its properties affect performance. These properties depend on the parameter setting, and hence users need to find apt values for the parameters to optimize performance. The interactions between the parameters have a complex behavior, so each parameter gives a different effect depending on the values set for the others. Without prior knowledge of the problem, parameter setting is difficult and time consuming. The two major ways of parameter setting are parameter tuning and parameter control (Eiben and Smith 2003). Parameter tuning is the commonly practiced approach, which amounts to finding appropriate values for the parameters before running the algorithm. Parameter control steadily modifies the control parameter values during the run. This can be achieved through deterministic, adaptive or self-adaptive techniques (Boyabalti and Sabuncuoglu 2007). A brief discussion of the adjustment of the parameters of particle swarm optimization is summed up from the literature review.
The issue of parameter setting has been relevant since the beginning of EA research and practice. Eiben et al. (2007) emphasized that parameter values greatly determine the success of an EA in finding an optimal or near-optimal solution to a given problem. Choosing appropriate parameter values is a demanding task. There are many methods to control the parameter setting during an EA run. Parameters are not independent, but trying all combinations is impossible, and the parameter values found are appropriate only for the tested problem instances. Nannen et al. (2008) state that a major problem of parameter tuning is the weak understanding of the effects of EA parameters on algorithm performance.
3.1.1 Inertia Weight Adaption Mechanisms
The inertia weight parameter was originally introduced by Shi and Eberhart (1998a). A range of constant values was tested for w; large values, i.e. w > 1.2, resulted in weak exploration, and with low values of this parameter, i.e. w < 0.8, PSO tends to get trapped in local optima. They suggest that with w within the range [0.8, 1.2], PSO finds the global optimum in a reasonable number of iterations. Shi and Eberhart (1998) analyzed the impact of the inertia weight and maximum velocity on the performance of the PSO.
A random value of inertia weight is used to enable the PSO to track the optima in a dynamic
environment in Eberhart and Shi 2001.
w = 0.5 + rand()/2        (6)

where rand() is a random number in [0,1]; w is then a uniform random variable in the range
[0.5,1].
Most PSO variants use time-varying inertia weight strategies in which the value of the inertia weight is determined based on the iteration number. These methods can be either linear or non-linear, and increasing or decreasing. A linearly decreasing inertia weight was introduced and shown to be effective in improving the fine-tuning characteristic of the PSO. In this method, the value of w is linearly decreased from an initial value (wmax) to a final value (wmin) according to the following equation [refs 5, 8, 20, 24, 31, 32, 56, 58, 67, 92]:
w = wmax - (wmax - wmin)·Iteration_i / Iteration_max        (7)

where Iteration_i is the current iteration of the algorithm and Iteration_max is the maximum number of iterations the PSO is allowed to run. This strategy is very common and most PSO algorithms adjust the value of the inertia weight using this updating scheme.
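A direct implementation of the linearly decreasing scheme of equation (7) is shown below; the bounds wmax = 0.9 and wmin = 0.4 follow the commonly cited strategy mentioned earlier and are illustrative rather than mandatory.

```python
def linear_inertia(iteration: int, max_iterations: int,
                   w_max: float = 0.9, w_min: float = 0.4) -> float:
    """Equation (7): linearly decrease w from w_max to w_min over the run."""
    return w_max - (w_max - w_min) * iteration / max_iterations

# Example: inertia weight at the start, middle and end of a 200-iteration run
print(linear_inertia(0, 200), linear_inertia(100, 200), linear_inertia(200, 200))
```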
Chatterjee and Siarry 2006 proposed a nonlinear decreasing variant of inertia weight in which at
each iteration of the algorithm, w is determined based on the following equation:
w = (wmax - wmin)·((Iteration_max - Iteration_i) / Iteration_max)^n + wmin        (8)

where n is the nonlinear modulation index. Different values of n result in different variations of
inertia weights all of which start from wmax and end at wmin.
Feng et al. (2007) use a chaotic model of inertia weight in which a chaotic term is added to the linearly decreasing inertia weight. The proposed w is as follows:
w = (wmax - wmin)·(Iteration_max - Iteration_i) / Iteration_max + wmin·z        (9)

where z = 4z(1 - z) is the logistic map. The initial value of z is selected randomly within the range (0, 1). Lei et al. (2006) used a Sugeno function as the inertia weight decline curve:
w = (1 - λ) / (1 + s·λ)        (10)
where λ = iter/iter_max and s is a constant larger than 1.
In Alireza and Hamidreza (2011), every particle in the swarm dynamically adjusts its inertia weight according to feedback taken from the particles' best memories, using a measure called the adjacency index (AI), which characterizes the nearness of an individual's fitness to the real optimal solution. Based on this index and a positive constant in the range (0, 1], every particle decides how to adjust the value of its inertia weight.


To resolve conflicts in engineering optimization designs with highly complex and nonlinear constraints, the inertia weight can be adaptively changed according to the current evolution speed and aggregation degree of the swarm (Xiaolei 2011). This provides the algorithm with dynamic adaptability and enhances its search ability and convergence performance.
An adaptive inertia weight integer programming particle swarm optimization was proposed by Chunguo et al. (2012) to solve integer programming problems. Based on Grey relational analysis, two grey-based parameter automation strategies for particle swarm optimization were proposed by Liu and Yeh (2012): one for the inertia weight and the other for the acceleration coefficients. With these approaches, each particle has its own inertia weight and acceleration coefficients, whose values depend on the corresponding grey relational grade, and the inertia weight is adapted accordingly.

An adaptive parameter tuning of particle swarm optimization based on velocity information was proposed by Gang Xu (2013). This algorithm introduces the velocity information, defined as the average absolute value of the velocity of all particles, based on which the inertia weight is dynamically adjusted:
w(t+1) = max{ w(t) - Δw, wmin }   if the average absolute velocity is at or above its ideal value
w(t+1) = min{ w(t) + Δw, wmax }   otherwise        (13)

where w(t) is the inertia weight, wmax and wmin are the largest and smallest inertia weights, and Δw is the step size of the inertia weight.
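The sketch below illustrates one plausible reading of this velocity-information rule, using the piecewise form reconstructed above; both the rule as coded and the ideal-velocity value are assumptions made for illustration.

```python
import numpy as np

def adapt_inertia(w, v, v_ideal, dw=0.1, w_min=0.4, w_max=0.9):
    """Adjust w by a step dw based on the swarm's average absolute velocity."""
    v_avg = float(np.mean(np.abs(v)))   # average absolute velocity of all particles
    if v_avg >= v_ideal:                # swarm moving too fast: slow it down
        return max(w - dw, w_min)
    return min(w + dw, w_max)           # swarm moving too slowly: speed it up

# Example with a 2-particle, 2-dimensional velocity array
print(adapt_inertia(0.7, np.array([[0.2, -0.4], [1.5, 0.9]]), v_ideal=0.5))
```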
Arumugam and Rao (2008) used the ratio of the global best fitness and the average of the particles' local best fitness to determine the inertia weight in each iteration:
w = 1.1 - f(gbest) / average( f(pbest) )        (14)

In the adaptive particle swarm optimization algorithm proposed by Panigrahi et al. 2008,
different inertia weights were assigned to different particles based on the ranks of the particles.
w_i = w_min + (w_max - w_min)·Rank_i / Total_population        (15)

where Rank_i is the position of the ith particle when the particles are ordered based on their personal best fitness, and Total_population is the number of particles. The rationale of this approach is that the positions of the particles are adjusted in such a way that highly fit particles move more slowly than less fit ones.
Nickabadi et al. 2011, proposed an adaptive inertia weight methodology based on the success
rate of the swarm as follows.
w(t) = (w_max - w_min)·Ps(t) + w_min        (16)

where Ps(t) is the success percentage of the swarm, i.e. the average success of the particles in the swarm; the success of a particle is based on its progress toward the optimum.
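A compact sketch of this success-rate idea, under the common interpretation that a particle is "successful" in an iteration when it improves its personal best (an assumption used here for illustration), is:

```python
import numpy as np

def success_rate_inertia(improved, w_min=0.4, w_max=0.9):
    """Equation (16): w(t) = (w_max - w_min) * Ps(t) + w_min.

    `improved` is a boolean array, True where a particle improved its
    personal best in the current iteration.
    """
    ps = float(np.mean(improved))        # success percentage of the swarm
    return (w_max - w_min) * ps + w_min

# Example: 12 of 30 particles improved their personal best this iteration
print(success_rate_inertia(np.array([True] * 12 + [False] * 18)))
```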
3.1.2 Acceleration Coefficients Adaption Mechanisms
The acceleration coefficients c1 (cognitive) and c2 (social) in PSO scale the stochastic acceleration terms that pull each particle towards the pBest and gBest positions. Suitable fine-tuning of the cognitive and social parameters c1 and c2 may result in faster convergence of the algorithm and alleviate the risk of settling in a local minimum.
Initially the values of c1 and c2 were fixed to a constant value of 2 (Kennedy and Eberhart 1995). Rather than keeping the acceleration factors fixed, Suganthan and Zhao (2009), through empirical studies, suggested that the acceleration coefficients should not always be set to 2 if better solutions are to be obtained. From the literature it can be noted that the majority of works set c1 and c2 to 2; the next most common values are around 1.49, and a few works use 2.05 for each.
Since c1 is the cognitive factor, driving the swarm to search the space effectively (exploration), and c2 the social factor, pulling the swarm toward the optimum (convergence), the literature suggests that larger values of c1 and smaller values of c2 in the initial stages help locate the region of the global optimum, while smaller values of c1 and larger values of c2 in later stages ease convergence.
Many adaptive methods have been proposed in the literature to alter the values of c1 and c2 during evolution. To overcome premature convergence, Ying et al. (2012) dynamically redirect the evolution direction of each particle by adjusting the two sensitive parameters, i.e. the acceleration coefficients of PSO, during the evolution process, using constant factors d1 and d2 and the maximum generation number MaxGen.


Multi swarm and multi best particle swarm optimization was proposed by Li and Xiao, 2008,
according to author advantage of information at every position of particle should be taken unlike
SPSO in which use of information at best position only is made (P best and G best), for this
author suggests new values of C1, C2 instead of a constant value of 2 to be replaced in SPSO
equation, as

(19)

(20)

Pfit is fitness value of P best and Gfit is fitness value of G best.



To improve optimization efficiency and stability, new metropolis coefficients were introduced by Jie and Deyun (2008), representing a fusion of simulated annealing and particle swarm optimization. The metropolis coefficients Cm1 and Cm2 vary according to the distance between the current and best positions and according to the generation of the particles. This method gives better results in fewer iteration steps and less time.

The acceleration coefficients are adapted dynamically in Ratnaweera et al. (2004) and, in Arumugam et al. (2008), by balancing the cognitive and the social components. Zhan et al. (2009) adjust the acceleration coefficients based on an evolutionary state estimator derived from Euclidean distances, in order to perform a global search over the entire search space with faster convergence. These adaptive methodologies adjust the parameters based on strategies independent of the data used in the applications.
Yau-Tarng et al. (2011) proposed an adaptive fuzzy particle swarm optimization algorithm utilizing fuzzy set theory to adjust the PSO acceleration coefficients adaptively, and were thereby able to improve the accuracy and efficiency of searches. A PSO in which the acceleration coefficient c1 decreases linearly while c2 increases linearly over time is proposed in Jing and David (2012) and Gang et al. (2012). This promotes more exploration in the early stages, while encouraging convergence to a good optimum near the end of the optimization process by attracting particles more strongly toward the global best position:
c1(t) = c1,max - (c1,max - c1,min)·Iter / Iter_max        (23)

c2(t) = c2,min + (c2,max - c2,min)·Iter / Iter_max        (24)
where c1,min, c1,max, c2,min, and c2,max can be set to 0.5, 2.5, 0.5, and 2.5, respectively, and Iter_max is the maximum number of iterations.
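The time-varying scheme of equations (23) and (24) can be written directly as below; the bounds 0.5 and 2.5 are the values quoted above.

```python
def time_varying_coefficients(iteration: int, max_iterations: int,
                              c_min: float = 0.5, c_max: float = 2.5):
    """Equations (23)-(24): c1 decreases and c2 increases linearly over time."""
    frac = iteration / max_iterations
    c1 = c_max - (c_max - c_min) * frac   # cognitive coefficient: 2.5 -> 0.5
    c2 = c_min + (c_max - c_min) * frac   # social coefficient:    0.5 -> 2.5
    return c1, c2

# Example: coefficients at the start, middle and end of a 200-iteration run
print(time_varying_coefficients(0, 200), time_varying_coefficients(100, 200),
      time_varying_coefficients(200, 200))
```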
To improve the convergence speed, Xingjuan et al. (2008) proposed a new setting for the social coefficient by introducing an explicit selection pressure, in which each particle decides its search direction toward the personal memory or the swarm memory. The dispersed social coefficient of particle j at time t is set as follows:
c2,j(t) = c_low + (c_up - c_low)·Grade_j(t)        (25)

where c_up and c_low are two predefined numbers, c2,j(t) represents the social coefficient of particle j at time t, and Grade_j(t) is an index based on the performance differences (fitness values) of the particles.

Liu and Yeh (2012) also proposed an adaptive method based on Grey relational analysis for adjusting the acceleration coefficients during evolution. Each particle's acceleration coefficients are determined from its grey relational grade, with c2 moving from an initial value toward a final value C_final satisfying C_max ≥ C_final ≥ C_min.
3.2 Modification in Methodology
Like other evolutionary computation techniques, particle swarm optimization has the drawback of premature convergence. To reach an optimal value, particle swarm optimization depends on the interaction between particles. If this interaction is restrained, the algorithm's searching capacity will be limited and it may take a long time to escape local optima.
To overcome these shortcomings of PSO, many modifications have been made to the methodology, such as altering the velocity update function, the fitness function, etc. The changes to the velocity update function noted in the literature are listed in Table 1.
Table 1. Modifications in the velocity update

Acceleration concept (Liu et al. 2012): the acceleration produced by the near-neighborhood attraction and repulsion forces is considered in the velocity update.
Diversity (Lu and Lin 2011): the direction of the particles' movement is controlled by the diversity information of the particles.
Chaotic maps (Yang et al. 2012): an improved logistic map, namely a double-bottom map, is applied for local search.
Acceleration coefficient c3 (refs 13, 14, 15, 16 and 31): swarm behavior or neighborhood behavior is considered in the velocity update through an additional coefficient.
Detecting particle swarm (Zang and Teng 2009): detecting particles search along approximate spiral trajectories created by a new velocity updating formula in order to find better solutions.
Euclidean PSO (Zhu and Pu 2009): if the global best fitness has not been updated for K iterations, the particle velocities receive an interference factor that makes most particles fly out of the local optimum, while the best particle keeps performing local search.
Modified PSO (Deepa and Sugumaran 2011): better optimization results are achieved by splitting the cognitive component of the general PSO into two different components, called the good experience component and the bad experience component.
Diversity-maintained QPSO (Jun Sun et al. 2012): based on an analysis of quantum-behaved PSO, a diversity control strategy is integrated into QPSO to enhance the global search ability of the particle swarm.

A parallel asynchronous PSO (PAPSO) algorithm was proposed to enhance computational efficiency (Koh et al. 2006). This design follows a master/slave paradigm. The master processor holds the queue of particles ready to send to slave processors and performs all decision-making processes such as velocity/position updates and convergence checks; it does not perform any function evaluations. The slave processors repeatedly evaluate the analysis function using the particles assigned to them.
A new approach extending PSO to solve optimization problems using a feedback control mechanism (FCPSO) was proposed by Wong et al. (2012). The proposed FCPSO consists of two major steps. First, by evaluating the fitness value of each particle, a simple particle evolutionary fitness function is designed to control parameters such as the acceleration coefficient, refreshing gap, learning probabilities and number of potential exemplars automatically. Through this simple particle evolutionary fitness function, each particle has its own search parameters in the search environment.
Chuang et al. (2008) proposed catfish particle swarm optimization (CatfishPSO), a novel optimization algorithm in which catfish particles are incorporated into linearly decreasing weight particle swarm optimization (LDWPSO). The catfish particles initialize a new search from the extreme points of the search space when the gBest fitness value (the global optimum at each iteration) has not changed for a given time, which creates further opportunities to find better solutions by guiding the whole swarm to promising new regions of the search space and accelerating convergence.
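As a rough illustration of the catfish idea described above, and not of the exact published procedure, the following sketch reinitializes the worst-performing particles at the extreme points of the search space once the global best has stagnated for a chosen number of iterations; the stagnation threshold and the 10% fraction are assumptions for this example.

```python
import numpy as np

def introduce_catfish(x, fitness, stagnation, threshold, x_min, x_max, rng):
    """Reinitialize the worst 10% of particles at search-space extremes
    when gbest has not improved for `threshold` iterations (minimization)."""
    if stagnation < threshold:
        return x
    n, dim = x.shape
    n_catfish = max(1, n // 10)
    worst = np.argsort(fitness)[-n_catfish:]          # indices of worst particles
    # place each catfish at either the lower or upper bound of every dimension
    x[worst] = np.where(rng.random((n_catfish, dim)) < 0.5, x_min, x_max)
    return x
```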
Chaotic maps were introduced into catfish particle swarm optimization (CatfishPSO), to increase
the search capability of CatfishPSO via the chaos approach (Chuang et al. 2011). Simple
CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization
(PSO). The introduced catfish particles improve the performance of PSO considerably. Unlike
other ordinary particles, the catfish particles initialize a new search from extreme points of the
search space when the gbest fitness value (global optimum at each iteration) has not changed for
a certain number of consecutive iterations.
Liu et al. (2008) suggested two improvement strategies. In the first approach, the initial solutions are generated within a limited range and then, according to the extent of the search range, steps of a size proportional to the population size are used to distribute the points uniformly; this combination of uniform and random solutions in the initial iterations enhances diversity. In the second approach, the authors suggest that speed and location updating is not necessary after each iteration: coordinates can be assigned to each optimal fitness value, i.e. a certain incremental or decremental value is intercepted from every dimension of each optimal solution and then compared with the corresponding fitness value.
According to Maeda et al. (2009), one reason for the premature convergence of PSO is that the intensity of the search remains the same throughout the process and the process is treated as a whole rather than being divided into exclusive segments. Based on the hierarchical control concept of control theory, a two-layer particle swarm optimization was introduced, in which one swarm is at the top and L swarms are at the bottom layer, where parallel computation takes place. The total number of particles thus becomes L times the number of particles in each bottom-layer swarm, which increases diversity. When the velocity of particles in the bottom layer falls below a marginal value and their positions can no longer be updated by that velocity, the particle velocities are re-initialized without considering the former strategy, to avoid premature convergence.
Voss (2005) introduced principal component particle swarm optimization (PCPSO), which can be an economical alternative for a large number of engineering problems, as it reduces the time complexity of high dimensional problems. In PCPSO particles fly in two separate coordinate frames at the same time, one being the original space and the other a rotated space. The new z locations are then mapped back into x space using a parameter that defines the fraction of the rotated-space flight that is considered, and a weighted average is taken. pBest and gBest are found and updated using the covariance.

Ji et al. (2010) proposed a bi-swarm PSO with cooperative coevolution. The swarm consists of two parts: the first swarm is generated randomly in the whole search space, and the second swarm is generated periodically, centered between the largest and smallest bounds of the best and worst particles of the first swarm in all directions. The two swarms share information with each other during each generation, and velocities and positions are updated following the SPSO equations (1) and (2). The best particles of swarm one are then compared with those of swarm two, and in this way the overall best particle is found. Experiments show that this method performs better than SPSO regarding convergence, speed and precision.
Multi-Swarm Self-Adaptive and Cooperative Particle Swarm Optimization (MSCPSO), based on four sub-swarms, was proposed to avoid falling into local optima, improve diversity and achieve better solutions (Zhang and Ding 2011). Particles in each sub-swarm share a single global historical best optimum to enhance cooperation. In addition, the inertia weight of a particle in each sub-swarm is modified subject to the fitness information of all particles, and an adaptive strategy is employed to control the influence of the historical information so as to create greater search potential. To keep the balance between global exploration and local exploitation, the particles in each swarm take advantage of the shared information to cooperate with each other and guide their own evolution. To increase the diversity of the particles and avoid falling into a local optimum, a diversity operation is adopted to guide the particles out of local optima and toward the global best position smoothly.
Another approach to multi-swarm particle swarm optimization was proposed in which the initialized population of particles is randomly divided into n groups; every group is regarded as a new population that updates its velocities and positions synchronously, so that n gBests are obtained, after which the groups are recombined into one population, making it easy to determine the real optimum from the n gBests (Li and Xiao, 2008). This helps in effective exploration of the global search area. A particle's quality is usually estimated only from its fitness value and not from its dimensional behavior, yet particles may have different dimensional behavior despite identical fitness values. Also, in SPSO every velocity update is considered an overall improvement, while it is possible that some particles move away from the solution with such an update. To address these problems, a new parameter called the particle distribution degree, dis(s), was introduced (Zu et al. 2008), where s is the swarm, dim is the dimensionality of the problem, N is the equal separation size of the particle swarm, i indexes the dimensions and l the separation areas; the larger dis(s) is, the more centrally the particles crowd together.
Another approach, given by Wang et al. (2009) to avoid local convergence in multimodal and multidimensional problems, is group decision particle swarm optimization (GDPSO). It takes every particle's information into account to make a group decision in the early stages (mimicking human group intelligence, in which each individual's talent compensates for a lack of experience), and then in later stages the original decision making of particle swarm optimization is used. Thus in GDPSO the search space is enlarged and diversity is increased, helping to solve high dimensional functions.
3.3 Hybridization with other Methodologies
Like other evolutionary computation techniques, particle swarm optimization has limited local search ability in optimization problems. More specifically, although PSO is capable of quickly detecting the region of attraction of the global optimizer, once there it struggles to perform the refined local search needed to compute the optimum with high accuracy, unless specific procedures are incorporated in its operators. This triggered the development of memetic algorithms (MAs), which incorporate local search components. MAs constitute a class of metaheuristics that combine population-based optimization algorithms with local search procedures.
Recently many local search methods have been incorporated into PSO. Table 2 lists the various search methods that have been combined with PSO in the literature.
Table 2. Local Search Methods applied in PSO

Chaos searching technique (Wan et al. 2012): transform the problem variables from the solution space to a chaos space and then search for the solution by exploiting the randomicity, orderliness and ergodicity of the chaos variable; a logistic map is used.
Chaotic local search with roulette wheel mechanism (Wang et al. 2012): a well-known logistic equation, x_{n+1} = μ·x_n(1 - x_n), n = 0, 1, 2, ..., is employed in the hybrid PSO, where μ is the control parameter and x is a variable; although the equation is deterministic, it exhibits chaotic dynamics when μ = 4 and x_0 is not in {0, 0.25, 0.5, 0.75, 1}.
Adaptive local search (Xia 2012): two different local search operators are considered, Cognition-Based Local Search (CBLS) and Random Walk with Direction Exploitation (RWDE).
Intelligent multiple search methods (Mengqi et al. 2012): a non-uniform mutation-based method, in which the dth dimension of a solution is randomly picked and mutated within its upper and lower bounds U_d and L_d using a uniform random number r from (0, 1) and the current PSO iteration index i, together with an adaptive sub-gradient method, in which a new solution is generated from the sub-gradient of the objective function and a step size.
Reduced Variable Neighborhood Search (Zulal and Erdogan 2010): RVNS is a variation of Variable Neighborhood Search (VNS) obtained by removing the LocalSearch step; RVNS is usually preferred as a general optimization method for problems where exhaustive local search is expensive.
Expanding Neighborhood Search (Marinakis and Marinaki 2010): based on a method called Circle Restricted Local Search Moves (CRLSM) together with a number of local search phases.
Chaotic and Gaussian local search (Jia et al. 2011): a logistic chaotic map is employed, where x_j^{(k)} is the jth chaotic variable in the kth generation; when μ = 4 the logistic function enters a thoroughly chaotic state.
Variable Neighborhood Descent algorithm (Goksal et al. 2012): an enhanced local improvement strategy commonly used as a subordinate heuristic in Variable Neighborhood Search.
Two-stage hill climbing (Ahandani et al. 2012): the proposed local search employs two stages to find a better member.
Simulated annealing (Safaeia et al. 2012): the Metropolis-Hastings strategy is the key idea behind the simulated annealing (SA) algorithm.
Extremal optimization (Chen et al. 2010): EO successively updates extremely undesirable variables of a single sub-optimal solution, assigning them new, random values.

Hybridization is a growing area of intelligent systems research that aims to combine the desirable properties of different approaches while mitigating their individual weaknesses. A natural evolution of population-based search algorithms such as PSO can be achieved by integrating methods that have already been tested successfully for solving complex problems (Thangaraj et al., 2011). Researchers have enhanced the performance of PSO by incorporating into it the fundamentals of other popular techniques such as the selection, mutation and crossover operators of GA and DE.
Many approaches combining PSO with genetic algorithms have been tried to balance exploration and exploitation. Table 3 lists the works in the literature on the combination of PSO and GA.


Table 3. Hybrids of PSO and GA

Binary PSO + crossover (Vikas et al. 2007): after updating position and velocity, each particle is sent to a crossover step, where a crossover probability is generated that decides whether crossover should be performed on the particle; the process is iterated until the result is obtained or the maximum number of evaluations is reached. One-point crossover is chosen.
PSO + crossover + mutation (Kuo et al. 2012): a parent is generated through PSO, and that parent goes through the crossover and mutation operators of GA to generate another parent; the next iteration's parent is then generated by elitist selection. Crossover rate: 60%, mutation rate: 1%.
PSO + intelligent multiple search + Cauchy mutation (Mengqi et al. 2012): an extended Cauchy mutation is employed to increase the diversity of the swarm; in the extended Cauchy mutation operator, a randomly selected dimension d of a randomly selected particle m is mutated using the scale parameter of the Cauchy distribution.
PSO + Cauchy mutation + adaptive mutation (Wu 2011): an adaptive Cauchy mutation operator is adopted, with operator-set parameters and stochastic variables following the Cauchy distribution.
GA + PSO (Mousa et al. 2012): the algorithm is initialized with a set of random particles that fly through the search space, and these particles are evolved in order to obtain approximate nondominated solutions PND. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value.
PSO + GA (Abd-El-Waheda 2011): the algorithm is initialized with a set of random particles that travel through the search space; during this travel the particles are evolved by integrating PSO and GA. Crossover rate 0.9, mutation rate 0.7; selection operator: stochastic universal sampling; crossover operator: single point; mutation operator: real-value.
PSO + mutation + recombination (Ahandani et al. 2012): the specialist recombination operator is employed as the crossover operator, and three types of mutation operator are utilized for the DPSO algorithm. NS1: select a random exam and move it to a new non-conflict time slot. NS2: select a random pair of time slots and swap all the exams in one time slot with all the exams in the other. NS3: select a set of random exams, order them by a considered graph coloring heuristic, and then assign non-conflict timeslots to them.
Elite particle swarm optimization with mutation (EPSOM) was suggested by Wei et al. (2008). After some initial iterations, each particle is ranked according to its fitness value; particles with higher fitness are sorted out to form a separate swarm that gives better convergence. A mutation operator based on a random number uniformly distributed between 0 and 1 is introduced to avoid decreasing diversity and to reduce the chance of being trapped in local minima. EPSOM gives better results than the random inertia weight and linearly decreasing inertia weight approaches.
The study carried out by Paterlini and Krink (2006) showed that DE is much better than PSO in terms of giving accurate solutions to numerical optimization problems. On the other hand, the study by Angeline (1998) showed that PSO is much faster in identifying the promising region of the global minimizer but encounters problems in reaching the global minimizer itself. The complementary strengths of the two algorithms have been integrated to give efficient and reliable PSO hybrid algorithms.
The DE evolution steps, namely mutation, crossover and selection, are performed on the best particles' positions in each iteration of the PSO by Epitropakis et al. (2012). The differential evolution point generation scheme (mutation and crossover rules) is applied to the new particles generated by PSO in each iteration in Ali and Kaelo (2008). When DE is introduced into PSO at every iteration, the computational cost increases sharply and, at the same time, the fast convergence ability of PSO may be weakened. In order to integrate PSO with DE more effectively, DE was introduced into PSO only at specified intervals of iterations; within such an interval, the PSO swarm serves as the population for the DE algorithm, and DE is executed for a number of generations (Sedki and Ouazar 2012).
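The following sketch illustrates the general idea of applying a DE/rand/1 step with binomial crossover to the PSO swarm at fixed intervals, as in the interval-based hybrids described above; the interval, F and CR values are illustrative assumptions rather than the settings of any particular paper.

```python
import numpy as np

def de_step(x, fitness, objective, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation applied to the PSO swarm positions x."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    new_x, new_fitness = x.copy(), fitness.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = x[a] + F * (x[b] - x[c])                # DE mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # ensure one gene crosses over
        trial = np.where(cross, mutant, x[i])            # binomial crossover
        f_trial = objective(trial)
        if f_trial < fitness[i]:                         # greedy selection
            new_x[i], new_fitness[i] = trial, f_trial
    return new_x, new_fitness

# inside the PSO loop, e.g. every 10 iterations:
# if iteration % 10 == 0:
#     x, val = de_step(x, val, objective)
```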
Shi et al. (2011) proposed cellular particle swarm optimization, hybridizing cellular automata (CA) and particle swarm optimization for function optimization. In the proposed methodology, a CA mechanism is integrated into the velocity update to modify the trajectories of particles so that they avoid being trapped in local optima. Two versions of cellular particle swarm optimization were devised: CPSO-inner and CPSO-outer. CPSO-inner uses information inside the particle swarm, with every particle treated as a cell, whereas CPSO-outer enables cells belonging to the particle swarm to communicate with cells outside it, with every potential solution defined as a cell and every particle of the swarm defined as a smart-cell.

A novel hybrid algorithm based on particle swarm optimization and ant colony optimization, called the hybrid ant particle optimization algorithm (HAP), was proposed by Kiran et al. (2012) to find the global minimum. In the proposed method, ACO and PSO work separately at each iteration and produce their own solutions. The best solution is selected as the global best of the system, and its parameters are used to select the new positions of particles and ants at the next iteration. Thus, the ants and particles are motivated to generate new solutions using the system's solution parameters.
To exploit the sensitivity of the Nelder-Mead simplex method (NMSM) to its initial solutions, the good global search ability of particle swarm optimization is combined with NMSM, giving a hybrid particle swarm optimization (hPSO). The choice of initial points in the simplex search method is predetermined, whereas PSO has random initial points (particles). PSO proceeds toward the best solution by decreasing the difference between the current and best positions, whereas the simplex search method evolves by moving away from the point with the worst performance. At each iteration the worst particle is replaced by a new particle generated by one iteration of NMSM, and then all particles are again updated by PSO; the PSO and NMSM steps are performed iteratively (Ouyang et al. 2009).
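A rough sketch of this kind of coupling, using SciPy's Nelder-Mead implementation as a stand-in for the simplex step, could look like the function below; the choice of SciPy and of a small iteration budget are assumptions made for illustration, not the procedure of Ouyang et al.

```python
import numpy as np
from scipy.optimize import minimize

def refine_worst_particle(x, fitness, objective, simplex_iters=5):
    """Replace the worst particle with the result of a few Nelder-Mead steps."""
    worst = int(np.argmax(fitness))              # worst particle (minimization)
    result = minimize(objective, x[worst], method="Nelder-Mead",
                      options={"maxiter": simplex_iters})
    if result.fun < fitness[worst]:              # keep the refinement only if better
        x[worst], fitness[worst] = result.x, result.fun
    return x, fitness

# called once per PSO iteration, before the usual velocity/position updates
```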
For effectively solving multidimensional problems, Hsu and Gao (2008) further suggested a hybrid approach incorporating NMSM together with a centre particle in PSO. With the exploration property of PSO and the exploitation property of NMSM, the centre particle, which dwells near the optimum and attracts many particles, further improves the accuracy of PSO.
Pan et al. (2008) combined PSO with simulated annealing, and also proposed swarm-core evolutionary particle swarm optimization, to improve the local search ability of PSO. In PSO with simulated annealing, a particle moves to its next position not directly according to a best-position comparison criterion but with a probability controlled by a temperature function. In swarm-core evolutionary particle swarm optimization, the particle swarm is divided into three sub-swarms according to distance (core, near and far), which are assigned different tasks. This process works better in cases where the optima change frequently.
PSO converges early in highly discrete problems and gets trapped in local optimum solutions. To improve PSO, Lien and Cheng (2012) proposed an improved hybrid swarm optimization algorithm called the particle-bee algorithm (PBA), combining the bee algorithm with PSO; it imitates the intelligent behaviors of bird and honey bee swarms and integrates their advantages.
4. Applications of PSO
PSO is an important optimization algorithm and, because of its high adaptability, has applications in diverse areas such as medicine, finance, economics, security and the military, biology, system identification, etc. Research on PSO applications in some fields, such as electrical engineering and mathematics, is extensive, while in other fields, for example chemical and civil engineering, it is comparatively scarce. In mathematics PSO finds application in multi-modal function optimization, multi-objective and constrained optimization, the travelling salesman problem, data mining, modelling, etc. To quote examples of engineering

fields, PSO can be used in material engineering, electronics (antenna, image and sound analysis,
sensors and communication), in computer science and engineering (visuals, graphics, games,
music, animation), in mechanical engineering (robotics, dynamics, fluids), in industrial
engineering (in job and resource allocation, forecasting, planning, scheduling, sequencing,
maintenance, supply chain management), traffic management in civil engineering and chemical
processes in chemical engineering. In electrical engineering PSO finds uses in generation, transmission, state estimation, unit commitment, fault detection and recovery, economic load dispatch, control applications, optimal use of electrical motors, structuring and restructuring of networks, neural network and fuzzy systems, and Renewable Energy Systems (RES). As a method for finding the optima of complex search processes through iteration over each particle of a population, PSO can provide answers for the planning, design and control of RES (Yang et al. 2007, Wai et al. 2011).
5. Conclusions
Like other evolutionary algorithms, PSO has become an important tool for optimization and other complex problem solving. It is an interesting and intelligent computational technique for finding global minima and maxima, with strong capability on multimodal functions and practical applications. Particle swarm optimization works on the principles of cooperation and competition between particles. Many applications of PSO are reported in the literature, such as neural fuzzy networks, optimization of artificial neural networks, computational biology, image processing and medical imaging, optimization of electricity generation and network routing, financial forecasting, etc.
A few challenges still remain to be overcome, such as handling dynamic problems, avoiding stagnation, and dealing with constraints and multiple objectives; these are important research points apparent from the literature. The drawbacks to be addressed are the tendency of particles to converge to local optima, the slow speed of convergence, and the very large search space. PSO sometimes cannot effectively and accurately solve nonlinear equations, and hybridizing it with other algorithms generally demands a higher number of function evaluations. Most of the proposed approaches, such as the inertia weight method, adaptive variation and hybrid PSO, do alleviate the premature convergence problem, but at the cost of low convergence speed; therefore no generalized solution applicable to all types of problems can be given. Nevertheless, PSO is a promising method for the simulation and optimization of difficult engineering and other problems. To overcome the problem of particle stagnation in the search space, to improve efficiency, and to achieve better adjustability, adaptability and robustness of parameters, many researchers are taking it up as an active research topic and coming up with new ideas applicable to different problems. Further analysis of the comparative strength of PSO, and of the problems in using a PSO-based system, is needed.
References
1. Abd-El-Waheda W F, A.A. Mousab, M.A. El-Shorbagy, Integrating particle swarm
optimization with genetic algorithms for solving nonlinear optimization problems,
Journal of Computational and Applied Mathematics 235 (2011) 14461453


2. Ahmad Nickabadi, Mohammad Mehdi Ebadzadeh, Reza Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Applied Soft Computing 11 (2011) 3658-3670
3. Aise Zulal SEVKLI, Fatih Erdogan SEVILGEN, StPSO: Strengthened Particle Swarm
Optimization, Turkey Journal of Electrical Engineering & Computer Science, Vol.18,
No.6, 2010,1095-1114
4. Ali m M, P. Kaelo , Improved particle swarm algorithms for global optimization, Applied
Mathematics and Computation 196 (2008) 578593
5. Alireza Alfi , Hamidreza Modares, System identification and control using adaptive
particle swarm optimization, Applied Mathematical Modelling, 35 (2011) 12101221
6. Amaresh Sahu, Sushanta Kumar Panigrahi, Sabyasachi Pattnaik, Fast Convergence
Particle Swarm Optimization for Functions Optimization, Procedia Technology 4 ( 2012 )
319 324
7. Angeline P J, Evolutionary optimization versus particle swarm optimization: philosophy
and performance differences, Evolutionary Computation VII. Lecture Notes in Computer
Science, 1447, Springer, Berlin, 1998, pp. 601610
8. Angeline, P.J., Using selection to improve particle swarm optimization. The IEEE
International Conference on Evolutionary Computation Proceedings Anchorage, AK ,
USA, 1998b pp: 84-89.
9. Arumugam M s, M.V.C. Rao, On the improved performances of the particle swarm
optimization algorithms with adaptive parameters, cross-over operators and root mean
square (RMS) variants for computing optimal control of a class of hybrid systems,
Applied Soft Computing (2008) 324336.
10. Boyabalti, O,Sabuncuoglu, I., 2007. Parameter Selection In Genetic Algorithms. System,
Cybernatics &Informatics. Volume 2-Number 4, Pp. 78-83
11. Byung-Il Koh, Alan D. George, Raphael T. Haftka, Benjamin J. Fregly, Parallel
asynchronous particle swarm optimization, International Journal for Numerical Methods
in Engineering 2006; 67:578595
12. Chatterjee A, P. Siarry, Nonlinear inertia weight variation for dynamic adaption in
particle swarm optimization, Computer and Operations Research 33 (2006) 859871,
March 2006.
13. Cheng-Hong Yang, Sheng-Wei Tsai, Li-Yeh Chuan, Cheng-Huei Yang, An improved
particle swarm optimization with double-bottom chaotic maps for numerical
optimization, Applied Mathematics and Computation, 219 (2012) 260279
14. Chunguo Fei1, Fang Ding1, Xinlong Zhao, Network Partition of Switched Industrial
Ethernet by Using Novel Particle Swarm Optimization, Procedia 24 (2012), 1493-1499
15. Clerc, M., & Kennedy, J. The particle swarmexplosion, stability, and convergence in a
multidimensional complex space. IEEE Transaction on Evolutionary Computation, 6(1),
(2002) 5873.
16. Deepa S N, G. Sugumaran, Model order formulation of a multivariable discrete system
using a modified particle swarm optimization approach, Swarm and Evolutionary
Computation 1 (2011) 204212
17. DongLi Jia, GuoXin Zheng, BoYang Qu, Muhammad Khurram Khan, A hybrid particle
swarm optimization algorithm for high-dimensional problems, Computers & Industrial
Engineering 61 (2011) 11171122


18. Dorigo, M., V. Maniezzo and A. Colorni, Positive feedback as a search strategy.
Technical Report 91-016, Dipartimento di Elettronica, Politecnico di Milano, IT,
1991,pp: 91-106.
19. Eberhart, R. C., & Shi, Y. Comparing inertia weights and constriction factors in particle
swarm optimization. In Proceedings of the IEEE congress on evolutionary computation
(CEC), 2000 (pp. 8488), San Diego, CA. Piscataway: IEEE.
20. Eberhart, R. C., & Shi, Y. Tracking and optimizing dynamic systems with particle
swarms. In Proceedings of the IEEE congress on evolutionary computation (CEC) (pp.
94100), Seoul, Korea. Piscataway: IEEE, 2001.
21. Eberhart, R. C., Simpson, P. K., & Dobbins, R. W. (1996). Computational intelligence
PC tools. Boston: Academic Press.
22. Eiben A. E. and Michalewicz Z. and Schoenauer M. and Smith J. E., Parameter Control
in Evolutionary Algorithms , Lobo F.G. and Lima C.F. and Michalewicz Z. (eds.) ,
Parameter Setting in Evolutionary Algorithms , Springer , 2007 , pp. 19-46
23. Eiben A. E., Smith J. E., Introduction to Evolutionary Computing, Springer-Verlag,
Berlin Heidelberg New York, 2003
24. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social
experience in particle swarm optimization through differential evolution, in: IEEE, 2010.
25. Epitropakis M G, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social
experience in Particle Swarm Optimization through Differential Evolution: A hybrid
approach, Information Sciences 216 (2012) 5092
26. Feng Y, G. Teng, A. Wang, Y.M. Yao, Chaotic inertia weight in particle swarm
optimization, in: Second International Conference on Innovative Computing, Information
and Control (ICICIC 07), 2007, pp. 4751475.
27. Gang Xu, An adaptive parameter tuning of particle swarm optimization algorithm,
Applied Mathematics and Computation 219 (2013) 45604569
28. Goksal, F. P., et al. A hybrid discrete particle swarm optimization for vehicle routing
problem with simultaneous pickup and delivery. Computers & Industrial Engineering
(2012), doi:10.1016/j.cie.2012.01.005
29. Guan-Chun Luh , Chun-Yi Lin, Optimal design of truss-structures using particle swarm
optimization, Computers and Structures, 89 (2011) 22212232
30. He S, Wu QH et al. A particle swarm optimizer with passive congregation. Biosystems
2004; 78(13):135147.
31. Holland, J.H., Adaptation in natural and artificial systems: An introductory analysis with
applications to biology, control, and artificial intelligence: The MIT Press,1992.
32. Hongbing Zhu, Chengdong Pu, Euclidean Particle Swarm Optimization, 2009 Second
International Conference on Intelligent Networks and Intelligent Systems,669-672
33. Hongfeng Wang, Ilkyeong Moon, Shenxiang Yang, Dingwei Wang, A memetic particle
swarm optimization algorithm for multimodal optimization problems, Information
Sciences 197 (2012) 3852
34. Hsu C C, C.H. Gao, Particle swarm optimization incorporating simplex search 969 and
center particle for global optimization, in: Conference on Soft Computing 970 in
Industrial Applications, Muroran, Japan, 2008.
35. Ji H, J. Jie, J. Li, Y. Tan, A bi-swarm particle optimization with cooperative coevolution, international conference on computational aspects of social networks, in:
IEEE, 2010.

36. Jie X, Deyun X, New metropolis coefficients of particle swarm optimization, in: IEEE,
2008.
37. Jing Cai 1, W. David Pan, On fast and accurate block-based motion estimation algorithms
using particle swarm optimization, Information Sciences ,197 (2012) 5364
38. Jiuzhong Zhang n, XuemingDing , A Multi-Swarm Self-Adaptive and Cooperative
Particle Swarm Optimization, Engineering Applications of Artificial Intelligence 24
(2011) 958967
39. Jun Sun, Xiaojun Wu, Wei Fang, Yangrui Ding, Haixia Long, Webo Xu, Multiple
sequence alignment using the Hidden Markov Model trained by an improved quantumbehaved particle swarm optimization, Information Sciences 182 (2012) 93114
40. Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proceedings of the IEEE
International Conference on Neural Networks, Piscataway, pp: 1942-1948.
41. Kuo R J , Y.J. Syu, Zhen-Yao Chen, F.C. Tien, Integration of particle swarm
optimization and genetic algorithm for dynamic clustering, Information Sciences 195
(2012) 124140
42. Lei K, Y. Qiu, Y. He, A new adaptive well-chosen inertia weight strategy to
automatically harmonize global and local search ability in particle swarm optimization,
in: ISSCAA, 2006.
43. Li J, X. Xiao, Multi swarm and multi best particle swam optimization algorithm, in:
IEEE, 2008.
44. Li-Chuan Lien, Min-Yuan Cheng, A hybrid swarm intelligence based particle-bee
algorithm for construction site layout optimization, Expert Systems with Applications 39
(2012) 96429650
45. Lili Liu , Shengxiang Yang , Dingwei Wang, Force-imitated particle swarm optimization
using the near-neighbor effect for locating multiple optima, Information Sciences, 182
(2012) 139155
46. Liu E, Y. Dong, J. Song, X. Hou, N. Li, A modified particle swarm optimization algorithm, in: International Workshop on Geosciences and Remote Sensing, 2008, pp. 666–669.
47. Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang, Chaotic catfish particle swarm optimization for solving global numerical optimization problems, Applied Mathematics and Computation 217 (2011) 6900–6916.
48. Li-Yeh Chuang, Sheng-Wei Tsai, and Cheng-Hong Yang, Catfish Particle Swarm Optimization, 2008 IEEE Swarm Intelligence Symposium.
49. Lovbjerg, M., Improving particle swarm optimization by hybridization of stochastic
search heuristics and self-organized criticality. Master's Thesis, Department of Computer
Science, University of Aarhus, 2002.
50. Maeda Y, N. Matsushita, S. Miyoshi, H. Hikawa, On simultaneous perturbation particle swarm optimization, in: Proceedings of the Eleventh IEEE Congress on Evolutionary Computation (CEC 2009), 2009.
51. Maxfield, A.C.M. and L. Fogel, Artificial intelligence through a simulation of evolution. Biophysics and Cybernetic Systems: Proceedings of the Second Cybernetic Sciences Symposium. Spartan Books, Washington DC, 1965.
52. Mengqi Hu, Teresa Wu, Jeffery D. Weir, An intelligent augmentation of particle swarm optimization with multiple adaptive methods, Information Sciences 213 (2012) 68–83.
53. Min-Rong Chen, Xia Li, Xi Zhang, Yong-Zai Lu, A novel particle swarm optimizer hybridized with extremal optimization, Applied Soft Computing 10 (2010) 367–373.
54. Min-Shyang Leu, Ming-Feng Yeh, Grey particle swarm optimization, Applied Soft Computing 12 (2012) 2985–2996.
55. Moayed Daneshyari, Gary G. Yen, Constrained Multiple-Swarm Particle Swarm Optimization Within a Cultural Framework, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 42, No. 2, March 2012, pp. 475–490.
56. Morteza Alinia Ahandani, Mohammad Taghi Vakil Baghmisheh, Mohammad Ali Badamchi Zadeh, Sehraneh Ghaemi, Hybrid particle swarm optimization transplanted into a hyper-heuristic structure for solving examination timetabling problem, Swarm and Evolutionary Computation 7 (2012) 21–34.
57. Mousa A.A., M.A. El-Shorbagy, W.F. Abd-El-Wahed, Local search based hybrid particle swarm optimization algorithm for multiobjective optimization, Swarm and Evolutionary Computation 3 (2012) 1–14.
58. Mustafa Servet Kıran, Mesut Gunduz, Omer Kaan Baykan, A novel hybrid algorithm based on particle swarm and ant colony optimization for finding the global minimum, Applied Mathematics and Computation 219 (2012) 1515–1521.
59. Nannen V, S. K. Smit and A. E. Eiben, Costs and benefits of tuning parameters of evolutionary algorithms, Proceedings of the 10th International Conference on Parallel Problem Solving from Nature, PPSN X, Dortmund, Germany, 2008, pp. 528–538.
60. Nima Safaei, Reza Tavakkoli-Moghaddam, Corey Kiassat, Annealing-based particle swarm optimization to solve the redundant reliability problem with multiple component choices, Applied Soft Computing 12 (2012) 3462–3471.
61. Ouyang A, Y. Zhou, Q. Luo, Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations, in: IEEE International Conference on Granular Computing, 2009, pp. 460–465.
62. Pan G, Q. Dou, X. Liu, Performance of two improved particle swarm optimization in
dynamic optimization environments, in: Proceedings of the Sixth International
Conference on Intelligent Systems Design and Applications, 2006
63. Panigrahi B.K, V.R. Pandi, S. Das, Adaptive particle swarm optimization approach for static and dynamic economic load dispatch, Energy Conversion and Management 49 (2008) 1407–1415.
64. Paterlini S, T. Krink, Differential evolution and particle swarm optimization in partitional clustering, Computational Statistics and Data Analysis 50 (1) (2006) 1220–1247.
65. Price, K. and R. Storn, 1995. Differential Evolution-a simple and efficient adaptive
scheme for global optimization over continuous spaces. International Computer Science
Institute-Publications.
66. Qi Wu, Hybrid forecasting model based on support vector machine and particle swarm optimization with adaptive and Cauchy mutation, Expert Systems with Applications 38 (2011) 9070–9075.
67. Ratnaweera, A., Halgamuge, S.K., Watson, H.C., 2004. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation 8 (3), 240–255.
68. Sedki A, D. Ouazar, Hybrid particle swarm optimization and differential evolution for optimal design of water distribution systems, Advanced Engineering Informatics 26 (2012) 582–591.
69. Shi, Y., & Eberhart, R. C. A modified particle swarm optimizer. In Proceedings of the IEEE international conference on evolutionary computation (pp. 69–73). Piscataway: IEEE, 1998.
70. Shing Wa Leung, Shiu Yin Yuen, Chi Kin Chow, Parameter control system of evolutionary algorithm that is aided by the entire search history, Applied Soft Computing 12 (2012) 3063–3078.
71. Shuguang Zhao, Ponnuthurai N. Suganthan: Diversity enhanced particle swarm optimizer
for global optimization of multimodal problems. IEEE Congress on Evolutionary
Computation 2009: 590-597
72. Thangaraj, R., M. Pant, A. Abraham and P. Bouvry, Particle swarm optimization: Hybridization perspectives and experimental illustrations. Applied Mathematics and Computation, 217, 2011, 5208–5226.
73. Vikas Singh, Deepak Singh, Ritu Tiwari, Discrete Optimization Problem Solving with three Variants of Hybrid Binary Particle Swarm Optimization, BADS '11, June 14, 2011, ACM, pp. 43–48.
74. Voss M S, Principal component particle swarm optimization, in: IEEE Congress on Evolutionary Computation, vol. 1, 2005, pp. 298–305.
75. Wai R J, S. Cheng, Y.-C. Chen, in: 6th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2011.
76. Wan Z, Guangmin Wang, Bin Sun, A hybrid intelligent algorithm by combining particle swarm optimization with chaos searching technique for solving nonlinear bilevel programming problems, Swarm and Evolutionary Computation (2012), http://dx.doi.org/10.1016/j.swevo.2012.08.001.
77. Wang L, Z. Cui, J. Zeng, Particle swarm optimization with group decision making, in:
Ninth International Conference on Hybrid Intelligent Systems, 2009.
78. Wei J, L. Guangbin, L. Dong, Elite particle swarm optimization with mutation, in: Asia Simulation Conference 7th Intl. Conf. on Sys. Simulation and Scientific Computing, IEEE, 2008, pp. 800–803.
79. Wong W K, S.Y.S. Leung, Z.X. Guo, Feedback controlled particle swarm optimization and its application in time-series prediction, Expert Systems with Applications 39 (2012) 8557–8572.
80. Xiaohua Xia, Particle Swarm Optimization Method Based on Chaotic Local Search and Roulette Wheel Mechanism, Physics Procedia 24 (2012) 269–275.
81. Xiaolei Wang, Conflict Resolution in Product Optimization Design based on Adaptive
Particle Swarm Optimization, Procedia Engineering 15 (2011) 4920-4924
82. Xingjuan Cai, Zhihua Cui, Jianchao Zeng, Ying Tan, Dispersed particle swarm optimization, Information Processing Letters 105 (2008) 231–235.
83. Xu JJ, Xin ZH. An extended particle swarm optimizer. In: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Denver, CO, U.S.A., 2005.
84. Yang B, Y. Chen, Z. Zhao, Survey on Applications of Particle Swarm Optimization in Electric Power Systems, in: IEEE International Conference, May 30 – June 1, 2007.
85. Yang Shi, Hongcheng Liu, Liang Gao, Guohui Zhang, Cellular particle swarm optimization, Information Sciences 181 (2011) 4460–4493.
86. Yannis Marinakis, Magdalene Marinaki, Georgios Dounias, A hybrid particle swarm optimization algorithm for the vehicle routing problem, Engineering Applications of Artificial Intelligence 23 (2010) 463–472.
87. Yannis Marinakis, Magdalene Marinaki, A Hybrid Multi-Swarm Particle Swarm Optimization algorithm for the Probabilistic Traveling Salesman Problem, Computers & Operations Research 37 (2010) 432–442.
88. Yau-Tarng Juang, Shen-Lung Tung, Hung-Chih Chiu, Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions, Information Sciences 181 (2011) 4539–4549.
89. Ying Wang, Jianzhong Zhou, Chao Zhou, Yongqiang Wang, Hui Qin, Youlin Lu, An improved self-adaptive PSO technique for short-term hydrothermal scheduling, Expert Systems with Applications, Volume 39, Issue 3, 2012, pp. 2288–2295.
90. Ying-Nan Zhang, Hong-Fei Teng, Detecting particle swarm optimization, Concurrency and Computation: Practice and Experience, 2009; 21:449–473.
91. Ying-Nan Zhang, Qing-Ni Hu and Hong-Fei Teng, Active target particle swarm optimization, Concurrency and Computation: Practice and Experience, 2007; 20:29–40.
92. Zu W, Y.L. Hao, H.T. Zeng, W.Z. Tang, Enhancing the particle swarm optimization based on equilibrium of distribution, in: Control and Decision Conference, China, 2008, pp. 285–289.