reach better solutions [2]. Accelerating convergence speed and avoiding the local optima
have become the two most important and appealing goals in PSO research.
Since PSO was proposed, investigations have been made theoretically and experimentally to
analyze and improve PSO. Clerc and Kennedy [7] explored how PSO works from a
mathematical perspective, introduced a constriction factor χ to guarantee the convergence of
PSO, and analyzed the trajectory of a single particle in both discrete and continuous
time. Van den Bergh and Engelbrecht [8] analyzed how the inertia weight and acceleration
constants affect the trajectories of particles and provided theoretical findings on the dynamics
of the PSO systems. These studies provided theoretical support for research on the
improvement of PSO. In order to achieve a good balance between exploitation capability and
exploration capability, neighborhood topologies designed for particles have been studied. Four
neighborhood topologies comprising circles, wheels, stars and random edges were tested in
[9].
Eberhart and Shi [17] proposed a Random Inertia Weight strategy and experimentally found
that this strategy improves the convergence of PSO in the early iterations of the algorithm. In
Global-Local Best Inertia Weight [18], the Inertia Weight is based on the function of local
best and global best of the particles in each generation. It neither takes a constant value nor a
linearly decreasing time-varying value. Using the merits of chaotic optimization, Chaotic
Inertia Weight has been proposed by Feng et al. [19]. A novel rule-based classifier [10]
design method was constructed using improved simple swarm optimization to mine a
thyroid gland dataset from the University of California Irvine repository. An elite concept is
added to the proposed method to improve solution quality, and close-interval encoding is
added to represent the rule structure efficiently.
Shi et al. [6] propose a cellular particle swarm optimization, hybridizing cellular
automata and particle swarm optimization (PSO) for function optimization. In the proposed
method, a cellular automata mechanism is integrated into the velocity update to modify the
trajectories of particles and avoid their being trapped in local optima. To prevent PSO from
premature convergence, many researchers have proposed adaptive or self-adaptive strategies
such as the adaptive variable population size method in Chen and Zhao [20], the self-adaptive
method for generating the particles velocity in Jin et al. [21], and the adaptive inertia weight
method in Nickabadi et al. [22].
This paper analyzes various methods for avoiding convergence to local optima (premature
convergence), thereby resulting in better predictive accuracy of the mined rules. The rest of
this paper is organized as follows. Section 2 describes the framework of PSO. Section 3
discusses the variations introduced in PSO for avoiding premature convergence. Section 4
compares the results of these variants when applied to association rule mining, followed by
the conclusion in Section 5.
2. Preliminaries
This section will briefly present the general backgrounds of association rule mining and the
particle swarm optimization method, respectively.
2.1 Association Rule Mining
support(X ⇒ Y) = |X & Y| / |D|   (1)
confidence(X ⇒ Y) = |X & Y| / |X|   (2)
where |X & Y| is the number of records containing both the antecedent X and the consequent Y,
|X| is the number of records containing X, and |D| is the total number of records.
2.2 Particle Swarm Optimization
Each particle p, at some iteration t, has a position x(t) and a displacement velocity v(t). The
particle's best (pBest) position p(t) and the global best (gBest) position g(t) are stored in the
associated memory. The velocity and position are updated using equations 3 and 4
respectively.
v_id(t+1) = v_id(t) + c1 · rand() · (pBest_id − x_id(t)) + c2 · rand() · (gBest_d − x_id(t))   (3)
x_id(t+1) = x_id(t) + v_id(t+1)   (4)
where
v_i is the velocity of the ith particle
x_i is the position of the ith (current) particle
i is the particle index
d is the dimension of the search space
rand() is a random number in (0, 1)
c1 is the individual (cognitive) factor
c2 is the societal (social) factor
pBest is the particle best position
gBest is the global best position
Both c1 and c2 are set to 2 in all the literature analyzed, and hence the same values are
adopted here. The velocity vi of each particle is clamped to a user-specified maximum
velocity vmax, which determines the resolution with which the regions between the present
position and the target position are searched.
The pseudo code for the PSO algorithm is given below:

For each particle
    Initialize particle position and velocity
End
Repeat
    For each particle
        Calculate fitness value
        If the fitness value is better than its personal best
            Set current value as the new pBest
    End
    Choose the particle with the best fitness value of all as gBest
    For each particle
        Calculate particle velocity according to equation (3)
        Update particle position according to equation (4)
    End
Until maximum number of iterations or minimum error criterion is met
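As a sketch, the pseudo code above can be realised in Python as follows; the function name, parameter defaults, and search bounds here are illustrative assumptions, not the settings used in the experiments:

```python
import random

def pso(fitness, dim, n_particles=24, n_iter=100,
        c1=2.0, c2=2.0, vmax=1.0, lo=0.0, hi=1.0):
    """Plain PSO following equations 3 and 4 (illustrative sketch)."""
    # Random initial positions and zero velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [row[:] for row in x]
    pbest_fit = [fitness(p) for p in x]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # Equation 3: cognitive and social pulls on the velocity
                v[i][d] += (c1 * random.random() * (pbest[i][d] - x[i][d])
                            + c2 * random.random() * (gbest[d] - x[i][d]))
                # Clamp to the user-specified maximum velocity
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                # Equation 4: move the particle
                x[i][d] += v[i][d]
            f = fitness(x[i])
            if f > pbest_fit[i]:          # maximisation objective
                pbest[i], pbest_fit[i] = x[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = x[i][:], f
    return gbest, gbest_fit
```

A call such as `pso(lambda p: -sum((xi - 0.5) ** 2 for xi in p), dim=2)` maximises a toy fitness whose optimum lies at (0.5, 0.5).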
The initial population is generated by setting the velocity and position of all the particles
randomly. The importance of each particle is then evaluated using the fitness function,
which is designed from the support and confidence of the association rule. The objective of
the fitness function is maximization. The fitness function is shown in equation 5.
Fitness(k) = confidence(k) × support(k)   (5)
Fitness(k) is the fitness value of association rule k, confidence(k) is the confidence of
association rule k, and support(k) is the actual support of association rule k. The larger the
support and confidence values, the larger the fitness value, meaning the rule is an important
association rule.
2.3 Predictive Accuracy
Predictive accuracy measures the effectiveness of the rules mined. The mined rules must have
high predictive accuracy.
Predictive Accuracy = |X & Y| / |X|   (6)
where |X & Y| is the number of records that satisfy both the antecedent X and the consequent
Y, and |X| is the number of records satisfying the antecedent X.
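The support and confidence measures behind the fitness function, and the predictive accuracy of equation 6, can be sketched as follows, assuming each record is represented as a Python set of items (the representation is an assumption for illustration):

```python
def support(records, items):
    # Equation 1: fraction of records containing every item in `items`
    return sum(1 for r in records if items <= r) / len(records)

def confidence(records, antecedent, consequent):
    # Equation 2: |X & Y| / |X| -- of the records matching the antecedent,
    # the fraction that also match the consequent
    x = sum(1 for r in records if antecedent <= r)
    xy = sum(1 for r in records if (antecedent | consequent) <= r)
    return xy / x if x else 0.0

# Predictive accuracy (equation 6) has the same form as confidence,
# evaluated on the records held out for testing the mined rule.
```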
3. PSO and its Variants
Particle swarm optimization is based on swarm intelligence. PSO has no crossover or
mutation operations. Over successive generations, only the most optimistic particle
transmits its information to the other particles. The search is fast, and the algorithm has
strong optimization ability with simple computation.
The swarm behaviour varies between exploratory behaviour, that is, searching a broader
region of the search-space, and exploitative behaviour, that is, a locally oriented search so as
to get closer to a (possibly local) optimum. The PSO algorithm and its parameters must be
chosen properly to balance exploration and exploitation, so as to avoid premature
convergence to a local optimum while still ensuring a good rate of convergence to the
optimum. To avoid premature convergence at local optima, particle swarm optimization
variants are proposed and tested for mining association rules.
Variations have been introduced in the velocity update function to ensure convergence
towards the global optimum rather than local optima.
3.1 Particle Swarm Optimization with Inertia Weight
Inertia weight w is added to the velocity update function, and equation 3 is modified as
v_id(t+1) = w · v_id(t) + c1 · rand() · (pBest_id − x_id(t)) + c2 · rand() · (gBest_d − x_id(t))   (7)
3.2 Chaotic Particle Swarm Optimization (CPSO)
A chaotic sequence v_k is generated iteratively from the initial values u0 and v0 through the
chaotic map of equation 8.
(8)
The initial values of u0 and v0 are set to 0.1. Slight tuning of the initial values of u0 and v0
creates a wide range of values with good distribution. The chaotic operator
chaotic_operator(k) = v_k is therefore designed to generate different chaotic operators by
tuning u0 and v0; u0 is set to two different values to generate chaotic operators 1 and 2.
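As an illustration of how such a sequence behaves, the sketch below uses the logistic map, a common choice in chaotic PSO; the paper's own map of equation 8 and the exact role of u0 may differ, so the parameter `mu` here is a hypothetical analogue of that tuning parameter:

```python
def chaotic_operator(k, v0=0.1, mu=4.0):
    # Logistic map v_{k+1} = mu * v_k * (1 - v_k); chaotic for mu = 4.
    # Stands in for equation 8's map, which is assumed, not reproduced.
    v = v0
    for _ in range(k):
        v = mu * v * (1.0 - v)
    return v
```

Small changes in `v0` or `mu` produce widely different sequences, which is the property the text exploits to generate distinct chaotic operators.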
The velocity update equation based on chaotic PSO is given in equation 9, with the chaotic
operator taking the place of the inertia weight.
v_id(t+1) = chaotic_operator(k) · v_id(t) + c1 · rand() · (pBest_id − x_id(t)) + c2 · rand() · (gBest_d − x_id(t))   (9)
3.3 Neighbourhood Selection Particle Swarm Optimization (NPSO)
In the gBest swarm, all the particles are neighbours of each other; thus, the position of
the best overall particle in the swarm is used in the social term of the velocity update
equation. The gBest swarm converges fast, as all the particles are attracted
simultaneously to the best part of the search space. However, if the global optimum is
not close to the best particle, it may be impossible for the swarm to explore other areas;
this means that the swarm can be trapped in local optima.
In the lBest swarm, only a specific number of particles (the neighbour count) affect the
velocity of a given particle. The swarm converges more slowly but has a greater chance of
locating the global optimum.
As the local best (lBest) value leads towards convergence at the global optimum, the lBest
value is selected from the neighbourhood rather than from the particle's best values so far.
The neighbourhood best (lBest) selection is done as follows:
Calculate the distance of the current particle from the other particles by equation 10.
d(i, j) = sqrt( Σ_{k=1..D} (x_ik − x_jk)² )   (10)
Find the nearest m particles as the neighbours of the current particle based on the distance
calculated.
Choose the local optimum lBest among the neighbourhood in terms of fitness values.
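The three selection steps above can be sketched as follows (a minimal sketch; the distance is taken as Euclidean, per equation 10, and fitness is maximised):

```python
import math

def lbest(positions, fitnesses, i, m):
    # Step 1: Euclidean distance (equation 10) from particle i to each other particle
    def dist(a, b):
        return math.sqrt(sum((ak - bk) ** 2 for ak, bk in zip(a, b)))
    others = [j for j in range(len(positions)) if j != i]
    # Step 2: the m nearest particles form the neighbourhood
    neighbours = sorted(others, key=lambda j: dist(positions[i], positions[j]))[:m]
    # Step 3: the fittest neighbour supplies lBest
    return positions[max(neighbours, key=lambda j: fitnesses[j])]
```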
3.4 Self Adaptive Particle Swarm Optimization (SAPSO1 and SAPSO2)
In SAPSO1 the inertia weight decreases linearly with the generation index, as shown in
equation 11.
w = w_max − ((w_max − w_min) / G) · g   (11)
where w_max and w_min are the maximum and minimum inertia weights, g is the generation
index, and G is the predefined maximum number of generations.
In SAPSO2 the inertia weight adaptation is made to depend upon the value from the
previous generation, so that it decreases linearly with increasing iterations as shown in
equation 12.
w(g) = w(g − 1) − (w_max − w_min) / G   (12)
where w(g) is the inertia weight for the current generation, w(g − 1) is the inertia weight for
the previous generation, w_max and w_min are the maximum and minimum inertia weights,
and G is the predefined maximum number of generations.
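Both schemes reduce the weight from w_max to w_min over G generations; a minimal sketch, assuming the linear forms described above:

```python
def w_sapso1(g, G, w_max=0.9, w_min=0.4):
    # Equation 11: weight falls linearly with the generation index g
    return w_max - (w_max - w_min) * g / G

def w_sapso2(w_prev, G, w_max=0.9, w_min=0.4):
    # Equation 12: the same decrease, stepped from the previous generation's weight
    return w_prev - (w_max - w_min) / G
```

With w_max = 0.9, w_min = 0.4 and G = 100, both schedules move the weight from 0.9 down towards 0.4 by the final generation.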
The steps in self adaptive PSO1 and PSO2 are as follows.
Step 1: Initialize the position and velocity of the particles.
Step 2: The importance of each particle is evaluated using the fitness function. The objective
of the fitness function is maximization. Equation 13 describes the fitness function.
fitness(x) = confidence(x) × log(support(x) × length(x) + 1)   (13)
where fitness(x) is the fitness value of association rule x, support(x) and confidence(x) are as
described in equations 1 and 2, and length(x) is the length of association rule x. The larger
the support and confidence factors, the greater the strength and importance of the rule.
Step 3: Get the particle best and global best for the swarm. The particle best is the best
fitness attained by the individual particle up to the present iteration, and the overall best
fitness attained by all the particles so far is the global best value.
Step 4: Set w_max to 0.9 and w_min to 0.4 and find the adaptive weights for both SAPSO1
and SAPSO2. Update the velocity of the particles using equation 7.
Step 5: Update the position of the particles using equation 4.
Step 6: Terminate if the stopping condition is met.
Step 7: Otherwise, go to Step 2.
3.5 Self Adaptive Chaotic Particle Swarm Optimization (SACPSO)
The major drawback of standard PSO lies in its premature convergence, especially while
handling problems with many local optima. Based on the standard PSO, a novel chaotic
operator is introduced with the expectation of keeping the local diversity, as well as
enhancing the reliability of the algorithm. The velocity of each particle is updated by the
following equation:
v_id(t+1) = w · chaotic_operator(k) · v_id(t) + c1 · rand() · (pBest_id − x_id(t)) + c2 · rand() · (gBest_d − x_id(t))   (14)
where chaotic_operator is an iterative value given by the chaotic mapping; the chaotic
operators are generated based on equation 8. A fixed inertia weight cannot balance the
global and local search: when w is too large, good solutions in the search space may be
skipped over, while a small w slows down convergence. Hence the inertia weight w is made
dynamic and adapted as given in equation 11.
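A single velocity update under one plausible reading of equation 14, in which the inertia term is scaled by both the adaptive weight and the chaotic operator (an assumption for illustration, not the authors' exact formulation):

```python
import random

def sacpso_velocity(v, x, pbest, gbest, chaos, w, c1=2.0, c2=2.0):
    # One reading of equation 14: the inertia term w * v is further scaled
    # by the chaotic operator value; cognitive and social terms as in PSO.
    return [w * chaos * vd
            + c1 * random.random() * (pb - xd)
            + c2 * random.random() * (gb - xd)
            for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
```

When the particle already sits at both pBest and gBest, only the damped inertia term survives, which is what keeps the chaotic scaling from stalling the swarm entirely.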
4. Experimental Results
The datasets used and the parameter settings adopted are given in Tables 1 and 2.

Table 1. Datasets
Dataset                     | Attributes | Instances | Attribute characteristics
Lenses                      | 4          | 24        | Categorical
Car Evaluation              | 6          | 1728      | Categorical, Integer
Habermans Survival          | 3          | 310       | Integer
Post-operative Patient Care | 8          | 87        | Categorical, Integer
Zoo                         | 16         | 101       | Categorical, Binary, Integer

Table 2. Parameter settings
Dataset                     | Swarm Size | C1 | C2 | Inertia Weight | Generations | w_max | w_min
Lenses                      | 24         | 2  | 2  | 0.2            | 100         | 0.9   | 0.4
Car Evaluation              | 700        | 2  | 2  | 0.4            | 100         | 0.9   | 0.4
Habermans Survival          | 300        | 2  | 2  | 0.4            | 100         | 0.9   | 0.4
Post-operative Patient Care | 87         | 2  | 2  | 0.3            | 100         | 0.9   | 0.4
Zoo                         | 101        | 2  | 2  | 0.3            | 100         | 0.9   | 0.4
Balancing between exploration and exploitation is carried out using the proposed variants of
PSO, and the results for the five datasets are plotted in Figures 1 to 5.
Figure 1. Convergence of Predictive Accuracy for Lenses Dataset (predictive accuracy versus
number of iterations for PSO, WPSO, CPSO, NPSO, SAPSO1, SAPSO2 and SACPSO)
Figure 2. Convergence of Predictive Accuracy for Car Evaluation Dataset (predictive
accuracy versus number of iterations for PSO, WPSO, CPSO, NPSO, SAPSO1 and SAPSO2)
Figure 3. Convergence of Predictive Accuracy for Habermans Survival Dataset (predictive
accuracy versus number of iterations for PSO, WPSO, CPSO, NPSO, SAPSO1, SAPSO2 and
SACPSO)
Figure 4. Convergence of Predictive Accuracy for Post Operative Patient Care Dataset
Figure 5. Convergence of Predictive Accuracy for Zoo Dataset (predictive accuracy versus
number of iterations for PSO, WPSO, CPSO, NPSO, SAPSO1, SAPSO2 and SACPSO)
Figure 6. Maximum Predictive Accuracy attained by the PSO variants for the five datasets
SACPSO performs better than the plain PSO variants CPSO, WPSO and NPSO. Among the
weighted PSO, the chaotic PSO and the neighbourhood selection PSO, the weighted PSO
gives the better performance for all the datasets.
The iteration at which the maximum predictive accuracy is attained for the five datasets by
applying the variants of PSO in association rule mining is shown in Figure 7.
Figure 7. Iteration at which maximum Predictive Accuracy is attained for the five datasets
(PSO, WPSO, CPSO, NPSO, SAPSO1, SAPSO2 and SACPSO)
When compared to standard PSO, the PSO variants perform better both in terms of predictive
accuracy and in balancing exploration against exploitation. The three self adaptive methods
SAPSO1, SAPSO2 and SACPSO exhibit consistent performance for all the datasets. Among
the remaining variants, the inertia weight based PSO performs better. The behaviour of
chaotic PSO and neighbourhood selection PSO varies from dataset to dataset, depending on
the attributes involved and their values.
Avoiding excessive exploitation during global search, and testing on more datasets, could be
taken up for further exploration.
References
1.
17. R.C. Eberhart and Y. Shi, Tracking and optimizing dynamic systems with particle
swarms, Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 94–100,
IEEE, 2001.
18. M.S. Arumugam and M.V.C. Rao, On the performance of the particle swarm optimization
algorithm with various inertia weight variants for computing optimal control of a class of
hybrid systems, Discrete Dynamics in Nature and Society, 2006.
19. Y. Feng, G.F. Teng, A.X. Wang, and Y.M. Yao, Chaotic inertia weight in particle swarm
optimization, Proceedings of the International Conference on Innovative Computing,
Information and Control, pp. 475–481, IEEE, 2008.
20. D.B. Chen and C.X. Zhao, Particle swarm optimization with adaptive population size and
its application, Applied Soft Computing, pp. 39–48, 2009.
21. Y.S. Jin, K. Joshua, H.M. Lu, Y.Z. Liang, and B.K. Douglas, The landscape adaptive
particle swarm optimizer, Applied Soft Computing, 8, pp. 295–304, 2008.
22. A. Nickabadi, M.M. Ebadzadeh, and R. Safabakhsh, A novel particle swarm optimization
algorithm with adaptive inertia weight, Applied Soft Computing, 11, pp. 3658–3670, 2011.