

Chapter 5
Particle Swarm Optimization Based Route Planning
In this chapter, an overview of particle swarm optimization (PSO) is given. PSO is a population-based optimization technique that has been applied to a variety of combinatorial optimization problems. This chapter presents research that uses simulated particle swarm optimization for route planning problems.
5.1 Introduction
The simulated niche based particle swarm optimization (SN-PSO) algorithm is presented, which can be applied to single route optimization or multiple route optimization problems with promising results. SN-PSO is also tested in static and dynamic environments by keeping the goal stationary for the static environment and changing the goal during execution for the dynamic environment. A niche based particle swarm optimization is presented for the generation of
multiple alternate routes with different environment configurations. Grid maps with different
sizes have been used for testing of the system, and the algorithm proved to be scalable and
robust. The traditional route optimization techniques focus on good solutions only and do not
exploit the solution space completely. The efficiency of the SN-PSO is tested in a mine field
simulation with different environment configurations. This chapter presents two categories of
particle swarm optimization algorithms, i.e., simple particle swarm optimization and niche based particle swarm optimization. The simple particle swarm optimization algorithm is used for the generation of a single route, and niche based particle swarm optimization is used for the generation
of multiple routes. The mine field simulation is selected as the test bed for these two algorithms,
and detailed experimentation and results are presented in this chapter. The resultant simulated niche based particle swarm optimization (SN-PSO) is explained and further optimized by adjusting the parameter settings. Another module for generated-route optimization is added in the simulation for further repairing of routes by re-aligning the grid cells to form smooth routes.
5.2 Basic Particle Swarm Optimization Method
Particle swarm optimization [14] is a swarm based algorithm that has been successfully
used for optimization problems. The basic motivation behind PSO is the social learning of living
societies. The members of the society improve their performance by social interaction and by
following their best performing member. The members can have global best position and local
best position. The concept of particles has been derived from bird flocking. The best performing
members gradually attain best positions with velocities in the solution space. The solution space
consists of particles, and each particle represents one solution. Each particle has a position and a
velocity. A number of predefined particles are flown in a hypothetical solution space with each
one having a certain position and velocity. Each particle keeps track of its previous best position
as a local best position, which represents simple nostalgia. The best of the best positions is considered the global best and is treated as the leader of the swarm. Each particle updates its position and velocity with respect to the global best as well as the local best position in the solution space. The convergence rate of PSO is faster than that of evolutionary algorithms due to its simple update equations and the absence of genetic operators. Each particle moves in a hypothetical space with a certain velocity. The particle swarm optimization (PSO) algorithm was developed by Eberhart and Kennedy [14] in 1995 as an optimization technique inspired by the flocking of birds. It is simple, easy to implement, has few parameters, and has applications in many areas. It has also been applied to many engineering optimization problems. It simulates the social behavior of a bird flock. This simulation has been analyzed to incorporate nearest neighbor velocity matching, eliminate ancillary variables, and incorporate multidimensional search and acceleration by distance [14].
PSO was first applied to real-valued problems, but it has since been extended to cover binary and discrete problems as well. PSO research can be categorized by algorithm, topology, parameters, hybridization with other evolutionary techniques, and applications. It searches a much larger portion of the problem space than traditional methods. PSO has been used in evolving neural networks, i.e., training neural networks using particle swarms. It has been successfully applied for tracking dynamic systems and tackling multi-objective optimization and constrained optimization problems. PSO mimics a natural phenomenon and has proved to be a successful optimization technique.
The attributes of any particle in the swarm are its current position, given by an n-dimensional vector, its current velocity, and its fitness. The system is initialized with a population of random solutions, i.e., particles. Each particle is assigned a randomized velocity. These particles are then flown through the hyperspace of potential solutions. Each particle keeps track of the coordinates in the hyperspace for which it has achieved the best fitness so far. The particle having the best of the best values is the leader. At each time step, the velocity of each particle is changed as a function of its local best and the global best positions. This change in velocity is weighted by a random term and can be considered as acceleration. A new position in the solution space is calculated for each particle by adding the new velocity value to each component of the particle's position vector. Conceptually,
the local best resembles autobiographical memory, as each individual remembers its own
experience, and the velocity adjustment associated with the local best has been called simple
nostalgia in that the individual tends to return to the place that most satisfied it in the past. On
the other hand, global best is conceptually similar to publicized knowledge, or a group norm or
standard, which individuals seek to attain.
Consider a swarm of p particles, with each particle's position representing a possible solution point in the design problem space D. For each particle i, the position x_i is updated in the following manner:

    x_i^{k+1} = x_i^k + v_i^{k+1}                                                      (5.1)
where v_i^{k+1} is the pseudo-velocity, calculated as:

    v_i^{k+1} = w_k v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k)          (5.2)
The subscript k indicates a pseudo-time increment. The new position depends on the previous position x_i^k plus three other factors:

    x_i^{k+1} = x_i^k + w_k v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k)  (5.3)
1. w_k v_i^k is the weighted current velocity.

2. c_1 r_1 (p_i^k - x_i^k) is the weighted deviation from the self-best position.

3. c_2 r_2 (p_g^k - x_i^k) is the weighted deviation from the global-best position.


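As an illustration of Eqs. (5.1)-(5.3), the following Python sketch updates one particle. It is an illustrative sketch rather than the thesis implementation; the function name, the inertia value of 0.7, and the use of scalar random numbers r1 and r2 are assumptions made here.

    import random

    def pso_update(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0):
        # One velocity and position update for a single particle (Eqs. 5.1-5.3).
        # x, v, p_best, g_best are lists of equal length (one entry per dimension).
        r1, r2 = random.random(), random.random()
        new_v = [w * v[d]
                 + c1 * r1 * (p_best[d] - x[d])   # weighted deviation from the self-best position
                 + c2 * r2 * (g_best[d] - x[d])   # weighted deviation from the global-best position
                 for d in range(len(x))]
        new_x = [x[d] + new_v[d] for d in range(len(x))]
        return new_x, new_v

    # Example: a two-dimensional particle pulled towards its own best and the swarm's best.
    new_x, new_v = pso_update([2.0, 3.0], [0.5, -0.2], [4.0, 4.0], [6.0, 5.0])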
5.3 Parameter Setting for Particle Swarm Optimization
PSO requires suitable parameter values to give its best performance on different types of applications, so the parameter values must be selected carefully. The parameters are pbest (p_id), nbest (p_nd), and gbest (p_gd), the learning factors c_1 and c_2, the inertia weight w, and the constriction factor χ [18].
pbest
Pbest (p_id) is the best position the particle has attained so far and can be considered as the particle's memory; one memory slot is allocated to each particle [36]. The best location does not necessarily always depend on the value of the fitness function. To adapt to different problems, many constraints can be applied to the definition of the best location. In certain non-linear constrained optimization problems, the particles remember the positions in feasible space and disregard infeasible solutions. This simple alteration successfully locates the optimum solution for a series of benchmark problems. In some multi-objective optimization problems, the best positions are determined by a concept called Pareto dominance [36], i.e., if solution A is not worse than solution B in every objective dimension and is better than solution B in at least one objective dimension, then solution B is dominated by solution A. In some techniques, a memory reset mechanism is adopted, i.e., in dynamic environments, a particle's pbest is reset to the current value if the environment changes.
nbest and gbest
The best position that the neighbors of a particle have achieved so far is nbest (p_nd), and gbest (p_gd) is the best of the nbest values. The neighborhood of a particle is the social environment encountered by it. There are two phases for the selection of the nbest. In the first phase, the neighborhood is determined, and in the second phase the nbest is selected. Neighborhoods do not change during the experimental run and are defined over topological neighbors. The number of neighbors, or the size of the neighborhood, affects the convergence speed of the algorithm. The larger the size of the neighborhood, the faster the observed convergence rate of the particles. Premature convergence of the particles is prevented by keeping the size of the neighborhood small. The value for nbest is usually selected by comparing fitness values among neighbors. In some cases, the ratio of the fitness and the distance from other particles is used to determine nbest.
Learning Factors
The learning factors are the constants c_1 and c_2. They represent the weighting of the stochastic acceleration terms that pull each particle towards the pbest and nbest positions. The amount of tension in the system can be changed by adjusting these constants. Low values allow particles to roam far from the target regions before being pulled back, while high values result in abrupt movement towards, or past, the target regions. The cognitive parameter represents the tendency of individuals to duplicate behaviors that have proved successful in the past, and the social parameter represents the tendency to follow the successes of others. Normally, c_1 and c_2 are taken as 2.0, so that the search covers all surrounding regions centered at the pbest and nbest positions. If the learning factors are the same, equal importance is given to social searching and cognitive searching [36].
Inertia Weight
The convergence behavior of PSO depends on the inertia weight w. The maximum
velocity Vmax is a constraint that controls the global exploration ability of a particle swarm. A
larger value of Vmax facilitates global exploration, while a smaller value for Vmax encourages
local exploration. The concept of inertia weight has been developed to have a better control over
exploration and exploitation. An initial value of around 1.2 and a gradual decline towards 0 can
be considered a good choice for w.
Constriction Factor
The constriction factor χ controls the magnitude of the velocities, in a way similar to the Vmax parameter, resulting in a variant of PSO different from the one with inertia weight. It has been suggested that the use of χ may be necessary to ensure convergence of PSO [36].
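The following short Python fragment collects the parameter choices discussed above into one place. The dictionary keys and the linear decay schedule are illustrative assumptions of this sketch, while the numeric values (20-40 particles, c1 = c2 = 2.0, w falling from about 1.2 towards 0, Vmax as a fraction of the variable range) come from this section and Section 5.7.1.

    def inertia_weight(k, k_max, w_start=1.2, w_end=0.0):
        # Inertia weight decaying linearly from about 1.2 towards 0 over the run.
        return w_start - (w_start - w_end) * k / float(k_max)

    # Illustrative parameter set (names are ours, not from the thesis code).
    pso_params = {
        "num_particles": 20,      # typical range is 20-40 particles
        "c1": 2.0,                # cognitive learning factor
        "c2": 2.0,                # social learning factor
        "v_max_fraction": 0.5,    # v_max = 0.5 * (upper bound - lower bound), as in Section 5.7.1
        "k_max": 100,             # maximum number of iterations (one possible stopping condition)
    }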
5.4 Comparison with Evolutionary Computation Techniques
During initialization, there is a population made up of a certain number of individuals, and each individual in the population is initially given a random solution. An evolutionary computation technique has a mechanism for searching for a better solution in the problem space and producing a better new generation, where the production of the new generation is based on the previous generation. Based on these characteristics, PSO can be categorized as an evolutionary computation technique.
The PSO can easily be implemented and is computationally inexpensive, since it has low requirements for memory and processing speed. It does not require gradient information of the objective function under consideration, but only its values, combined with primitive mathematical operators.
PSO does not have a direct recombination operator. However, the stochastic acceleration
of a particle towards its previous best position as well as towards the best particle of the swarm
resembles the recombination procedure of evolutionary computation. The information exchange
takes place only between the particle's own experience and the experience of the best particle in
the swarm, instead of being carried forward from parents selected based on their fitness to
descendants as in a genetic algorithm. PSO does not use the survival of the fittest concept and
does not utilize a direct selection function. Thus, the particles with lower fitness can survive
during the optimization and potentially visit any point of the search space.
5.5 Simulated Particle Swarm System
The simulated particle swarm system (SPSS) presents an optimization technique for route planning using a simulated particle swarm. SPSS is used for dynamic online route planning and route optimization and has proved to be an effective technique. It effectively deals with route planning in dynamic and unknown environments cluttered with obstacles and objects. The SPSS is proposed using a modified particle swarm optimization algorithm for dealing with online route planning. It generates and optimizes routes in complex and large environments with constraints. The traditional route optimization techniques focus on good solutions only and do not exploit the solution space completely. The SPSS has proved to be an efficient technique for providing safe, short, and
feasible routes under dynamic constraints. The efficiency of the SPSS is tested in a mine field
simulation with different environment configurations.
Route planning is a long-standing problem in artificial intelligence research. It has been addressed for a number of years and is still considered a challenging area that requires new techniques [2], [4], [31], [33], [38], [39], [43]. A simulated particle swarm is implemented as a simulation, and its particles are computational entities. It uses a modified PSO algorithm for generating a route from the start to the goal state. The SPSS is implemented for both single and multiple routes. It has proved to be an efficient method for combinatorial optimization problems [33]. A mine field simulation is used for its implementation
and results are obtained for comparative analysis.
5.5.1 Simple Simulated Particle Swarm System
The simulated particle swarm system is based on the particle swarm optimization
technique. Each simulated particle represents the solution in the problem solution space. The
population is initialized with the randomly generated particles, and each particle represents a
prospective route between the start and the goal state. The attributes of any particle in the swarm
are its current position, given by an n-dimensional vector, its current velocity, and its fitness. The system is initialized with a population of random solutions, i.e., particles. Each particle is assigned a randomized velocity. These particles are then flown through the hyperspace of potential solutions. Each particle keeps track of the coordinates in the hyperspace for which it has achieved the best fitness so far. The particle having the best of the best values is the leader. At each time step, the velocity of each particle is changed as a function of its local best and the global best positions. This change in velocity is weighted by a random term and can be considered as acceleration. A new position in the solution space is calculated for each particle by adding the new velocity value to each component of the particle's position
vector. Conceptually, the local best resembles autobiographical memory, as each individual
remembers its own experience and the velocity adjustment associated with the local best has
been called simple nostalgia in that the individual tends to return to the place that most
satisfied it in the past [13].
5.5.2 Niche Based Simulated Particle Swarm System
The niche based simulated particle swarm system is based on some form of implicit or
explicit grouping of particles into sub-swarms. A division of the initial population into further sub-populations that follow the same objective as the main population is called a sub-swarm. Sub-swarm PSO can be further divided into two categories: cooperative and competitive. When some form of cooperation exists between sub-swarms, it is called cooperative PSO [3], and when particles are in direct competition with each other, it is called competitive PSO.
If an optimization algorithm has the ability to find multiple solutions, the user can select
the best one from the set of found solutions. There is then a larger probability that a global
optimum, or at least a very good local optimum, will be found. It is also the case that many
problems have more than one global optimum that needs to be located [2], [9]. Many real world
problems require all solutions, local and global, to be located.
5.6 Simulated Niche Based Particle Swarm Optimization
In single route optimization, the system tries to find a feasible and optimal solution for an
entity to move from a start node to the goal node. In multiple routes optimization, more than one
feasible and optimal route is searched from all possible areas of the environment. It is a difficult
job to find a safe, short, and feasible route in an environment with obstacles and optimization
constraints. This research applies a particle swarm optimization algorithm for finding a single feasible route and a niche based particle swarm optimization algorithm for finding multiple
feasible routes. This research uses grid based environment representation with different
environment configurations for experimentation and testing.
5.7 Single-route Optimization
A simple PSO has been used for the single route optimization problem. The solution
space consists of particles that can be considered as an initial population. Each particle represents
a complete route from the start state to the goal state. The routes are categorized into feasible and
infeasible routes based on the parameters like clearness, smoothness, distance, and cost. A route
list is maintained for generated routes. The first solution in the route list represents the most feasible route, and similarly the second solution represents the second most feasible solution in the list. There is a possibility that we will end up with fewer feasible solutions than the value specified in the solution count. The route list shows only the feasible solutions.
5.7.1 Simple PSO Algorithm
Let p be the total number of particles in the swarm. The best ever fitness value of particle i at coordinates p_i^k is denoted by f_i^best, and the best ever fitness value of the overall swarm at coordinates p_g^k by f_g^best. At the initialization time step k = 0, the particle velocities v_i^0 are initialized to random values within the limits 0 <= v_i^0 <= v_max^0. The vector v_max^0 is calculated as a fraction of the distance between the upper and lower bounds, v_max^0 = γ(x_UB - x_LB) with γ = 0.5.

Initialize
1. Set the constants k_max, c_1, c_2, and w_0.
2. Randomly initialize the particle positions x_i^0 in D ⊂ R^n for i = 1, ..., p.
3. Randomly initialize the particle velocities 0 <= v_i^0 <= v_max^0 for i = 1, ..., p.
4. Set k = 1.

Optimize
5. Evaluate f_i^k using the design space coordinates x_i^k.
6. If f_i^k <= f_i^best, then set f_i^best = f_i^k and p_i = x_i^k.
7. If f_i^k <= f_g^best, then set f_g^best = f_i^k and p_g = x_i^k.
8. If the stopping condition is satisfied, go to Results and Terminate.
9. Update the particle velocity vector v_i^{k+1}.
10. Update the particle position vector x_i^{k+1}.
11. Increment i. If i > p, then increment k and set i = 1.
12. Go to step 5.

Results and Terminate
5.7.2 Particle Encoding
The particle represents a set of nodes as a path between two points on the map. Each
particle consists of n nodes as shown in Figure 5.1, where n is user defined. The first node
represents the start point and the last node represents the goal point, while the intermediate nodes
are randomly selected from the map.
node(1) node(2) node(3) node(4) ...... .. . .. node(n)
Figure 5.1. Particle encoding.

In Figure 5.2, an example of a particle is shown with (2, 3) as the start node and (36, 38) as the goal node with the value of n equal to 5. The intermediate nodes (12, 22), (9, 4), and (22, 23) are randomly generated.

(2, 3) (12, 22) (9, 4) (22, 23) (36, 38)

Figure 5.2. Random initialization of a particle with (2, 3) as start node and (36, 38) as goal node.
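A minimal Python sketch of this encoding is given below, assuming a square grid and uniformly random intermediate nodes; the function name and the grid_size parameter are illustrative and not taken from the thesis code.

    import random

    def random_particle(start, goal, n, grid_size):
        # Encode a particle as a list of n nodes: the start node, n - 2 random
        # intermediate nodes, and the goal node (Figures 5.1 and 5.2).
        intermediate = [(random.randrange(grid_size), random.randrange(grid_size))
                        for _ in range(n - 2)]
        return [start] + intermediate + [goal]

    # Example matching Figure 5.2: n = 5, start (2, 3), goal (36, 38) on a 40x40 map.
    particle = random_particle((2, 3), (36, 38), n=5, grid_size=40)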
5.7.3 Fitness Function
For the evaluation of a route, the construction of a complete path between the nodes is
required. The A* algorithm as a shortest distance algorithm has been used for the construction of
intermediate paths between the nodes. The nodes are generated randomly between the start point
and the goal point and arranged in the same sequence as generated. After finding intermediate
paths between all the nodes, a complete route is generated that can be evaluated by the fitness
function. The fitness function of a particle p accommodates three optimization goals to minimize
cost and is a linear combination of the distance measure, the number of obstacles, and the number of mines, as shown in equation (5.4). Each obstacle and mine is considered as a blocked cell in the grid.

    fitness f(p) = w_d · distance(p) + w_o · obstacles(p) + w_m · mines(p)             (5.4)

Here the constants w_d, w_o, and w_m represent the weights on the total cost of the path's length (i.e., distance), the presence of obstacles, and the presence of mines, respectively.
The feasible route is considered as mine-free and obstacle-free, while an infeasible route
has mines and obstacles. The feasible route with minimum distance measure is considered as the
global best. A path list is maintained for feasible routes and can be used for visualization of
routes on the map based on user selection. The best of the best path will always remain on top of
the path list. The fitness function evaluates each particle and assigns a particular value as fitness
and divides the result into two components, i.e., feasible and infeasible routes. The feasible and
infeasible routes are further ranked based on the values of the fitness function. The infeasible
routes are those routes that have at least one mine or at least one obstacle in the path. If there is
an obstacle between two nodes, the intermediate cell or cells will be considered as blocked. The
obstacles are counted as the number of blocked cells in the particle, and similarly each mine is
evaluated as an individual cell and considered as a blocked cell.
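A hedged Python sketch of the fitness evaluation in Eq. (5.4) follows. The route is assumed to be the full sequence of grid cells obtained by joining the particle's nodes with A* sub-paths; the weight values and the grid representation (a dictionary from cell to 'free', 'obstacle', or 'mine') are illustrative assumptions, not the thesis implementation.

    def route_fitness(route_cells, grid, w_d=1.0, w_o=10.0, w_m=10.0):
        # Fitness of a complete route (Eq. 5.4): a weighted linear combination of the
        # distance measure, the number of obstacle cells, and the number of mine cells.
        distance = len(route_cells)
        obstacles = sum(1 for cell in route_cells if grid.get(cell) == "obstacle")
        mines = sum(1 for cell in route_cells if grid.get(cell) == "mine")
        fitness = w_d * distance + w_o * obstacles + w_m * mines
        # A route is feasible only if it crosses no blocked (obstacle or mine) cells.
        feasible = (obstacles == 0 and mines == 0)
        return fitness, feasible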
5.8 Multiple-route Optimization
For multiple route optimization, a niche based PSO has been used. It produces sub-swarms and behaves as a self-organization of particles. Each sub-swarm acts independently and generates solutions in search of the best solution. There is no exchange of information between different sub-swarms. Each sub-swarm maintains a niche, independent of other sub-swarms, and acts as a stable swarm. Niche based PSO starts as one main swarm and gradually generates sub-swarms for multiple solutions. As particles converge, a sub-swarm is generated by grouping particles that are close to a potential solution. This group of particles is then removed from the main swarm, and the process continues within the remaining swarm for the refinement of solutions. The main swarm gradually decomposes into sub-swarms. When the sub-swarms no longer improve the solutions they represent, the algorithm has reached the convergence point. The global best position from each sub-swarm is taken as an optimum solution.
5.8.1 Niche based PSO Algorithm
1. Create and initialize an n_x-dimensional main swarm, S.
2. Train the main swarm, S, for one iteration using the cognition-only model.
3. Update the fitness of each main swarm particle, S.x_i.
4. For each sub-swarm S_k do:
5.    Train the sub-swarm particles, S_k.x_i, using a full PSO model.
6.    Update each particle's fitness.
7.    Update the swarm radius, S_k.R.
8. End for.
9. If possible, merge sub-swarms.
10. Allow sub-swarms to absorb particles from the main swarm that have moved into the sub-swarm.
11. If possible, create new sub-swarms.
12. Repeat from step 2 until the stopping condition is true.
13. Return S_k.ŷ (the best position of each sub-swarm S_k) as a solution.
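Step 2 of the listing trains the main swarm with the cognition-only model, i.e., the social (global-best) term of Eq. (5.2) is dropped so that each particle is attracted only to its own best position. A small Python sketch of that update is shown below; the function name and parameter values are assumptions of this sketch, not the thesis code.

    import random

    def cognition_only_update(x, v, p_best, w=0.7, c1=2.0):
        # Cognition-only velocity and position update: no global-best (social) term.
        r1 = random.random()
        new_v = [w * v[d] + c1 * r1 * (p_best[d] - x[d]) for d in range(len(x))]
        new_x = [x[d] + new_v[d] for d in range(len(x))]
        return new_x, new_v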
5.8.2 Explanation for Niche based PSO Algorithm
The Niche based PSO [2] was developed to find multiple solutions to general multi-
modal problems. The basic operating principle of Niche based PSO is the self-organization of
particles into independent sub-swarms. Each sub-swarm locates and maintains a niche.
Information exchange is only within the boundaries of a sub-swarm. No information is
exchanged between sub-swarms. This independence among sub-swarms allows sub-swarms to
maintain niches. Each sub-swarm functions as a stable, individual swarm, evolving on its own,
independent of individuals in other swarms.
5.9 Experimentation
The main objective of experimentation is to implement and test the simulated niche based
particle swarm optimization method for performance. The experimentation uses simple PSO for
single route generation and SN-PSO for multiple route generation. It is compared with the
simulated ant agent system as well as with the niche particle swarm optimization technique. The
reliability, scalability and robustness are tested and found to be appropriate for route planning
systems. The experimentation phase has been divided into two parts: single route optimization
and multiple route optimization. Two different environments have been used for experimentation, i.e., a static environment and a dynamic environment. We used four environment configurations of different sizes (20×20, 40×40, 60×60, and 80×80 grid maps), as shown in Figure 5.3.
Different percentages of obstacles and mines are used for each map. In order to
implement static experiments, the starting and ending points are fixed. The parameters are
initialized for performing experiments. The online planning phase is implemented by randomly
changing the goal during the experimental run. The population size, cells count and other
parameters for the experiment are initialized and experiments are performed for analysis.
Figure 5.3. SN-PSO: maps of different sizes, obstacle ratios, and complexity (panels: (1) 20×20, (2) 40×40, (3) 60×60, (4) 80×80).
The first experiment deals with a static environment using simple PSO as shown in Table
5.1 with a population size of 20. To simulate complex environments, a high obstacle ratio and
randomly generated maps are used. The values of c_1 and c_2 are the two main parameters that affect the weighted deviation from the self-best position and the global-best position. The SN-PSO is capable of tracking a moving target and implements the dynamic environment as shown in Table 5.3. The SN-PSO tracks the changing goal and is capable of finding the shortest
route to the changed goal. A drop down menu maintains the generated routes. The niche based
PSO generates feasible routes for both static and dynamic environments. A different niche count
is used for experimentation and results for each experiment are reported for comparative
analysis.
5.9.1 Parameter Settings for Simple PSO
The typical range is 20-40 particles, depending on the problem to be optimized. We have
used a population size of 20 particles. For this problem, the particles have two dimensions. Different ranges can be specified for each dimension of a particle; we used one range for each dimension. The maximum change one particle can take during one iteration
can be determined by Vmax. We have tested different values for Vmax. The constants c_1 and c_2 are usually set equal to 2. The stopping condition can be the maximum number of iterations the PS (Particle
Swarm) executes and/or the maximum fitness achieved.
5.9.2 Parameter Settings for Niche based PSO Algorithm
If the population size is 20 and niche count is 4, then we have four sub-swarms each with
a population of 5 particles. Each sub-swarm independently produces an optimized route. So for a
population size of 20 with niche count 4, we can have 4 optimized routes. The population is
initialized randomly with the selection of simple PSO as well as niche based PSO. The heuristic
function used for the static as well as the dynamic environment is the Manhattan distance [36]
and the Euclidean distance [36]. When multiple constraints are enforced, it is difficult to find a
solution in a dynamic environment. The SN-PSO implements moving target search [20], [21] in
a dynamic environment. The simulated niche based particle swarm optimization has the
capability to locate a moving target. Swarms and sub-swarms are used for dealing with moving targets. After the map is loaded, a moving target is inserted by using a given option. The SN-PSO reconfigures the planning phase for the new goal and is tested for maps with different
configurations and sizes.
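The following Python sketch illustrates the niche arrangement described above (a population of 20 with a niche count of 4 giving four sub-swarms of 5 particles each) together with the two distance heuristics mentioned. The random partitioning and the function names are assumptions of this sketch rather than the thesis implementation.

    import math
    import random

    def manhattan(a, b):
        # Manhattan distance heuristic between two grid cells.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def euclidean(a, b):
        # Euclidean distance heuristic between two grid cells.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def split_into_subswarms(particles, niche_count):
        # Partition the population into independent sub-swarms; each sub-swarm later
        # produces its own optimized route (e.g., 20 particles with a niche count of 4
        # gives four sub-swarms of 5 particles).
        random.shuffle(particles)
        size = len(particles) // niche_count
        return [particles[i * size:(i + 1) * size] for i in range(niche_count)]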
5.10 Results
The experiments are implemented for static and dynamic environments. Similarly, the
experiments for single route optimization and multiple route optimizations are reported
separately. The simple PSO has been used for static route planning and compared with classical
optimization techniques along with SN-PSO as shown in Table 5.1. The SN-PSO performs
consistently with variable map sizes. The optimum results require an intermediate value for the number of niches. Each experiment has been conducted 30 times, and the average value of the
number of cells traversed is reported. The maps are generated randomly with variable number of
mines, obstacle ratio and complexity. The red color cell symbolizes the mine, the blue color cell
represents the obstacle and the white cell is the empty cell that can be traversed. The average
number of cells traversed is reported in Table 5.1 for both SN-PSO and ACO, and bold face
numbers represent the best solutions.
Table 5.1 SN-PSO comparison with classical techniques for static route planning (number of particles = 20; comparison of static route planning optimization strategies)

Case                       1       2       3       4       5       6       7       8
Map Size                   20×20   20×20   40×40   40×40   60×60   60×60   80×80   80×80
Initial State              (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)
Goal State                 (3,18)  (6,15)  (1,15)  (1,14)  (0,18)  (3,11)  (2,15)  (6,18)
Manhattan Distance         21      21      16      15      18      14      17      24
Cells traversed - A*       17      21      15      14      18      11      15      18
Cells traversed - SPSO     20      25      17      16      23      12      16      19
Cells traversed - ACO      20      24      17      16      23      12      16      21
Cells traversed - SN-PSO   20      25      16      15      20      11      15      18
Table 5.2 Effect of niche size for static and multiple route planning using the simulated niche based particle swarm system

20×20 map, start point (0,0), goal point (12,10); cells count by A* = 18
Number of particles:    10 10 10 20 20 20 20 30 30 30 30 40 40 40 40 100 100 100 100
Number of niches:        2  5 10  4  5 10 20  2  3  5 10  2  4  8 10   2   4  10  20
Cells count by SN-PSO:  28 19 26 19 20 18 24 20 19 20 22 24 20 22 21  20  19  18  19

40×40 map, start point (0,0), goal point (20,25); cells count by A* = 27
Number of particles:    10 10 10 20 20 20 20 30 30 30 30 40 40 40 40 100 100 100 100
Number of niches:        2  5 10  4  5 10 20  2  3  5 10  2  4  8 10   2   4  10  20
Cells count by SN-PSO:  40 29 36 46 27 28 30 29 28 29 31 29 27 30 31  28  29  27  29

60×60 map, start point (0,0), goal point (20,35); cells count by A* = 38
Number of particles:    10 10 10 20 20 20 20 30 30 30 30 40 40 40 40 100 100 100 100
Number of niches:        2  5 10  4  5 10 20  2  3  5 10  2  4  8 10   2   4  10  20
Cells count by SN-PSO:  46 53 65 44 45 39 41 51 75 43 45 47 48 46 63  46  41  39  45

80×80 map, start point (0,0), goal point (30,70); cells count by A* = 70
Number of particles:    10 10 10 20 20 20 20 30 30 30 30 40 40 40 40 100 100 100 100
Number of niches:        2  5 10  4  5 10 20  2  3  5 10  2  4  8 10   2   4  10  20
Cells count by SN-PSO: 136 90 79 87 82 70 70 87 70 70 82 70 70 70 70  70  70  70  70
The graphs in Figures 5.4-5.7 show the cells traversed using the simulated niche based particle swarm system for different particle counts and map sizes. It is observed in each of the tables
that the route becomes shorter with a moderate value of number of niches. The performance
remains consistent throughout the simulation run. The experimental results for simple to
complex environments have shown the scalability and robustness of the SN-PSO.
Figure 5.4. Route planning using SN-PSO for the 20×20 map.

Figure 5.5. Route planning using SN-PSO for the 40×40 map.

Figure 5.6. Route planning using SN-PSO for the 60×60 map.

Figure 5.7. Route planning using SN-PSO for the 80×80 map.
The SN-PSO has been applied to a dynamic environment for presenting moving target search, as shown in Table 5.3. After selecting the number of times the goal state is to change, the SN-PSO runs to track the first goal. After the first goal has been tracked with the shortest route, the goal changes randomly to a new configuration. The next goal appears within 10% of the area of the previous goal. SN-PSO is capable of acquiring the moving goal with the shortest route. The A* algorithm is considered to be an optimal search algorithm and is compared with SN-PSO as shown in Table 5.3, and the normalized error gives a more concrete comparison between the A* and SN-PSO algorithms, as shown in Table 5.4.
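The normalized error reported in Table 5.4 matches the relative difference in cells traversed, (SN-PSO - A*) / A*; the one-line Python helper below reproduces the first entry of the table (the tabulated values appear to be truncated to three decimal places).

    def normalized_error(cells_snpso, cells_astar):
        # Normalized error of SN-PSO with respect to the A* result.
        return (cells_snpso - cells_astar) / float(cells_astar)

    # First row of Table 5.4: SN-PSO traverses 18 cells, A* traverses 17.
    print(normalized_error(18, 17))   # 0.0588..., reported as 0.058 in the table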
Table 5.3 Dynamic route planning using the simulated niche based particle swarm system (number of particles = 20, number of niches = 4)

Case                        1       2       3       4       5       6       7       8
Map Size                    20×20   20×20   40×40   40×40   60×60   60×60   80×80   80×80
Initial State               (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)
Goal 1                      (3,18)  (6,15)  (1,15)  (1,14)  (0,18)  (3,11)  (2,15)  (6,18)
Manhattan Distance          21      21      16      15      18      14      17      24
Cells traversed - SN-PSO    18      23      15      15      20      11      15      19
Cells traversed - A*        17      21      15      14      18      11      15      18
Goal 2 (random)             (4,15)  (4,16)  (2,17)  (4,13)  (1,17)  (2,10)  (4,13)  (5,16)
Distance from Start         19      20      19      17      18      12      17      21
Distance from Goal 1        4       3       3       4       2       2       4       3
Cells traversed - SN-PSO    21      19      18      15      18      10      13      18
Cells traversed - A*        21      17      17      13      18      10      13      17
Goal 3 (random)             (1,14)  (3,15)  (4,15)  (2,10)  (2,18)  (1,7)   (3,11)  (8,13)
Distance from Start         15      18      19      12      20      8       14      21
Distance from Goal 2        4       2       4       5       2       4       3       6
Cells traversed - SN-PSO    15      17      20      12      18      7       12      17
Cells traversed - A*        14      17      16      10      16      7       11      13
Goal 4 (random)             (6,17)  (7,13)  (1,12)  (1,9)   (4,17)  (3,4)   (2,9)   (5,11)
Distance from Start         23      20      13      10      21      7       11      16
Distance from Goal 3        8       2       6       2       3       5       3       5
Cells traversed - SN-PSO    21      17      14      9       18      6       12      12
Cells traversed - A*        20      15      12      9       17      4       9       11
Goal 5 (random)             (3,16)  (4,11)  (2,9)   (2,7)   (5,15)  (6,1)   (1,8)   (3,9)
Distance from Start         19      15      11      9       20      7       9       12
Distance from Goal 4        4       5       4       3       3       6       2       4
Cells traversed - SN-PSO    19      27      9       7       22      6       9       14
Cells traversed - A*        18      24      9       7       17      6       8       9
Table 5.4 Normalized error with the A* algorithm for dynamic route planning using the simulated niche based particle swarm system

Map Size   Goal     SN-PSO   A*   Error
20×20      (3,18)   18       17   0.058
           (4,15)   21       21   0.000
           (1,14)   15       14   0.071
           (6,17)   21       20   0.050
           (3,16)   19       18   0.055
           (6,15)   23       21   0.095
           (4,16)   19       17   0.117
           (3,15)   17       17   0.000
           (7,13)   17       15   0.133
           (4,11)   27       24   0.125
40×40      (1,15)   15       15   0.000
           (2,17)   18       17   0.058
           (4,15)   20       16   0.250
           (1,12)   14       12   0.166
           (2,9)    9        9    0.000
           (1,14)   15       14   0.071
           (4,13)   15       13   0.153
           (2,10)   12       10   0.200
           (1,9)    9        9    0.000
           (2,7)    7        7    0.000
60×60      (0,18)   20       18   0.111
           (1,17)   18       18   0.000
           (2,18)   18       16   0.125
           (4,17)   18       17   0.058
           (5,15)   22       17   0.294
           (3,11)   11       11   0.000
           (2,10)   10       10   0.000
           (1,7)    7        7    0.000
           (3,4)    6        4    0.500
           (6,1)    6        6    0.000
80×80      (2,15)   15       15   0.000
           (4,13)   13       13   0.000
           (3,11)   12       11   0.090
           (2,9)    12       9    0.333
           (1,8)    9        8    0.125
           (6,18)   19       18   0.055
           (5,16)   18       17   0.058
           (8,13)   17       13   0.307
           (5,11)   12       11   0.090
           (3,9)    14       9    0.555
Table 5.5 Multiple route generation using niche based particle swarm optimization (number of particles = 20, number of niches = 4)

Case                     1       2       3       4       5       6       7       8
Map Size                 20×20   20×20   40×40   40×40   60×60   60×60   80×80   80×80
Initial State            (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)   (0,0)
Goal State               (3,18)  (6,15)  (1,15)  (1,14)  (0,18)  (3,11)  (2,15)  (6,18)
Manhattan Distance       21      21      16      15      18      14      17      24
Route 1 using SN-PSO     20      25      17      16      23      12      15      19
Route 2 using SN-PSO     22      26      19      19      27      15      17      21
The SN-PSO is tested for a static environment with multiple route generation, and the
results are comparable with the Manhattan distance as shown in Table 5.5. The performance of
SN-PSO is better for complex and large size maps.
The SN-PSO has been compared with the methodology given in the paper by Y. Hui et al. [19], and the results from SN-PSO are better than those of their method, as shown in Table 5.6. The niche particle swarm optimization technology performs well for less complex environments and gives optimal peaks in each sub-population, while SN-PSO has been tested for complex environments cluttered with obstacles and with different map sizes. The niche particle swarm optimization technology was tested in a single environment size, while SN-PSO has been tested for different environments with different obstacle ratios. The SN-PSO not only provides the optimal peak in each sub-swarm, it also provides the second-best peak, which gives more options for multiple routes. The SN-PSO is applied to the same application used by Y. Hui et al. with the same parameter settings.
Table 5.6 SN-PSO comparison with niche particle swarm optimization technology

                               Generation of feasible route         Generation of approximate optimal route   Total consumed time
Experiment   Sub-population    Iterations  N-PSO(s)  SN-PSO(s)      Iterations  N-PSO(s)  SN-PSO(s)           N-PSO(s)  SN-PSO(s)
1            1                 16          0.804     0.654          51          2.851     2.952               7.944     6.341
1            2                 19          0.851     0.738          55          3.014     2.800
2            2                 22          1.422     1.233          57          3.525     3.325               8.485     5.528
2            3                 28          1.533     1.342          72          3.695     3.230
5.11 Optimization of Route Planning
A separate module for route optimization has been implemented to repair the generated
routes. The concept of a v-edge has been used for the optimization of the route. A v-edge is a combination of diagonal, v-shaped cells that are adjacent to each other but not in a straight line. Their positions can be swapped without extending the distance or the number of cells, and in this way the route can be repaired. The number of v-edges was counted in each generated route, and the route was fixed while keeping the same number of cells. This method does not reduce the number of cells traversed; it only increases the smoothness of the route. By
replacing the v-edge with the adjacent cell, the route becomes smoother and straighter. The route
optimizer compares all the combinations of v-edges along the route and removes the v-edges by replacing them with adjacent diagonal cells while keeping the same number of cells and distance, as shown in Figure 5.8 and Figure 5.9.
The diagonal movement generates v-edges that need to be repaired by using a route
optimizer. The smoothness of the routes depends on the number of v-edges, and the clearness of
the route depends on avoiding obstacles and mines. The clearness of the route has been
incorporated in the fitness function, but the smoothness of the route requires a separate module.
The main objective of this module is to repair the path without altering the original route plans.
For this purpose, each generated route has been compared with the number of cells traversed for
each of the required paths. The resultant selected route will have the same number of cells and
distance measure.
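A hedged Python sketch of one possible v-edge repair pass is given below, assuming a 4-connected grid path. The swap rule (replacing the corner cell of each "v" with the opposite free corner of the same 2x2 square) is our reading of the description above, and the exact rule used in the thesis implementation may differ.

    def smooth_v_edges(path, blocked):
        # One repair pass: wherever three consecutive cells form a corner (a v-edge),
        # try to swap the middle cell for the opposite corner of the same 2x2 square.
        # The number of cells and the path length stay the same; only smoothness changes.
        path = list(path)
        for i in range(1, len(path) - 1):
            (x0, y0), (x1, y1), (x2, y2) = path[i - 1], path[i], path[i + 1]
            if abs(x2 - x0) == 1 and abs(y2 - y0) == 1:        # the three cells span a 2x2 square
                alternative = (x0 + x2 - x1, y0 + y2 - y1)     # the other corner of that square
                if alternative not in blocked and alternative not in path:
                    path[i] = alternative
        return path

    # Example: the corner (0,0) -> (1,0) -> (1,1) can be re-aligned to (0,0) -> (0,1) -> (1,1).
    smoothed = smooth_v_edges([(0, 0), (1, 0), (1, 1), (2, 1)], blocked=set())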
Figure 5.8. Shortest path without optimization.

Figure 5.9. Shortest path with optimization.
Summary
This chapter presents an extension of the particle swarm optimization technique for
optimization of route planning. The simulated niche based particle swarm optimization has been
used successfully for route planning and optimization of routes in static and dynamic
environments. The SN-PSO uses an online planning strategy for route generation and effectively
deals with unknown environments. SN-PSO has been tested with simple to complex environments
and found to be robust, efficient and scalable. The resultant routes have been classified by the
number of v-edges and unwanted curves. The route optimizer repairs the generated routes while
keeping the number of cells and distance the same. The distances of the resultant routes have been compared with the A* algorithm and are found to be close to those of A*. The SN-PSO successfully generates multiple feasible routes and has been implemented
for mine detection and the route optimization problem. The consistent experimental results with
different size maps have shown the scalability and robustness of the system for handling dynamic
environments. The SN-PSO can be further tested for other constraint satisfaction and multi-
objective optimization problems in different application areas.
