Computational Intelligence: Second Edition
Chapter 11: Evolutionary Programming

Contents
- Introduction
- Basic Evolutionary Programming
- Evolutionary Programming Operators
- Strategy Parameters
- Evolutionary Programming Implementations
Compact Overview
- Originated from the research of L.J. Fogel in 1962 on using simulated evolution to develop artificial intelligence
- Differs substantially from GAs and GP, in that evolutionary programming (EP) emphasizes the development of behavioral models and not genetic models
- EP is derived from the simulation of adaptive behavior in evolution
- EP considers phenotypic evolution
- EP iteratively applies two evolutionary operators:
  - Variation, through application of mutation operators
  - Selection
Basic Evolutionary Programming Algorithm: Algorithm 11.1

t = 0, initialize strategy parameters;
Create and initialize the population, C(0), of n_s individuals;
Evaluate the fitness, f(x_i(t)), of each individual;
while stopping condition(s) not true do
    for each individual, x_i(t) ∈ C(t) do
        Create an offspring, x'_i(t), by applying the mutation operator;
        Evaluate the fitness, f(x'_i(t));
        Add x'_i(t) to the set of offspring, C'(t);
    end
    Select the new population, C(t + 1), from C(t) ∪ C'(t);
    t = t + 1;
end
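Algorithm 11.1 can be sketched as a short Python program. This is a minimal illustration, not code from the book: the sphere fitness function, Gaussian mutation with a fixed deviation `sigma`, and elitism selection are assumptions chosen only to make the sketch runnable.

```python
import random

def ep(fitness, n_s, n_x, bounds, sigma, n_gen):
    """Minimal EP loop (Algorithm 11.1): mutation-only variation,
    then selection of C(t+1) from parents and offspring."""
    random.seed(1)                      # fixed seed, for reproducibility
    lo, hi = bounds
    # Create and initialize the population C(0) of n_s individuals
    pop = [[random.uniform(lo, hi) for _ in range(n_x)] for _ in range(n_s)]
    for _ in range(n_gen):
        # Create one offspring x'_i(t) per parent via Gaussian mutation
        offspring = [[xj + random.gauss(0.0, sigma) for xj in x] for x in pop]
        # Select C(t+1) from C(t) ∪ C'(t); elitism: the best n_s survive
        pool = sorted(pop + offspring, key=fitness)   # minimization
        pop = pool[:n_s]
    return pop[0]

# Usage: minimize the sphere function sum(x_j^2)
best = ep(lambda x: sum(v * v for v in x), n_s=20, n_x=5,
          bounds=(-5.0, 5.0), sigma=0.3, n_gen=200)
```

Note that there is no crossover anywhere in the loop: in EP all variation comes from mutation, and selection alone decides which behaviors persist.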
Basic Components
The main components of an EP:
- Initialization
- Evaluation:
  - The fitness function measures the behavioral error of an individual with respect to the environment of that individual
  - It provides an absolute fitness measure of how well the problem is solved
  - Survival in EP is usually based on a relative fitness measure: a score is computed to quantify how well an individual compares with a randomly selected group of competing individuals
  - Individuals that survive to the next generation are selected based on this relative fitness
  - The search process in EP is therefore driven by a relative fitness measure, and not an absolute fitness measure
Basic Components (cont)
The main components of an EP (cont):
- Mutation, as the only source of variation
- Selection:
  - Its main purpose is to select the new population
  - A competitive process in which parents and offspring compete to survive
- Behaviors of individuals are influenced by strategy parameters
Mutation Operators
- Mutation is the only means of introducing variation in an EP population
- In general, mutation is defined as
    x'_ij(t) = x_ij(t) + Δx_ij(t)
  where x'_i(t) is the offspring and Δx_ij(t) is the mutational step size
- Mutational step size:
  - Noise sampled from some probability distribution
  - The deviation of the noise is determined by a strategy parameter, σ_ij
- Generally, the step size is calculated as
    Δx_ij(t) = Φ(σ_ij(t)) η_ij(t)
  where Φ : ℝ → ℝ scales the contribution of the noise, η_ij(t)
Categories of EP Algorithms
Based on the characteristics of the scaling function, Φ, the following categories of EP algorithms are obtained:
- non-adaptive EP, in which case Φ(σ) = σ. In other words, the deviations in step sizes remain static.
- dynamic EP, where the deviations in step sizes change over time using some deterministic function, Φ, usually a function of the fitness of individuals.
- self-adaptive EP, in which case deviations in step sizes change dynamically. The best values for σ_ij are learned in parallel with the decision variables, x_ij.
Strategy Parameters
- The deviations, σ_ij, are referred to as strategy parameters
- Each individual has its own strategy parameters
- An individual is represented as the tuple
    χ_i(t) = (x_i(t), σ_i(t))
- Correlation coefficients between components of the individual have also been used as strategy parameters
Mutation Distributions: Uniform
- η_ij(t) ~ U(x_min,j, x_max,j)
- x_min and x_max provide lower and upper bounds for the values of η_ij
- Note that E[η_ij] = 0 is required, to prevent any bias induced by the noise
- Alternative uniform mutation:
    Δx_ij(t) = U(0, 1)(ŷ_j(t) − x_ij(t))
  where ŷ(t) is the best individual, directing all individuals to make random movements towards the best individual
Mutation Distributions: Gaussian
- η_ij(t) ~ N(0, σ_ij(t))
- The Gaussian density function:
    f_G(x) = (1/(σ√(2π))) e^(−x²/(2σ²))
  where σ is the deviation of the distribution
Mutation Distributions: Cauchy
- η_ij(t) ~ C(0, ν)
- ν is the scale parameter
- Cauchy density function, centered at the origin:
    f_C(x) = (1/π) ν/(ν² + x²),  for ν > 0
- Distribution function:
    F_C(x) = 1/2 + (1/π) arctan(x/ν)
- The Cauchy distribution has wider tails than the Gaussian distribution, producing more, larger mutations
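The distribution function F_C can be inverted to draw Cauchy noise directly from uniform random numbers. A small sketch (the function name and default ν = 1 are illustrative choices):

```python
import math, random

def cauchy(nu=1.0):
    """Sample C(0, nu) by inverting F_C(x) = 1/2 + (1/pi) arctan(x/nu):
    solving F_C(x) = u gives x = nu * tan(pi * (u - 1/2))."""
    u = random.random()                          # u ~ U(0, 1)
    return nu * math.tan(math.pi * (u - 0.5))
```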
Mutation Distributions: Lévy
- η_ij(t) ~ L(α)
- Lévy probability function, centered around the origin:
    F_L,α,γ(x) = (1/π) ∫₀^∞ e^(−γq^α) cos(qx) dq
- γ > 0 is the scaling factor
- 0 < α < 2 controls the shape of the distribution
- If α = 1, the Cauchy distribution is obtained
- If α = 2, the Gaussian distribution is obtained
- For |x| ≫ 1, the Lévy density function can be approximated by
    f_L(x) ≈ |x|^(−(α+1))
Mutation Distributions: Exponential
- η_ij(t) ~ E(0, ξ)
- Density function of the double exponential probability distribution:
    f_E(x) = (ξ/2) e^(−ξ|x|)
- ξ > 0 controls the variance (which is equal to 2/ξ²)
- Random numbers can be calculated by inverting the distribution function:
    x = (1/ξ) ln(2y)          if y ≤ 0.5
    x = −(1/ξ) ln(2(1 − y))   if y > 0.5
  where y ~ U(0, 1)
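The two-case inverse above translates directly into a sampler. A sketch (the function name is assumed):

```python
import math, random

def double_exponential(xi):
    """Sample E(0, xi) by inverting the double exponential
    distribution function; the variance is 2 / xi**2."""
    y = random.random()                          # y ~ U(0, 1)
    if y <= 0.5:
        return (1.0 / xi) * math.log(2.0 * y)    # non-positive branch
    return -(1.0 / xi) * math.log(2.0 * (1.0 - y))  # positive branch
```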
Mutation Distributions: Chaos
- η_ij(t) ~ R(0, 1)
- R(0, 1) represents a chaotic sequence within the space (−1, 1)
- The chaotic sequence can be generated using
    x_{t+1} = sin(2/x_t) x_t,  t = 0, 1, . . .
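Since |sin(2/x_t)| ≤ 1, the sequence never leaves (−1, 1) once started there. A sketch of the generator (names assumed):

```python
import math

def chaotic_sequence(x0, n):
    """Generate n terms of x_{t+1} = sin(2 / x_t) * x_t, with x0 != 0.
    Each step multiplies by a factor of magnitude at most 1, so the
    sequence stays inside (-1, 1) whenever x0 does."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(math.sin(2.0 / xs[-1]) * xs[-1])
    return xs
```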
Mutation Distributions: Combined
- Mean mutation operator (MMO):
    η_ij(t) = η_N,ij(t) + η_C,ij(t)
  where
    η_N,ij(t) ~ N(0, 1)
    η_C,ij(t) ~ C(0, 1)
- Generates more very small and large mutations compared to the Gaussian distribution
- Generates more very small and small mutations compared to the Cauchy distribution
- Generally, produces larger mutations than the Gaussian distribution, and smaller mutations than the Cauchy distribution
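A draw from the combined operator is simply the sum of one Gaussian and one Cauchy draw. A sketch (the Cauchy draw again uses inverse-CDF sampling; the function name is assumed):

```python
import math, random

def mmo_noise():
    """MMO noise: eta = eta_N + eta_C, with eta_N ~ N(0, 1)
    and eta_C ~ C(0, 1)."""
    eta_n = random.gauss(0.0, 1.0)
    eta_c = math.tan(math.pi * (random.random() - 0.5))  # C(0, 1)
    return eta_n + eta_c
```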
Selection Operators
- The selection operator is used to create the new population
- The new population is selected from parents and their offspring
- Competition to survive is based on a relative fitness measure
- EP notation:
  - μ indicates the number of parent individuals
  - λ indicates the number of offspring
- Selection consists of two steps:
  - Calculate a score, or relative fitness
  - Select the new population
Selection Operators: Compute Relative Fitness
- Define P(t) = C(t) ∪ C'(t) to be the competition pool
- Let u_i(t) ∈ P(t), i = 1, . . . , μ + λ, denote an individual in the competition pool
- For each u_i(t) ∈ P(t), randomly select a group of n_P competitors, excluding u_i(t)
- Calculate the score
    s_i(t) = Σ_{l=1}^{n_P} s_il(t)
  where
    s_il(t) = 1 if f(u_i(t)) < f(u_l(t)), 0 otherwise
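The scoring step can be written out directly. A sketch for minimization, where `pool` holds candidate solutions and `fitness` maps a candidate to its objective value (both names are assumed):

```python
import random

def scores(pool, fitness, n_p):
    """s_i = number of the n_p randomly chosen competitors
    that individual i beats (minimization)."""
    s = []
    for i, u in enumerate(pool):
        rivals = random.sample([l for l in range(len(pool)) if l != i], n_p)
        s.append(sum(1 for l in rivals if fitness(u) < fitness(pool[l])))
    return s

# With n_p = len(pool) - 1 every individual faces all others,
# so the scores reduce to a full ranking.
```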
Selection Operators: Select New Population
- Elitism: the μ best individuals from P(t) are selected
- Tournament selection: the μ best individuals are stochastically selected using tournament selection
- Proportional selection: each individual is assigned a probability of being selected:
    p_s(u_i(t)) = s_i(t) / Σ_{l=1}^{2μ} s_l(t)    (1)
  Roulette-wheel selection is then used to select the survivors
- Nonlinear ranking selection: individuals are sorted in ascending order of score and then ranked:
    p_s(u_{(2μ−i)}(t)) = i / Σ_{l=1}^{2μ} l    (2)
  Roulette-wheel selection is then used to select the survivors
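Roulette-wheel selection over the scores can be sketched as follows (names assumed; each draw walks the cumulative score until it covers the random threshold):

```python
import random

def roulette_select(pool, s, n):
    """Draw n survivors; individual i is picked with
    probability s[i] / sum(s)."""
    total = sum(s)
    survivors = []
    for _ in range(n):
        r = random.uniform(0.0, total)
        acc = 0.0
        for u, score in zip(pool, s):
            acc += score
            if acc >= r:                # r falls in this slot of the wheel
                survivors.append(u)
                break
    return survivors
```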
Selection Operators: Select New Population
- Direct competition between parent and offspring:
  - Create a number of offspring from the parent
  - Select the best offspring
- The offspring, x'_i(t), survives to the next generation if f(x'_i(t)) < f(x_i(t)), or if
    e^(−(f(x'_i(t)) − f(x_i(t)))/τ(t)) > U(0, 1)    (3)
  where τ is the temperature coefficient, with τ(t) = γτ(t − 1), 0 < γ < 1; otherwise the parent survives
- The offspring therefore has a chance of surviving even if it has a worse fitness than the parent
- This reduces selection pressure and improves exploration
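The annealed acceptance test can be sketched as follows (minimization; the function name is assumed, and τ is passed in already updated):

```python
import math, random

def offspring_survives(f_child, f_parent, tau):
    """Accept a strictly better offspring outright; otherwise accept
    with probability exp(-(f_child - f_parent) / tau)."""
    if f_child < f_parent:
        return True
    return math.exp(-(f_child - f_parent) / tau) > random.random()
```

As τ shrinks over the generations, the probability of a worse offspring surviving decays, gradually shifting the search from exploration to exploitation.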
Introduction to Strategy Parameters
- Mutational step sizes are dependent on strategy parameters
- Usually each component of each individual has its own strategy parameter
- However, a single strategy parameter can also be associated with a single individual
- A single strategy parameter limits the degrees of freedom in addressing the exploration–exploitation trade-off
Static Strategy Parameters
- Values of deviations are fixed, using a linear scaling function:
    Φ(σ_ij(t)) = σ_ij(t) = σ_ij
  where σ_ij is a small value
- Offspring are created as
    x'_ij(t) = x_ij(t) + Δx_ij(t)
  with Δx_ij(t) = N_ij(0, σ_ij)
- A too small value for σ_ij limits exploration and slows down convergence
- A too large value for σ_ij limits exploitation and the ability to fine-tune a solution
Dynamic Strategy Parameters
- Strategy parameters proportional to fitness:
    σ_ij(t) = σ_i(t) = γ f(x_i(t))
  with γ ∈ (0, 1]
- Offspring are generated using
    x'_ij(t) = x_ij(t) + N(0, σ_i(t))
             = x_ij(t) + σ_i(t) N(0, 1)
- Fitness proportional to the error with respect to the best solution:
    σ_ij(t) = σ_i(t) = |f(ŷ(t)) − f(x_i(t))|
  where ŷ(t) is the most fit individual
Self-Adaptation
- Major issues with regard to strategy parameters:
  - Amount of mutational noise to be added
  - Severity of the noise, i.e. step sizes
- The best practice with respect to these issues is usually problem dependent
- A solution is to self-adapt strategy parameters during the search process
Self-Adaptation (cont)
- Additive methods:
    σ_ij(t + 1) = σ_ij(t) + η σ_ij(t) N_ij(0, 1)
  where η is the learning rate
- In the first application of this approach, η = 1/6
- If σ_ij(t) ≤ 0, then σ_ij(t) = ε, where ε is a small positive constant (typically, ε = 0.001) to ensure positive, non-zero deviations
- An alternative:
    σ_ij(t + 1) = σ_ij(t) + √(f̃(σ_ij(t))) N_ij(0, 1)
  where
    f̃(a) = a if a > 0, ε if a ≤ 0
Self-Adaptation (cont)
- Multiplicative methods:
    σ_ij(t + 1) = σ_ij(0)(β₁ e^(−β₂ t/n_t) + β₃)
  where β₁, β₂ and β₃ are control parameters
- Lognormal methods:
    σ_ij(t + 1) = σ_ij(t) e^(τ' N_i(0,1) + τ N_ij(0,1))
  where
    τ' = 1/√(2n_x),  τ = 1/√(2√n_x)
- Offspring are generated using
    x'_ij(t) = x_ij(t) + σ_ij(t) N_ij(0, 1)
Self-Adaptation (cont)
- Undesirable property of self-adaptation:
  - Stagnation due to strategy parameters converging too fast
  - Deviations become too small too fast, limiting exploration
  - The search then stagnates until strategy parameters grow sufficiently large due to random variation
- Solution:
  - Impose a lower bound on the values of σ_ij
  - Use dynamic lower bounds:
      σ_min(t + 1) = σ_min(t) n_m(t)/φ
    where σ_min(t) is the lower bound at generation t, φ ∈ [0.25, 0.45] is the reference rate, and n_m(t) is the number of successful consecutive mutations
Classical Evolutionary Programming (CEP)
- EP with Gaussian mutation
- Uses lognormal self-adaptation
- Produces offspring using
    x'_ij(t) = x_ij(t) + σ_ij(t) N_ij(0, 1)
- Elitism is used to select the new population from parents and offspring
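One CEP offspring step, combining lognormal self-adaptation of the deviations with Gaussian mutation, can be sketched as follows (the function name is assumed; τ' and τ follow the lognormal rule given earlier):

```python
import math, random

def cep_offspring(x, sigma):
    """CEP step: self-adapt sigma lognormally, then mutate x with
    Gaussian noise scaled by the new deviations."""
    n_x = len(x)
    tau_p = 1.0 / math.sqrt(2.0 * n_x)               # tau' (global factor)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n_x))      # tau  (per component)
    g = random.gauss(0.0, 1.0)                       # N_i(0,1), one per individual
    new_sigma = [s * math.exp(tau_p * g + tau * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xj + sj * random.gauss(0.0, 1.0)
             for xj, sj in zip(x, new_sigma)]
    return new_x, new_sigma
```

Because the update is multiplicative with a lognormal factor, the deviations remain positive without any explicit lower-bound check.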
Fast Evolutionary Programming (FEP)
- Mutational noise is sampled from the Cauchy distribution, with ν = 1
- Uses lognormal self-adaptation
- Offspring are generated using
    x'_ij(t) = x_ij(t) + σ_ij(t) C_ij(0, 1)
- Elitism is used to select the new population from parents and offspring
- The wider tails of the Cauchy distribution provide larger step sizes, and therefore faster convergence and better exploration
Improved FEP
- FEP showed that step sizes may be too large for proper exploitation
- CEP showed a better ability to fine-tune solutions
- The improved FEP (IFEP):
  - For each parent, IFEP generates two offspring
  - One offspring is generated using Gaussian mutation, and the other using Cauchy mutation
  - The better of the two is chosen as the surviving offspring
Exponential Evolutionary Programming
- Uses the double exponential probability distribution to sample mutational noise
- Offspring are generated using
    x'_ij(t) = x_ij(t) + σ_ij(t) (1/ξ) E_ij(0, 1)
- σ_ij is self-adapted
- The variance of the distribution is controlled by ξ:
  - The smaller the value of ξ, the greater the variance
  - Larger values of ξ result in smaller step sizes
- To ensure initial exploration and later exploitation, ξ can be initialized to a small value that increases with time
Accelerated Evolutionary Programming
- Individuals are represented as
    χ_i(t) = (x_i(t), ρ_i(t), a_i(t))
- ρ_ij ∈ {−1, 1}, j = 1, . . . , n_x, gives the search direction
- a_i represents the age of the individual
- Age is used to force wider exploration if offspring are worse than their parents
- Offspring generation consists of two steps
- Update age parameters and search directions:
    a_i(t) = 1 if f(x_i(t)) < f(x_i(t − 1)), a_i(t − 1) + 1 otherwise
    ρ_ij(t) = sign(x_ij(t) − x_ij(t − 1)) if f(x_i(t)) < f(x_i(t − 1)), ρ_ij(t − 1) otherwise
Accelerated Evolutionary Programming (cont)
- Offspring generation (cont):
  - If the fitness does not improve, the increase in age causes larger step sizes
  - If a_i(t) = 1:
      σ_i(t) = γ₁ f(x_i(t))
      x'_ij(t) = x_ij(t) + ρ_ij(t)|N(0, σ_i(t))|
  - If a_i(t) > 1:
      σ_i(t) = γ₂ f(x_i(t)) a_i(t)
      x'_ij(t) = x_ij(t) + N(0, σ_i(t))
- Selection: each offspring competes directly with its parent, using absolute fitness
Momentum Evolutionary Programming
- Offspring are generated as follows:
    x'_ij(t) = x_ij(t) + Δx_ij(t) + Δ̂x_ij(t)
  where
    Δx_ij(t) = η(ŷ_j(t) − x_ij(t))|N_ij(0, 1)|
    Δ̂x_ij(t) = α δ_i(t) Δ̂x_ij(t − 1) + Δx_ij(t − 1)
  with η > 0 the learning rate, α > 0 the momentum rate, and
    δ_i(t) = 1 if f(x'_i(t − 1)) < f(x_i(t − 1)), 0 otherwise    (4)
Evolutionary Programming with Local Search
- Exploitation ability is improved by adding hill-climbing to offspring generation
- While a better fitness can be obtained, hill-climbing is applied to each offspring:
    x'_ij(t) = x'_ij(t) − η_i(t) ∂f/∂x_ij(t)
- The learning rate is calculated using
    η_i(t) = Σ_{j=1}^{n_x} (∂f/∂x_ij(t))² / [ Σ_{h=1}^{n_x} Σ_{j=1}^{n_x} (∂²f/(∂x_ih(t)∂x_ij(t))) (∂f/∂x_ih(t)) (∂f/∂x_ij(t)) ]
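This learning rate is the gᵀg/(gᵀHg) step size of an exact line search on a quadratic model. A sketch with explicit sums (function and variable names assumed); on f(x) = Σ x_j², where the gradient is g = 2x and the Hessian is H = 2I, it yields η = 0.5 and the hill-climbing step lands exactly on the minimum:

```python
def learning_rate(grad, hess):
    """eta = (g^T g) / (g^T H g), written with explicit sums over
    the gradient grad and Hessian matrix hess."""
    n = len(grad)
    num = sum(gj * gj for gj in grad)
    den = sum(hess[h][j] * grad[h] * grad[j]
              for h in range(n) for j in range(n))
    return num / den

# Worked example on f(x) = sum(x_j^2):
x = [3.0, -1.0]
g = [2.0 * v for v in x]              # gradient 2x
H = [[2.0, 0.0], [0.0, 2.0]]          # Hessian 2I
eta = learning_rate(g, H)             # -> 0.5
x_new = [v - eta * gv for v, gv in zip(x, g)]   # -> [0.0, 0.0]
```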
Evolutionary Programming Hybrid with PSO
- Position update:
    x_ij(t + 1) = x_ij(t) + v_ij(t) + σ_i N_ij(0, 1)
  where v_ij(t) is the particle velocity
- The classical EP mutation operator is used here
- Any other EP mutation operator can also be used