The Rosenbrock function is defined as

f(X) = \sum_{i=1}^{D-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right] \qquad (1)
The global minimum lies in a long, narrow, flat valley. Although the valley is easy to find, finding the global minimum is not. In this report a Genetic Algorithm was employed to find the global minimum, i.e. the vector X_min that minimizes the Rosenbrock function. It is worth noting that the minimum of the Rosenbrock function is zero, attained at the vector whose components are all equal to 1.
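As a quick sanity check, the fitness of the all-ones vector can be evaluated directly (a minimal sketch, assuming the function f listed at the end of this report is on the MATLAB path):

% Sanity check: the Rosenbrock minimum f(1,...,1) should be exactly 0
D = 5;                 % illustrative dimension
Xmin = ones(1, D);     % conjectured global minimizer
f(Xmin, D, 1)          % displays 0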
Algorithm employed
The Genetic Algorithm employed in order to find the global minimum for the Rosenbrock
function has the following structure:
Simple Genetic Algorithm () {
    Initialization (random)
    Evaluation (fitness)
    while (minimum fitness value > 10^-6)
    {
        Crossover = (P1(i) + P2(i))/2, parents chosen randomly
        Mutation = N(0, sigma), sigma adapted by the 1/5 success rule
        Evaluation (fitness)
        Selection and Reproduction (roulette wheel)
    }
}
The following considerations are worth mentioning:
The algorithm starts from a random population of Pob = 30 vectors X. Each vector X has D components x_i (i from 1 to D). Each component x_i is randomly drawn from a uniform distribution between -100 and +100.
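A minimal sketch of this initialization step (Pob and D as above; random('unif', ...) requires the Statistics Toolbox, as in the full listing):

% Pob random vectors with D components, uniform in [-100, 100]
Pob = 30; D = 10;
Xi = random('unif', -100, 100, Pob, D);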
The fitness function (equation (1)) is evaluated for each vector X in the population. If the minimum fitness value obtained is less than 10^-6 (the defined threshold), the algorithm stops; otherwise it keeps running in order to find the optimum.
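The stopping test amounts to a single comparison; a minimal sketch with an illustrative value for the current best fitness:

% Stop once the best fitness falls below the threshold 1e-6
ffin = 1e-6;
f_best = 3.2e-7;                    % illustrative current best value
keep_running = abs(f_best) > ffin   % false here, so the loop would stop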
The Genetic Algorithm uses a crossover operator. An arithmetic crossover is used, in which the mean of two randomly selected individuals generates a new individual. With the crossover operator, mu = 300 individuals are generated at each iteration, as sketched below.
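A self-contained sketch of the arithmetic crossover (the index generation is a slightly simplified variant of the one in the full listing):

% Arithmetic crossover: each offspring is the mean of two random parents
Pob = 30; D = 10; mu = 300;
Xi = random('unif', -100, 100, Pob, D);     % parent population
idx = ceil(Pob .* rand(1, 2*mu));           % 2*mu random parent indices
Mi = (Xi(idx(1:2:end), :) + Xi(idx(2:2:end), :)) / 2;   % mu offspring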
Mutation is done by adding to each vector a variable drawn from a Gaussian distribution with mu = 0 and sigma = 1.5. The value of sigma follows the 1/5 success rule, based on the probability of success ps, where ps is the probability that a generated individual overcomes its parent. However, the threshold for ps was set not to 1/5 but to 0.05: if ps is greater than or equal to 0.05, then sigma = sigma * c with c = 0.9 < 1. In this case the population is approaching a minimum, so in order to better exploit the current knowledge the magnitude of the mutations is decreased; otherwise sigma is divided by c, which enlarges the mutations and favors exploration.
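A minimal sketch of this modified success rule, with illustrative values for sigma and the measured success probability:

% Success-rule adaptation of the mutation step size (c = 0.9 < 1)
sigma = 1.5;  ps_t = 0.12;  c = 0.9;   % ps_t: measured success probability
if ps_t >= 0.05
    sigma = sigma*c;    % many successes: near a minimum, shrink mutations
else
    sigma = sigma/c;    % few successes: enlarge mutations, explore more
end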
Proposed approach
In addition to the previously described algorithm, a new strategy to find the optimum was proposed; it works by modifying the standard deviation of the Gaussian distribution used by the mutation operator. The idea is to have a certain period in which the space of candidate solutions is explored; after this period the current knowledge is exploited and the search is carried out in a reduced space. This is done by periodically dividing sigma by a constant, so that exploration and exploitation are periodically balanced. The period selected is 200 iterations, and the results obtained are reported below.
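A minimal sketch of this periodic schedule (the divisor k is illustrative; the report fixes only the 200-iteration period):

% Periodically divide sigma to alternate exploration and exploitation
period = 200;  k = 10;  sigma = 2;     % k is an assumed example value
for iter = 1:1000
    if mod(iter, period) == 0
        sigma = sigma / k;             % switch to a finer, local search
    end
    % ... crossover, mutation with N(0, sigma), selection ...
end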
Results
For the first approach described, the results for D = 2, 5, 10, 20, 100 can be found in Figures 2 and 3.
Figure 2: Minimum value of the fitness function vs. iteration for the first Genetic Algorithm described, for D = 2, 5, 10, 20, 100
The figures show the evolution of the fitness function with the number of iterations, and the changes of the success probability with the number of iterations. As can be seen, the number of iterations required grows with the dimension D of the vector: for D = 2 it was necessary to run 793 iterations, whereas for D = 100 a total of 96768 iterations was required. Nonetheless, the algorithm is able to converge to the optimum with the requested precision.
Figure 3: Probability of success vs. iteration for the first Genetic Algorithm described, for D = 2, 5, 10, 20, 100
Figure 3 shows the behavior of ps as the iterations progress. It was necessary to lower the threshold of the success rule from 1/5 = 0.2 to 0.05 in order to be able to change the degree of exploration. It is worth noting that the effectiveness of the algorithm depended on the success rule, which enables balancing exploration and exploitation.
[Figure 4: objective function f_D for every individual vs. Generations]
Figure 4 shows the behavior of the objective function for each individual at each iteration. As expected from a plus strategy, the best individual of each population is kept for the following generations, leading to a monotonic decrease of the objective function. Since the sigma value is adjusted every few iterations, an increasing clustering of the individuals can be observed as the iterations proceed.
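The monotonic decrease follows directly from the plus selection: parents and offspring compete together, so the incumbent best can never be lost. A self-contained sketch with toy data:

% (Pob + mu) plus selection: survivors are the best of parents + offspring
Pob = 3; mu = 6; D = 2;
Xi = rand(Pob, D);  Mi = rand(mu, D);        % toy parents and offspring
f_D = sum(Xi.^2, 2); fMi = sum(Mi.^2, 2);    % toy fitness (sphere function)
fALL = [f_D' fMi'];  ALL = [Xi; Mi];
[B, IX] = sort(fALL);                        % ascending: best first
Xi = ALL(IX(1:Pob), :);                      % keep the Pob best individuals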
[Figure: objective function vs. Generations (500 to 3500), annotated with mean/variance statistics: Mean 10, 88, 193, 1087, 2366 with Variance 5, 1, 1, 30, 65; and Mean 633, 1100, 1519, 2280, 2366 with Variance 72, 60, 1, 2, 65]
MATLAB Code
First approach, main program code:
%First Approach
n=[]; ps=[];
%The mutation step sigma is adapted every iteration by the success rule
%vector length 2, 5, 10, 20, 50, 100
D=100;
Pob=30;
Xi=random('unif',-100,100,Pob,D);    %initial population in [-100,100]
mu=300;
sigma=1.5;
sigma_0=sigma;
%c is less than one; the rule is in effect the 1/5 success rule
c=0.9;
f_D=f(Xi,D,Pob);
ffin=1e-6;                           %stopping threshold
iter=0;
fopt=min(f_D);
%If the minimum of the fitness value < threshold the search stops,
%otherwise it continues
while abs(min(f_D))>ffin
    a=1; b=Pob;
    %Parent selection for the crossover
    x=ceil(a+(b-a).*rand(1,2*mu));
    %Crossover (P1(i)+P2(i))/2: generation of mu=300 individuals
    Mi=(Xi(x(1:2:2*mu),:)+Xi(x(2:2:2*mu),:))/2;
    %Mutation v = v + N(0,sigma)
    N=random('Normal',0,sigma,mu,D);
    Mi=Mi+N;
    %Fitness evaluation
    fMi=f(Mi(1:mu,:),D,mu);
    fALL=[f_D' fMi'];
    ALL=[Xi;Mi];
    [B,IX]=sort(fALL);
    %Steady-state (plus) selection: keep the Pob best of Pob+mu
    Xi=ALL(IX(1:Pob),:);
    f_D=f(Xi,D,Pob);
    iter=iter+1;
    n(iter)=fALL(IX(1));
    disp(fALL(IX(1)))
    ps(iter)=length(find(fALL<fopt))/(Pob+mu);
    %Success rule
    if ps(iter)>0.05
        sigma=sigma*c;
    else
        sigma=sigma/c;
    end
    fopt=min(f_D);
end
%New approach
n=[]; ps=[]; ITER=[];   %ITER accumulates iteration counts across runs
%Improvement rate (initialized large so sigma is not reduced at start)
IP=1e10;
C_med=10;               %window length for the running mean of the best fitness
jj=1;
%Each A iterations the sigma is divided by 1/c
A=25/jj;
A_0=A;
V=1.39;
%vector length 2, 5, 10, 20, 50, 100
D=5;
Pob=30;
Xi=random('unif',-100,100,Pob,D);
mu=300;
sigma=2;
sigma_0=sigma;
c=0.1^(1/jj);
c_0=c;
f_D=f(Xi,D,Pob);
ffin=1e-6;
iter=0;
fopt=min(f_D);
t_cut=A;
while abs(min(f_D))>ffin
    %New approach to handle sigma: reduce it when the relative improvement
    %over the last C_med iterations falls below 5%
    if (iter>C_med+1)
        IP=sum(n((iter-C_med):(iter-1)))/C_med;   %mean of last C_med best values
        IP=(IP-n(iter-1))/IP;                     %relative improvement
        if (IP<0.05)
            sigma=c*sigma;
        end
    end
    %Parent selection for the crossover
    a=1; b=Pob;
    x=ceil(a+(b-a).*rand(1,2*mu));
    %Crossover (P1(i)+P2(i))/2
    Mi=(Xi(x(1:2:2*mu),:)+Xi(x(2:2:2*mu),:))/2;
    %Mutation
    N=random('Normal',0,sigma,mu,D);
    Mi=Mi+N;
    %Fitness evaluation
    fMi=f(Mi(1:mu,:),D,mu);
    fALL=[f_D' fMi'];
    ALL=[Xi;Mi];
    [B,IX]=sort(fALL);
    %Selection: keep the Pob best of parents and offspring
    Xi=ALL(IX(1:Pob),:);
    f_D=f(Xi,D,Pob);
    iter=iter+1;
    n(iter)=fALL(IX(1));
    disp(fALL(IX(1)))
    ps(iter)=length(find(fALL<fopt))/(Pob+mu);
    if ps(iter)>0.1
        c=1/c_0;
    else
        c=c_0;
    end
    fopt=min(f_D);
end
iter
sz=length(n);
ITER=[ITER iter];
for i=C_med:sz
    med(i-C_med+1)=sum(n((i-C_med+1):i))/C_med;   %running mean of best fitness
end
Function f_D definition:
function f_D=f(x,D,pob)
%Rosenbrock fitness: evaluates equation (1) for each row of x
f_D=zeros(pob,1);
for i=1:(D-1)
    f_D=f_D+100*(x(:,i+1)-x(:,i).^2).^2+(1-x(:,i)).^2;
end
end
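An illustrative call of the fitness function (values are arbitrary):

% Evaluate a random population of 4 vectors with D = 5 components
pob = 4; D = 5;
X = random('unif', -100, 100, pob, D);
f(X, D, pob)    % returns a pob-by-1 column of fitness values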