April 2006
Abstract
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a
class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of
order k whose subsolution fitnesses are sampled from the standard uniform distribution U [0, 1] are analyzed. The
scalability of the selectorecombinative GA is investigated for 10,000 rADP instances. The validity of facetwise
models in bounding the population size, run duration, and the number of function evaluations required to suc-
cessfully solve the problems is also verified. Finally, rADP instances that are easiest and most difficult are also
investigated.
Keywords
Genetic algorithms, ideal recombination, decomposable problems, scalability.
Note
Also published as IlliGAL Report No. 2006016 at the Illinois Genetic Algorithms Laboratory at
http://www-illigal.ge.uiuc.edu/.
1 Introduction
In the last two decades, a class of competent genetic algorithms (GAs), that is, GAs that solve boundedly
difficult problems quickly, reliably, and accurately, has been developed, and their scalability has
been tested on a class of problems that are adversarial in nature (Goldberg, 2002). Facetwise models
have also been developed to bound the scalability of selectorecombinative GAs; these models show that
GAs that accurately identify and effectively exchange the key substructures of additively decomposable
problems scale subquadratically with the problem size.
In this paper, we consider a broader class of additively decomposable problems and analyze the
behavior of selectorecombinative GAs. Specifically, we analyze a class of additively decomposable
problems, where the fitnesses of competing subsolutions within a partition are sampled from the
standard uniform distribution. Moreover, in the spirit of facetwise and dimensional thinking, we
consider a selectorecombinative GA with an ideal recombination operator. The ideal crossover
assumes that the substructures of the underlying search problems are known and exchanges sub-
solutions without disrupting them. Additionally, the use of the perfect crossover eliminates any
effects of inaccuracies in substructure identification and permits us to investigate the performance
of selectorecombinative GAs in what can be considered a best-case scenario. The purpose of this paper is to
investigate the scalability of an ideal selectorecombinative GA on random additively decomposable
problems (rADPs). We also verify the validity of facetwise models in bounding the population
size, run duration, and the number of function evaluations required to successfully solve the rADP
instances.
This paper is organized as follows. We provide a brief background on GA problem difficulty and
test-problem design in Section 2, followed by an outline of facetwise models for bounding the
scalability of selectorecombinative GAs in Section 3. We introduce the class of rADPs in Section 4,
followed by their analysis in Section 5. Finally, we provide a summary and key conclusions in Section 6.
3 Facetwise Models
Goldberg, Deb, and Clark (1992) proposed population-sizing models for correctly deciding between
competing BBs. They incorporated noise arising from other partitions into their model. Harik,
Cantú-Paz, Goldberg, and Miller (1999) refined the above model by incorporating the cumulative effects
of decision making over time rather than in the first generation only. Harik et al. (1999) modeled the
decision making between the best and second-best BBs in a partition as a gambler's ruin problem
(Feller, 1970). Here we use an approximate form of the population-sizing model proposed by Harik
et al. (1999):

    n = (√π / 2) · (σ_BB / d) · 2^k · √m · log m,    (1)

where k is the BB size, m is the number of BBs, d is the signal difference between the competing
BBs, and σ_BB is the fitness standard deviation of a building block. The above equation assumes a
failure probability of α = 1/m.
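As a quick numerical sketch (the function name and example values below are ours, not the paper's), the population-sizing model of Equation 1 can be evaluated as:

```python
import math

def population_size(k: int, m: int, sigma_bb: float, d: float) -> float:
    """Approximate gambler's-ruin population size (Equation 1):
    n = (sqrt(pi)/2) * (sigma_BB / d) * 2**k * sqrt(m) * log(m),
    assuming a failure probability alpha = 1/m."""
    return (math.sqrt(math.pi) / 2.0) * (sigma_bb / d) * (2 ** k) * math.sqrt(m) * math.log(m)

# Example: BBs of size k = 4, BB fitness standard deviation sqrt(1/12),
# signal d = 1/16, and m = 10 subproblems.
n = population_size(k=4, m=10, sigma_bb=math.sqrt(1.0 / 12.0), d=1.0 / 16.0)
```

Note the 2^k · √m log m growth: the model is exponential in the BB size but only mildly superlinear in the number of BBs.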
Mühlenbein and Schlierkamp-Voosen (1993) derived a convergence-time model for the breeder GA
using the notion of selection intensity from population genetics (Bulmer, 1985). Selection-intensity
based models have since been developed for other selection schemes and for noisy environments
(Thierens & Goldberg, 1994; Miller & Goldberg, 1995; Bäck, 1995). Even though the selection-
intensity-based convergence-time models are derived for the OneMax problem, they are generally
applicable to additively decomposable problems as well:
    t_c = (c_t / I) · √m,    (2)

where I is the selection intensity and c_t = 4.72 is an empirically determined constant. For binary
tournament selection, I = 1/√π.
The above model yields a good characterization of population convergence when, even from early
on in the GA run, only two subsolutions compete with each other. However, when the best
subsolution competes with more than one other subsolution, stochastic errors in a finite population can
accumulate, resulting in significant fluctuations in the proportions of competing subsolutions due
to genetic drift (Kimura, 1964; Goldberg & Segrest, 1987; Asoh & Mühlenbein, 1994). The drift
time required for convergence to the optimal subsolution is given by (Goldberg & Segrest, 1987)

    t_d = 6 p_0 (1 − p_0) n.    (3)
Assuming a worst-case scenario where all 2^k − 1 subsolutions compete with the optimal subsolution,
that is, setting p_0 = 1/2^k, and substituting Equation 1 for n in the above equation, we get

    t_d = 3√π · (σ_BB / d) · √m · log m.    (4)
Using Equations 1 and 2, we can now predict the scalability of GAs, that is, the number of function
evaluations required for successful convergence, as follows:

    n_fe = n · t_c = c_f · (σ_BB / d) · 2^k · m · log m,    (5)

where c_f = c_t √π / (2I).
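Equations 2 and 5 can be sketched similarly (constants as given above; the helper names are ours):

```python
import math

def convergence_time(m: int, c_t: float = 4.72, I: float = 1.0 / math.sqrt(math.pi)) -> float:
    """Convergence time (Equation 2): t_c = (c_t / I) * sqrt(m), with I the
    selection intensity (1/sqrt(pi) for binary tournament selection)."""
    return (c_t / I) * math.sqrt(m)

def function_evaluations(k: int, m: int, sigma_bb: float, d: float,
                         c_t: float = 4.72, I: float = 1.0 / math.sqrt(math.pi)) -> float:
    """Scalability estimate (Equation 5): n_fe = c_f * (sigma_BB / d) * 2**k * m * log(m),
    where c_f = c_t * sqrt(pi) / (2 * I)."""
    c_f = c_t * math.sqrt(math.pi) / (2.0 * I)
    return c_f * (sigma_bb / d) * (2 ** k) * m * math.log(m)
```

By construction, `function_evaluations` is the product of the Equation 1 population size and the Equation 2 convergence time.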
4 Random Additively Decomposable Problems
We consider a class of additively decomposable problems whose fitness can be written as

    f(X) = Σ_{i=1}^{m} g_i(S_i),    (6)

where m is the number of subproblems, g_i is the i-th subproblem, and S_i ⊂ {X_1, X_2, ..., X_ℓ} is the
subset of variables of order k for the i-th subproblem.
In order to ease the analysis, in this study we consider the subproblems to be identical, that is,
g1 = g2 = · · · = gm . We note that this restriction can be relaxed and the behavior of selectorecom-
binative GAs is qualitatively similar (Pelikan, Sastry, Butz, & Goldberg, 2006). We further assume
that the subproblem is generated randomly from the standard uniform distribution U[0, 1]. That
is, the fitness contributions of the 2^k subsolutions are sampled from the standard uniform
distribution U[0, 1], and the same fitness contributions are used to compute the fitness of
subsolutions in all the partitions.
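A minimal sketch of generating and evaluating such an rADP instance (helper names and the bit-string encoding are illustrative assumptions):

```python
import random

def make_radp(k: int, seed: int = 0):
    """Sample one rADP subproblem: fitness contributions of the 2**k
    subsolutions, drawn i.i.d. from U[0, 1]. The same table is reused
    for every partition (g_1 = ... = g_m)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(2 ** k)]

def radp_fitness(bits, k: int, table) -> float:
    """Additively decomposable fitness: partition the string into
    m = len(bits) // k consecutive k-bit blocks and sum the table
    entry of each block."""
    total = 0.0
    for i in range(0, len(bits), k):
        index = int("".join(map(str, bits[i:i + k])), 2)
        total += table[index]
    return total

table = make_radp(k=4)
fitness = radp_fitness([0, 1] * 8, k=4, table=table)  # m = 4 blocks of k = 4 bits
```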
To estimate the population size and the drift time for rADPs, we need to estimate the noise-to-
signal ratio. To do so, we estimate the probability density function (p.d.f.) of the signal difference
d and the building-block variance (noise) σBB .
The signal difference d is defined as the fitness difference between the best and the second-best subsolutions.
Since the subsolution fitnesses are sampled from a uniform distribution, the r-th order statistic of
2^k samples follows a Beta distribution with parameters α = r and β = 2^k − r + 1 (Balakrishnan &
Cohen, 1991; Balakrishnan & Chen, 1999; Balakrishnan & Rao, 1999b; Balakrishnan & Rao, 1999a;
Hogg & Craig, 1995). From order statistics, we know that if X has a p.d.f. f(x) and a cumulative
distribution function F(x), then the p.d.f. of the r-th order statistic, f_{U_{r:N}}, is given by

    f_{U_{r:2^k}}(x) = (2^k)! / [(r − 1)! (2^k − r)!] · [F(x)]^{r−1} [1 − F(x)]^{2^k − r} f(x),    (7)

for r = 1, 2, ..., 2^k. Substituting the p.d.f. and cumulative distribution function of the standard
uniform distribution, f(x) = 1 and F(x) = x, respectively, in the above equation, we get

    f_{U_{r:2^k}}(x) = (2^k)! / [(r − 1)! (2^k − r)!] · x^{r−1} (1 − x)^{2^k − r}.    (8)
Therefore, the p.d.f. of the fitness of the best subsolution, f_{U_{2^k:2^k}}, is given by

    f_{U_{2^k:2^k}}(x) = (2^k)! / [(2^k − 1)! 0!] · x^{2^k − 1} (1 − x)^0 = 2^k · x^{2^k − 1}.    (9)
Similarly, the p.d.f. of the fitness of the second-best subsolution, f_{U_{2^k−1:2^k}}(y), can be written as

    f_{U_{2^k−1:2^k}}(y) = (2^k)! / [(2^k − 2)! 1!] · y^{2^k − 2} (1 − y)^1 = 2^k (2^k − 1) · y^{2^k − 2} (1 − y).    (10)
To summarize, the p.d.f.s of the fitness of the best subsolution, f_{U_{2^k:2^k}}(x), and the second-best
subsolution, f_{U_{2^k−1:2^k}}(y), are given by

    f_{U_{2^k:2^k}}(x) = Beta(2^k, 1) = 2^k · x^{2^k − 1},    (11)
    f_{U_{2^k−1:2^k}}(y) = Beta(2^k − 1, 2) = 2^k (2^k − 1) · y^{2^k − 2} (1 − y).    (12)
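The Beta means implied by Equations 11 and 12, E[U_{2^k:2^k}] = 2^k/(2^k + 1) and E[U_{2^k−1:2^k}] = (2^k − 1)/(2^k + 1), can be sanity-checked with a small Monte Carlo sketch (ours, not from the paper):

```python
import random

def top_two_means(k: int, trials: int = 200_000, seed: int = 1):
    """Monte Carlo estimate of the mean fitness of the best and
    second-best of 2**k i.i.d. U[0,1] subsolution fitnesses."""
    rng = random.Random(seed)
    n = 2 ** k
    best_sum = second_sum = 0.0
    for _ in range(trials):
        fitnesses = sorted(rng.random() for _ in range(n))
        best_sum += fitnesses[-1]
        second_sum += fitnesses[-2]
    return best_sum / trials, second_sum / trials

best, second = top_two_means(k=4)
# Analytic Beta(16, 1) and Beta(15, 2) means: 16/17 ≈ 0.941 and 15/17 ≈ 0.882.
```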
To compute the signal, we first need the joint p.d.f. of the best and the second-best
subsolutions. From order statistics, we know that the joint p.d.f. of the i-th and j-th order statistics is

    g_{U_{i:N},U_{j:N}}(y, x) = N! / [(i − 1)! (j − i − 1)! (N − j)!]
        · [F(y)]^{i−1} [F(x) − F(y)]^{j−i−1} [1 − F(x)]^{N−j} f(x) f(y),    (13)

where y < x and i < j. Since the fitnesses of both the best and second-best subsolutions, X and Y,
are sampled from the standard uniform distribution, f(x) = 1, f(y) = 1, F(x) = x, and F(y) = y.
Additionally, i = 2^k − 1, j = 2^k, and N = 2^k. Substituting these values in the above equation, we
can write the joint p.d.f. of the fitnesses of the best and second-best subsolutions as

    g_{U_{2^k−1:2^k},U_{2^k:2^k}}(y, x) = (2^k)! / [(2^k − 2)! 0! 0!] · y^{2^k − 2} (x − y)^0 (1 − x)^0
        = 2^k (2^k − 1) · y^{2^k − 2}.    (14)
We know that for the standard uniform distribution, μ_2 = 1/12 and μ_4 = 1/80. Therefore, the
mean and the variance of the subsolution variance are given by

    E[σ²_BB] = ((κ − 1)/κ) · μ_2 ≈ μ_2 = 1/12,    (15)
    E[(σ²_BB − E[σ²_BB])²] = ((κ − 1)/κ³) · [(κ − 1) μ_4 − (κ − 3) μ_2²] ≈ 0.0056 / 2^k,    (16)
respectively, where κ = 2^k. We can approximate the p.d.f. of the variance of subsolution fitnesses
as a normal distribution with mean μ = σ_U² = 1/12 and variance σ² = 0.0056/2^k (standard
deviation 0.075/√(2^k)):

    f_{σ²_BB} ∼ N(1/12, 0.075/√(2^k)).    (17)
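Equations 15 and 16 can likewise be checked numerically; the sketch below (ours) draws many random subproblems, computes the population variance of the 2^k fitness contributions of each, and compares the spread of that statistic with 0.0056/2^k:

```python
import random
import statistics

def subsolution_variance_spread(k: int, instances: int = 20_000, seed: int = 2):
    """For many random rADP subproblems, compute the population variance
    (divisor n, as in Equation 16) of the 2**k subsolution fitnesses, then
    return the mean and variance of that statistic across instances."""
    rng = random.Random(seed)
    n = 2 ** k
    variances = []
    for _ in range(instances):
        xs = [rng.random() for _ in range(n)]
        variances.append(statistics.pvariance(xs))
    return statistics.mean(variances), statistics.variance(variances)

mean_var, var_of_var = subsolution_variance_spread(k=4)
# Expected: mean ≈ (15/16) * (1/12) ≈ 0.078, variance ≈ 0.0056/16 ≈ 3.5e-4.
```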
From the p.d.f.s of d and σ²_BB, we can easily see that E[1/d] = 2^k and E[σ²_BB] = 1/12. Using
these values in Equations 1, 4, and 5, we can bound the scalability of the GA with ideal crossover
for rADPs, as discussed in the following section.
5 Analysis of rADPs
We generated 10,000 rADP instances for our analysis and considered problems with m = 5 to m
= 50 subproblems of order k = 3, 4, and 5. For brevity, we only show results for k = 4 in this
paper, as the results are qualitatively similar for k = 3 and 5. We considered a generationwise selec-
torecombinative GA with non-overlapping populations of fixed size. We used binary tournament
selection without replacement (Goldberg, Korb, & Deb, 1989; Sastry & Goldberg, 2001) and a
uniform BB-wise crossover (Sastry & Goldberg, 2004). In uniform BB-wise crossover, two parents
are randomly selected from the mating pool and their subsolutions in each partition are exchanged
with a probability of 0.5. Therefore, none of the subsolutions are disrupted during a recombination
event. The GA run is terminated when all the individuals in the population converge to the same
individual. The average number of BBs correctly converged is computed over 50 independent
runs. The minimum population size required such that m − 1 BBs converge to the correct value is
determined by a bisection method (Sastry, 2001). The population-size results are averaged over
30 such bisection runs, while the results for t_c and n_fe are averaged over 1,500 independent GA
runs.
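The uniform BB-wise crossover described above can be sketched as follows (an illustrative implementation assuming the partition boundaries are known, per the ideal-operator assumption):

```python
import random

def bb_wise_uniform_crossover(p1, p2, k: int, rng=random):
    """Uniform building-block-wise crossover: for each k-bit partition,
    swap the parents' subsolutions with probability 0.5. Because whole
    blocks are exchanged, no subsolution is ever disrupted."""
    c1, c2 = list(p1), list(p2)
    for i in range(0, len(p1), k):
        if rng.random() < 0.5:
            c1[i:i + k], c2[i:i + k] = c2[i:i + k], c1[i:i + k]
    return c1, c2

parents = ([0] * 12, [1] * 12)
child1, child2 = bb_wise_uniform_crossover(*parents, k=4)
```

Whatever the coin flips, each k-bit block of a child is an intact copy of one parent's block, so the multiset of subsolutions per partition is preserved.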
(a) m = 10 (b) m = 50
Figure 1: The population size required to solve at least m − 1 building blocks of the rADPs to
optimality as a function of the noise-to-signal ratio σ_BB/d. The plots show the results for 10,000
rADPs and are averaged over 30 bisection runs with 50 independent GA runs for each bisection
trial. The results for other problem sizes are shown in Figure 6 in the appendix.
We begin our analysis by verifying whether the population size increases with the noise-to-signal ratio
σ_BB/d. We plot the minimum population size required as a function of the noise-to-signal ratio for
m = 10 and m = 50 in Figure 1. The results for other problem sizes are provided in the appendix
(see Figure 6). The results clearly indicate that the population size on average increases linearly
with σ_BB/d. The results also show that beyond a noise-to-signal ratio of about 100, the population
size saturates and no longer increases linearly with σ_BB/d. One reason for the saturation could be
that decision making no longer dictates the population sizing; instead, it is governed by mixing, which
does not depend on σ_BB/d. However, we can expect the population size not to saturate, and to grow
with σ_BB/d, when the recombination operator deviates from ideality (Pelikan, Sastry, Butz, &
Goldberg, 2006).
We now compare the histograms of the population size, run duration, and number of function
evaluations for a given m in Figure 2. Specifically, we plot the relative frequency (each frequency
divided by the maximum frequency) of the rADPs that fall within given ranges of population size,
run duration, and number of function evaluations required to solve at least m − 1 out of m subproblems
to optimality, for m = 10 and 50. The results show that the histograms have lognormal behavior,
with the tail growing with m, which indicates the presence of drift in solving some rADPs. From
the histograms we find that 0.08–0.3%, 0.01–0.1%, and 0.15–0.59% of the rADP instances require n,
t_c, and n_fe greater than three standard deviations from the median, respectively. The histograms
for other problem sizes are given in the appendix (see Figures 7, 8, and 9).
Next, we investigate easy and hard rADP instances according to their population-size, convergence-
time, and function-evaluation requirements for m = 5–50. Specifically, for each m, we rank the
rADPs in descending order of the population size required. Then, for each rADP, we sum its
ranks over all m values and take this sum as the overall rank of the rADP instance. We then
select the top 5 and bottom 5 rADP instances according to the overall rank. We repeat the same
procedure for t_c and n_fe and show the five hardest instances, those requiring the maximum n, t_c,
or n_fe, and the five easiest, those requiring the minimum n, t_c, or n_fe, over all
1 1 1
0.7 0.7
Relative frequency
Relative frequency
0.7
Relative frequency
0 0 0
80 100 120 140 160 180 200 20 30 40 50 60 20 30 40 50 60
Population size, n Convergence time, tc No. function evals., nfe
0.7
Relative frequency
0.7 0.7
Relative frequency
Relative frequency
0.6 0.6 0.6
0 0 0
300 400 500 600 700 50 60 70 80 90 100 110 120 130 50 60 70 80 90 100 110 120 130
Population size, n Convergence time, t No. function evals., n
c fe
Figure 2: Histogram of population size, convergence time in terms of number of generations, and
the number of function evaluations required to successfully solve at least m − 1 out of m building
blocks correctly for m = 10 and m = 50. To obtain the relative frequency, all frequencies have been
divided by the maximum frequency. The plots show the results for 10,000 rADPs and are averaged
over 30 bisection runs with 50 independent GA runs for each bisection trial. The results for other
problem sizes are shown in Figures 7, 8, and 9 in the appendix.
values of m in Figure 3. The results clearly show that for hard rADP instances,
more subsolutions compete with the best subsolution. Moreover, the difference in the
fitness of the competing subsolutions is very small; that is, hard problem instances have a low signal d.
Additionally, since there are more than two competing subsolutions in each partition, we can expect
drift to play a major role, given the collateral noise (Goldberg, Deb, & Clark, 1992). In contrast, the
easy rADP instances all have two competing subsolutions with a large signal. These results agree
with the models of GA-design theory (Goldberg, 2002).
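The rank-aggregation procedure used to pick the hardest and easiest instances can be sketched as follows (the data layout is a hypothetical assumption; `requirements` maps each m to the per-instance requirement, e.g. the minimum population size):

```python
def overall_ranks(requirements):
    """requirements: dict mapping m -> list of per-instance requirements.
    For each m, rank instances in descending order of requirement
    (rank 0 = largest); summing ranks over all m gives each instance an
    overall rank score, with the smallest total marking the hardest instance."""
    num_instances = len(next(iter(requirements.values())))
    totals = [0] * num_instances
    for values in requirements.values():
        order = sorted(range(num_instances), key=lambda i: -values[i])
        for rank, inst in enumerate(order):
            totals[inst] += rank
    return totals

# Toy example with 4 instances at two problem sizes:
scores = overall_ranks({10: [120, 90, 200, 150], 50: [400, 310, 640, 500]})
hardest = min(range(len(scores)), key=lambda i: scores[i])  # instance 2
```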
One of our primary interests is to investigate how the population size, convergence time, and
the number of function evaluations scale with the problem size. From the facetwise models (Equa-
tions 1, 2, and 5), we expect n, t_c, and n_fe to scale as Θ(m^α1 log m), Θ(m^α2), and Θ(m^α3 log m),
respectively. We begin by plotting the histograms of α1, α2, and α3 in Figures 4(a)–(c). The results
show that the population size required to solve at least m − 1 subproblems to optimality scales on
average as Θ(m^0.35 log m) and in the worst case as Θ(m^0.5 log m), as predicted by the facetwise
model (Equation 1). Similarly, on average t_c and n_fe scale as Θ(m^0.52) and Θ(m^0.85 log m),
and in the worst case they scale as Θ(m^0.58) and Θ(m^1.04 log m), respectively, agreeing with the
facetwise models (Equations 2, 3, and 5).
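The exponents α1–α3 can be estimated, for example, by a least-squares fit on log-transformed data; the sketch below (ours) divides out the log m factor when fitting the Θ(m^α log m) forms:

```python
import math

def fit_exponent(ms, values, divide_log: bool = False):
    """Least-squares slope of log(value) versus log(m); if divide_log is
    True, values are first divided by log(m) (for the m**a * log m form)."""
    xs = [math.log(m) for m in ms]
    ys = [math.log(v / (math.log(m) if divide_log else 1.0)) for m, v in zip(ms, values)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: noiseless data generated as 3 * m**0.5 recovers slope 0.5.
ms = [5, 10, 20, 40]
alpha = fit_exponent(ms, [3 * m ** 0.5 for m in ms])
```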
We also plot the median, minimum, and maximum n, t_c, and n_fe required over the 10,000
rADP instances and compare them with the facetwise models in Figures 4(d)–(f). The results show that
Figure 3: The top five difficult and easy rADP instances with respect to the population size n, run
duration t_c, and the number of function evaluations n_fe. The subsolutions are sorted according to
their fitness values. The 10,000 rADPs are ranked in descending order according to their n, t_c, and
n_fe values for a given m; the ranks are averaged over all m values (5–50), and the top 5 are depicted
as the difficult instances and the bottom 5 as the easy instances.
the facetwise models clearly bound the scalability of selectorecombinative GAs on rADPs. From
Figure 4(e), we can see that while the convergence-time model (Equation 2) bounds the empirically
observed t_c on average, the drift-time model (Equation 3) bounds it in the worst case. More importantly,
the results clearly validate the assertion that testing and modeling selectorecombinative GAs on
adversarially designed test problems bounds their performance on other problems that are equally
hard or easier than the adversarial function, as is the case for rADPs.
Finally, we analyze the easy and hard rADP instances in terms of scalability; the results are
shown in Figure 5. Unlike the problems shown in Figure 3, here we show the five rADP instances that
are easiest and hardest in terms of the scalability of n, t_c, and n_fe. The results show that the problems
that are most difficult in terms of the scalability of n are rADP instances with two competing
subsolutions of near-equal fitness (a very low signal d, or a very high noise-to-signal ratio). It
is interesting to note that the population-sizing model was derived based on this very scenario of
two competing subsolutions. Similarly, the instances most difficult in terms of the scalability of t_c are
those with more than two competing subsolutions of near-equal fitness, where we can expect
drift to play a dominating role. As with the results shown in Figure 3, the easy rADP instances
in terms of scalability also have two competing subsolutions with a reasonably high signal (a low
noise-to-signal ratio).
Figure 4: Histogram of the scalability of population size, convergence time, and number of
function evaluations with problem size for 10,000 rADPs. From the facetwise models, we use
n = Θ(m^α1 log m), t_c = Θ(m^α2), and n_fe = Θ(m^α3 log m). We also show the scalability of the
median, minimum, and maximum values of n, t_c, and n_fe with m.
Acknowledgments
The empirical runs for this paper were conducted on the Turing cluster at Computational Science
and Engineering at the University of Illinois.
This work was sponsored by the Air Force Office of Scientific Research, Air Force Materiel
Command, USAF, under grant FA9550-06-1-0096, the National Science Foundation under NSF
CAREER grant ECS-0547013 and ITR grant DMR-03-25939 at Materials Computation Center,
Figure 5: The top five difficult and easy rADP instances with respect to the scalability of the
population size n, run duration t_c, and the number of function evaluations n_fe.
UIUC. The work was also supported by the Research Award and the Research Board at the Uni-
versity of Missouri. The U.S. Government is authorized to reproduce and distribute reprints for
government purposes notwithstanding any copyright notation thereon.
The views and conclusions contained herein are those of the authors and should not be inter-
preted as necessarily representing the official policies or endorsements, either expressed or implied,
of the Air Force Office of Scientific Research, the National Science Foundation, or the U.S. Gov-
ernment.
References
Ackley, D. H. (1987). A connectionist machine for genetic hill climbing. Kluwer Academic Pub-
lishers.
Asoh, H., & Mühlenbein, H. (1994). On the mean convergence time of evolutionary algorithms
without selection and mutation. Parallel Problem Solving from Nature, 3 , 98–107.
Bäck, T. (1995). Generalized convergence models for tournament—and (μ, λ)—selection. Pro-
ceedings of the Sixth International Conference on Genetic Algorithms, 2–8.
Balakrishnan, N., & Chen, W. W. S. (1999). Handbook of tables for order statistics from lognormal
distributions with applications. Amsterdam, Netherlands: Kluwer Academic Publishers.
Balakrishnan, N., & Cohen, A. C. (1991). Order statistics and inference. New York, NY: Aca-
demic Press.
Balakrishnan, N., & Rao, C. R. (Eds.) (1999a). Order statistics: Applications. Amsterdam,
Netherlands: Elsevier.
Balakrishnan, N., & Rao, C. R. (Eds.) (1999b). Order statistics: Theory and methods. Amster-
dam, Netherlands: Elsevier.
Bethke, A. D. (1981). Genetic algorithms as function optimizers. Doctoral dissertation, University
of Michigan, Ann Arbor, MI. (University Microfilms No. 8106101).
Bulmer, M. G. (1985). The mathematical theory of quantitative genetics. Oxford: Oxford Uni-
versity Press.
Deb, K., & Goldberg, D. E. (1992). Analyzing deception in trap functions. Foundations of Genetic
Algorithms, 2 , 93–108. (Also IlliGAL Report No. 91009).
Feller, W. (1970). An introduction to probability theory and its applications. New York, NY:
Wiley.
Goldberg, D. E. (1987). Simple genetic algorithms and the minimal deceptive problem. In Davis,
L. (Ed.), Genetic algorithms and simulated annealing (Chapter 6, pp. 74–88). Los Altos, CA:
Morgan Kaufmann.
Goldberg, D. E. (2002). Design of innovation: Lessons from and for competent genetic algo-
rithms. Boston, MA: Kluwer Academic Publishers.
Goldberg, D. E., Deb, K., & Clark, J. H. (1992). Genetic algorithms, noise, and the sizing of
populations. Complex Systems, 6 , 333–362. (Also IlliGAL Report No. 91010).
Goldberg, D. E., Korb, B., & Deb, K. (1989). Messy genetic algorithms: Motivation, analysis,
and first results. Complex Systems, 3 (5), 493–530. (Also IlliGAL Report No. 89003).
Goldberg, D. E., & Segrest, P. (1987). Finite Markov chain analysis of genetic algorithms. Pro-
ceedings of the Second International Conference on Genetic Algorithms, 1–8.
Harik, G., Cantú-Paz, E., Goldberg, D. E., & Miller, B. L. (1999). The gambler’s ruin problem,
genetic algorithms, and the sizing of populations. Evolutionary Computation, 7 (3), 231–253.
(Also IlliGAL Report No. 96004).
Hogg, R. V., & Craig, A. T. (1995). Introduction to mathematical statistics (5th ed.). New York,
NY: Macmillan.
Kimura, M. (1964). Diffusion models in population genetics. Journal of Applied Probability, 1 ,
177–232.
Liepins, G. E., & Vose, M. D. (1990). Representational issues in genetic optimization. Journal
of Experimental and Theoretical Artificial Intelligence, 2 , 101–115.
Miller, B. L., & Goldberg, D. E. (1995). Genetic algorithms, tournament selection, and the effects
of noise. Complex Systems, 9 (3), 193–212. (Also IlliGAL Report No. 95006).
Mühlenbein, H., & Schlierkamp-Voosen, D. (1993). Predictive models for the breeder genetic
algorithm: I. Continuous parameter optimization. Evolutionary Computation, 1 (1), 25–49.
Pelikan, M., Sastry, K., Butz, M. V., & Goldberg, D. E. (2006, January). Hierarchical BOA
on random decomposable problems (IlliGAL Report No. 2006002). Urbana, IL: University of
Illinois at Urbana Champaign.
Sastry, K. (2001). Evaluation-relaxation schemes for genetic and evolutionary algorithms. Mas-
ter’s thesis, University of Illinois at Urbana-Champaign, Urbana, IL. (Also IlliGAL Report
No. 2002004).
Sastry, K., & Goldberg, D. E. (2001). Modeling tournament selection with replacement using ap-
parent added noise. Intelligent Engineering Systems Through Artificial Neural Networks, 11 ,
129–134. (Also IlliGAL Report No. 2001014).
Sastry, K., & Goldberg, D. E. (2004). Let’s get ready to rumble: Crossover versus mutation head
to head. Proceedings of the Genetic and Evolutionary Computation Conference, 2 , 126–137.
Also IlliGAL Report No. 2004005.
Thierens, D., & Goldberg, D. E. (1994). Convergence models of genetic algorithm selection
schemes. Parallel Problem Solving from Nature, 3 , 116–121.
Whitley, L. D. (1991). Fundamental principles of deception in genetic search. Foundations of
Genetic Algorithms, 221–241.
A Additional Results
(g) m = 40 (h) m = 45
Figure 6: The population size required to solve at least m − 1 building blocks of the rADPs to
optimality as a function of the noise-to-signal ratio σ_BB/d. The plots show the results for 10,000
rADPs and are averaged over 30 bisection runs with 50 independent GA runs for each bisection
trial.
Figure 7: Histogram of population size, convergence time in terms of number of generations, and
the number of function evaluations required to successfully solve at least m − 1 out of m building
blocks correctly for m = 5, 15, and 20. To obtain the relative frequency, all frequencies have been
divided by the maximum frequency. The plots show the results for 10,000 rADPs and are averaged
over 30 bisection runs with 50 independent GA runs for each bisection trial.
Figure 8: Histogram of population size, convergence time in terms of number of generations, and
the number of function evaluations required to successfully solve at least m − 1 out of m building
blocks correctly for m = 25, 30, and 35. To obtain the relative frequency, all frequencies have been
divided by the maximum frequency. The plots show the results for 10,000 rADPs and are averaged
over 30 bisection runs with 50 independent GA runs for each bisection trial.
Figure 9: Histogram of population size, convergence time in terms of number of generations, and
the number of function evaluations required to successfully solve at least m − 1 out of m building
blocks correctly for m = 40 and 50. To obtain the relative frequency, all frequencies have been
divided by the maximum frequency. The plots show the results for 10,000 rADPs and are averaged
over 30 bisection runs with 50 independent GA runs for each bisection trial.