Andrei A. Prudius
Sigrún Andradóttir
the convergence to the optimal solution(s) might be slow.

Our approach takes a different perspective. We propose maintaining the appropriate balance between exploitation and exploration during the various stages of the search. We now present a BEES algorithm in Section 2.1 and provide numerical results for this algorithm in Section 2.2.

2.1 BEES Algorithm for Deterministic Problems

In this section, we describe a specific version of the BEES method. This approach adaptively alternates between sampling from different distributions. One sampling distribution aims at exploring the entire feasible region (exploration or global search), while another class of distributions (one for each feasible point) aims at searching promising subregions (exploitation or local search). The search nature (global or local) is reviewed each time k points have been sampled. Let f∗ be the function value of the best solution θ∗ found so far (in this case the maximum, see equation (1)), and let f_l∗ be the function value of the best point found the last time local search was performed. Define Δ to be the improvement in the function value between the current and preceding reviews, and D to be the Euclidean distance between the points where the corresponding function values were achieved. The pseudo-code for how the sampling distribution is updated is given in Algorithm 1 below. If the flag LocalSearch is true, then local search is performed; otherwise, global search is done. Note that the sampling distribution update procedure requires two thresholds, namely the distance threshold d and the improvement threshold δ.

Algorithm 1: Sampling Distribution Update Procedure
1: if LocalSearch = true then
2:   if Δ ≤ δ then
3:     f_l∗ ← f∗
4:     LocalSearch ← false
5:   end if
6: else
7:   if Δ ≤ δ then
8:     if f∗ − f_l∗ ≥ δ then
9:       LocalSearch ← true
10:    end if
11:  else
12:    if D ≤ d then
13:      LocalSearch ← true
14:    end if
15:  end if
16: end if

We now briefly discuss the motivation for the procedure. Observe that the switch from local to global search occurs only if the improvement in the objective function value between successive reviews is small (less than δ). Hence, if local search is not yielding much improvement in the objective function value, then there is little merit in continuing to search locally. On the other hand, if local search is making good progress, then the search will stay local.

There are two ways in which the BEES algorithm switches from global to local search. The first occurs when the perceived improvement is small, but substantial improvement (larger than δ) has been achieved since the last switch from local to global search. The second occurs when the improvement between successive reviews is large and the distance D is small (this is sensible because the improvement has been local in nature, and hence switching to local search may be beneficial). The pseudo-code for this BEES algorithm is given in Algorithm 2 (one iteration of this algorithm corresponds to one execution of the statements inside the while loop of Algorithm 2).

Algorithm 2: BEES Algorithm
1: counter ← 0
2: LocalSearch ← false
3: Sample a solution θ from the global distribution
4: Evaluate the objective function at θ
5: Let f_l∗ ← f(θ) and θ∗ ← θ
6: while the stopping criterion is not satisfied do
7:   if LocalSearch = true then
8:     Sample a solution θ from the local distribution corresponding to θ∗
9:   else
10:    Sample a solution θ from the global distribution
11:  end if
12:  Evaluate the objective function at θ
13:  Update θ∗ and f∗ (if needed)
14:  counter ← counter + 1
15:  if counter = k then
16:    Update Δ and D
17:    Update the search nature (use the Sampling Distribution Update Procedure)
18:    counter ← 0
19:  end if
20: end while
21: Present θ∗ as the estimate of the optimal solution

2.2 Deterministic Example

Consider the optimization problem (1) with

f(θ) = max{f1(θ), f2(θ), 0}    (2)
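To make the switching logic of Algorithms 1 and 2 concrete, the following is a minimal Python sketch for a one-dimensional box-constrained problem. The uniform global sampler, the Gaussian local sampler centered at the incumbent θ∗, and the parameter values are illustrative assumptions, not choices made in the paper; Δ and D are computed from the incumbent at successive reviews.

```python
import random

def bees(f, bounds, k=25, d=0.5, delta=1e-3, sigma=0.2, budget=5000, seed=0):
    """Maximize f over the interval `bounds` with a BEES-style search.

    k     -- review period (points sampled between reviews)
    d     -- distance threshold
    delta -- improvement threshold
    sigma -- std. dev. of the (assumed) Gaussian local sampler
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def sample_global():
        # Assumed global distribution: uniform over the feasible interval.
        return rng.uniform(lo, hi)

    def sample_local(center):
        # Assumed local distribution: Gaussian at the incumbent, clamped.
        return min(hi, max(lo, rng.gauss(center, sigma)))

    # Algorithm 2, lines 1-5: initialize from a global sample.
    local_search = False
    theta = sample_global()
    f_star = f(theta)            # best value found so far (f*)
    theta_star = theta
    f_l = f_star                 # best value at last local search (f_l*)
    prev_val, prev_pt = f_star, theta_star
    counter = 0

    for _ in range(budget):      # stopping criterion: evaluation budget
        # Lines 7-12: sample from the current distribution and evaluate.
        theta = sample_local(theta_star) if local_search else sample_global()
        val = f(theta)
        if val > f_star:         # line 13: update incumbent if needed
            f_star, theta_star = val, theta
        counter += 1
        if counter == k:         # lines 15-19: review the search nature
            improvement = f_star - prev_val      # Delta
            dist = abs(theta_star - prev_pt)     # D (1-D Euclidean distance)
            # Algorithm 1: sampling distribution update procedure.
            if local_search:
                if improvement <= delta:
                    f_l = f_star
                    local_search = False
            else:
                if improvement <= delta:
                    if f_star - f_l >= delta:
                        local_search = True
                elif dist <= d:
                    local_search = True
            prev_val, prev_pt = f_star, theta_star
            counter = 0

    return theta_star, f_star    # line 21: estimate of the optimum
```

For instance, `bees(lambda x: -(x - 2.0) ** 2, (-10.0, 10.0))` should return an estimate of the maximizer near θ = 2, with global sampling locating the promising region and local sampling refining the incumbent.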
The test problem used here is the same as in (2) but with white noise added. The noise is a normally distributed random variable with mean 0 and variance 50. Observe that the standard deviation of the noise is roughly the same as the range of the objective function values. This makes the response surface highly noisy, and hence this problem is relatively difficult to solve.

For the numerical studies, the additional parameters of the BEES algorithm under consideration were set as follows: k_l = 25, k_g = 20, g = 5, α = 0.3, m = 10, and γ = 0.5. As before, the SA method is implemented as described by Alrefaei and Andradóttir (1999) with the parameter values given in Section 2.2. The number of replications at the current and candidate solutions conducted in each iteration of the Local and Global SA approaches was set equal to 10. Again, the performance of the algorithms was compared based on 100 replications. Figure 2 shows the average performance of the estimated optimal solution at each point in time as the simulation effort increases. The considerably worse performance of the Global SA approach in this example can be explained by the fact that the estimator of the optimal solution is very noisy in this case. The remaining conclusions are similar to those in Section 2, except that convergence is slower for all three algorithms due to the noise in the estimated objective function values.

[Figure 2: Average Objective Function Value at the Estimated Optimal Solution for the Stochastic Problem. The plot compares BEES, Local SA, and Global SA against the optimal solution as the number of objective function evaluations (×10^4) increases.]

CONCLUSIONS

This paper has presented a new random search method for simulation optimization. The approach emphasizes the need for maintaining the right balance between exploration and exploitation during the various stages of the search. Preliminary numerical results show promising performance of the method. More numerical studies are of course required to better understand the behavior of the approach. We are also interested in exploring whether the performance of existing methods for simulation optimization can be improved by incorporating ideas from this paper.

ACKNOWLEDGMENTS

This material is based upon work supported by the National Science Foundation under Grant No. 0000135, Grant No. 0217860, and Grant No. 0400260.

REFERENCES

Alrefaei, M. H., and S. Andradóttir. 1999. A simulated annealing algorithm with constant temperature for discrete stochastic optimization. Management Science 45 (5): 748-764.
Alrefaei, M. H., and S. Andradóttir. 2001. A modification of the stochastic ruler method for discrete stochastic optimization. European Journal of Operational Research 133 (1): 160-182.
Alrefaei, M. H., and S. Andradóttir. 2004. Discrete stochastic optimization using variants of the stochastic ruler method. Working paper.
Andradóttir, S. 1998. Simulation optimization. In Handbook of simulation: Principles, methodology, advances, applications, and practice, ed. J. Banks, 307-333. New York: Wiley.
Andradóttir, S. 1999. Accelerating the convergence of random search methods for discrete stochastic optimization. ACM Transactions on Modeling and Computer Simulation 9 (4): 349-380.
Andradóttir, S. 2004. Simulation optimization with countably infinite feasible regions: Efficiency and convergence. Working paper.
Fu, M. C. 2002. Optimization for simulation: Theory vs. practice. INFORMS Journal on Computing 14 (3): 192-215.
Gelfand, S. B., and S. K. Mitter. 1989. Simulated annealing with noisy or imprecise energy measurements. Journal of Optimization Theory and Applications 62 (1): 49-62.
Gong, W.-B., Y.-C. Ho, and W. Zhai. 1999. Stochastic comparison algorithm for discrete optimization with estimation. SIAM Journal on Optimization 10 (2): 384-404.
AUTHOR BIOGRAPHIES