http://SimulationResearch.lbl.gov
Michael Wetter
MWetter@lbl.gov
December 8, 2011
Notice:
This work was supported by the U.S. Department of Energy (DOE), by the Swiss Academy
of Engineering Sciences (SATW), and by the Swiss National Energy Fund (NEFF).
GenOpt
Generic Optimization Program
Version 3.1.0
DISCLAIMER

This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or The Regents of the University of California.
Contents

1 Abstract

2 Notation

3 Introduction

4 Optimization Problems
  4.1 Classification of Optimization Problems
    4.1.1 Problems with Continuous Variables
    4.1.2 Problems with Discrete Variables
    4.1.3 Problems with Continuous and Discrete Variables
    4.1.4 Problems that use a Building Simulation Program
  4.2 Algorithm Selection
    4.2.1 Problem Pc with n > 1
    4.2.2 Problem Pcg with n > 1
    4.2.3 Problem Pc with n = 1
    4.2.4 Problem Pcg with n = 1
    4.2.5 Problem Pd
    4.2.6 Problem Pcd and Pcdg
    4.2.7 Functions with Several Local Minima

5 Algorithms for Multi-Dimensional Optimization
  5.1 Generalized Pattern Search Methods (Analysis)
  5.2 Generalized Pattern Search Methods (Implementations)
    5.2.1 Coordinate Search Algorithm
    5.2.2 Hooke-Jeeves Algorithm
      c) Global Search Set Map
      d) Local Search Direction Map
      e) Parameter Update
      f) Keywords
    5.2.3 Multi-Start GPS Algorithms
  5.3 Discrete Armijo Gradient
    5.3.1 Keywords
  5.4 Particle Swarm Optimization
    5.4.1 PSO for Continuous Variables
      a) Neighborhood Topology
      b) Model PSO Algorithm
      c) Particle Update Equation
        (i) Inertia Weight
        (ii) Constriction Coefficient
    5.4.2 PSO for Discrete Variables
    5.4.3 PSO for Continuous and Discrete Variables
    5.4.4 PSO on a Mesh
    5.4.5 Population Size and Number of Generations
    5.4.6 Keywords
  5.5 Hybrid GPS Algorithm with PSO Algorithm
    5.5.1 For Continuous Variables
    5.5.2 For Continuous and Discrete Variables
    5.5.3 Keywords
  5.6 Simplex Algorithm of Nelder and Mead
    5.6.1 Main Operations
    5.6.2 Basic Algorithm
    5.6.3 Stopping Criteria
    5.6.4 O'Neill's Modification
    5.6.5 Modification of Stopping Criteria
    5.6.6 Benchmark Tests
    5.6.7 Keywords

6 Algorithms for One-Dimensional Optimization
  6.1 Interval Division Algorithms

7 Algorithms for Parametric Runs
  7.1 Parametric Runs by Single Variation
    7.1.1 Algorithm Description
    7.1.2 Keywords
  7.2 Parametric Runs on a Mesh
    7.2.1 Algorithm Description
    7.2.2 Keywords

8 Constraints
  8.1 Constraints on Independent Variables
    8.1.1 Box Constraints
    8.1.2 Coupled Linear Constraints
  8.2 Constraints on Dependent Variables
    8.2.1 Barrier Functions
    8.2.2 Penalty Functions
    8.2.3 Implementation of Barrier and Penalty Functions

9 Program
  9.1 Interface to the Simulation Program
  9.2 Interface to the Optimization Algorithm
  9.3 Package genopt.algorithm
  9.4 Implementing a New Optimization Algorithm

10 Installing and Running GenOpt

11 Setting Up an Optimization Problem
  11.1 File Specification

12 Conclusions

13 Acknowledgments

14 Legal
  14.1 Copyright Notice
  14.2 License agreement

A Benchmark Tests
  A.1 Rosenbrock
  A.2 Function 2D1
  A.3 Function Quad
Product and company names mentioned herein may be the trademarks of their
respective owners. Any rights not expressly granted herein are reserved.
1 Abstract
2 Notation

1. We use the notation a ≜ b to denote that a is equal to b by definition. We use the notation a ← b to denote that a is assigned the value of b.
3 Introduction

This problem is usually solved by iterative methods, which construct infinite sequences of progressively better approximations to a solution, i.e., a point that satisfies an optimality condition. If X ⊂ R^n, with some n ∈ N, and X or f(·) is not convex, we do not have a test for global optimality, and the most one can obtain is a point that satisfies a local optimality condition. Furthermore, for X ⊂ R^n, tests for optimality are based on differentiability assumptions of the cost function. Consequently, optimization algorithms can fail, possibly far from a solution, if f(·) is not differentiable in the continuous independent variables. Some optimization algorithms are more likely to fail at
1 The independent variables are the variables that are varied by the optimization algorithm from one iteration to the next. They are also called design parameters or free parameters.
2 The cost function is the function being optimized. The cost function measures a quantity that should be minimized, such as a building's annual operation cost, a system's energy consumption, or a norm between simulated and measured values in a data-fitting process. The cost function is also called the objective function.
3 If f(·) is discontinuous, it may only have an infimum (i.e., a greatest lower bound) but no minimum, even if the constraint set X is compact. Thus, to be correct, (3.1) should be replaced by inf_{x∈X} f(x). For simplicity, we will not make this distinction.
discontinuities than others. GenOpt has algorithms that are not very sensitive to (small) discontinuities in the cost function, such as Generalized Pattern
Search algorithms, which can also be used in conjunction with heuristic global
optimization algorithms.
Since one of GenOpt's main application fields is building energy use or
operation cost optimization, GenOpt has been designed such that it addresses
the special properties of optimization problems in this area. In particular,
GenOpt is designed for optimization problems with the following properties:
1. The cost function may have to be defined on approximate numerical solutions of differential algebraic equations, which may fail to be continuous
(see Section 4.1.4).
2. The number of independent variables is small.4
3. Evaluating the cost function requires much more computation time than
determining the values for the next iterate.
4. Analytical properties of the cost function (such as a formula for the gradient) are not available.
GenOpt has the following properties:
1. GenOpt can be coupled to any simulation program that calculates the
cost function without having to modify or recompile either program,
provided that the simulation program reads its input from text files and
writes its output to text files.
2. The user can select an optimization algorithm from an algorithm library,
or implement a custom algorithm without having to recompile and understand the whole optimization environment.
3. GenOpt does not require an expression for the gradient of the cost function.
With GenOpt, it is easy to couple a new simulation program, specify the
optimization variables and minimize the cost function. Therefore, in designing
complex systems, as well as in system analysis, a generic optimization program
like GenOpt offers valuable assistance. Note, however, that optimization is not
easy: The efficiency and success of an optimization is strongly affected by the
properties and the formulation of the cost function, and by the selection of an
appropriate optimization algorithm.
This manual is structured as follows: In Section 4, we classify optimization problems and discuss which of GenOpt's algorithms can be used for each of these problems. Next, we explain the algorithms that are implemented in GenOpt: In Section 5, we discuss the algorithms for multi-dimensional optimization; in Section 6, the algorithms for one-dimensional optimization; and in Section 7, the algorithms for parametric runs.
4 By small, we mean on the order of 10, but the maximum number of independent variables is not restricted in GenOpt.
4 Optimization Problems

4.1 Classification of Optimization Problems
We will now classify some optimization problems that can be solved with GenOpt's optimization algorithms. The classification will be used in Section 4.2 to recommend suitable optimization algorithms.
We distinguish between problems whose design parameters are continuous variables^1, discrete variables^2, or both. In addition, we distinguish between problems with and without inequality constraints on the dependent variables.

min_{x∈X} f(x),   (4.2)

min_{x∈X} f(x),   (4.3a)
g(x) ≤ 0,         (4.3b)

where everything is as in (4.2) and, in addition, g: R^n → R^m is a once continuously differentiable constraint function (for some m ∈ N). We will assume that there exists an x′ ∈ X that satisfies g(x′) < 0.

1 Continuous variables can take on any value on the real line, possibly between lower and upper bounds.
2 Discrete variables can take on only integer values.
min_{x∈Xd} f(x).   (4.4)

min_{x∈Xd} f(x),   (4.5a)
g(x) ≤ 0.          (4.5b)

min_{x∈X} f(x),    (4.6a)
X ≜ Xc × Xd.       (4.6b)

min_{x∈X} f(x),    (4.7a)
g(x) ≤ 0,          (4.7b)
4.2 Algorithm Selection

In this section, we will discuss which of GenOpt's algorithms can be selected for the optimization problems that we introduced in Section 4.1.
can be used. Every accumulation point of the Discrete Armijo Gradient algorithm is a feasible stationary point.
If f(·) is not continuously differentiable, or if f(·) must be approximated by an approximating cost function f*(ε, ·) where the approximation error cannot be controlled, as described in Section 4.1.4, then Pc can only be solved heuristically. We recommend using the hybrid algorithm (Section 5.5, page 42), the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), possibly with multiple starting points (Section 5.2.3, page 26), or a Particle Swarm Optimization algorithm (Section 5.4, page 32).
We do not recommend using the Nelder-Mead Simplex algorithm (Section 5.6, page 45) or the Discrete Armijo Gradient algorithm (Section 5.3, page 28).
The following approach reduces the risk of failing at a point which is non-optimal and far from a minimizer of f(·):
1. Selecting large values for the parameter Step in the optimization command file (see page 87), as shown in the sketch after this list.
2. Selecting different initial iterates.
3. Using the hybrid algorithm of Section 5.5, the GPS implementation of the Hooke-Jeeves algorithm, possibly with multiple starting points (Section 5.2.3, page 26), and/or a Particle Swarm Optimization algorithm, and selecting the best of the solutions.
4. Doing a parametric study around the solution that has been obtained by any of the above optimization algorithms. The parametric study can be done using the algorithms Parametric (Section 7.1, page 62) and/or Mesh (Section 7.2, page 63). If the parametric study yields a further reduction in cost, then the optimization failed at a non-optimal point. In this situation, one may want to try another optimization algorithm.
If f(·) is continuously differentiable but must be approximated by approximating cost functions f*(ε, ·) where the approximation error can be controlled as described in Section 4.1.4, then Pc can be solved using the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), both with the error control scheme described in the Model GPS Algorithm 5.1.8 (page 19). The GPS implementation of the Hooke-Jeeves algorithm can be used with multiple starting points (Section 5.2.3, page 26). The error control scheme can be implemented using the value of GenOpt's variable stepNumber (page 68) and GenOpt's pre-processing capabilities (Section 11.3, page 92). A more detailed description of how to use the error control scheme can be found in [PW03, WP03].
possibly with multiple starting points (Section 5.2.3, page 26). Constraints g(·) ≤ 0 can be implemented using barrier and penalty functions (Section 8, page 65).
If f(·) or g(·) are not continuously differentiable, we recommend using the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), possibly with multiple starting points (Section 5.2.3, page 26), and implementing the constraints g(·) ≤ 0 using barrier and penalty functions (Section 8, page 65). To reduce the risk of terminating far from a minimum point of f(·), we recommend the same measures as for solving Pc.
4.2.5 Problem Pd
To solve Pd , a Particle Swarm Optimization algorithm can be used (Section 5.4, page 32).
5 Algorithms for Multi-Dimensional Optimization

5.1 Generalized Pattern Search Methods (Analysis)
Generalized Pattern Search (GPS) algorithms are derivative-free optimization algorithms for the minimization of problem Pc and Pcg, defined in (4.2) and (4.3), respectively. We will present the GPS algorithms for the case where the function f(·) cannot be evaluated exactly, but can be approximated by functions f*: R_+^q × R^n → R, where the first argument ε ∈ R_+^q is the precision parameter of the PDE, ODE, and algebraic equation solvers. Obviously, the explanations are similar for problems where f(·) can be evaluated exactly, except that the scheme to control ε is not applicable, and that the approximate functions f*(ε, ·) are replaced by f(·).
Under the assumption that the cost function is continuously differentiable,
all the accumulation points constructed by the GPS algorithms are stationary.
What GPS algorithms have in common is that they define the construction
of a mesh Mk in Rn , which is then explored according to some rules that differ
among the various members of the family of GPS algorithms. If no decrease in
cost is obtained on mesh points around the current iterate, then the distance
between the mesh points is reduced, and the process is repeated.
We will now explain the framework of GPS algorithms that will be used to
implement different instances of GPS algorithms in GenOpt. The discussion
follows the more detailed description of [PW03].
5.1.1 Assumptions

We will assume that f(·) and its approximating functions {f*(ε, ·)}_{ε∈R_+^q} have the following properties.

Assumption 5.1.1
1. There exists an error bound function φ: R_+^q → R_+ such that for any bounded set S ⊂ X, there exists an ε_S ∈ R_+^q and a scalar K_S ∈ (0, ∞) such that for all x ∈ S and for all ε ∈ R_+^q, with ε ≤ ε_S,

|f*(ε, x) − f(x)| ≤ K_S φ(ε).   (5.1)

Furthermore,

lim_{‖ε‖→0} φ(ε) = 0.   (5.2)
With this construction, all iterates lie on a rational mesh of the form

M_k ≜ {x_0 + ∆_k D m | m ∈ N^{2n}}.   (5.7)

We will now characterize the set-valued maps that determine the mesh points for the global and local searches. Note that the images of these maps may depend on the entire history of the computation.

Definition 5.1.6 Let X_k and ∆_k be the sets of all sequences of k + 1 elements of R^n and Q_+, respectively, let M_k be the current mesh, and let ε ∈ R_+^q be the solver tolerance.
1. We define the global search set map to be any set-valued map

γ_k: X_k × ∆_k × R_+^q → 2^{M_k ∩ X} ∪ {∅}.   (5.8a)

4. We will call

L_k ≜ {x_k + ∆_k D e_i | i ∈ {1, . . . , 2n}} ∩ X   (5.8c)

the local search set.
Remark 5.1.7
1. The map γ_k(·, ·, ·) can be dynamic in the sense that if {x_{k_i}}_{i=0}^{I} ≜ γ_k(x_k, ∆_k, ε), then the rule for selecting x_{k_î}, 1 ≤ î ≤ I, can depend on {x_{k_i}}_{i=0}^{î−1} and {f*(ε, x_{k_i})}_{i=0}^{î−1}. It is only important that the global search terminates after a finite number of computations, and that G_k ∈ 2^{M_k ∩ X} ∪ {∅}.
2. As we shall see, the global search affects only the efficiency of the algorithm but not its convergence properties. Any heuristic procedure that leads to a finite number of function evaluations can be used for γ_k(·, ·, ·).
3. The empty set is included in the range of γ_k(·, ·, ·) to allow omitting the global search.
Algorithm 5.1.8 (Model GPS Algorithm)

Data:   Initial iterate x_0 ∈ X;
        Mesh size divider r ∈ N, with r > 1;
        Initial mesh size exponent s_0 ∈ N.
Maps:   Global search set map γ_k: X_k × ∆_k × R_+^q → 2^{M_k ∩ X} ∪ {∅};
        Function ρ: R_+ → R_+^q (to assign ε), such that the composition
        φ ∘ ρ: R_+ → R_+ is strictly monotone decreasing and satisfies
        φ(ρ(∆))/∆ → 0, as ∆ → 0.
Step 0: Initialize k = 0, ∆_0 = 1/r^{s_0}, and ε = ρ(1).
Step 1: Global Search.
        Construct the global search set G_k = γ_k(x_k, ∆_k, ε).
        If f*(ε, x′) − f*(ε, x_k) < 0 for any x′ ∈ G_k, go to Step 3;
        else, go to Step 2.
Step 2: Local Search.
        Evaluate f*(ε, ·) for any x′ ∈ L_k until some x′ ∈ L_k
        satisfying f*(ε, x′) − f*(ε, x_k) < 0 is obtained, or until all points
        in L_k are evaluated.
Step 3: Parameter Update.
        If there exists an x′ ∈ G_k ∪ L_k satisfying f*(ε, x′) − f*(ε, x_k) < 0,
        set x_{k+1} = x′, s_{k+1} = s_k, ∆_{k+1} = ∆_k, and do not change ε;
        else, set x_{k+1} = x_k, s_{k+1} = s_k + t_k, with t_k ∈ N_+ arbitrary,
        ∆_{k+1} = 1/r^{s_{k+1}}, and ε = ρ(∆_{k+1}/∆_0).
Step 4: Replace k by k + 1, and go to Step 1.
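As a worked example of the mesh size update, take r = 2, s_0 = 0, and t_k = 1 at every unsuccessful iteration. Then ∆_0 = 1, and the mesh size sequence is 1, 1/2, 1/4, 1/8, . . ., i.e., each time neither the global nor the local search yields a cost reduction, the distance between mesh points is halved.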
Remark 5.1.9
1. To ensure that ε does not depend on the scaling of ∆_0, we normalized the argument of ρ(·). In particular, we want to decouple ε from the user's choice of the initial mesh parameter.
2. In Step 2, once a decrease of the cost function is obtained, one can proceed to Step 3. One may, however, evaluate f*(ε, ·) at more points in L_k in an attempt to obtain a bigger reduction in cost. One is allowed to proceed to Step 3 only after either a cost decrease has been found, or after all points in L_k are tested.
a) Unconstrained Minimization

We will first present the convergence properties of the Model GPS Algorithm 5.1.8 on unconstrained minimization problems, i.e., for X = R^n.
First, we will need the notion of a refining subsequence, which we define as follows:

Definition 5.1.10 (Refining Subsequence) Consider a sequence {x_k}_{k=0}^∞ constructed by Model GPS Algorithm 5.1.8. We will say that the subsequence {x_k}_{k∈K} is the refining subsequence, if ∆_{k+1} < ∆_k for all k ∈ K, and ∆_{k+1} = ∆_k for all k ∉ K.

We now state that pattern search algorithms with adaptive precision function evaluations construct sequences with stationary accumulation points.
Theorem 5.1.11 (Convergence to a Stationary Point) Suppose that Assumptions 5.1.1 and 5.1.4 are satisfied and that X = R^n. Let x* ∈ R^n be an accumulation point of the refining subsequence {x_k}_{k∈K}, constructed by Model GPS Algorithm 5.1.8. Then,

∇f(x*) = 0.   (5.9)
b) Box-Constrained Minimization

We now present the convergence results for the box-constrained problem (4.2). See [AD03, PW03, KLT03] for the more general case of linearly constrained problems and for the convergence proofs.
First, we introduce the notion of a tangent cone and a normal cone, which are defined as follows:

Definition 5.1.12 (Tangent and Normal Cone)
1. Let X ⊂ R^n. Then, we define the tangent cone to X at a point x* ∈ X by

T_X(x*) ≜ {μ (x − x*) | μ ≥ 0, x ∈ X}.   (5.10a)

2. Let T_X(x*) be as above. Then, we define the normal cone to X at x* ∈ X by

N_X(x*) ≜ {v ∈ R^n | ∀ t ∈ T_X(x*), ⟨v, t⟩ ≤ 0}.   (5.10b)
We now state that the accumulation points generated by Model GPS Algorithm 5.1.8 are feasible stationary points of problem (4.2).

Theorem 5.1.13 (Convergence to a Feasible Stationary Point) Suppose Assumptions 5.1.1 and 5.1.4 are satisfied. Let x* ∈ X be an accumulation point of a refining subsequence {x_k}_{k∈K} constructed by Model GPS Algorithm 5.1.8 in solving problem (4.2). Then,

⟨∇f(x*), t⟩ ≥ 0, for all t ∈ T_X(x*),   (5.11a)

and

−∇f(x*) ∈ N_X(x*).   (5.11b)

5.2 Generalized Pattern Search Methods (Implementations)

5.2.1 Coordinate Search Algorithm

a) Algorithm Parameters

The search direction matrix is defined as

D ≜ [+s^1 e_1, −s^1 e_1, . . . , +s^n e_n, −s^n e_n].   (5.12)

c) Local Search Direction Map
Output: Set of trial points T.
Step 0: Set T = ∅ and δ_i = 0, for all i ∈ {1, . . . , n}.
Step 1: For i = 1, . . . , n
          Set x̃ = x + ∆_k D e_{2i−1+δ_i} and T ← T ∪ {x̃}.
          If f*(ε, x̃) < f*(ε, x)
            Set x = x̃.
          else
            If δ_i = 0, set δ_i = 1, else set δ_i = 0.
            Set x̃ = x + ∆_k D e_{2i−1+δ_i} and T ← T ∪ {x̃}.
            If f*(ε, x̃) < f*(ε, x)
              Set x = x̃.
            else
              If δ_i = 0, set δ_i = 1, else set δ_i = 0.
            end if.
          end if.
        end for.
Step 2: Return T.
e)
Keywords
For the GPS implementation of the Coordinate Search Algorithm, the command file (see page 86) must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command
file must have the following form:
Algorithm {
   Main                      = GPSCoordinateSearch;
   MeshSizeDivider           = Integer;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer;   // 0 < NumberOfStepReduction
}
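For instance, a complete section might read as follows (the numerical values are illustrative only):

Algorithm {
   Main                      = GPSCoordinateSearch;
   MeshSizeDivider           = 2;
   InitialMeshSizeExponent   = 0;
   MeshSizeExponentIncrement = 1;
   NumberOfStepReduction     = 4;
}

With these values, ∆_0 = 1/2^0 = 1 and each step reduction divides the mesh size by 2, so the mesh size sequence is 1, 1/2, 1/4, 1/8, 1/16 over the four step reductions.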
5.2.2 Hooke-Jeeves Algorithm

a) Algorithm Parameters

c) Global Search Set Map

Thus, γ_k(x_k, ∆_k, ε) = G_k.
d) Local Search Direction Map

If the global search, as defined by Algorithm 5.2.3, has failed in reducing f*(ε, ·), then Algorithm 5.2.3 has constructed a set G_k that contains the set {x_k + ∆_k D e_i | i = 1, . . . , 2n}. This is because in the evaluation of E_k(x_k, ∆_k, ε), defined in Algorithm 5.2.1, all "If f*(ε, x̃) < f*(ε, x)" statements yield false, and, hence, one has constructed {x_k + ∆_k D e_i | i = 1, . . . , 2n} = E_k(x_k, ∆_k, ε).
Because the columns of D span R^n positively, it follows that the search on the set {x_k + ∆_k D e_i | i = 1, . . . , 2n} is a local search. Hence, the constructed set

L_k ≜ {x_k + ∆_k D e_i | i = 1, . . . , 2n} ⊂ G_k   (5.13)

is the local search set.

e) Parameter Update
f) Keywords
For the GPS implementation of the Hooke-Jeeves algorithm, the command
file (see page 86) must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command
file must have the following form:
Algorithm {
   Main                      = GPSHookeJeeves;
   MeshSizeDivider           = Integer;   // bigger than 1
   InitialMeshSizeExponent   = Integer;   // bigger than or equal to 0
   MeshSizeExponentIncrement = Integer;   // bigger than 0
   NumberOfStepReduction     = Integer;   // bigger than 0
}

The entries are the same as for the Coordinate Search algorithm, and are explained on page 23.
5.2.3 Multi-Start GPS Algorithms

To invoke the multi-start Coordinate Search algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main                      = GPSCoordinateSearch;
   MultiStart                = Uniform;
   Seed                      = Integer;
   NumberOfInitialPoint      = Integer;   // bigger than or equal to 1
   MeshSizeDivider           = Integer;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer;   // 0 < NumberOfStepReduction
}
The multi-start GPS implementation of the Hooke-Jeeves algorithm is invoked as follows:

Algorithm {
   Main                      = GPSHookeJeeves;
   MultiStart                = Uniform;
   Seed                      = Integer;
   NumberOfInitialPoint      = Integer;   // 0 < NumberOfInitialPoint
   MeshSizeDivider           = Integer;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer;   // 0 < NumberOfStepReduction
}

The entries are the same as for the multi-start Coordinate Search algorithm above.
5.3 Discrete Armijo Gradient

Algorithm 5.3.1 (Discrete Armijo Gradient Algorithm)

Data:   Initial iterate x_0 ∈ X.
        α, β ∈ (0, 1), γ ∈ (0, ∞), k*, k_0 ∈ Z,
        l_max, κ ∈ N (for resetting the step-size calculation).
        Termination criteria ε_m, ε_x ∈ R_+, i_max ∈ N.
Step 0: Initialize i = 0 and m = 0.
Step 1: Compute the search direction h_i.
        If β^m < ε_m, stop.
        Else, set ε = β^{k_0+m} and compute, for j ∈ {1, . . . , n},
        h_i^j = −(f(x_i + ε e_j) − f(x_i))/ε.
Step 2: Check descent.
        Compute θ(x_i; h_i) = (f(x_i + ε h_i) − f(x_i))/ε.
        If θ(x_i; h_i) < 0, go to Step 3.
        Else, replace m by m + 1 and go to Step 1.
Step 3: Line search.
        Use Algorithm 5.3.2 (which requires k*, l_max and κ) to compute k_i.
        Set

        λ_i = arg min_{λ ∈ {β^{k_i}, β^{k_i−1}}} f(x_i + λ h_i).   (5.14)
Algorithm 5.3.2 computes the step-size exponent k_i ∈ Z such that

f(x_i + β^{k_i} h_i) − f(x_i) ≤ β^{k_i} α θ(x_i; h_i),   (5.15a)

f(x_i + β^{k_i−1} h_i) − f(x_i) > β^{k_i−1} α θ(x_i; h_i).   (5.15b)

2. If {x_i}_{i=0}^∞ is an infinite sequence constructed by Algorithm 5.3.1 and Algorithm 5.3.2 in solving (4.2), then every accumulation point x̂ of {x_i}_{i=0}^∞ satisfies ∇f(x̂) = 0.

Note that h_i has the same units as the cost function, and the algorithm evaluates x_i + λ h_i for some λ ∈ R_+. Thus, the algorithm is sensitive to the scaling of the cost function.
5.3.1 Keywords
For the Discrete Armijo Gradient algorithm, the command file (see page 86)
must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command
file must have the following form:
Algorithm {
   Main     = DiscreteArmijoGradient;
   Alpha    = Double;    // 0 < Alpha < 1
   Beta     = Double;    // 0 < Beta < 1
   Gamma    = Double;    // 0 < Gamma
   K0       = Integer;
   KStar    = Integer;
   LMax     = Integer;   // 0 <= LMax
   Kappa    = Integer;   // 0 <= Kappa
   EpsilonM = Double;    // 0 < EpsilonM
   EpsilonX = Double;    // 0 < EpsilonX
}
5.4 Particle Swarm Optimization
Particle Swarm Optimization (PSO) algorithms are population-based probabilistic optimization algorithms first proposed by Kennedy and Eberhart [EK95, KE95] to solve problem Pc defined in (4.2) with possibly discontinuous cost function f: R^n → R. In Section 5.4.2, we will present a PSO algorithm for discrete independent variables to solve problem Pd defined in (4.4), and in Section 5.4.3 we will present a PSO algorithm for continuous and discrete independent variables to solve problem Pcd defined in (4.6). To avoid ambiguous notation, we always denote the dimension of the continuous independent variable by n_c ∈ N and the dimension of the discrete independent variable by n_d ∈ N.
PSO algorithms exploit a set of potential solutions to the optimization problem. Each potential solution is called a particle, and the set of potential solutions in each iteration step is called a population. PSO algorithms are global optimization algorithms and neither require nor approximate gradients of the cost function. The first population is typically initialized using a random number generator to spread the particles uniformly in a user-defined hypercube. A particle update equation, which is modeled on the social behavior of members of bird flocks or fish schools, determines the location of each particle in the next generation.
A survey of PSO algorithms can be found in Eberhart and Shi [ES01]. Laskari et al. present a PSO algorithm for minimax problems [LPV02b] and for integer programming [LPV02a]. In [PV02a], Parsopoulos and Vrahatis discuss the implementation of inequality and equality constraints to solve problem Pcg defined in (4.3).
We first discuss the case where the independent variable is continuous, i.e., the case of problem Pc defined in (4.2).
p_{l,i}(k) ≜ arg min_{x ∈ {x_i(j)}_{j=0}^{k}} f(x),   (5.17a)

p_{g,i}(k) ≜ arg min_{x ∈ ∪_{i=1}^{n_P} {x_i(j)}_{j=0}^{k}} f(x).   (5.17b)

Thus, p_{l,i}(k) is the location that for the i-th particle yields the lowest cost over all generations, and p_{g,i}(k) is the location of the best particle over all generations. The term c_1 ρ_1(k) (p_{l,i}(k) − x_i(k)) is associated with cognition since it takes into account the particle's own experience, and the term c_2 ρ_2(k) (p_{g,i}(k) − x_i(k)) is associated with social interaction between the particles. In view of this similarity, c_1 is called the cognitive acceleration constant and c_2 is called the social acceleration constant.
a) Neighborhood Topology

The minimum in (5.17b) need not be taken over all points in the population. The set of points over which the minimum is taken is defined by the neighborhood topology. In PSO, the neighborhood topologies are usually defined using the particle index, and not the particle location. We will use the lbest, gbest, and the von Neumann neighborhood topology, which we will now define.
In the lbest topology of size l ∈ N, with l > 1, the neighborhood of a particle with index i ∈ {1, . . . , n_P} consists of all particles whose indices are in the set

N_i ≜ {i − l, . . . , i, . . . , i + l},   (5.18a)

The gray points in Figure 5.1 are N^v_{(1,2)}. For simplicity, we round in GenOpt the user-specified number of particles n_P ∈ N up to the next biggest integer n̂_P
[Figure 5.1: Von Neumann neighborhood topology on a two-dimensional lattice of particles indexed (0, 0) to (2, 3); the gray points indicate N^v_{(1,2)}.]

such that √n̂_P ∈ N and n_P ≤ n̂_P. Then, we can wrap the indices by replacing, for k ∈ Z, (0, k) by (√n̂_P, k), (√n̂_P + 1, k) by (1, k), and similarly by replacing (k, 0) by (k, √n̂_P) and (k, √n̂_P + 1) by (k, 1).

2 In principle, the lattice need not be a square, but we do not see any computational disadvantage of selecting a square lattice.
b) Model PSO Algorithm

Algorithm 5.4.1 (Model PSO Algorithm)

Step 0: Initialize k = 0 and the initial population {x_i(0)}_{i=1}^{n_P} ⊂ X.
Step 1: For i ∈ {1, . . . , n_P}, determine the local best particles p_{l,i}(k) and the
        global best particles p_{g,i}(k), as in (5.17a) and (5.17b), with the
        minimum in (5.17b) taken over the neighborhood N_i.
Step 2: Update the particle location {x_i(k + 1)}_{i=1}^{n_P} ⊂ X.
Step 3: If k = n_G, stop. Else, go to Step 4.
Step 4: Replace k by k + 1, and go to Step 1.

We will now discuss the different implementations of the Model PSO Algorithm 5.4.1 in GenOpt.
c) Particle Update Equation

(i) Version with Inertia Weight  Eberhart and Shi [SE98, SE99] introduced an inertia weight w(k), which improves the performance of the original PSO algorithm. In the version with inertia weight, the particle update equation is, for all i ∈ {1, . . . , n_P}, for k ∈ N and x_i(k) ∈ R^{n_c}, with v_i(0) = 0,

v̂_i(k + 1) = w(k) v_i(k) + c_1 ρ_1(k) (p_{l,i}(k) − x_i(k)) + c_2 ρ_2(k) (p_{g,i}(k) − x_i(k)),   (5.20a)

v_i^j(k + 1) = sign(v̂_i^j(k + 1)) min{|v̂_i^j(k + 1)|, v_max^j},   j ∈ {1, . . . , n_c},   (5.20b)

x_i(k + 1) = x_i(k) + v_i(k + 1),   (5.20c)

where

v_max^j ≜ λ (u^j − l^j),   (5.20d)

with λ ∈ R_+, and l, u ∈ R^{n_c} are the lower and upper bounds of the independent variable. A common value is λ = 1/2. In GenOpt, if λ ≤ 0, then no velocity clamping is used, and hence, v_i^j(k + 1) = v̂_i^j(k + 1), for all k ∈ N, all i ∈ {1, . . . , n_P} and all j ∈ {1, . . . , n_c}.
The inertia weight is computed as

w(k) = w_0 − (k/K) (w_0 − w_1),   (5.20e)

where w_0 ∈ R is the initial inertia weight, w_1 ∈ R is the inertia weight for the last generation, with 0 ≤ w_1 ≤ w_0, and K ∈ N is the maximum number of generations. w_0 = 1.2 and w_1 = 0 can be considered as good choices [PV02b].
(ii) Version with Constriction Coefficient  In this version, the particle location is updated as

x_i(k + 1) = x_i(k) + v_i(k + 1),   (5.21c)

where

v_max^j ≜ λ (u^j − l^j)   (5.21d)

is as in (5.20d). In (5.21a), χ(κ, φ) is called the constriction coefficient, defined as

χ(κ, φ) ≜ 2κ / |2 − φ − √(φ² − 4φ)|, if φ > 4; √κ, otherwise.   (5.21e)
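As a numeric illustration of (5.21e), for κ = 1 and φ = 4.1 one obtains χ = 2/|2 − 4.1 − √(4.1² − 4 · 4.1)| = 2/2.74 ≈ 0.73, a commonly cited parameter combination for the constriction PSO.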
5.4.2 PSO for Discrete Variables

For the discrete independent variables, the velocity and location update equations are, for all i ∈ {1, . . . , n_P} and all components j,

v̂_i^j(k + 1) = v_i^j(k) + c_1 ρ_1(k) (p_{l,i}^j(k) − x_i^j(k)) + c_2 ρ_2(k) (p_{g,i}^j(k) − x_i^j(k)),   (5.22a)

v_i^j(k + 1) = sign(v̂_i^j(k + 1)) min{|v̂_i^j(k + 1)|, v_max},   (5.22b)

x_i^j(k + 1) = 0, if ρ_{i,j}(k) ≥ s(v_i^j(k + 1)); 1, otherwise,   (5.22c)

where

s(v) ≜ 1/(1 + e^{−v})   (5.22d)

is the sigmoid function shown in Fig. 5.2 and ρ_{i,j}(k) ~ U(0, 1), for all i ∈ {1, . . . , n_d} and for all j ∈ {1, . . . , m_i}.
In (5.22b), v_max ∈ R_+ is often set to 4 to prevent a saturation of the sigmoid function, and c_1, c_2 ∈ R_+ are often such that c_1 + c_2 = 4 (see [KES01]).
Notice that s(v) → 0.5, as v → 0, and consequently the probability of flipping a bit goes to 0.5. Thus, in the binary PSO, a small v_max causes a large exploration, whereas in the continuous PSO, a small v_max causes a small exploration of the search space.
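For instance, with v_max = 4, the probability that a bit is set to 1 is bounded above by s(4) = 1/(1 + e^{−4}) ≈ 0.982 and below by s(−4) ≈ 0.018, so each bit always retains a small probability of flipping, which preserves exploration.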
Any of the above neighborhood topologies can be used, and Model Algorithm 5.4.1 applies if we replace the constraint set X by the user-specified set X_d ⊂ Z^{n_d}.
5.4.4 PSO on a Mesh

In this implementation, the continuous independent variables are restricted to the mesh

M ≜ {x_{c,0} + (1/r^s) Σ_{i=1}^{n_c} m^i s^i e_i | m ∈ Z^{n_c}},   (5.24)

where s ∈ R^{n_c} is equal to the value defined by the variable Step in GenOpt's command file (see page 86). Then, we replace f*(ε, ·) by f̂: R^{n_c} × Z^{n_d} × R^{n_c} × R × R^{n_c} → R, defined by

f̂(x_c, x_d; x_{c,0}, ∆, s) ≜ f(γ(x_c), x_d),   (5.25)

where γ(x_c) denotes the mesh point associated with x_c.
5.4.6 Keywords
For the Particle Swarm algorithm, the command file (see page 86) can contain continuous and discrete independent variables.
The different specifications for the Algorithm section of the GenOpt command file are as follows:
PSO algorithm with inertia weight:

Algorithm {
   Main                      = PSOIW;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer;   // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;    // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;    // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;    // 0 < MaxVelocityDiscrete
   InitialInertiaWeight      = Double;    // 0 < InitialInertiaWeight
   FinalInertiaWeight        = Double;    // 0 < FinalInertiaWeight
}
PSO algorithm with constriction coefficient:

Algorithm {
   Main                      = PSOCC;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer;   // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;    // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;    // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;    // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;    // 0 < ConstrictionGain <= 1
}
PSO algorithm with constriction coefficient and continuous independent variables restricted to a mesh:

Algorithm {
   Main                      = PSOCCMesh;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer;   // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;    // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;    // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;    // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;    // 0 < ConstrictionGain <= 1
   MeshSizeDivider           = Integer;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer;   // 0 <= InitialMeshSizeExponent
}
The entries that are common to all implementations are defined as follows:

Main The name of the main algorithm. The implementation PSOIW uses the location update equation (5.20) for the continuous independent variables, and the implementation PSOCC uses (5.21) for the continuous independent variables. All implementations use (5.22) for the discrete independent variables.

NeighborhoodTopology This entry defines what neighborhood topology is being used.

NeighborhoodSize For the lbest neighborhood topology, this entry is equal to l in (5.18a). For the gbest and the von Neumann neighborhood topology, the value of NeighborhoodSize is ignored.

NumberOfParticle This is equal to the variable n_P ∈ N.

NumberOfGeneration This is equal to the variable n_G ∈ N in Algorithm 5.4.1.

Seed This value is used to initialize the random number generator.

CognitiveAcceleration This is equal to the variable c_1 ∈ R_+.

SocialAcceleration This is equal to the variable c_2 ∈ R_+.

MaxVelocityGainContinuous This is equal to the variable λ ∈ R_+ in (5.20d) and in (5.21d). If MaxVelocityGainContinuous is set to zero or to a negative value, then no velocity clamping is used, and hence, v_i^j(k + 1) = v̂_i^j(k + 1), for all k ∈ N, all i ∈ {1, . . . , n_P} and all j ∈ {1, . . . , n_c}.

MaxVelocityDiscrete This is equal to the variable v_max ∈ R_+ in (5.22b).
5.5 Hybrid GPS Algorithm with PSO Algorithm
Since the PSO algorithm is a global optimization algorithm, the hybrid algorithm is, compared to the Hooke-Jeeves algorithm, less likely to be attracted
by a local minimum that is not global. Thus, the hybrid algorithm combines
the global features of the PSO algorithm with the provable convergence properties of the GPS algorithm.
If the cost function is discontinuous, then the hybrid algorithm is, compared
to the Hooke-Jeeves algorithm, less likely to jam at a discontinuity far from a
solution.
5.5.3 Keywords
For this algorithm, the command file (see page 86) can contain continuous
and discrete independent variables. It must contain at least one continuous
parameter.
The specification of the Algorithm section of the GenOpt command file is as follows:
Note that the first entries are as for the PSO algorithm on page 40, and the last entries are as for the GPS implementation of the Hooke-Jeeves algorithm on page 25.
Algorithm {
   Main                      = GPSPSOCCHJ;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer;   // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;    // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;    // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;    // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;    // 0 < ConstrictionGain <= 1
   MeshSizeDivider           = Integer;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer;   // 0 < NumberOfStepReduction
}
5.6 Simplex Algorithm of Nelder and Mead

x_h ≜ arg max_{i∈I} f(x_i),   (5.27a)

x_l ≜ arg min_{i∈I} f(x_i),   (5.27b)
[Figure 5.3: Simplex operations: (a) reflection, (b) expansion, (c) partial inside contraction, (d) partial outside contraction, (e) total contraction.]
x_c ≜ (1/n) Σ_{i=1, i≠h}^{n+1} x_i.   (5.27c)
Figure 5.4: Sequence of iterates generated by the Simplex algorithm.
4. If it turned out under 3 that f(x*) ≥ f(x_l), then we check if the new point x* is the worst of all points: If f(x*) > f(x_i), for all i ∈ I, with i ≠ h, we contract the simplex (see 5); otherwise we replace x_h by x* and go to 2.
5. For the contraction, we first check if we should try a partial outside contraction or a partial inside contraction: If f(x*) ≥ f(x_h), then we try a partial inside contraction. To do so, we leave our indices as is and apply (5.28c). Otherwise, we try a partial outside contraction. This is done by replacing x_h by x* and applying (5.28c). After the partial inside or the partial outside contraction, we continue at 6.
6. If f(x**) ≥ f(x_h),^3 we do a total contraction of the simplex by replacing x_i ← (x_i + x_l)/2, for all i ∈ I. Otherwise, we replace x_h by x**. In both cases, we continue from 2.

3 Nelder and Mead [NM65] use the strict inequality f(x**) > f(x_h). However, if the user writes the cost function value with only a few representative digits to a text file, then the function looks like a step function if slow convergence is achieved. In such cases, f(x**) might sometimes be equal to f(x_h). Experimentally, it has been shown advantageous to perform a total contraction rather than continuing with a reflection. Therefore, the strict inequality has been changed to a weak inequality.
48
GenOpt
Generic Optimization Program
Version 3.1.0
Fig. 5.4 shows a contour plot of a cost function f: R^n → R with a sequence of iterates generated by the Simplex algorithm. The sequence starts with constructing an initial simplex x_1, x_2, x_3. x_1 has the highest function value and is therefore reflected, which generates x_4. x_4 is the best point in the set {x_1, x_2, x_3, x_4}. Thus, it is further expanded, which generates x_5. x_2, x_3 and x_5 now span the new simplex. In this simplex, x_3 is the vertex with the highest function value and hence goes over to x_6 and further to x_7. The process of reflection and expansion is continued twice more, which leads to the simplex spanned by x_7, x_9 and x_11. x_7 goes over to x_12, which turns out to be the worst point. Hence, we do a partial inside contraction, which generates x_13. x_13 is better than x_7, so we use the simplex spanned by x_9, x_11 and x_13 for the next reflection. The last steps of the optimization are not shown, for clarity.
If

(1/n) Σ_{i=1}^{n+1} (f(x_i) − (1/(n+1)) Σ_{i=1}^{n+1} f(x_i))² < ε²,   (5.30)
then the original implementation of the algorithm stops. Nelder and Mead
have chosen this stopping criterion based on the statistical problem of finding
the minimum of a sum of squares surface. In this problem, the curvature
near the minimum yields information about the unknown parameters. A slight
curvature indicates a high sampling variance of the estimate. Nelder and Mead
argue that in such cases, there is no reason for finding the minimum point
with high accuracy. However, if the curvature is marked, then the sampling
variance is low and a higher accuracy in determining the optimal parameter
set is desirable.
Note that the stopping criterion (5.30) requires the variance of the function
values at the simplex vertices to be smaller than a prescribed limit. However,
if f(·) has large discontinuities, which has been observed in building energy
optimization problems [WW03], then the test (5.30) may never be satisfied.
For this reason, among others, we do not recommend using this algorithm if
the cost function has large discontinuities.
After the stopping criterion is satisfied, O'Neill's modification tests whether x_l is a minimum point by checking whether

f(x_l) < f(x_l + c s_i e_i),   i ∈ {1, . . . , n},   (5.31a)

f(x_l) < f(x_l − c s_i e_i),   i ∈ {1, . . . , n},   (5.31b)

where x_l denotes the best known point, and s_i and e_i are as in (5.29). The trial points are constructed per (5.31c), where for each direction i ∈ {1, . . . , n}, the counter j ∈ N is set to zero for the first trial and increased by one as long as f(x_l) = f(x).
If (5.31a) fails for any direction, then x computed by (5.31c) is the new starting point and a new simplex with side lengths (c s_i), i ∈ {1, . . . , n}, is constructed. The point x that failed (5.31a) is then used as the initial point x_l in (5.29).
Numerical experiments showed that during slow convergence the algorithm
was restarted too frequently.
Fig. 5.5(a) shows a sequence of iterates where the algorithm was restarted
too frequently. The iterates in the figure are part of the iteration sequence near
the minimum of the test function shown in Fig. 5.5(b). The algorithm gets close
to the minimum with appropriately large steps. The last of these steps can be
seen at the right of the figure. After this step, the stopping criterion (5.30)
was satisfied which led to a restart check, followed by a new construction of
the simplex. From there on, the convergence was very slow due to the small
step size. After each step, the stopping criterion was satisfied again which led
to a new test of the optimality condition (5.31a), followed by a reconstruction
of the simplex. This check is very costly in terms of function evaluations and,
furthermore, the restart with a new simplex does not allow increasing the step
size, though we are heading locally in the right direction.
[Figure 5.5: (a) Sequence of iterates near the minimum, where the algorithm was restarted too frequently; (b) the corresponding test function f(x).]
We introduce the mean of the simplex vertices,

x_m ≜ (1/(n + 1)) Σ_{i=1}^{n+1} x_i,   (5.32)

where x_i, i ∈ {1, . . . , n + 1}, are the simplex vertices. We also introduce the normalized direction of the simplex between two steps,

d_k ≜ (x_{m,k} − x_{m,k−1}) / ‖x_{m,k} − x_{m,k−1}‖.   (5.33)

If

⟨d_{k+1}, d_k⟩ ≤ 0,   (5.34)

then the moving direction of the simplex has changed by at least π/2. Hence, the simplex has changed the exploration direction. Therefore, a minimum might be achieved and we need to test the variance of the vertices (5.30), possibly followed by a test of (5.31a).
Besides the above modification, a further modification was tested: In some cases, a reconstruction of the simplex after a failed check (5.31a) leads to slow convergence. Therefore, the algorithm was modified so that it continues at point 2 on page 47 without reconstructing the simplex after failing the test (5.31a). However, reconstructing the simplex led in most of the benchmark tests to faster convergence. Therefore, this modification is no longer used in the algorithm.
5.6.6 Benchmark Tests

[Table: Number of function evaluations for the test functions Rosenbrock, 2D1, Quad with I matrix and Quad with Q matrix, at accuracies ε = 10^−3 and ε = 10^−5, for the original and the modified stopping criterion, each with and without reconstruction of the simplex.]
5.6.7 Keywords
For the Simplex algorithm, the command file (see page 86) must only contain continuous parameters.
To invoke the Simplex algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main                    = NelderMeadONeill;
   Accuracy                = Double;    // 0 < Accuracy
   StepSizeFactor          = Double;    // 0 < StepSizeFactor
   BlockRestartCheck       = Integer;   // 0 <= BlockRestartCheck
   ModifyStoppingCriterion = Boolean;
}
[Figure 6.1: Interval division: the points x_{0,i}, x_{1,i}, x_{2,i}, x_{3,i} of iteration i ∈ {0, 1, 2, . . .} and the reduced interval of iteration i + 1.]
6 Algorithms for One-Dimensional Optimization

6.1 Interval Division Algorithms
x_1 ≜ x_0 + s (x_3 − x_0),   (6.1)
x_2 ≜ x_1 + s (x_3 − x_1).   (6.2)
i ∈ {0, 1, 2, . . .},   (6.3)

such that we have to evaluate f(·) in each step at one new point only. To do so, we assign the new bounds of the interval such that either [x_{0,(i+1)}, x_{3,(i+1)}] = [x_{0,i}, x_{2,i}], or [x_{0,(i+1)}, x_{3,(i+1)}] = [x_{1,i}, x_{3,i}], depending on which interval has to be eliminated. By doing so, we have to evaluate only one new point in the interval. It remains to decide where to locate the new point. The Golden Section and the Fibonacci Division differ in this decision.
q ≜ |x_0 − x_1| / |x_0 − x_3|,   (6.4a)

|x_1 − x_3| / |x_0 − x_3| = 1 − q.   (6.4b)

Hence,

w = 1 − 2q.   (6.6a)

Now, we determine the fraction q. Since we apply the process of interval division recursively, we know by scale similarity that

w / (1 − q) = q.   (6.6b)

Hence,

q² − 3q + 1 = 0,   (6.7a)

with solutions

q_{1,2} = (3 ± √5) / 2.   (6.7b)

Since q < 1 by (6.4a), the solution of interest is

q = (3 − √5) / 2 ≈ 0.382.   (6.7c)
The normalized length of the uncertainty interval after m iterations is

r ≜ |x_{1,m} − x_{3,m}| / |x_{0,0} − x_{3,0}| = |x_{0,m} − x_{2,m}| / |x_{0,0} − x_{3,0}|,   (6.9)

and the number of iterations to achieve a reduction r is given by

m = ln r / ln(1 − q) − 1.   (6.10)
F_0 ≜ 1,   F_1 ≜ 1,   (6.11a)

F_i ≜ F_{i−1} + F_{i−2},   i ∈ {2, 3, . . .}.   (6.11b)

The first few numbers of the Fibonacci sequence are {1, 1, 2, 3, 5, 8, 13, 21, . . .}.
The lengths of the intervals d_{1,i} and d_{2,i}, respectively, are then given by

d_{1,i} = F_{m−i} / F_{m−i+2},   d_{2,i} = F_{m−i+1} / F_{m−i+2},   i ∈ {0, 1, . . . , m},   (6.12)

where m > 0 describes how many iterations will be done. Note that m must be known prior to the first interval division. Hence, the algorithm must be stopped after m iterations.
The reduction of the length of the uncertainty interval per iteration is given by

d_{3,(i+1)} / d_{3,i} = d_{2,i} / (d_{1,i} + d_{2,i}) = (F_{m−i+1} / F_{m−i+2}) / ((F_{m−i} / F_{m−i+2}) + (F_{m−i+1} / F_{m−i+2})) = F_{m−i+1} / F_{m−i+2}.   (6.13)
The total reduction after m iterations is

d_{3,(m+1)} / d_{3,0} = (d_{3,(m+1)} / d_{3,m}) (d_{3,m} / d_{3,(m−1)}) · · · (d_{3,1} / d_{3,0})   (6.14)

= (F_1 / F_2) (F_2 / F_3) · · · (F_{m+1} / F_{m+2}) = 1 / F_{m+2}.   (6.15)

Hence, m is given by

m = arg min_{m∈N} {m | r ≥ 1 / F_{m+2}}.   (6.16)

Comparing the two interval division schemes, the lengths of the final uncertainty intervals satisfy

lim_{m→∞} |x_{0,m} − x_{3,m}|_{GS} / |x_{0,m} − x_{3,m}|_F ≈ 0.95.   (6.17)
Data:   x_0, x_3.
        Procedure that returns r_i, defined as
        r_i ≜ |x_{0,i} − x_{2,i}| / |x_{0,0} − x_{3,0}|.
Step 0: Initialize
        ∆x = x_3 − x_0,
        x_2 = x_0 + r_1 ∆x,
        x_1 = x_0 + r_2 ∆x,
        f_1 = f(x_1), f_2 = f(x_2), and
        i = 2.
Step 1: Iterate.
        Replace i by i + 1.
        If (f_2 < f_1)
          Set x_0 = x_1, x_1 = x_2, f_1 = f_2,
          x_2 = x_3 − r_i ∆x, and f_2 = f(x_2).
        else
          Set x_3 = x_2, x_2 = x_1, f_2 = f_1,
          x_1 = x_0 + r_i ∆x, and f_1 = f(x_1).
Step 2: Stop, or go to Step 1.
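For illustration (the function and interval are chosen for illustration only), consider minimizing f(x) = (x − 1)² on [x_0, x_3] = [0, 3] with the Golden Section ratios r_1 ≈ 0.618, r_2 ≈ 0.382 and r_3 ≈ 0.236. Step 0 gives x_2 = 0.618 · 3 ≈ 1.854 and x_1 = 0.382 · 3 ≈ 1.146, with f(x_1) ≈ 0.021 and f(x_2) ≈ 0.729. Since f_2 < f_1 fails, the else branch keeps [0, 1.854] by setting x_3 = 1.854 and x_2 = 1.146, and only the single new point x_1 = 0.236 · 3 ≈ 0.708 must be evaluated; thus each iteration costs one function evaluation.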
6.1.6 Keywords
For the Golden Section and the Fibonacci Division algorithm, the command file (see page 86) must contain only one continuous parameter.
To invoke the Golden Section or the Fibonacci Division algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main = GoldenSection | Fibonacci;
   [ AbsDiffFunction   = Double; |   // 0 < AbsDiffFunction
     IntervalReduction = Double; ]   // 0 < IntervalReduction
}
7 Algorithms for Parametric Runs

The algorithms for parametric runs described here can be used to determine how sensitive a function is with respect to a change in the independent variables. They can also be used to do a parametric sweep of a function over a set of parameters. The algorithm described in Section 7.1 varies one parameter at a time while holding all other parameters fixed at the value specified by the keyword Ini. The algorithm described in Section 7.2, in contrast, constructs a mesh in the space of the independent parameters, and evaluates the objective function at each mesh point.

7.1 Parametric Runs by Single Variation
If the spacing is logarithmic, the iterates are given by

p = (1/m) log(u/l),   (7.1a)

x_i = l · 10^{p i}.   (7.1b)

Otherwise,

x_i = l + (i/m) (u − l).   (7.1c)
Vary {
   Parameter { Name = x1; Ini = 5; Step = -2; Min = 10; Max = 1000; }
   Parameter { Name = x2; Ini = 3; Step = 1;  Min = 2;  Max = 20;  }
}

and the cost function takes two arguments, x_1, x_2 ∈ R. Then, the cost function will be evaluated at the points

(x_1, x_2) ∈ {(10, 3), (100, 3), (1000, 3), (5, 2), (5, 20)}.
7.1.2 Keywords
For this algorithm, the command file (see page 86) can contain continuous
and discrete parameters.
The Parametric algorithm is invoked by the following specification in the command file:

Algorithm {
   Main        = Parametric;
   StopAtError = true | false;
}
7.2 Parametric Runs on a Mesh
and the cost function takes two arguments, x_1, x_2 ∈ R. Then, the cost function will be evaluated at the points

(x_1, x_2) ∈ {(−10, 1), (10, 1), (−10, 10), (10, 10), (−10, 100), (10, 100)}.

An alternative specification for x_2 that uses a discrete parameter and gives the same result is

Parameter {
   Name   = x2;
   Ini    = "1";
   Values = "1, 10, 100";
}
7.2.2 Keywords
The Mesh algorithm is invoked by the following specification in the command file:

Algorithm {
   Main        = Mesh;
   StopAtError = true | false;
}
8 Constraints

8.1 Constraints on Independent Variables
8.1.1 Box Constraints

If only a lower bound is specified, l^i ≤ x^i, the variable is transformed according to

t^i = √(x^i − l^i),   (8.2a)
x^i = l^i + (t^i)².   (8.2b)

If lower and upper bounds are specified, l^i ≤ x^i ≤ u^i,

t^i = arcsin(√((x^i − l^i)/(u^i − l^i))),   (8.2c)
x^i = l^i + (u^i − l^i) sin² t^i.   (8.2d)

If only an upper bound is specified, x^i ≤ u^i,

t^i = √(u^i − x^i),   (8.2e)
x^i = u^i − (t^i)².   (8.2f)
8.2 Constraints on Dependent Variables

We now discuss the situation where the constraints are non-linear and defined by

g(x) ≤ 0,   (8.4)

where g: R^n → R^m is once continuously differentiable. (8.4) also allows formulating equality constraints of the form

h(x) = 0,   (8.5)
In the penalty method, the constraint violation is penalized by a term of the form

Σ_{i=1}^{m} max(0, g^i(x))²,   (8.8)
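As a concrete sketch (the functions are illustrative only), for a single constraint g(x) = x − 1 ≤ 0 the penalized cost is f(x) + μ max(0, x − 1)²; at x = 1.2 the added penalty is μ · 0.04, so increasing the weighting factor μ drives the iterates toward the feasible set.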
67
GenOpt
Generic Optimization Program
Version 3.1.0
As for the barrier method, selecting the weighting factor μ is not trivial. Too small a value of μ produces too big a violation of the constraint; hence, the boundary of the feasible set can be exceeded by an unacceptable amount. Too large a value of μ can lead to ill-conditioning of the cost function, which can cause numerical problems.
The weighting factors μ_i have to satisfy

0 < μ_0 < . . . < μ_i < μ_{i+1} < . . . ,   (8.9)

with μ_i → ∞, as i → ∞.   (8.10)
[Figure 9.1: Interface between GenOpt and the simulation program that evaluates the cost function. GenOpt reads initialization, command and configuration files, generates the simulation input from template files, calls the simulation program, and retrieves the cost function value from the simulation output; both programs write log files, and GenOpt writes the optimization output.]
9 Program

GenOpt is divided into a kernel part and an optimization part. The kernel reads the input files, calls the simulation program, stores the results, writes output files, etc. The optimization part contains the optimization algorithms. It also contains classes of mathematical functions such as those used in linear algebra.
Since there is a variety of simulation programs and optimization algorithms, GenOpt has a simulation program interface and an optimization algorithm interface. The simulation program interface allows using any simulation software to evaluate the cost function (see below for the requirements on the simulation program); the optimization algorithm interface allows implementing new optimization algorithms with little effort.
9.1 Interface to the Simulation Program
Text files are used to exchange data with the simulation program and to
specify how to start the simulation program. This makes it possible to couple
9.2 Interface to the Optimization Algorithm

9.3 Package genopt.algorithm
[Figure: Implementation of optimization algorithms into GenOpt. Each optimization algorithm extends the superclass Optimizer, which offers methods to easily access GenOpt's kernel, e.g., for input retrieving, cost function evaluation, result reporting and error reporting. Shared utility classes provide commonly used methods, e.g., for linear algebra, optimality checks and line searches. The GenOpt kernel communicates with any external simulation program with text-based I/O, e.g., EnergyPlus, SPARK, DOE-2 or TRNSYS.]
9.4 Implementing a New Optimization Algorithm
package genopt.algorithm;

import java.io.IOException;

import genopt.GenOpt;
import genopt.lang.OptimizerException;
import genopt.io.InputFormatException;

public class ClassName extends Optimizer {
    public ClassName(GenOpt genOptData)
        throws InputFormatException, OptimizerException,
               IOException, Exception
    {
10
10.1
System Requirements
10.2
10.3
Running GenOpt
10.3.1
To run GenOpt from the file explorer, double-click on the file genopt.jar.1
This will start the graphical user interface. From the graphical user interface,
select File, Start... and select a GenOpt initialization file.
10.3.2
Running GenOpt from the Command Line
java -jar genopt.jar [initializationFile]
[Figure 10.1: Output of GenOpt on Mac OS X for the example file in the directory example/quad/GPSHookeJeeves.]
For instance, to run the example file provided with GenOpt that minimizes a quadratic function using the Hooke-Jeeves algorithm, type on Mac OS X

java -jar genopt.jar example/quad/GPSHookeJeeves/optMacOSX.ini

on Linux

java -jar genopt.jar example/quad/GPSHookeJeeves/optLinux.ini

and on Windows

java -jar genopt.jar example\quad\GPSHookeJeeves\optWinXP.ini
11
Setting Up an Optimization
Problem
11.1
File Specification
This section defines the file syntax for GenOpt. The directory example of
the GenOpt installation contains several examples.
The following notation will be used to explain the syntax:
1. Text that is part of the file is written in fixed width fonts.
2. | stands for possible entries. Only one of the entries that are separated
by | is allowed.
3. [ ] indicates optional values.
4. The file syntax follows the Java convention. Hence,
(a) // indicates a comment on a single line,
(b) /* and */ enclose a comment,
(c) the equal sign, =, assigns values,
(d) a statement has to be terminated by a semi-colon, ;,
(e) curly braces, { }, enclose a whole section of statements, and
(f) the syntax is case-sensitive.
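For instance, the following fragment (using keywords from the Parameter section described later in this chapter) illustrates these conventions:

/* A comment that
   spans two lines. */
Parameter {       // a section enclosed by curly braces
  Ini = 0.5;      // an assignment, terminated by a semi-colon
}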
Entries in these files are of type String, StringReference, Integer, Double, or Boolean.
11.1.1
Initialization File
5. whether, and if so how, the cost function value(s) have to be post-processed, and
6. which simulation program is being used.
The sections must be specified in the order shown below. The order of the
keywords in each section is arbitrary, as long as the numbers that follow some
keywords (such as File1) are in increasing order.
The initialization file syntax is

Simulation {
  Files {
    Template {
      File1 = String | StringReference ;
      [ Path1 = String | StringReference ; ]
      [ File2 = String | StringReference ;
        [ Path2 = String | StringReference ; ]
      [ ... ] ]
    }
    Input { // the number of input files must be equal to
            // the number of template files
      File1       = String | StringReference ;
      [ Path1     = String | StringReference ; ]
      [ SavePath1 = String | StringReference ; ]
      [ File2     = String | StringReference ;
        [ Path2     = String | StringReference ; ]
        [ SavePath2 = String | StringReference ; ]
      [ ... ] ]
    }
    Log {
      // The Log section has the same syntax as the Input section.
    }
    Output {
      // The Output section has the same syntax as the Input section.
    }
    Configuration {
      File1 = String | StringReference ;
      [ Path1 = String | StringReference ; ]
    }
  } // end of section Simulation.Files
  [ CallParameter {
    [ Prefix = String | StringReference ; ]
    [ Suffix = String | StringReference ; ]
  } ]
  [ ObjectiveFunctionLocation {
    Name1       = String ;
    Delimiter1  = String | StringReference ;
    | Function1 = String ;
    [ FirstCharacterAt1 = Integer ; ]
    [ Name2       = String ;
      Delimiter2  = String | StringReference ;
      | Function2 = String ;
      [ FirstCharacterAt2 = Integer ; ]
    ... ] ]
  } ]
} // end of section Simulation
Optimization {
  Files {
    Command {
      File1 = String | StringReference ;
      [ Path1 = String | StringReference ; ]
    }
  }
} // end of section Optimization
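As an illustration, a minimal initialization file that uses one template file might look as follows (all file names are placeholders):

Simulation {
  Files {
    Template      { File1 = "simulation.tmp"; }
    Input         { File1 = "simulation.inp"; }
    Log           { File1 = "simulation.log"; }
    Output        { File1 = "simulation.out"; }
    Configuration { File1 = "simulation.cfg"; }
  }
}
Optimization {
  Files {
    Command { File1 = "command.txt"; }
  }
}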
or, equivalently,
Delimiter1 = "5," ;
that require different values of this section. Then, the same (simulation-program-specific) configuration file can be used for all runs, and the different settings can be specified in the (project-dependent) initialization file rather than in the configuration file.
Optimization.Files.Command This section specifies where the optimization command file is located. This file contains the mathematical information of the optimization. See page 86 for a description of this file.
11.1.2
Configuration File
Name2       = String ;
Delimiter2  = String | StringReference ;
| Function2 = String ;
By setting WriteInputFileExtension to false, the value of the keyword Simulation.Input.Filei (where i stands for 1, 2, 3, ...) is copied into Command, and the file extension is removed.
ObjectiveFunctionLocation Note that this section can also be specified in the initialization file. The section in the configuration file is ignored if this section is also specified in the initialization file. See page 82 for a description.
11.1.3
Command File
The command file specifies optimization-related settings such as the independent parameters, the stopping criteria and the optimization algorithm being
used. The sequence of the entries in all sections of the command file is arbitrary.
There are two different types of independent parameters, continuous parameters and discrete parameters. Continuous parameters can take on any
values, possibly constrained by a minimum and maximum value. Discrete parameters can take on only user-specified discrete values, to be specified in this
file.
Some algorithms require all parameters to be continuous, others require all parameters to be discrete, and others allow both continuous and discrete parameters. Please refer to the algorithm descriptions on pages 16-64.
a) Specification of a Continuous Parameter
Name Name of the independent parameter. GenOpt searches the simulation input template files for this name, surrounded by percent signs, and replaces each occurrence by its numerical value before it writes the simulation input files.
Ini Initial value of the parameter.
Step Step size of the parameter. How this variable is used depends on
the optimization algorithm being used. See the optimization algorithm
descriptions for details.
Min Lower bound of the parameter. If the keyword is omitted or set to
SMALL, the parameter has no lower bound.
Max Upper bound of the parameter. If the keyword is omitted or set to BIG,
the parameter has no upper bound.
Type Optional keyword that specifies that this parameter is continuous. By default, if neither Type nor Values (see below) is specified, the parameter is considered to be continuous, and the Parameter section must have the above format.
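For example, a continuous parameter with a lower bound of 0 and no upper bound might be specified as follows (the name and the values are illustrative):

Parameter {
  Name = x1;
  Ini  = 5;
  Step = 1;
  Min  = 0;
  Max  = BIG;
  Type = CONTINUOUS;
}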
b) Specification of a Discrete Parameter
For discrete parameters you need to specify the set of admissible values.
Alternatively, if a parameter is spaced either linearly or logarithmically, specify
the minimum and maximum value of the parameter and the number of intervals.
First, we list the entry for the case of specifying the set of admissible values:
// Settings for a discrete parameter
Parameter {
  Name   = String ;
  Ini    = Integer ;
  Values = String ;
  [ Type = SET ; ]
}
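For example, a discrete parameter that can take on one of three values might be specified as follows; Ini is an integer because it refers to a position in the Values list (the name, the values and the comma-separated list format are illustrative assumptions):

Parameter {
  Name   = d;
  Ini    = 2;  // start at the second element of Values
  Values = "0.05, 0.10, 0.20";
  Type   = SET;
}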
c) Specification of Input Function Objects
The specification of input function objects is optional. If any input function object is specified, then its name must appear either in another input function object, in a simulation input template file, or in an output function object. Otherwise, GenOpt terminates with an error message. See Section 11.3 on page 92 for an explanation of input and output function objects.
The syntax for input function objects is
// Input function objects entry
Function {
  Name     = String ;
  Function = String ;
}
11.1.4
Log File
GenOpt writes a log file to the directory that contains the initialization
file. The name of the log file is GenOpt.log.
The GenOpt log file contains general information about the optimization
process. It also contains warnings and errors that occur during the optimization.
11.1.5
Output Files
11.2
Resolving Directory Names for Parallel Computing
3. For the simulation input, the simulation log and the simulation output
files, the string tmp-genopt-run-#, where # is the number of the simulation, will be inserted between the name of the optimization initialization
file and the subdirectory name of the simulation input, log or output file.
4. When resolving the file names, a path separator (\ on Windows or /
on Mac OS X and Linux) will be appended if needed.
These rules work for situations in which the simulation program uses the
current directory, or subdirectories of the current directory, to read input and
write output, provided that the optimization configuration file is also in the
directory that contains the simulation input files.
For the declaration of the Command line in the GenOpt configuration file, we
recommend using the full directory name. For example, we recommend using
Command = "./simulate.sh
[linebreak added]
%Simulation.Files.Log.Path1%/%Simulation.Files.Log.File1%";
instead of
Command = "./simulate.sh ./%Simulation.Files.Log.File1%";
The first version ensures that the argument that is passed to simulate.sh is the simulation log file in the working directory that is used by the current simulation. However, in the second version, because of rule (1), the simulation log file will be in the directory of GenOpt's configuration file, and thus different simulations may write to the same simulation log file simultaneously, causing unpredictable behavior.
11.3
Function Objects
Function               Returns
add(x0, x1)            x0 + x1
add(x0, x1, x2)        x0 + x1 + x2
subtract(x0, x1)       x0 - x1
multiply(x0, x1)       x0 * x1
multiply(x0, x1, x2)   x0 * x1 * x2
divide(x0, x1)         x0 / x1
log10(x0)              log10(x0)
b) Pre-Processing
Example 11.3.1 Suppose we want to find the optimal window width and
height. Let w and h denote the window width and height, respectively. Suppose
we want the window height to be 1/2 times the window width, and the window
width must be between 1 and 2 meters. Then, we could specify in the command
file the section
Parameter {
  Name = w;
  Ini  = 1.5;
  Step = 0.05;
  Min  = 1;
  Max  = 2;
  Type = CONTINUOUS;
}
Function {
  Name     = h;
  Function = "multiply( %w%, 0.5 )";
}
Then, in the simulation input template files, GenOpt will replace all occurrences of %w% by the window width and all occurrences of %h% by 1/2 times the numerical value of %w%.
GenOpt does not report values that are computed by input functions. To
report such values, a user needs to specify them in the section ObjectiveFunctionLocation,
as shown in Example 11.3.2 below.
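In outline, such an entry pairs each name with either a delimiter in the simulation output file or a function; for instance (a sketch with illustrative names and delimiter, not Example 11.3.2 itself):

ObjectiveFunctionLocation {
  Name1      = E;
  Delimiter1 = "total energy = ";
  Name2      = h;
  Function2  = "%h%";
}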
c) Post-Processing
11.4
Truncation of Digits of the Cost Function Value
Figure 11.1: Function (11.1) with machine precision and with truncated digits.
The upper line shows the cost function value with machine precision, and the
lower line shows the cost function value with only two digits beyond the decimal
point.
For $x^* \in \mathbb{R}^n$ and $f \colon \mathbb{R}^n \to \mathbb{R}$, assume there exists a scalar $\delta > 0$ such that $f(x) = f(x^*)$ for all $x \in B(x^*, \delta)$, where $B(x^*, \delta) \triangleq \{ x \in \mathbb{R}^n \mid \| x - x^* \| < \delta \}$. Obviously, in $B(x^*, \delta)$, an optimization algorithm can fail because iterates in $B(x^*, \delta)$ contain no information about descent directions outside of $B(x^*, \delta)$. Furthermore, in the absence of convexity of $f(\cdot)$, the optimality of $x^*$ cannot be ascertained in general.

Such situations can be generated if the simulation program writes the cost function value to the output file with only a few digits. Fig. 11.1 illustrates that truncating digits can cause problems particularly in domains where $f(\cdot)$ is flat. In Fig. 11.1, we show the function (11.1).
The upper line is the exact value of $f(\cdot)$, and the lower line is the value of $f(\cdot)$ rounded to two digits beyond the decimal point. If the optimization algorithm makes changes in $x$ of the size of 0.2, then it may fail for $0.25 < x < 1$, which is far from the minimum. In this interval, no useful information about the descent of $f(\cdot)$ can be obtained. Thus, the cost function value must be written to the output file without truncating digits.
To detect such cases, the optimization algorithm can cause GenOpt to check whether a cost function value is equal to a previous cost function value. If the same cost function value is obtained more than a user-specified number of times, then GenOpt terminates with an error message. The maximum number of equal cost function values is specified by the parameter MaxEqualResults in the command file (see page 86).

GenOpt writes a message to the user interface and to the log file if a cost function value is equal to a previous function value.
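For instance, the check can be configured in the command file's OptimizationSettings section; the values shown below are illustrative:

OptimizationSettings {
  MaxIte          = 2000;
  WriteStepNumber = false;
  MaxEqualResults = 5;
}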
12
Conclusions
13
Acknowledgments
The development of GenOpt was sponsored by grants from the Swiss Academy
of Engineering Sciences (SATW), the Swiss National Energy Fund (NEFF) and
the Swiss National Science Foundation (SNF) and is supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Building
Technology Programs of the U.S. Department of Energy, under Contract No.
DE-AC02-05CH11231. I would like to thank these institutions for their generous support.
14
Legal
14.1
Copyright Notice
GenOpt Copyright (c) 1998-2011, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of
any required approvals from the U.S. Dept. of Energy). All rights reserved.
If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Technology Transfer Department at TTD@lbl.gov.
NOTICE. This software was developed under partial funding from the
U.S. Department of Energy. As such, the U.S. Government has been granted
for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable,
worldwide license in the Software to reproduce, prepare derivative works, and
perform publicly and display publicly. Beginning five (5) years after the date
permission to assert copyright is obtained from the U.S. Department of Energy,
and subject to any subsequent five (5) year renewals, the U.S. Government
is granted for itself and others acting on its behalf a paid-up, nonexclusive,
irrevocable, worldwide license in the Software to reproduce, prepare derivative
works, distribute copies to the public, perform publicly and display publicly,
and to permit others to do so.
14.2
License agreement
GenOpt Copyright (c) 1998-2011, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of
any required approvals from the U.S. Dept. of Energy). All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the University of California, Lawrence Berkeley National Laboratory, U.S. Dept. of Energy nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
A
Benchmark Tests
This section lists the settings used in the benchmark tests on page 52.
A.1
Rosenbrock
The Rosenbrock function is defined as

$f(x) \triangleq 100\,\bigl(x_2 - (x_1)^2\bigr)^2 + (1 - x_1)^2$, (A.1)

where $x \in \mathbb{R}^2$. Both independent parameters were unconstrained, i.e.,

Min = SMALL;
Max = BIG;
[Figure A.1: Surface plot of the Rosenbrock function f(x) over x1 and x2; plot not reproduced here.]
A.2
Function 2D1
This function has only one minimum point. The function is defined as
$f(x) \triangleq \sum_{i=1}^{3} f^i(x)$, (A.2)

with

$f^1(x) \triangleq \langle b, x \rangle + \frac{1}{2}\,\langle x, Q\,x \rangle$, (A.3)

$f^2(x) \triangleq 100 \arctan\bigl((2 - x_1)^2 + (2 - x_2)^2\bigr)$, (A.4)

$f^3(x) \triangleq 50 \arctan\bigl((0.5 + x_1)^2 + (0.5 + x_2)^2\bigr)$, (A.5)

where $b \triangleq (1, 2)^T$ and $Q \triangleq \begin{pmatrix} 10 & 6 \\ 6 & 8 \end{pmatrix}$.
[Figure A.2: Contour plot of $\partial f(x)/\partial x_1 = 0$ and $\partial f(x)/\partial x_2 = 0$, where $f(x)$ is as in (A.2); plot not reproduced here.]
A.3
Function Quad
The function is defined as

$f(x) \triangleq \langle b, x \rangle + \frac{1}{2}\,\langle x, M\,x \rangle$, (A.6)

where $b, x \in \mathbb{R}^{10}$, with

$b \triangleq (10, \ldots, 10)^T$. (A.7)

This function is used in the benchmark test with two different positive definite matrices $M$. In one test case, $M$ is the identity matrix $I$, and in the other test case, $M$ is a matrix, called $Q$, with a large range of eigenvalues. The matrix $Q$ has elements
  579.7818  227.6855   49.2126   60.3045  152.4101  207.2424    8.0917   33.6562  204.1312    3.7129
  227.6855  236.2505   16.7689   40.3592  179.8471   80.0880   64.8326   15.2262   92.2572   40.7367
   49.2126   16.7689   84.1037   71.0547   20.4327    5.1911   58.7067   36.1088   62.7296    7.3676
   60.3045   40.3592   71.0547  170.3128  140.0148    8.9436   26.7365  125.8567   62.3607   21.9523
  152.4101  179.8471   20.4327  140.0148  301.2494   45.5550   31.3547   95.8025  164.7464   40.1319
  207.2424   80.0880    5.1911    8.9436   45.5550  178.5194   22.9953   39.6349   88.1826   29.1089
    8.0917   64.8326   58.7067   26.7365   31.3547   22.9953  124.4208   43.5141   75.5865   32.2344
   33.6562   15.2262   36.1088  125.8567   95.8025   39.6349   43.5141  261.7592   86.8136   22.9873
  204.1312   92.2572   62.7296   62.3607  164.7464   88.1826   75.5865   86.8136  265.3525    1.6500
    3.7129   40.7367    7.3676   21.9523   40.1319   29.1089   32.2344   22.9873    1.6500   49.2499
M = I: 10, 10, 10, 10, 10, 10, 10, 10, 10, 10; 500
M = Q: 2235.1810, 1102.4510, 790.6100, 605.2480, 28.8760, 228.7640, 271.8830, 3312.3890, 2846.7870, 718.1490; 0
Both test functions have been optimized with the same parameter settings.
The settings for the parameters x0 to x9 are all the same, and given by
Vary {
  Parameter {
    Name = x0; Min = SMALL;
    Ini  = 0;  Max = BIG;
    Step = 1;
  }
}