
GenOpt(R)

Generic Optimization Program


User Manual
Version 3.1.0

Simulation Research Group


Building Technologies Department
Environmental Energy Technologies Division
Lawrence Berkeley National Laboratory
Berkeley, CA 94720

http://SimulationResearch.lbl.gov

Michael Wetter
MWetter@lbl.gov

December 8, 2011

Notice:
This work was supported by the U.S. Department of Energy (DOE), by the Swiss Academy
of Engineering Sciences (SATW), and by the Swiss National Energy Fund (NEFF).

Copyright (c) 1998-2011


The Regents of the University of California (through Lawrence Berkeley National Laboratory),
subject to receipt of any required approvals from U.S. Department of Energy.
All rights reserved.

DISCLAIMER
This document was prepared as an account of work sponsored by the
United States Government. While this document is believed to contain correct information, neither the United States Government nor any
agency thereof, nor The Regents of the University of California, nor any
of their employees, makes any warranty, express or implied, or assumes
any legal responsibility for the accuracy, completeness, or usefulness of
any information, apparatus, product, or process disclosed, or represents
that its use would not infringe privately owned rights. Reference herein
to any specific commercial product, process, or service by its trade name,
trademark, manufacturer, or otherwise, does not necessarily constitute
or imply its endorsement, recommendation, or favoring by the United
States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein
do not necessarily state or reflect those of the United States Government
or any agency thereof or The Regents of the University of California.


Contents

1  Abstract

2  Notation

3  Introduction

4  Optimization Problems
   4.1  Classification of Optimization Problems
        4.1.1  Problems with Continuous Variables
        4.1.2  Problems with Discrete Variables
        4.1.3  Problems with Continuous and Discrete Variables
        4.1.4  Problems that use a Building Simulation Program
   4.2  Algorithm Selection
        4.2.1  Problem Pc with n > 1
        4.2.2  Problem Pcg with n > 1
        4.2.3  Problem Pc with n = 1
        4.2.4  Problem Pcg with n = 1
        4.2.5  Problem Pd
        4.2.6  Problem Pcd and Pcdg
        4.2.7  Functions with Several Local Minima

5  Algorithms for Multi-Dimensional Optimization
   5.1  Generalized Pattern Search Methods (Analysis)
        5.1.1  Assumptions
        5.1.2  Characterization of GPS Algorithms
        5.1.3  Model Adaptive Precision GPS Algorithm
        5.1.4  Convergence Results
               a)  Unconstrained Minimization
               b)  Box-Constrained Minimization
   5.2  Generalized Pattern Search Methods (Implementations)
        5.2.1  Coordinate Search Algorithm
               a)  Algorithm Parameters
               b)  Global Search
               c)  Local Search
               d)  Parameter Update
               e)  Keywords
        5.2.2  Hooke-Jeeves Algorithm
               a)  Algorithm Parameters
               b)  Map for Exploratory Moves
               c)  Global Search Set Map
               d)  Local Search Direction Map
               e)  Parameter Update
               f)  Keywords
        5.2.3  Multi-Start GPS Algorithms
   5.3  Discrete Armijo Gradient
        5.3.1  Keywords
   5.4  Particle Swarm Optimization
        5.4.1  PSO for Continuous Variables
               a)  Neighborhood Topology
               b)  Model PSO Algorithm
               c)  Particle Update Equation
                   (i)  Inertia Weight
                   (ii) Constriction Coefficient
        5.4.2  PSO for Discrete Variables
        5.4.3  PSO for Continuous and Discrete Variables
        5.4.4  PSO on a Mesh
        5.4.5  Population Size and Number of Generations
        5.4.6  Keywords
   5.5  Hybrid GPS Algorithm with PSO Algorithm
        5.5.1  For Continuous Variables
        5.5.2  For Continuous and Discrete Variables
        5.5.3  Keywords
   5.6  Simplex Algorithm of Nelder and Mead
        5.6.1  Main Operations
        5.6.2  Basic Algorithm
        5.6.3  Stopping Criteria
        5.6.4  O'Neill's Modification
        5.6.5  Modification of Stopping Criteria
        5.6.6  Benchmark Tests
        5.6.7  Keywords

6  Algorithms for One-Dimensional Optimization
   6.1  Interval Division Algorithms
        6.1.1  General Interval Division
        6.1.2  Golden Section Interval Division
        6.1.3  Fibonacci Division
        6.1.4  Comparison of Efficiency
        6.1.5  Master Algorithm for Interval Division
        6.1.6  Keywords

7  Algorithms for Parametric Runs
   7.1  Parametric Runs by Single Variation
        7.1.1  Algorithm Description
        7.1.2  Keywords
   7.2  Parametric Runs on a Mesh
        7.2.1  Algorithm Description
        7.2.2  Keywords

8  Constraints
   8.1  Constraints on Independent Variables
        8.1.1  Box Constraints
        8.1.2  Coupled Linear Constraints
   8.2  Constraints on Dependent Variables
        8.2.1  Barrier Functions
        8.2.2  Penalty Functions
        8.2.3  Implementation of Barrier and Penalty Functions

9  Program
   9.1  Interface to the Simulation Program
   9.2  Interface to the Optimization Algorithm
   9.3  Package genopt.algorithm
   9.4  Implementing a New Optimization Algorithm

10 Installing and Running GenOpt
   10.1 System Requirements
   10.2 Installing and uninstalling GenOpt
   10.3 Running GenOpt
        10.3.1 Running GenOpt from the file explorer
        10.3.2 Running GenOpt from the command line

11 Setting Up an Optimization Problem
   11.1 File Specification
        11.1.1 Initialization File
        11.1.2 Configuration File
        11.1.3 Command File
               a)  Specification of a Continuous Parameter
               b)  Specification of a Discrete Parameter
               c)  Specification of Input Function Objects
               d)  Structure of the Command File
        11.1.4 Log File
        11.1.5 Output Files
   11.2 Resolving Directory Names for Parallel Computing
   11.3 Pre-Processing and Post-Processing
        a)  Function Objects
        b)  Pre-Processing
        c)  Post-Processing
   11.4 Truncation of Digits of the Cost Function Value

12 Conclusions

13 Acknowledgments

14 Legal
   14.1 Copyright Notice
   14.2 License agreement

A  Benchmark Tests
   A.1  Rosenbrock
   A.2  Function 2D1
   A.3  Function Quad

Product and company names mentioned herein may be the trademarks of their
respective owners. Any rights not expressly granted herein are reserved.


1 Abstract

GenOpt is an optimization program for the minimization of a cost function that is evaluated by an external simulation program. It has been developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both continuous and discrete variables. Constraints on dependent variables can be implemented using penalty or barrier functions. GenOpt uses parallel computing to evaluate the simulations.

GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, and algorithms for doing parametric runs. An algorithm interface allows adding new minimization algorithms without knowing the details of the program structure.

GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems.

GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for other problems, specially tailored software exists that is more efficient.


2 Notation

1. We use the notation $a \triangleq b$ to denote that $a$ is equal to $b$ by definition. We use the notation $a \leftarrow b$ to denote that $a$ is assigned the value of $b$.

2. $\mathbb{R}^n$ denotes the Euclidean space of $n$-tuplets of real numbers. Vectors $x \in \mathbb{R}^n$ are always column vectors, and their elements are denoted by superscripts. The inner product in $\mathbb{R}^n$ is denoted by $\langle \cdot, \cdot \rangle$ and for $x, y \in \mathbb{R}^n$ defined by $\langle x, y \rangle \triangleq \sum_{i=1}^{n} x^i \, y^i$. The norm in $\mathbb{R}^n$ is denoted by $\| \cdot \|$ and for $x \in \mathbb{R}^n$ defined by $\| x \| \triangleq \langle x, x \rangle^{1/2}$.

3. We denote by $\mathbb{Z}$ the set of integers, by $\mathbb{Q}$ the set of rational numbers, and by $\mathbb{N} \triangleq \{0, 1, \dots\}$ the set of natural numbers. The set $\mathbb{N}_+$ is defined as $\mathbb{N}_+ \triangleq \{1, 2, \dots\}$. Similarly, vectors in $\mathbb{R}^n$ with strictly positive elements are denoted by $\mathbb{R}^n_+ \triangleq \{x \in \mathbb{R}^n \mid x^i > 0,\ i \in \{1, \dots, n\}\}$ and the set $\mathbb{Q}_+$ is defined as $\mathbb{Q}_+ \triangleq \{q \in \mathbb{Q} \mid q > 0\}$.

4. Let $\mathbb{W}$ be a set containing a sequence $\{w_i\}_{i=0}^{k}$. Then, we denote by $\underline{w}^k$ the sequence $\{w_i\}_{i=0}^{k}$ and by $\underline{\mathbb{W}}^k$ the set of all $k+1$ element sequences in $\mathbb{W}$.

5. If $A$ and $B$ are sets, we denote by $A \cup B$ the union of $A$ and $B$ and by $A \cap B$ the intersection of $A$ and $B$.

6. If $S$ is a set, we denote by $\overline{S}$ the closure of $S$ and by $2^S$ the set of all nonempty subsets of $S$.

7. If $\widehat{D} \in \mathbb{Q}^{n \times q}$ is a matrix, we will use the notation $\widehat{d} \in \widehat{D}$ to denote the fact that $\widehat{d} \in \mathbb{Q}^n$ is a column vector of the matrix $\widehat{D}$. Similarly, by $D \subset \widehat{D}$ we mean that $D \in \mathbb{Q}^{n \times p}$ ($1 \le p \le q$) is a matrix containing only columns of $\widehat{D}$. Further, $\mathrm{card}(D)$ denotes the number of columns of $D$.

8. $f(\cdot)$ denotes a function where $(\cdot)$ stands for the undesignated variables. $f(x)$ denotes the value of $f(\cdot)$ at the point $x$. $f \colon A \to B$ indicates that the domain of $f(\cdot)$ is in the space $A$ and its range in the space $B$.

9. We say that a function $f \colon \mathbb{R}^n \to \mathbb{R}$ is once continuously differentiable if $f(\cdot)$ is defined on $\mathbb{R}^n$, and if $f(\cdot)$ has continuous derivatives on $\mathbb{R}^n$.

10. For $x^* \in \mathbb{R}^n$ and $f \colon \mathbb{R}^n \to \mathbb{R}$ continuously differentiable, we say that $x^*$ is stationary if $\nabla f(x^*) = 0$.

11. We denote by $\{e_i\}_{i=1}^{n}$ the unit vectors in $\mathbb{R}^n$.

12. We denote by $\rho \sim U(0,1)$ that $\rho \in \mathbb{R}$ is a uniformly distributed random number, with $0 \le \rho \le 1$.


3 Introduction

The use of system simulation for analyzing complex engineering problems is increasing. Such problems typically involve many independent variables¹ and can only be optimized by means of numerical optimization. Many designers use parametric studies to achieve better performance of such systems, even though such studies typically yield only partial improvement while requiring much labor. In such parametric studies, one usually fixes all but one variable and tries to optimize a cost function² with respect to the non-fixed variable. The procedure is repeated iteratively by varying another variable. However, every time a variable is varied, all other variables typically become non-optimal and hence also need to be adjusted. It is clear that such a manual procedure is very time-consuming and often impractical for more than two or three independent variables.

GenOpt, a generic optimization program, has been developed to find, with less labor, the values of the independent variables that yield better performance of such systems. GenOpt performs the optimization of a user-supplied cost function, using a user-selected optimization algorithm.

In the most general form, the optimization problems addressed by GenOpt can be stated as follows: Let $X$ be a user-specified constraint set, and let $f \colon X \to \mathbb{R}$ be a user-defined cost function that is bounded from below. The constraint set $X$ consists of all possible design options, and the cost function $f(\cdot)$ measures the system performance. GenOpt tries to find a solution to the problem³

$$\min_{x \in X} f(x). \qquad (3.1)$$

This problem is usually solved by iterative methods, which construct infinite sequences of progressively better approximations to a solution, i.e., a point that satisfies an optimality condition. If $X \subset \mathbb{R}^n$, with some $n \in \mathbb{N}$, and $X$ or $f(\cdot)$ is not convex, we do not have a test for global optimality, and the most one can obtain is a point that satisfies a local optimality condition. Furthermore, for $X \subset \mathbb{R}^n$, tests for optimality are based on differentiability assumptions of the cost function. Consequently, optimization algorithms can fail, possibly far from a solution, if $f(\cdot)$ is not differentiable in the continuous independent variables. Some optimization algorithms are more likely to fail at discontinuities than others. GenOpt has algorithms that are not very sensitive to (small) discontinuities in the cost function, such as Generalized Pattern Search algorithms, which can also be used in conjunction with heuristic global optimization algorithms.

¹ The independent variables are the variables that are varied by the optimization algorithm from one iteration to the next. They are also called design parameters or free parameters.
² The cost function is the function being optimized. The cost function measures a quantity that should be minimized, such as a building's annual operation cost, a system's energy consumption, or a norm between simulated and measured values in a data fitting process. The cost function is also called the objective function.
³ If $f(\cdot)$ is discontinuous, it may only have an infimum (i.e., a greatest lower bound) but no minimum, even if the constraint set $X$ is compact. Thus, to be correct, (3.1) should be replaced by $\inf_{x \in X} f(x)$. For simplicity, we will not make this distinction.
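To make the idea of such an iterative method concrete, the following is a minimal sketch of a derivative-free coordinate search with a shrinking step on a box-constrained quadratic. It is illustrative Python only, not GenOpt code; GenOpt's actual algorithms are described in Sections 5 through 7, and all names here are hypothetical:

```python
def coordinate_search(f, x, lower, upper, step=1.0, tol=1e-6):
    """Minimize f over a box by probing +/- step along each coordinate;
    halve the step when no probe improves the current iterate."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + s, lower[i]), upper[i])  # stay in the box
                fy = f(y)
                if fy < fx:              # accept the better approximation
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0                  # refine the search
    return x, fx

# Example: minimize (x1 - 1)^2 + (x2 + 2)^2 over [-5, 5]^2
xmin, fmin = coordinate_search(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2,
                               [0.0, 0.0], [-5.0, -5.0], [5.0, 5.0])
```

Each pass produces an iterate with a lower cost, yielding the sequence of progressively better approximations described above.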
Since one of GenOpt's main application fields is building energy use or operation cost optimization, GenOpt has been designed such that it addresses the special properties of optimization problems in this area. In particular, GenOpt is designed for optimization problems with the following properties:
1. The cost function may have to be defined on approximate numerical solutions of differential algebraic equations, which may fail to be continuous (see Section 4.1.4).
2. The number of independent variables is small.⁴
3. Evaluating the cost function requires much more computation time than determining the values for the next iterate.
4. Analytical properties of the cost function (such as a formula for the gradient) are not available.
GenOpt has the following properties:
1. GenOpt can be coupled to any simulation program that calculates the
cost function without having to modify or recompile either program,
provided that the simulation program reads its input from text files and
writes its output to text files.
2. The user can select an optimization algorithm from an algorithm library,
or implement a custom algorithm without having to recompile and understand the whole optimization environment.
3. GenOpt does not require an expression for the gradient of the cost function.
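The text-file coupling in property 1 can be sketched as follows: the optimizer writes the current iterate into the simulation input by substituting placeholders in a template file, runs the simulation, and extracts the cost from the simulation's output. This is only an illustration of the pattern, not GenOpt's implementation; the placeholder convention, the `Cost =` output format, and all names are assumptions of this sketch:

```python
import re

def write_input(template_text, values):
    """Replace each %name% placeholder with the current iterate's value."""
    for name, val in values.items():
        template_text = template_text.replace("%" + name + "%", repr(val))
    return template_text

def read_cost(output_text):
    """Scan the simulation output for a line such as 'Cost = 12.5'."""
    m = re.search(r"Cost\s*=\s*([-+0-9.eE]+)", output_text)
    return float(m.group(1))

# One evaluation of the coupled cost function:
template = "x1 = %x1%; x2 = %x2%;"
inp = write_input(template, {"x1": 0.5, "x2": 2.0})
# ... here the simulation program would read `inp` and write its output ...
out = "Cost = 4.25"            # stand-in for the simulation's output file
cost = read_cost(out)
```

Because only text files cross the interface, neither program needs to be modified or recompiled.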
With GenOpt, it is easy to couple a new simulation program, specify the optimization variables, and minimize the cost function. Therefore, in designing complex systems, as well as in system analysis, a generic optimization program like GenOpt offers valuable assistance. Note, however, that optimization is not easy: The efficiency and success of an optimization is strongly affected by the properties and the formulation of the cost function, and by the selection of an appropriate optimization algorithm.

This manual is structured as follows: In Section 4, we classify optimization problems and discuss which of GenOpt's algorithms can be used for each of these problems. Next, we explain the algorithms that are implemented in GenOpt: In Section 5, we discuss the algorithms for multi-dimensional optimization; in Section 6, the algorithms for one-dimensional optimization; and in Section 7, the algorithms for parametric runs. In Section 8, we discuss how constraints on independent variables are implemented, and how constraints on dependent variables can be implemented. In Section 9, we explain the structure of the GenOpt software, the interface for the simulation program, and the interface for the optimization algorithms. How to install and start GenOpt is described in Section 10. Section 11 shows how to set up the configuration and input files, and how to use GenOpt's pre- and post-processing capabilities.

⁴ By small, we mean on the order of 10, but the maximum number of independent variables is not restricted in GenOpt.


4 Optimization Problems

4.1 Classification of Optimization Problems

We will now classify some optimization problems that can be solved with GenOpt's optimization algorithms. The classification will be used in Section 4.2 to recommend suitable optimization algorithms.

We distinguish between problems whose design parameters are continuous variables¹, discrete variables², or both. In addition, we distinguish between problems with and without inequality constraints on the dependent variables.

4.1.1 Problems with Continuous Variables

To denote box-constraints on independent continuous variables, we will use the notation

$$X \triangleq \{\, x \in \mathbb{R}^n \mid l^i \le x^i \le u^i,\ i \in \{1, \dots, n\} \,\}, \qquad (4.1)$$

where $l^i < u^i$ for $i \in \{1, \dots, n\}$.

We will consider optimization problems of the form

$$P_c \qquad \min_{x \in X} f(x), \qquad (4.2)$$

where $f \colon \mathbb{R}^n \to \mathbb{R}$ is a once continuously differentiable cost function.

Now, we add inequality constraints on the dependent variables to (4.2) and obtain

$$P_{cg} \qquad \min_{x \in X} f(x), \qquad (4.3a)$$
$$g(x) \le 0, \qquad (4.3b)$$

where everything is as in (4.2) and, in addition, $g \colon \mathbb{R}^n \to \mathbb{R}^m$ is a once continuously differentiable constraint function (for some $m \in \mathbb{N}$). We will assume that there exists an $x' \in X$ that satisfies $g(x') < 0$.

4.1.2 Problems with Discrete Variables

Next, we will discuss the situation where all design parameters can only take on user-specified discrete values. Let $X_d \subset \mathbb{Z}^{n_d}$ denote the constraint set with a finite, non-zero number of integers for each variable.

We will consider integer programming problems of the form

$$P_d \qquad \min_{x \in X_d} f(x). \qquad (4.4)$$

¹ Continuous variables can take on any value on the real line, possibly between lower and upper bounds.
² Discrete variables can take on only integer values.

4.1.3 Problems with Continuous and Discrete Variables

Next, we will allow for continuous and discrete independent variables. We will use the notation

$$X \triangleq X_c \times X_d, \qquad (4.5a)$$
$$X_c \triangleq \{\, x \in \mathbb{R}^{n_c} \mid l^i \le x^i \le u^i,\ i \in \{1, \dots, n_c\} \,\}, \qquad (4.5b)$$

where the bounds on the continuous independent variables satisfy $l^i < u^i$ for $i \in \{1, \dots, n_c\}$, and the constraint set $X_d \subset \mathbb{Z}^{n_d}$ for the discrete variables is a user-specified set with a finite, non-zero number of integers for each variable.

We will consider mixed-integer programming problems of the form

$$P_{cd} \qquad \min_{x \in X} f(x), \qquad (4.6)$$

where $x \triangleq (x_c, x_d) \in \mathbb{R}^{n_c} \times \mathbb{Z}^{n_d}$, $f \colon \mathbb{R}^{n_c} \times \mathbb{Z}^{n_d} \to \mathbb{R}$, and $X$ is as in (4.5).

Now, we add inequality constraints on the dependent variables to (4.6) and obtain

$$P_{cdg} \qquad \min_{x \in X} f(x), \qquad (4.7a)$$
$$g(x) \le 0, \qquad (4.7b)$$

where everything is as in (4.6) and, in addition, $g \colon \mathbb{R}^{n_c} \times \mathbb{Z}^{n_d} \to \mathbb{R}^m$ (for some $m \in \mathbb{N}$). We will assume that there exists an $x' \in X$ that satisfies $g(x') < 0$.

4.1.4 Problems whose Cost Function is Evaluated by a Building Simulation Program

Next, we will discuss problem $P_c$ defined in (4.2) for the situation where the cost function $f \colon \mathbb{R}^n \to \mathbb{R}$ cannot be evaluated exactly, but can be approximated numerically by approximating cost functions $f^* \colon \mathbb{R}^q_+ \times \mathbb{R}^n \to \mathbb{R}$, where the first argument is the precision parameter of the numerical solvers. This is typically the case when the cost is computed by a thermal building simulation program, such as EnergyPlus [CLW+01], TRNSYS [KDB76], or DOE-2 [WBB+93]. In such programs, computing the cost involves solving a system of partial and ordinary differential equations that are coupled to algebraic equations. In general, one cannot obtain an exact solution, but one can obtain an approximate numerical solution. Hence, the cost function $f(x)$ can only be approximated by an approximating cost function $f^*(\epsilon, x)$, where $\epsilon \in \mathbb{R}^q_+$ is a vector that contains precision parameters of the numerical solvers. Consequently, the optimization algorithm can only be applied to $f^*(\epsilon, x)$ and not to $f(x)$.

In such thermal building simulation programs it is common that the termination criteria of the solvers that are used to solve the partial differential equations, ordinary differential equations, and algebraic equations depend on the independent variable $x$. Therefore, a perturbation of $x$ can cause a change in the sequence of solver iterations, which causes the approximating cost functions $f^*(\epsilon, x)$ to be discontinuous in $x$. Furthermore, if variable step size integration methods are used, then the integration mesh can change from one simulation to the next. Therefore, part of the change in function values between different points is caused by a change of the number of solver iterations, and by a change of the integration mesh. Consequently, $f^*(\epsilon, \cdot)$ is discontinuous, and a descent direction for $f^*(\epsilon, \cdot)$ may not be a descent direction for $f(\cdot)$. Therefore, optimization algorithms can terminate at points that are non-optimal.

The best one can do in trying to solve optimization problems where the cost and constraint functions are evaluated by a thermal building simulation program that does not allow controlling the approximation error is to find points that are close to a local minimizer of $f(\cdot)$. Numerical experiments show that by using tight enough precision and starting the optimization algorithm with coarse initial values, one often comes close to a minimizer of $f(\cdot)$. Furthermore, by selecting different initial iterates for the optimization, or by using different optimization algorithms, one can increase the chance of finding a point that is close to a minimizer of $f(\cdot)$. However, even if the optimization terminates at a point that is non-optimal for $f(\cdot)$, one may have obtained a better system performance compared to not doing any optimization.

See [WP03, WW03] for a further discussion of optimization problems in which the cost function value is computed by a building simulation program.
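The effect described above can be reproduced with a toy "simulation": a left Riemann sum whose number of steps depends on the design variable $x$. Because the step count jumps whenever $x/\epsilon$ crosses an integer, the approximation $f^*(\epsilon, x)$ is discontinuous in $x$ even though the exact cost $f(x) = x^2/2$ is smooth. This is an illustration only, not GenOpt code or the behavior of any particular simulation program:

```python
import math

def f_exact(x):
    # integral of t dt from 0 to x
    return x * x / 2.0

def f_approx(eps, x):
    """Left Riemann sum with step size at most eps: the number of steps n
    jumps whenever x/eps crosses an integer, so f_approx is discontinuous
    in x although f_exact is smooth."""
    n = max(1, math.ceil(x / eps))
    h = x / n
    return sum(i * h for i in range(n)) * h

eps = 0.1
a, b = f_approx(eps, 0.999), f_approx(eps, 1.001)
# |b - a| is several times larger than |f_exact(1.001) - f_exact(0.999)|:
# the change of n from 10 to 11 shows up as a jump in the approximation.
```

Tightening `eps` shrinks the jumps, which mirrors the role of the precision parameters $\epsilon$ above.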

4.2 Algorithm Selection

In this section, we will discuss which of GenOpt's algorithms can be selected for the optimization problems that we introduced in Section 4.1.

4.2.1 Problem $P_c$ with $n > 1$

To solve $P_c$ with $n > 1$, the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24) can be used, possibly with multiple starting points (Section 5.2.3, page 26). If $f(\cdot)$ is once continuously differentiable and has bounded level sets (or if the constraint set $X$ defined in (4.1) is compact), then these algorithms construct for problem (4.2) a sequence of iterates with stationary accumulation points (see Theorem 5.1.13).
Alternatively, the Discrete Armijo Gradient algorithm (Section 5.3, page 28) can be used. Every accumulation point of the Discrete Armijo Gradient algorithm is a feasible stationary point.
If $f(\cdot)$ is not continuously differentiable, or if $f(\cdot)$ must be approximated by an approximating cost function $f^*(\epsilon, \cdot)$ where the approximation error cannot be controlled, as described in Section 4.1.4, then $P_c$ can only be solved heuristically. We recommend using the hybrid algorithm (Section 5.5, page 42), the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), possibly with multiple starting points (Section 5.2.3, page 26), or a Particle Swarm Optimization algorithm (Section 5.4, page 32).

We do not recommend using the Nelder-Mead Simplex algorithm (Section 5.6, page 45) or the Discrete Armijo Gradient algorithm (Section 5.3, page 28).

The following approach reduces the risk of failing at a point which is non-optimal and far from a minimizer of $f(\cdot)$:
1. Selecting large values for the parameter Step in the optimization command file (see page 87).
2. Selecting different initial iterates.
3. Using the hybrid algorithm of Section 5.5, the GPS implementation of the Hooke-Jeeves algorithm, possibly with multiple starting points (Section 5.2.3, page 26), and/or a Particle Swarm Optimization algorithm, and selecting the best of the solutions.
4. Doing a parametric study around the solution that has been obtained by any of the above optimization algorithms. The parametric study can be done using the algorithms Parametric (Section 7.1, page 62) and/or EquMesh (Section 7.2, page 63). If the parametric study yields a further reduction in cost, then the optimization failed at a non-optimal point. In this situation, one may want to try another optimization algorithm.
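For measure 1, the initial step size of a continuous parameter is set by its Step keyword in the command file. The fragment below is only a sketch; the authoritative syntax, and the exact meaning of each keyword, is specified in Section 11.1.3, and the parameter name and values here are an example:

```
Vary{
  Parameter{    // a continuous parameter
    Name = x1;
    Min  = 0;
    Ini  = 5;
    Max  = 10;
    Step = 2;   // a larger Step reduces the risk of early stagnation
  }
}
```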
If f(·) is continuously differentiable but must be approximated by approximating cost functions f(ε, ·) where the approximation error can be controlled as described in Section 4.1.4, then Pc can be solved using the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), both with the error control scheme described in the Model GPS Algorithm 5.1.8 (page 19). The GPS implementation of the Hooke-Jeeves algorithm can be used with multiple starting points (Section 5.2.3, page 26). The error control scheme can be implemented using the value of GenOpt's variable stepNumber (page 68) and GenOpt's pre-processing capabilities (Section 11.3, page 92). A more detailed description of how to use the error control scheme can be found in [PW03, WP03].
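As a minimal sketch of such an error control scheme, the solver tolerance could be tied to GenOpt's stepNumber variable as follows; the initial tolerance and the geometric tightening factor are illustrative assumptions, not GenOpt defaults.

```python
# Hypothetical coupling of solver precision to GenOpt's stepNumber variable:
# each mesh refinement increments stepNumber, and the solver tolerance is
# tightened geometrically. The values eps0 = 1e-3 and factor = 0.1 are
# assumptions made for illustration only.
def solver_tolerance(step_number, eps0=1e-3, factor=0.1):
    """Return the solver tolerance for GenOpt's stepNumber (1, 2, 3, ...)."""
    return eps0 * factor ** (step_number - 1)
```

In GenOpt, such an expression would be written in the pre-processing section of the initialization file rather than in Python; the sketch only shows the geometric tightening.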

4.2.2 Problem Pcg with n > 1


To solve Pcg , the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24) can be used,
possibly with multiple starting points (Section 5.2.3, page 26). Constraints g(·) ≤ 0 can be implemented using barrier and penalty functions (Section 8, page 65).
If f(·) or g(·) are not continuously differentiable, we recommend using the hybrid algorithm (Section 5.5, page 42) or the GPS implementation of the Hooke-Jeeves algorithm (Section 5.2.2, page 24), possibly with multiple starting points (Section 5.2.3, page 26), and implementing the constraints g(·) ≤ 0 using barrier and penalty functions (Section 8, page 65). To reduce the risk of terminating far from a minimum point of f(·), we recommend the same measures as for solving Pc.

4.2.3 Problem Pc with n = 1


To solve Pc with n = 1, any of the interval division algorithms can be used (Section 6.1, page 56). Since only a few function evaluations are required for parametric studies in one dimension, the algorithm Parametric can also be used for this problem (Section 7.1, page 62). We recommend doing a parametric study if f(·) is expected to have several local minima.

4.2.4 Problem Pcg with n = 1


To solve Pcg with n = 1, the same applies as for Pc with n = 1. Constraints g(·) ≤ 0 can be implemented by setting the penalty weighting factor in (8.8) to a large value. This may still cause small constraint violations, but it is easy to check whether the violation is acceptable.

4.2.5 Problem Pd
To solve Pd, a Particle Swarm Optimization algorithm can be used (Section 5.4, page 32).

4.2.6 Problem Pcd and Pcdg


To solve Pcd or Pcdg, the hybrid algorithm (Section 5.5, page 42) or a Particle Swarm Optimization algorithm can be used (Section 5.4, page 32).

4.2.7 Functions with Several Local Minima


If the problem has several local minima, we recommend using the GPS
implementation of the Hooke-Jeeves algorithm with multiple starting points
(Section 5.2.3, page 26), the hybrid algorithm (Section 5.5, page 42), or a
Particle Swarm Optimization algorithm (Section 5.4, page 32).

5  Algorithms for Multi-Dimensional Optimization

5.1  Generalized Pattern Search Methods (Analysis)

Generalized Pattern Search (GPS) algorithms are derivative-free optimization algorithms for the minimization of problem Pc and Pcg, defined in (4.2) and (4.3), respectively. We will present the GPS algorithms for the case where the function f(·) cannot be evaluated exactly, but can be approximated by functions f: R^q_+ × R^n → R, where the first argument ε ∈ R^q_+ is the precision parameter of the PDE, ODE, and algebraic equation solvers. Obviously, the explanations are similar for problems where f(·) can be evaluated exactly, except that the scheme to control ε is not applicable, and that the approximate functions f(ε, ·) are replaced by f(·).
Under the assumption that the cost function is continuously differentiable,
all the accumulation points constructed by the GPS algorithms are stationary.
What GPS algorithms have in common is that they define the construction
of a mesh Mk in Rn , which is then explored according to some rules that differ
among the various members of the family of GPS algorithms. If no decrease in
cost is obtained on mesh points around the current iterate, then the distance
between the mesh points is reduced, and the process is repeated.
We will now explain the framework of GPS algorithms that will be used to
implement different instances of GPS algorithms in GenOpt. The discussion
follows the more detailed description of [PW03].

5.1.1 Assumptions
We will assume that f(·) and its approximating functions {f(ε, ·)}_{ε ∈ R^q_+} have the following properties.

Assumption 5.1.1
1. There exists an error bound function φ: R^q_+ → R_+ such that for any bounded set S ⊂ X, there exist an ε_S ∈ R^q_+ and a scalar K_S ∈ (0, ∞) such that for all x ∈ S and for all ε ∈ R^q_+, with ε ≤ ε_S,¹

   |f(ε, x) − f(x)| ≤ K_S φ(ε).   (5.1)

Furthermore,

   lim_{‖ε‖→0} φ(ε) = 0.   (5.2)

2. The function f: R^n → R is once continuously differentiable.


Remark 5.1.2
1. The functions {f(ε, ·)}_{ε ∈ R^q_+} may be discontinuous.
2. See [PW03] for the situation where f(·) is only locally Lipschitz continuous.

Next, we state an assumption on the level sets of the family of approximate functions. To do so, we first define the notion of a level set.

Definition 5.1.3 (Level Set) Given a function f: R^n → R and an α ∈ R, such that α > inf_{x ∈ R^n} f(x), we will say that the set L_α(f) ⊂ R^n, defined as

   L_α(f) ≜ {x ∈ R^n | f(x) ≤ α},   (5.3)

is a level set of f(·), parametrized by α.

Assumption 5.1.4 (Compactness of Level Sets) Let {f(ε, ·)}_{ε ∈ R^q_+} be as in Assumption 5.1.1 and let X ⊂ R^n be the constraint set. Let x_0 ∈ X be the initial iterate and ε_0 ∈ R^q_+ be the initial precision setting of the numerical solvers. Then, we assume that there exists a compact set C ⊂ R^n such that

   L_{f(ε_0, x_0)}(f(ε, ·)) ∩ X ⊂ C,   ∀ ε ≤ ε_0.   (5.4)

¹For ε ∈ R^q_+, by ε ≤ ε_S, we mean that 0 < ε^i ≤ ε_S^i, for all i ∈ {1, …, q}.

5.1.2 Characterization of GPS Algorithms
There exist different geometrical explanations for pattern search algorithms, and a generalization is given in the review [KLT03]. We will use a simple implementation of the pattern search algorithms in [PW03] where we restrict the search directions to be the positive and negative coordinate directions. Thus, the search directions are the columns of the matrix

   D ≜ [−e_1, +e_1, …, −e_n, +e_n] ∈ Z^{n×2n},   (5.5)

which suffices for box-constrained problems. Furthermore, we construct the sequence of mesh size parameters that parametrizes the minimum distance between iterates such that it satisfies the following assumption.

Assumption 5.1.5 (k-th Mesh Size Parameter) Let r, s_0, k ∈ N, with r > 1, and {t_i}_{i=0}^{k−1} ⊂ N. We will assume that the sequence of mesh size parameters satisfies

   Δ_k ≜ 1 / r^{s_k},   (5.6a)

where for k > 0

   s_k ≜ s_0 + Σ_{i=0}^{k−1} t_i.   (5.6b)

With this construction, all iterates lie on a rational mesh of the form

   M_k ≜ {x_0 + Δ_k D m | m ∈ N^{2n}}.   (5.7)

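The mesh size update of Assumption 5.1.5 can be sketched as follows; this is a minimal illustration, and the function name is ours.

```python
def mesh_size(r, s0, t):
    """Return Delta_k = 1/r**s_k, where s_k = s0 + t_0 + ... + t_{k-1}
    (equations (5.6a) and (5.6b)); t holds the exponent increments of the
    unsuccessful iterations encountered so far."""
    s_k = s0 + sum(t)
    return 1.0 / r ** s_k
```

For example, with r = 2, s_0 = 0, and two unsuccessful iterations with t_i = 1, the mesh size shrinks from 1 to 1/4.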
We will now characterize the set-valued maps that determine the mesh points for the global and local searches. Note that the images of these maps may depend on the entire history of the computation.

Definition 5.1.6 Let X_k ⊂ R^{n (k+1)} and Δ_k ⊂ Q_+^{k+1} be the sets of all sequences containing k + 1 elements, let M_k be the current mesh, and let ε ∈ R^q_+ be the solver tolerance.

1. We define the global search set map to be any set-valued map

   γ_k : X_k × Δ_k × R^q_+ → 2^{M_k ∩ X} ∪ ∅   (5.8a)

whose image γ_k(x_k, Δ_k, ε) contains only a finite number of mesh points.

2. We will call G_k ≜ γ_k(x_k, Δ_k, ε) the global search set.

3. We define the directions for the local search as

   D ≜ [−e_1, +e_1, …, −e_n, +e_n].   (5.8b)

4. We will call

   L_k ≜ {x_k + Δ_k D e_i | i ∈ {1, …, 2 n}} ∩ X   (5.8c)

the local search set.

Remark 5.1.7
1. The map γ_k(·, ·, ·) can be dynamic in the sense that if {x_{k_i}}_{i=0}^{I} ≜ γ_k(x_k, Δ_k, ε), then the rule for selecting x_{k_î}, 1 ≤ î ≤ I, can depend on {x_{k_i}}_{i=0}^{î−1} and {f(ε, x_{k_i})}_{i=0}^{î−1}. It is only important that the global search terminates after a finite number of computations, and that G_k ∈ (2^{M_k ∩ X}) ∪ ∅.

2. As we shall see, the global search affects only the efficiency of the algorithm but not its convergence properties. Any heuristic procedure that leads to a finite number of function evaluations can be used for γ_k(·, ·, ·).

3. The empty set is included in the range of γ_k(·, ·, ·) to allow omitting the global search.

5.1.3 Model Adaptive Precision GPS Algorithm
We will now present our model GPS algorithm with adaptive precision cost function evaluations.

Algorithm 5.1.8 (Model GPS Algorithm)

Data:    Initial iterate x_0 ∈ X;
         Mesh size divider r ∈ N, with r > 1;
         Initial mesh size exponent s_0 ∈ N.
Maps:    Global search set map γ_k : X_k × Δ_k × R^q_+ → 2^{M_k ∩ X} ∪ ∅;
         Function ε: R_+ → R^q_+ (to assign ε), such that the composition
         φ ∘ ε: R_+ → R_+ is strictly monotone decreasing and satisfies
         φ(ε(Δ))/Δ → 0, as Δ → 0.
Step 0:  Initialize k = 0, Δ_0 = 1/r^{s_0}, and ε = ε(1).
Step 1:  Global Search
         Construct the global search set G_k = γ_k(x_k, Δ_k, ε).
         If f(ε, x′) − f(ε, x_k) < 0 for any x′ ∈ G_k, go to Step 3;
         else, go to Step 2.
Step 2:  Local Search
         Evaluate f(ε, ·) for any x ∈ L_k until some x′ ∈ L_k
         satisfying f(ε, x′) − f(ε, x_k) < 0 is obtained, or until all points
         in L_k are evaluated.
Step 3:  Parameter Update
         If there exists an x′ ∈ G_k ∪ L_k satisfying f(ε, x′) − f(ε, x_k) < 0,
         set x_{k+1} = x′, s_{k+1} = s_k, Δ_{k+1} = Δ_k, and do not change ε;
         else, set x_{k+1} = x_k, s_{k+1} = s_k + t_k, with t_k ∈ N_+ arbitrary,
         Δ_{k+1} = 1/r^{s_{k+1}}, and ε = ε(Δ_{k+1}/Δ_0).
Step 4:  Replace k by k + 1, and go to Step 1.
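For illustration, the iteration of the Model GPS Algorithm can be sketched as follows, under strong simplifications that are ours: exact function evaluations (fixed ε), an empty global search set G_k, unit parameter scaling, and t_k = 1 for every unsuccessful iteration.

```python
def gps_minimize(f, x0, r=2, s0=0, max_iter=50):
    """Minimal GPS sketch: local search over the +/- coordinate directions,
    mesh size Delta_k = 1/r**s_k, with s_k incremented on failure (t_k = 1).
    Exact evaluations; the global search is omitted (G_k empty)."""
    n = len(x0)
    x, s = list(x0), s0
    for _ in range(max_iter):
        delta = 1.0 / r ** s          # Delta_k = 1/r**s_k, eq. (5.6a)
        fx = f(x)
        improved = False
        for i in range(n):            # local search set L_k
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * delta
                if f(trial) < fx:     # decrease found: accept, keep Delta_k
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            s += 1                    # Step 3, failure branch: refine the mesh
    return x
```

Starting from [0, 0] on a smooth quadratic, the sketch converges to the minimizer as the mesh is progressively refined.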

Remark 5.1.9
1. To ensure that ε does not depend on the scaling of Δ_0, we normalized the argument of ε(·). In particular, we want to decouple ε from the user's choice of the initial mesh parameter.

2. In Step 2, once a decrease of the cost function is obtained, one can proceed to Step 3. One is, however, allowed to evaluate f(ε, ·) at more points in L_k in an attempt to obtain a bigger reduction in cost, but one may proceed to Step 3 only after either a cost decrease has been found, or after all points in L_k are tested.

3. In Step 3, we are not restricted to accepting the x′ ∈ G_k ∪ L_k that gives the lowest cost value. But the mesh size parameter Δ_k is reduced only if there exists no x′ ∈ G_k ∪ L_k satisfying f(ε, x′) − f(ε, x_k) < 0.

4. To simplify the explanations, we do not increase the mesh size parameter if the cost has been reduced. However, our global search allows searching on a coarser mesh M̂_k ⊂ M_k, and hence, our algorithm can easily be extended to include a rule for increasing Δ_k for a finite number of iterations.

5. Audet and Dennis [AD03] update the mesh size parameter using the formula Δ_{k+1} = τ^m Δ_k, where τ ∈ Q, τ > 1, and m is any element of Z. Thus, our update rule for Δ_k is a special case of Audet's and Dennis' construction since we set τ = 1/r, with r ∈ N_+, r ≥ 2 (so that τ < 1), and m ∈ N. We prefer our construction because we do not think that it negatively affects the computing performance, but it leads to simpler convergence proofs.

5.1.4 Convergence Results
We will now present the convergence results for our Model GPS algorithm. See [PW03] for a detailed discussion and convergence proofs.

a) Unconstrained Minimization

We will first present the convergence properties of the Model GPS Algorithm 5.1.8 on unconstrained minimization problems, i.e., for X = R^n.
First, we will need the notion of a refining subsequence, which we define as follows:

Definition 5.1.10 (Refining Subsequence) Consider a sequence {x_k}_{k=0}^∞ constructed by Model GPS Algorithm 5.1.8. We will say that the subsequence {x_k}_{k∈K} is a refining subsequence, if Δ_{k+1} < Δ_k for all k ∈ K, and Δ_{k+1} = Δ_k for all k ∉ K.

We now state that pattern search algorithms with adaptive precision function evaluations construct sequences with stationary accumulation points.

Theorem 5.1.11 (Convergence to a Stationary Point) Suppose that Assumptions 5.1.1 and 5.1.4 are satisfied and that X = R^n. Let x* ∈ R^n be an accumulation point of the refining subsequence {x_k}_{k∈K}, constructed by Model GPS Algorithm 5.1.8. Then,

   ∇f(x*) = 0.   (5.9)

b) Box-Constrained Minimization

We now present the convergence results for the box-constrained problem (4.2). See [AD03, PW03, KLT03] for the more general case of linearly-constrained problems and for the convergence proofs.
First, we introduce the notion of a tangent cone and a normal cone, which are defined as follows:

Definition 5.1.12 (Tangent and Normal Cone)
1. Let X ⊂ R^n. Then, we define the tangent cone to X at a point x* ∈ X by

   T_X(x*) ≜ {μ (x − x*) | μ ≥ 0, x ∈ X}.   (5.10a)

2. Let T_X(x*) be as above. Then, we define the normal cone to X at x* ∈ X by

   N_X(x*) ≜ {v ∈ R^n | ∀ t ∈ T_X(x*), ⟨v, t⟩ ≤ 0}.   (5.10b)
We now state that the accumulation points generated by Model GPS Algorithm 5.1.8 are feasible stationary points of problem (4.2).
Theorem 5.1.13 (Convergence to a Feasible Stationary Point)
Suppose Assumptions 5.1.1 and 5.1.4 are satisfied. Let x* ∈ X be an accumulation point of a refining subsequence {x_k}_{k∈K} constructed by Model GPS Algorithm 5.1.8 in solving problem (4.2). Then,

   ⟨∇f(x*), t⟩ ≥ 0,   ∀ t ∈ T_X(x*),   (5.11a)

and

   −∇f(x*) ∈ N_X(x*).   (5.11b)

5.2  Generalized Pattern Search Methods (Implementations)

We will now present different implementations of the Generalized Pattern Search (GPS) algorithms. They all use the Model GPS Algorithm 5.1.8 to solve problem Pc defined in (4.2). The problem Pcg defined in (4.3) can be solved
by using penalty functions as described in Section 8.2.


We will discuss the implementations for the case where the function f(·) cannot be evaluated exactly, but will be approximated by functions f: R^q_+ × R^n → R, where the first argument ε ∈ R^q_+ is the precision parameter of the PDE, ODE, and algebraic equation solvers. This includes the case where ε is not varied during the optimization, in which case the explanations are identical, except that the scheme to control ε is not applicable, and that the approximate functions f(ε, ·) are replaced by f(·).
If the cost function f(·) is approximated by functions {f(ε, ·)}_{ε ∈ R^q_+} with adaptive precision ε, then the function ε: R_+ → R^q_+ (to assign ε) can be implemented by using GenOpt's pre-processing capability (see Section 11.3).

5.2.1 Coordinate Search Algorithm


We will now present the implementation of the Coordinate Search algorithm with adaptive precision function evaluations using the Model GPS Algorithm 5.1.8. To simplify the implementation, we assign f(ε, x) = ∞ for all x ∉ X, where X is defined in (4.1).
a) Algorithm Parameters

The search direction matrix is defined as

   D ≜ [+s^1 e_1, −s^1 e_1, …, +s^n e_n, −s^n e_n],   (5.12)

where s^i ∈ R, i ∈ {1, …, n}, is a scaling for each parameter (specified by GenOpt's parameter Step).
The parameter r ∈ N, r > 1, which is used to compute the mesh size parameter Δ_k, is defined by the parameter MeshSizeDivider, the initial value for the mesh size exponent s_0 ∈ N is defined by the parameter InitialMeshSizeExponent, and the mesh size exponent increment t_k is, for the iterations that do not reduce the cost, defined by the parameter MeshSizeExponentIncrement.
b) Global Search

In the Coordinate Search Algorithm, there is no global search. Thus, G_k = ∅ for all k ∈ N.

c) Local Search

The local search set L_k is constructed using the set-valued map E_k : R^n × Q_+ × R^q_+ → 2^{M_k}, which is defined as follows:

Algorithm 5.2.1 (Map E_k : R^n × Q_+ × R^q_+ → 2^{M_k} for Coordinate Search)

Parameter: Search direction matrix D = [+s^1 e_1, −s^1 e_1, …, +s^n e_n, −s^n e_n].
           Vector μ ∈ N^n.
Input:     Iteration number k ∈ N.
           Base point x ∈ R^n.
           Mesh divider Δ_k ∈ Q_+.
Output:    Set of trial points T.
Step 0:    Initialize T = ∅.
           If k = 0, initialize μ^i = 0 for all i ∈ {1, …, n}.
Step 1:    For i = 1, …, n
              Set x̃ = x + Δ_k D e_{2i−1+μ^i} and T ← T ∪ {x̃}.
              If f(ε, x̃) < f(ε, x)
                 Set x = x̃.
              else
                 If μ^i = 0, set μ^i = 1, else set μ^i = 0.
                 Set x̃ = x + Δ_k D e_{2i−1+μ^i} and T ← T ∪ {x̃}.
                 If f(ε, x̃) < f(ε, x)
                    Set x = x̃.
                 else
                    If μ^i = 0, set μ^i = 1, else set μ^i = 0.
                 end if.
              end if.
           end for.
Step 2:    Return T.

Thus, E_k(x, Δ_k, ε) = T for all k ∈ N.


Remark 5.2.2 In Algorithm 5.2.1, the vector μ ∈ N^n contains for each coordinate direction an integer 0 or 1 that indicates whether a step in the positive or in the negative coordinate direction yielded a decrease in cost in the previous iteration. This reduces the number of exploration steps.
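The exploratory move with the direction memory μ can be sketched as follows; the function and argument names are ours, and the sketch returns the improved base point rather than the trial set T of Algorithm 5.2.1.

```python
def exploratory_moves(f, x, delta, step, mu):
    """Sketch of the map E_k of Algorithm 5.2.1: for each coordinate i, try a
    step of size delta*step[i] in the direction remembered in mu[i] (0 means
    positive, 1 means negative); on failure, flip mu[i] and try the opposite
    direction. mu is updated in place and reused in the next iteration to
    save exploration steps."""
    x = list(x)
    for i in range(len(x)):
        for _ in range(2):                 # at most two trials per coordinate
            sign = 1.0 if mu[i] == 0 else -1.0
            trial = list(x)
            trial[i] += sign * delta * step[i]
            if f(trial) < f(x):
                x = trial                  # success: keep direction mu[i]
                break
            mu[i] = 1 - mu[i]              # failure: reverse the direction
    return x
```

After a successful step, μ keeps pointing in the direction that worked, so the next iteration tries that direction first.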
d) Parameter Update

The point x′ in Step 3 of the GPS Model Algorithm 5.1.8 corresponds to x′ ≜ arg min_{x ∈ E_k(x_k, Δ_k, ε)} f(ε, x) in the Coordinate Search algorithm.

e) Keywords

For the GPS implementation of the Coordinate Search Algorithm, the command file (see page 86) must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command
file must have the following form:
Algorithm {
   Main                      = GPSCoordinateSearch ;
   MeshSizeDivider           = Integer ;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer ;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer ;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer ;   // 0 < NumberOfStepReduction
}

The entries are defined as follows:

Main The name of the main algorithm.

MeshSizeDivider The value for r ∈ N, r > 1, used to compute Δ_k ≜ 1/r^{s_k} (see equation (5.6a)). A common value is r = 2.

InitialMeshSizeExponent The value for s_0 ∈ N in (5.6b). A common value is s_0 = 0.

MeshSizeExponentIncrement The value for t_i ∈ N (for the iterations that do not yield a decrease in cost) in (5.6b). A common value is t_i = 1.

NumberOfStepReduction The maximum number of step reductions before the algorithm stops. Thus, if we use the notation m ≜ NumberOfStepReduction, then we have for the last iterations Δ_k = 1/r^{s_0 + m t_k}. A common value is m = 4.

5.2.2 Hooke-Jeeves Algorithm


We will now present the implementation of the Hooke-Jeeves algorithm [HJ61] with adaptive precision function evaluations using the Model GPS Algorithm 5.1.8. The modifications of Smith [Smi69], Bell and Pike [BP66], and De Vogelaere [DV68] are implemented in this algorithm.
To simplify the implementation, we assign f(ε, x) = ∞ for all x ∉ X, where X is defined in (4.1).
a) Algorithm Parameters

The algorithm parameters D, r, s_0, and t_k are defined as in the Coordinate Search algorithm (see page 22).

b) Map for Exploratory Moves

To facilitate the algorithm explanation, we use the set-valued map E_k : R^n × Q_+ × R^q_+ → 2^{M_k}, as defined in Algorithm 5.2.1. The map E_k(·, ·, ·) defines the exploratory moves in [HJ61], and will be used in Section c) to define the global search set map and, under conditions to be seen in Section d), the local search direction map as well.

c) Global Search Set Map

The global search set map γ_k(·, ·, ·) is defined as follows. Because γ_0(·, ·, ·) depends on x_{−1}, we need to introduce x_{−1}, which we define as x_{−1} ≜ x_0.

Algorithm 5.2.3 (Global Search Set Map γ_k : X_k × Δ_k × R^q_+ → 2^{M_k ∩ X})

Map:    Map for exploratory moves E_k : R^n × Q_+ × R^q_+ → 2^{M_k}.
Input:  Previous and current iterate, x_{k−1} ∈ R^n and x_k ∈ R^n.
        Mesh divider Δ_k ∈ Q_+.
        Solver precision ε ∈ R^q_+.
Output: Global search set G_k.
Step 1: Set x = x_k + (x_k − x_{k−1}).
Step 2: Compute G_k = E_k(x, Δ_k, ε).
Step 3: If min_{x ∈ G_k} f(ε, x) > f(ε, x_k)
           Set G_k ← G_k ∪ E_k(x_k, Δ_k, ε).
        end if.
Step 4: Return G_k.

Thus, γ_k(x_k, Δ_k, ε) = G_k.
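The pattern move of Algorithm 5.2.3 can be sketched as follows; the function names are ours, and `explore` stands in for the exploratory-move map E_k(·, Δ_k, ε).

```python
def hj_global_search(f, x_prev, x_cur, explore):
    """Sketch of Algorithm 5.2.3: perform exploratory moves around the
    pattern-move point x = x_k + (x_k - x_{k-1}); if the best point found
    does not improve on x_k, explore around x_k itself (Step 3)."""
    pattern = [2.0 * c - p for c, p in zip(x_cur, x_prev)]   # x_k + (x_k - x_{k-1})
    candidate = explore(pattern)
    if f(candidate) > f(x_cur):        # pattern move failed to improve
        candidate = explore(x_cur)     # fall back to exploring around x_k
    return candidate
```

The pattern move extrapolates the previous successful step, which is what gives the Hooke-Jeeves algorithm its acceleration along promising directions.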
d) Local Search Direction Map

If the global search, as defined by Algorithm 5.2.3, has failed in reducing f(ε, ·), then Algorithm 5.2.3 has constructed a set G_k that contains the set {x_k + Δ_k D e_i | i = 1, …, 2n}. This is because in the evaluation of E_k(x_k, Δ_k, ε), defined in Algorithm 5.2.1, all "If f(ε, x̃) < f(ε, x)" statements yield false, and, hence, one has constructed {x_k + Δ_k D e_i | i = 1, …, 2n} = E_k(x_k, Δ_k, ε).
Because the columns of D span R^n positively, it follows that the search on the set {x_k + Δ_k D e_i | i = 1, …, 2n} is a local search. Hence, the constructed set

   L_k ≜ {x_k + Δ_k D e_i | i = 1, …, 2n} ⊂ G_k   (5.13)

is a local search set. Consequently, f(ε, ·) has already been evaluated at all points of L_k (during the construction of G_k) and, hence, one does not need to evaluate f(ε, ·) again in a local search.
e) Parameter Update

The point x′ in Step 3 of the GPS Model Algorithm 5.1.8 corresponds to x′ ≜ arg min_{x ∈ G_k} f(ε, x) in the Hooke-Jeeves algorithm. (Note that L_k ⊂ G_k if a local search has been done as explained in the above paragraph.)

f) Keywords
For the GPS implementation of the Hooke-Jeeves algorithm, the command
file (see page 86) must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command
file must have the following form:
Algorithm {
   Main                      = GPSHookeJeeves ;
   MeshSizeDivider           = Integer ;   // bigger than 1
   InitialMeshSizeExponent   = Integer ;   // bigger than or equal to 0
   MeshSizeExponentIncrement = Integer ;   // bigger than 0
   NumberOfStepReduction     = Integer ;   // bigger than 0
}

The entries are the same as for the Coordinate Search algorithm, and explained
on page 23.

5.2.3 Multi-Start GPS Algorithms


All GPS algorithms can also be run using multiple initial points. Using multiple initial points increases the chance of finding the global minimum if the
cost function has several local minima, and furthermore, it decreases the risk
of not finding a minimum if the cost function is not continuously differentiable,
which is the case if building simulation programs, such as EnergyPlus or TRNSYS, are used to compute the cost function (see the discussion in Section 4.1.4).
The values that are specified by GenOpt's parameter Ini in GenOpt's command file (see Section 11.1.3) are used to initialize the first initial point. The other initial points are randomly distributed, with a uniform distribution, between the lower and upper bounds of the feasible domain. They are, however, set to the mesh M_0, defined in (5.7), which reduces the number of cost function evaluations if the optimization algorithm converges from different initial points to the same minimizer.
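This initialization can be sketched as follows; the function and argument names are ours, and the snapping to the mesh M_0 is shown for a diagonal mesh with unit parameter scaling.

```python
import random

def initial_points(lo, hi, x0, delta0, n_points, seed=1):
    """Sketch of the multi-start initialization: the first point is the
    user-specified x0 (the Ini values); the remaining points are drawn
    uniformly between the bounds lo and hi, then rounded onto the mesh
    M_0 = {x0 + delta0*m}."""
    rng = random.Random(seed)
    pts = [list(x0)]
    for _ in range(n_points - 1):
        p = []
        for j in range(len(x0)):
            u = rng.uniform(lo[j], hi[j])
            m = round((u - x0[j]) / delta0)   # snap onto the mesh M_0
            p.append(x0[j] + delta0 * m)
        pts.append(p)
    return pts
```

Because all starting points lie on the same mesh M_0, runs that converge toward the same minimizer revisit identical mesh points, and their cost function values can be reused.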
In GenOpt's command file, a lower and an upper bound must be specified for each independent variable using the keywords Min and Max.
To use the GPSCoordinateSearch algorithm with multiple starting points,
the Algorithm section of the GenOpt command file must have the following
form:
Algorithm {
   Main                      = GPSCoordinateSearch ;
   MultiStart                = Uniform ;
   Seed                      = Integer ;
   NumberOfInitialPoint      = Integer ;   // bigger than or equal to 1
   MeshSizeDivider           = Integer ;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer ;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer ;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer ;   // 0 < NumberOfStepReduction
}
The entries are defined as follows:


Main The name of the main algorithm.
MultiStart Keyword to invoke the multi-start algorithm. The only valid
value is Uniform.
Seed This value is used to initialize the random number generator.


NumberOfInitialPoint The number of initial points.
The other entries are the same as for the Coordinate Search algorithm, and are
explained on page 23.
To use the GPSHookeJeeves algorithm with multiple starting points, the
Algorithm section of the GenOpt command file must have the following form:
Algorithm {
   Main                      = GPSHookeJeeves ;
   MultiStart                = Uniform ;
   Seed                      = Integer ;
   NumberOfInitialPoint      = Integer ;   // 0 < NumberOfInitialPoint
   MeshSizeDivider           = Integer ;   // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer ;   // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer ;   // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer ;   // 0 < NumberOfStepReduction
}

The entries are the same as for the multi-start Coordinate Search algorithm
above.

5.3  Discrete Armijo Gradient

The Discrete Armijo Gradient algorithm can be used to solve problem Pc defined in (4.2) where f(·) is continuously differentiable.
The Discrete Armijo Gradient algorithm approximates gradients by finite
differences. It can be used for problems where the cost function is evaluated by
computer code that defines a continuously differentiable function but for which
obtaining analytical expressions for the gradients is impractical or impossible.
Since the Discrete Armijo Gradient algorithm is sensitive to discontinuities in the cost function, we recommend not using this algorithm if the simulation program contains adaptive solvers with loose precision settings, such as EnergyPlus [CLW+01]. On such functions, the algorithm is likely to fail. In Section 4.2, we recommend algorithms that are better suited for such situations.
We will now present the Discrete Armijo Gradient algorithm and the Armijo
step-size subprocedure.
Algorithm 5.3.1 (Discrete Armijo Gradient Algorithm)

Data:    Initial iterate x_0 ∈ X.
         α, β ∈ (0, 1), δ ∈ (0, ∞), k*, k_0 ∈ Z,
         l_max, κ ∈ N (for resetting the step-size calculation).
         Termination criteria ε_m, ε_x ∈ R_+, i_max ∈ N.
Step 0:  Initialize i = 0 and m = 0.
Step 1:  Compute the search direction h_i.
         If β^m < ε_m, stop.
         Else, set ε = β^{k_0 + m} and compute, for j ∈ {1, …, n},
         h_i^j = −(f(x_i + ε e_j) − f(x_i)) / ε.
Step 2:  Check descent.
         Compute φ̄(x_i; h_i) = (f(x_i + ε h_i) − f(x_i)) / ε.
         If φ̄(x_i; h_i) < 0, go to Step 3.
         Else, replace m by m + 1 and go to Step 1.
Step 3:  Line search.
         Use Algorithm 5.3.2 (which requires k*, l_max and κ) to compute k_i.
         Set

         λ_i = arg min_{λ ∈ {β^{k_i}, β^{k_i − 1}}} f(x_i + λ h_i).   (5.14)

Step 4:  If f(x_i + λ_i h_i) − f(x_i) > −δ ε, replace m by m + 1 and go to Step 1.
Step 5:  Set x_{i+1} = x_i + λ_i h_i.
         If ‖λ_i h_i‖ < ε_x, stop. Else, replace i by i + 1 and go to Step 1.

Algorithm 5.3.2 (Armijo Step-Size Subprocedure)

Data:    Iteration number i ∈ N, iterate x_i ∈ R^n, search direction h_i ∈ R^n,
         k*, k_{i−1} ∈ Z, α, β ∈ (0, 1), and φ̄(x_i; h_i) ∈ R with φ̄(x_i; h_i) < 0,
         parameters for restart l_max, κ ∈ N.
Step 0:  Initialize l = 0.
         If i = 0, set k = k*, else set k = k_{i−1}.
Step 1:  Replace l by l + 1, and test the conditions

         f(x_i + β^k h_i) − f(x_i) ≤ β^k α φ̄(x_i; h_i),   (5.15a)

         f(x_i + β^{k−1} h_i) − f(x_i) > β^{k−1} α φ̄(x_i; h_i).   (5.15b)

Step 2:  If k satisfies (5.15a) and (5.15b), return k.
Step 3:  If k satisfies (5.15b) but not (5.15a),
            replace k by k + 1;
         else,
            replace k by k − 1.
         If l < l_max or k_{i−1} ≤ k* + κ, go to Step 1. Else, go to Step 4.
Step 4:  Set K ≜ {k ∈ Z | k ≥ k*}, and compute
         k′ ≜ min_{k ∈ K} {k | f(x_i + β^k h_i) − f(x_i) ≤ β^k α φ̄(x_i; h_i)}.
         Return k′.

Note that in Algorithm 5.3.2, as α → 1, the number of trials to compute the Armijo step-size is likely to go to infinity. Under appropriate assumptions one can show that α = 1/2 yields the fastest convergence [Pol97].
The step-size Algorithm 5.3.2 often requires only a small number of function evaluations. However, occasionally, once a very small step-size has occurred, Algorithm 5.3.2 can trap the Discrete Armijo Gradient algorithm into using a very small step-size for all subsequent iterations. Hence, if k_{i−1} > k* + κ, we reset the step-size by computing Step 4.
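The core test of the Armijo step-size subprocedure can be sketched as follows; the function and argument names are ours, and the restart logic of Step 4 is omitted.

```python
def armijo_k(f, x, h, phi_bar, alpha=0.5, beta=0.8, k=0, k_max=60):
    """Sketch of the core of Algorithm 5.3.2: find an integer k such that the
    step size beta**k satisfies condition (5.15a) while beta**(k-1) violates
    it. phi_bar is the (negative) descent estimate phi(x; h)."""
    fx = f(x)

    def ok(j):  # condition (5.15a) for step size beta**j
        step = beta ** j
        y = [xi + step * hi for xi, hi in zip(x, h)]
        return f(y) - fx <= step * alpha * phi_bar

    for _ in range(k_max):
        if ok(k) and not ok(k - 1):   # (5.15a) holds, (5.15b) holds
            return k
        k = k + 1 if not ok(k) else k - 1   # Step 3: shrink or grow the step
    return k
```

Because k is an integer exponent, the subprocedure can warm-start from the previous iteration's k_{i−1} and typically needs only a few trials.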
Algorithm 5.3.1 together with the step-size Algorithm 5.3.2 has the following convergence properties [Pol97].

Theorem 5.3.3 Let f: R^n → R be continuously differentiable and bounded below.
1. If Algorithm 5.3.1 jams at x_i, cycling indefinitely in the loop defined by Steps 1-2 or in the loop defined by Steps 1-4, then ∇f(x_i) = 0.
2. If {x_i}_{i=0}^∞ is an infinite sequence constructed by Algorithm 5.3.1 and Algorithm 5.3.2 in solving (4.2), then every accumulation point x̂ of {x_i}_{i=0}^∞ satisfies ∇f(x̂) = 0.

Note that hi has the same units as the cost function, and the algorithm
evaluates xi + hi for some R+ . Thus, the algorithm is sensitive to the
scaling of the problem variables, a rather undesirable effect. Therefore, in the implementation of Algorithm 5.3.1 and Algorithm 5.3.2, we normalize the cost function values by replacing, for all x ∈ ℝ^n, f(x) by f(x)/f(x₀), where x₀ is the initial iterate. Furthermore, we set x₀ = 0 and evaluate the cost function for the values ι_j + x_j s_j, j ∈ {1, . . . , n}, where x_j ∈ ℝ is the j-th component of the design parameter computed in Algorithm 5.3.1 or Algorithm 5.3.2, and ι_j ∈ ℝ and s_j ∈ ℝ are the settings of the parameters Ini and Step, respectively, for the j-th design parameter in the optimization command file (see page 86).
In view of the sensitivity of the Discrete Armijo Gradient algorithm to the scaling of the problem variables and the cost function values, the implementation of penalty and barrier functions may cause numerical problems if the penalty is large compared to the unpenalized cost function value.
If box-constraints for the independent parameters are specified, then the transformations (8.2) are used.

5.3.1 Keywords

For the Discrete Armijo Gradient algorithm, the command file (see page 86) must only contain continuous parameters.
To invoke the algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main     = DiscreteArmijoGradient;
   Alpha    = Double;   // 0 < Alpha < 1
   Beta     = Double;   // 0 < Beta < 1
   Gamma    = Double;   // 0 < Gamma
   K0       = Integer;
   KStar    = Integer;
   LMax     = Integer;  // 0 <= LMax
   Kappa    = Integer;  // 0 <= Kappa
   EpsilonM = Double;   // 0 < EpsilonM
   EpsilonX = Double;   // 0 < EpsilonX
}

The entries are defined as follows:

Main  The name of the main algorithm.
Alpha  The variable α used in Step 1 and in Step 4 of Algorithm 5.3.2. A typical value is α = 1/2.
Beta  The variable β used in approximating the gradient and doing the line search. A typical value is β = 0.8.
Gamma  The variable γ used in Step 4 of Algorithm 5.3.1 to determine whether the accuracy of the gradient approximation will be increased.
K0  The variable k₀ that determines the initial accuracy of the gradient approximation.
KStar  The variable k* used to initialize the line search.
LMax  The variable l_max used in Step 3 of Algorithm 5.3.2 to determine whether the line search needs to be reinitialized.
Kappa  The variable κ used in Step 3 of Algorithm 5.3.2 to determine whether the line search needs to be reinitialized.
EpsilonM  The variable ε_m used in the termination criterion ε < ε_m in Step 1 of Algorithm 5.3.1.
EpsilonX  The variable ε_x used in the termination criterion ‖β^{k_i} h_i‖ < ε_x in Step 5 of Algorithm 5.3.1.

5.4 Particle Swarm Optimization

Particle Swarm Optimization (PSO) algorithms are population-based probabilistic optimization algorithms first proposed by Kennedy and Eberhart [EK95, KE95] to solve problem P_c defined in (4.2) with possibly discontinuous cost function f: ℝ^n → ℝ. In Section 5.4.2, we will present a PSO algorithm for discrete independent variables to solve problem P_d defined in (4.4), and in Section 5.4.3 we will present a PSO algorithm for continuous and discrete independent variables to solve problem P_cd defined in (4.6). To avoid ambiguous notation, we always denote the dimension of the continuous independent variable by n_c ∈ ℕ and the dimension of the discrete independent variable by n_d ∈ ℕ.
PSO algorithms exploit a set of potential solutions to the optimization problem. Each potential solution is called a particle, and the set of potential solutions in each iteration step is called a population. PSO algorithms are global optimization algorithms and neither require nor approximate gradients of the cost function. The first population is typically initialized using a random number generator to spread the particles uniformly in a user-defined hypercube. A particle update equation, which is modeled on the social behavior of members of bird flocks or fish schools, determines the location of each particle in the next generation.
A survey of PSO algorithms can be found in Eberhart and Shi [ES01]. Laskari et al. present a PSO algorithm for minimax problems [LPV02b] and for integer programming [LPV02a]. In [PV02a], Parsopoulos and Vrahatis discuss the implementation of inequality and equality constraints to solve problem P_cg defined in (4.3).
We first discuss the case where the independent variable is continuous, i.e., the case of problem P_c defined in (4.2).

5.4.1 PSO for Continuous Variables

We will first present the initial version of the PSO algorithm, which is the easiest to understand.
In the initial version of the PSO algorithm [EK95, KE95], the update equation for the particle location is as follows: Let k ∈ ℕ denote the generation number, let n_P ∈ ℕ denote the number of particles in each generation, let x_i(k) ∈ ℝ^{n_c}, i ∈ {1, . . . , n_P}, denote the i-th particle of the k-th generation, let v_i(k) ∈ ℝ^{n_c} denote its velocity, let c₁, c₂ ∈ ℝ_+ and let ρ₁(k), ρ₂(k) ∼ U(0, 1) be uniformly distributed random numbers between 0 and 1. Then, the update equation is, for all i ∈ {1, . . . , n_P} and all k ∈ ℕ,

   v_i(k+1) = v_i(k) + c₁ ρ₁(k) (p_{l,i}(k) − x_i(k))
                     + c₂ ρ₂(k) (p_{g,i}(k) − x_i(k)),      (5.16a)
   x_i(k+1) = x_i(k) + v_i(k+1),                            (5.16b)


where v_i(0) ≜ 0 and

   p_{l,i}(k) ≜ arg min_{x ∈ {x_i(j)}_{j=0}^{k}} f(x),                    (5.17a)
   p_{g,i}(k) ≜ arg min_{x ∈ {{x_i(j)}_{j=0}^{k}}_{i=1}^{n_P}} f(x).      (5.17b)

Thus, p_{l,i}(k) is the location that for the i-th particle yields the lowest cost over all generations, and p_{g,i}(k) is the location of the best particle over all generations. The term c₁ ρ₁(k) (p_{l,i}(k) − x_i(k)) is associated with cognition since it takes into account the particle's own experience, and the term c₂ ρ₂(k) (p_{g,i}(k) − x_i(k)) is associated with social interaction between the particles. In view of this similarity, c₁ is called the cognitive acceleration constant and c₂ is called the social acceleration constant.
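As an illustration, one generation of the update (5.16) with the gbest topology can be sketched in Python. The names are ours, and this is a sketch, not GenOpt's implementation:

```python
import random

def pso_step(f, xs, vs, p_best, g_best, c1=2.0, c2=2.0, rng=random):
    """One generation of the original PSO update (5.16): each velocity gets a
    cognitive pull toward the particle's own best location and a social pull
    toward the global best, then the position is advanced by the velocity."""
    for i in range(len(xs)):
        for j in range(len(xs[i])):
            r1, r2 = rng.random(), rng.random()
            vs[i][j] += c1 * r1 * (p_best[i][j] - xs[i][j]) \
                      + c2 * r2 * (g_best[j] - xs[i][j])     # (5.16a)
            xs[i][j] += vs[i][j]                             # (5.16b)
        # keep track of the particle's own best location p_l,i
        if f(xs[i]) < f(p_best[i]):
            p_best[i] = list(xs[i])
    # global best over all particles and generations (gbest topology)
    g_best[:] = min(p_best, key=f)
    return xs, vs, p_best, g_best
```

By construction, the global best location only ever improves from one generation to the next.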

a) Neighborhood Topology

The minimum in (5.17b) need not be taken over all points in the population. The set of points over which the minimum is taken is defined by the neighborhood topology. In PSO, the neighborhood topologies are usually defined using the particle index, and not the particle location. We will use the lbest, gbest, and the von Neumann neighborhood topology, which we will now define.
In the lbest topology of size l ∈ ℕ, with l ≥ 1, the neighborhood of a particle with index i ∈ {1, . . . , n_P} consists of all particles whose index is in the set

   N_i ≜ {i − l, . . . , i, . . . , i + l},      (5.18a)

where we assume that the indices wrap around, i.e., we replace −1 by n_P − 1, replace −2 by n_P − 2, etc.
In the gbest topology, the neighborhood contains all points of the population, i.e.,

   N_i ≜ {1, . . . , n_P},      (5.18b)

for all i ∈ {1, . . . , n_P}.
For the von Neumann topology, consider a 2-dimensional lattice, with the lattice points enumerated as shown in Figure 5.1. We will use the von Neumann topology of range 1, which is defined, for i, j ∈ ℤ, as the set of points whose indices belong to the set

   N^v_{(i,j)} ≜ {(k, l) | |k − i| + |l − j| ≤ 1, k, l ∈ ℤ}.      (5.18c)

The gray points in Figure 5.1 are N^v_{(1,2)}. For simplicity, GenOpt rounds the user-specified number of particles n_P ∈ ℕ up to the next biggest integer ñ_P

   0,0   0,1   0,2   0,3
   1,0   1,1   1,2   1,3
   2,0   2,1   2,2   2,3

Figure 5.1: Section of a 2-dimensional lattice of particles with √ñ_P = 3. The particles belonging to the von Neumann neighborhood N^v_{(1,2)} with range 1, defined in (5.18c), are colored gray. Indicated by dashes are the particles that are generated by wrapping the indices.

such that √ñ_P ∈ ℕ and ñ_P ≥ n_P.² Then, we can wrap the indices by replacing, for k ∈ ℤ, (0, k) by (√ñ_P, k), (√ñ_P + 1, k) by (1, k), and similarly by replacing (k, 0) by (k, √ñ_P) and (k, √ñ_P + 1) by (k, 1). Then, a particle with indices (k, l), with 1 ≤ k ≤ √ñ_P and 1 ≤ l ≤ √ñ_P, has in the PSO algorithm the index i = (k − 1) √ñ_P + l, and hence i ∈ {1, . . . , ñ_P}.
Kennedy and Mendes [KM02] show that greater connectivity of the particles speeds up convergence, but it does not tend to improve the population's ability to discover the global optimum. Best performance has been achieved with the von Neumann topology, whereas neither the gbest nor the lbest topology seemed especially good in comparison with other topologies.
Carlisle and Dozier [CD01] achieve, on unimodal and multi-modal functions, better results for the gbest topology than for the lbest topology.

² In principle, the lattice need not be a square, but we do not see any computational disadvantage of selecting a square lattice.
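The index bookkeeping of (5.18a), (5.18c), and the lattice-to-particle mapping i = (k − 1)√ñ_P + l can be sketched as follows (1-based particle indices; the helper names are ours):

```python
def lbest_neighborhood(i, l, n_p):
    """Indices of the lbest neighborhood (5.18a) of particle i, with the
    indices wrapping around the 1-based range 1..n_p."""
    return [((i + d - 1) % n_p) + 1 for d in range(-l, l + 1)]

def von_neumann_index(k, l, n_side):
    """Map lattice coordinates (k, l), 1 <= k, l <= n_side = sqrt(n~_P),
    to the particle index i = (k - 1) * n_side + l."""
    return (k - 1) * n_side + l

def von_neumann_neighborhood(k, l, n_side):
    """Particle indices of the range-1 von Neumann neighborhood (5.18c),
    with lattice indices wrapped on the n_side x n_side lattice."""
    wrap = lambda a: ((a - 1) % n_side) + 1
    cells = [(k, l), (k - 1, l), (k + 1, l), (k, l - 1), (k, l + 1)]
    return sorted(von_neumann_index(wrap(a), wrap(b), n_side) for a, b in cells)
```

For example, with n_P = 5 and l = 1, the lbest neighborhood of particle 1 wraps around to include particle 5.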


b) Model PSO Algorithm

We will now present the Model PSO Algorithm that is implemented in GenOpt.

Algorithm 5.4.1 (Model PSO Algorithm for Continuous Variables)

Data:    Constraint set X, as defined in (4.1), but with finite lower and
         upper bound for each independent variable.
         Initial iterate x₀ ∈ X.
         Number of particles n_P ∈ ℕ and number of generations n_G ∈ ℕ.
Step 0:  Initialize k = 0, x₁(0) = x₀ and the neighborhoods {N_i}_{i=1}^{n_P}.
Step 1:  Initialize {x_i(0)}_{i=2}^{n_P} ⊂ X randomly distributed.
Step 2:  For i ∈ {1, . . . , n_P}, determine the local best particles
             p_{l,i}(k) ≜ arg min_{x ∈ {x_i(m)}_{m=0}^{k}} f(x)              (5.19a)
         and the global best particle
             p_{g,i}(k) ≜ arg min_{x ∈ {x_j(m) | j ∈ N_i}_{m=0}^{k}} f(x).   (5.19b)
Step 3:  Update the particle location {x_i(k+1)}_{i=1}^{n_P} ⊂ X.
Step 4:  If k = n_G, stop. Else, go to Step 5.
Step 5:  Replace k by k + 1, and go to Step 2.

We will now discuss the different implementations of the Model PSO Algorithm 5.4.1 in GenOpt.

c) Particle Update Equation

(i) Version with Inertia Weight

Eberhart and Shi [SE98, SE99] introduced an inertia weight w(k) which improves the performance of the original PSO algorithm. In the version with inertia weight, the particle update equation is, for all i ∈ {1, . . . , n_P}, for k ∈ ℕ and x_i(k) ∈ ℝ^{n_c}, with v_i(0) = 0,

   v̂_i(k+1) = w(k) v_i(k) + c₁ ρ₁(k) (p_{l,i}(k) − x_i(k))
                          + c₂ ρ₂(k) (p_{g,i}(k) − x_i(k)),       (5.20a)
   v_i^j(k+1) = sign(v̂_i^j(k+1)) min{|v̂_i^j(k+1)|, v_max^j},
                          j ∈ {1, . . . , n_c},                    (5.20b)
   x_i(k+1) = x_i(k) + v_i(k+1),                                   (5.20c)

where

   v_max^j ≜ λ (u^j − l^j),      (5.20d)

with λ ∈ ℝ_+, for all j ∈ {1, . . . , n_c}, and l, u ∈ ℝ^{n_c} are the lower and upper bounds of the independent variable. A common value is λ = 1/2. In GenOpt, if λ ≤ 0, then no velocity clamping is used, and hence v_i^j(k+1) = v̂_i^j(k+1), for all k ∈ ℕ, all i ∈ {1, . . . , n_P} and all j ∈ {1, . . . , n_c}.

We compute the inertia weight as

   w(k) = w₀ − (k/K) (w₀ − w₁),      (5.20e)

where w₀ ∈ ℝ is the initial inertia weight, w₁ ∈ ℝ is the inertia weight for the last generation, with 0 ≤ w₁ ≤ w₀, and K ∈ ℕ is the maximum number of generations. w₀ = 1.2 and w₁ = 0 can be considered as good choices [PV02b].
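The inertia weight (5.20e) and the velocity clamping (5.20b), (5.20d) can be computed as in the following sketch (names ours):

```python
import math

def inertia_weight(k, k_max, w0=1.2, w1=0.0):
    """Linearly decreasing inertia weight (5.20e): w(k) = w0 - (k/K)(w0 - w1)."""
    return w0 - (float(k) / k_max) * (w0 - w1)

def clamp_velocity(v, lam, lower, upper):
    """Velocity clamping (5.20b) with v_max^j = lam * (u^j - l^j) from (5.20d).
    If lam <= 0, the velocity is returned unchanged (no clamping)."""
    if lam <= 0.0:
        return list(v)
    return [math.copysign(min(abs(vj), lam * (uj - lj)), vj)
            for vj, lj, uj in zip(v, lower, upper)]
```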

(ii) Version with Constriction Coefficient

Clerc and Kennedy [CK02] introduced a version with a constriction coefficient that reduces the velocity. In their Type 1 implementation, the particle update equation is, for all i ∈ {1, . . . , n_P}, for k ∈ ℕ and x_i(k) ∈ ℝ^{n_c}, with v_i(0) = 0,

   v̂_i(k+1) = χ(κ, φ) (v_i(k) + c₁ ρ₁(k) (p_{l,i}(k) − x_i(k))
                               + c₂ ρ₂(k) (p_{g,i}(k) − x_i(k))),   (5.21a)
   v_i^j(k+1) = sign(v̂_i^j(k+1)) min{|v̂_i^j(k+1)|, v_max^j},
                               j ∈ {1, . . . , n_c},                 (5.21b)
   x_i(k+1) = x_i(k) + v_i(k+1),                                     (5.21c)

where

   v_max^j ≜ λ (u^j − l^j)      (5.21d)

is as in (5.20d). In (5.21a), χ(κ, φ) is called the constriction coefficient, defined as

   χ(κ, φ) ≜ { 2κ / |2 − φ − √(φ² − 4φ)|,  if φ > 4,
             { √κ,                          otherwise,      (5.21e)

where φ ≜ c₁ + c₂ and κ ∈ (0, 1] control how fast the population collapses into a point. If κ = 1, the space is thoroughly searched, which yields slower convergence.
Equation (5.21) can be used with or without velocity clamping (5.21b). If velocity clamping (5.21b) is used, Clerc and Kennedy use φ = 4.1, otherwise they use φ = 4. In either case, they set c₁ = c₂ = φ/2 and a population size of n_P = 20.
Carlisle and Dozier [CD01] recommend the settings n_P = 30, no velocity clamping, κ = 1, c₁ = 2.8 and c₂ = 1.3.
Kennedy and Eberhart [KES01] report that using velocity clamping (5.21b) and a constriction coefficient shows faster convergence for some test problems compared to using an inertia weight, but the algorithm tends to get stuck in local minima.
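The constriction coefficient (5.21e) is a one-liner; the sketch below (names ours) reproduces the commonly cited value χ ≈ 0.73 for κ = 1 and φ = 4.1:

```python
import math

def constriction(kappa, phi):
    """Constriction coefficient chi(kappa, phi) of (5.21e), with phi = c1 + c2:
    2*kappa / |2 - phi - sqrt(phi^2 - 4*phi)| when phi > 4, else sqrt(kappa)."""
    if phi > 4.0:
        return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    return math.sqrt(kappa)
```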

Figure 5.2: The sigmoid function s(v) = 1/(1 + e^{−v}).

5.4.2 PSO for Discrete Variables

Kennedy and Eberhart [KE97] introduced a binary version of the PSO algorithm to solve problem P_d defined in (4.4).
The binary PSO algorithm encodes the discrete independent variables in a string of binary numbers and then operates on this binary string. For some i ∈ {1, . . . , n_d}, let x_i ∈ ℕ be the component of a discrete independent variable, and let ψ_i ∈ {0, 1}^{m_i} be its binary representation (with m_i ∈ ℕ_+ bits), obtained using Gray encoding [PFTV93], and let ψ_{l,i}(k) and ψ_{g,i}(k) be the binary representations of p_{l,i}(k) and p_{g,i}(k), respectively, where p_{l,i}(k) and p_{g,i}(k) are defined in (5.19).
Then, for i ∈ {1, . . . , n_d} and j ∈ {1, . . . , m_i}, we initialize randomly ψ_i^j(0) ∈ {0, 1}, and compute, for k ∈ ℕ,

   v̂_i^j(k+1) = v_i^j(k) + c₁ ρ₁(k) (ψ_{l,i}^j(k) − ψ_i^j(k))
                         + c₂ ρ₂(k) (ψ_{g,i}^j(k) − ψ_i^j(k)),     (5.22a)
   v_i^j(k+1) = sign(v̂_i^j(k+1)) min{|v̂_i^j(k+1)|, v_max},        (5.22b)
   ψ_i^j(k+1) = { 0, if ρ_{i,j}(k) ≥ s(v_i^j(k+1)),
                { 1, otherwise,                                     (5.22c)

where

   s(v) ≜ 1 / (1 + e^{−v})      (5.22d)

is the sigmoid function shown in Fig. 5.2 and ρ_{i,j}(k) ∼ U(0, 1), for all i ∈ {1, . . . , n_d} and for all j ∈ {1, . . . , m_i}.
In (5.22b), v_max ∈ ℝ_+ is often set to 4 to prevent a saturation of the sigmoid function, and c₁, c₂ ∈ ℝ_+ are often such that c₁ + c₂ = 4 (see [KES01]).
Notice that s(v) → 0.5 as v → 0, and consequently the probability of flipping a bit goes to 0.5. Thus, in the binary PSO, a small v_max causes a large exploration, whereas in the continuous PSO, a small v_max causes a small exploration of the search space.
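A sketch of one bit update (5.22), together with the Gray encoding the text mentions, looks as follows (names ours, not GenOpt's implementation):

```python
import math
import random

def gray_encode(n):
    """Gray code of a natural number n; adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def sigmoid(v):
    """Sigmoid function (5.22d): s(v) = 1 / (1 + exp(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

def update_bit(v, psi, psi_l, psi_g, c1, c2, v_max, rng=random):
    """One bit update of the binary PSO (5.22): velocity pull toward the local
    and global best bits (5.22a), clamping to [-v_max, v_max] (5.22b), then a
    random draw against s(v) for the new bit value (5.22c)."""
    v = v + c1 * rng.random() * (psi_l - psi) \
          + c2 * rng.random() * (psi_g - psi)
    v = math.copysign(min(abs(v), v_max), v)
    bit = 0 if rng.random() >= sigmoid(v) else 1
    return v, bit
```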


Any of the above neighborhood topologies can be used, and Model Algorithm 5.4.1 applies if we replace the constraint set X by the user-specified set X_d ⊂ ℤ^{n_d}.

5.4.3 PSO for Continuous and Discrete Variables

For problem P_cd defined in (4.6), we treat the continuous independent variables as in (5.20) or (5.21), and the discrete independent variables as in (5.22). Any of the above neighborhood topologies can be used, and Model Algorithm 5.4.1 applies if we define the constraint set X as in (4.5).

5.4.4 PSO on a Mesh

We now present a modification to the previously discussed PSO algorithms. For evaluating the cost function, we will modify the continuous independent variables such that they belong to a fixed mesh in ℝ^{n_c}. Since the iterates of PSO algorithms typically cluster during the last iterations, this reduces in many cases the number of simulation calls during the optimization. The modification is done by replacing the cost function f: ℝ^{n_c} × ℤ^{n_d} → ℝ in Model Algorithm 5.4.1 as follows: Let x₀ ≜ (x_{c,0}, x_{d,0}) ∈ ℝ^{n_c} × ℤ^{n_d} denote the initial iterate, let X_c be the feasible set for the continuous independent variables defined in (4.5b), let r, s ∈ ℕ, with r > 1, be user-specified parameters, let

   Δ ≜ 1/r^s,      (5.23)

and let the mesh be defined as

   M(x_{c,0}, Δ, s) ≜ { x_{c,0} + Δ Σ_{i=1}^{n_c} m^i s^i e_i | m^i ∈ ℤ },      (5.24)

where s ∈ ℝ^{n_c} is equal to the value defined by the variable Step in GenOpt's command file (see page 86). Then, we replace f(·, ·) by f̂: ℝ^{n_c} × ℤ^{n_d} × ℝ^{n_c} × ℝ × ℝ^{n_c} → ℝ, defined by

   f̂(x_c, x_d; x_{c,0}, Δ, s) ≜ f(γ(x_c), x_d),      (5.25)

where γ: ℝ^{n_c} → ℝ^{n_c} is the projection of the continuous independent variable to the closest feasible mesh point, i.e., γ(x_c) ∈ M(x_{c,0}, Δ, s) ∩ X_c. Thus, for evaluating the cost function, the continuous independent variables are replaced by the closest feasible mesh point, and the discrete independent variables remain unchanged.
Good numerical results have been obtained by selecting s ∈ ℝ^{n_c} and r, s ∈ ℕ such that about 50 to 100 mesh points are located along each coordinate direction.
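The projection γ(·) used in (5.25) can be sketched as follows; for simplicity the sketch omits the intersection with the feasible set X_c and only snaps each coordinate to the nearest mesh point (names ours):

```python
def project_to_mesh(x_c, x_c0, r, s, step):
    """Project continuous variables onto the closest point of the mesh (5.24):
    each coordinate becomes x_c0^j + delta * m * step^j with integer m, where
    delta = 1 / r**s as in (5.23)."""
    delta = 1.0 / r ** s
    return [x0j + delta * round((xj - x0j) / (delta * sj)) * sj
            for xj, x0j, sj in zip(x_c, x_c0, step)]
```

For instance, with r = 2 and s = 1 the mesh spacing along a coordinate with Step 1.0 is 0.5.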

5.4.5 Population Size and Number of Generations

Parsopoulos and Vrahatis [PV02b] use for x ∈ ℝ^{n_c} a population size of about 5 n up to n = 15. For n ≈ 10 . . . 20, they use n_P ≈ 10 n. They set

the number of generations to n_G = 1000 up to n = 20, and to n_G = 2000 for n = 30.
Van den Bergh and Engelbrecht [vdBE01] recommend using more than 20 particles and 2000 to 5000 generations.
Kennedy and Eberhart [KES01] use, for test cases with the lbest neighborhood topology of size l = 2 and n = 2 and n = 30, a population size of n_P = 20 . . . 30. They report that 10 . . . 50 particles usually work well. As a rule of thumb, they recommend for the lbest neighborhood to select the neighborhood size such that each neighborhood consists of 10 . . . 20% of the population.

5.4.6 Keywords

For the Particle Swarm algorithm, the command file (see page 86) can contain continuous and discrete independent variables.
The different specifications for the Algorithm section of the GenOpt command file are as follows:

PSO algorithm with inertia weight:

Algorithm {
   Main                      = PSOIW;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer; // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;  // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;  // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;  // 0 < MaxVelocityDiscrete
   InitialInertiaWeight      = Double;  // 0 < InitialInertiaWeight
   FinalInertiaWeight        = Double;  // 0 < FinalInertiaWeight
}

PSO algorithm with constriction coefficient:

Algorithm {
   Main                      = PSOCC;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer; // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;  // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;  // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;  // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;  // 0 < ConstrictionGain <= 1
}


PSO algorithm with constriction coefficient and continuous independent variables restricted to a mesh:

Algorithm {
   Main                      = PSOCCMesh;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer; // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;  // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;  // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;  // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;  // 0 < ConstrictionGain <= 1
   MeshSizeDivider           = Integer; // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer; // 0 <= InitialMeshSizeExponent
}

The entries that are common to all implementations are defined as follows:

Main  The name of the main algorithm. The implementation PSOIW uses the location update equation (5.20) for the continuous independent variables, and the implementation PSOCC uses (5.21) for the continuous independent variables. All implementations use (5.22) for the discrete independent variables.
NeighborhoodTopology  This entry defines what neighborhood topology is being used.
NeighborhoodSize  For the lbest neighborhood topology, this entry is equal to l in (5.18a). For the gbest and the von Neumann neighborhood topology, the value of NeighborhoodSize is ignored.
NumberOfParticle  This is equal to the variable n_P ∈ ℕ.
NumberOfGeneration  This is equal to the variable n_G ∈ ℕ in Algorithm 5.4.1.
Seed  This value is used to initialize the random number generator.
CognitiveAcceleration  This is equal to the variable c₁ ∈ ℝ_+.
SocialAcceleration  This is equal to the variable c₂ ∈ ℝ_+.
MaxVelocityGainContinuous  This is equal to the variable λ ∈ ℝ_+ in (5.20d) and in (5.21d). If MaxVelocityGainContinuous is set to zero or to a negative value, then no velocity clamping is used, and hence v_i^j(k+1) = v̂_i^j(k+1), for all k ∈ ℕ, all i ∈ {1, . . . , n_P} and all j ∈ {1, . . . , n_c}.
MaxVelocityDiscrete  This is equal to the variable v_max ∈ ℝ_+ in (5.22b).


For the PSOIW implementation, the following additional entries must be specified:

InitialInertiaWeight  This is equal to w₀ ∈ ℝ_+ in (5.20e).
FinalInertiaWeight  This is equal to w₁ ∈ ℝ_+ in (5.20e).

For the PSOCC implementation, the following additional entry must be specified:

ConstrictionGain  This is equal to κ ∈ (0, 1] in (5.21e).

Notice that for discrete independent variables, the entries InitialInertiaWeight, FinalInertiaWeight, and ConstrictionGain are ignored.

For the PSOCCMesh implementation, the following additional entries must be specified:

MeshSizeDivider  This is equal to r ∈ ℕ, with r > 1, used in (5.23).
InitialMeshSizeExponent  This is equal to s ∈ ℕ used in (5.23).

5.5 Hybrid Generalized Pattern Search Algorithm with Particle Swarm Optimization Algorithm

This hybrid global optimization algorithm can be used to solve problem P_c defined in (4.2) and problem P_cd defined in (4.6). Problem P_cg defined in (4.3) and problem P_cdg defined in (4.7) can be solved if the constraint functions g(·) are implemented as described in Section 8.2.
This hybrid global optimization algorithm starts by doing a Particle Swarm Optimization (PSO) on a mesh, as described in Section 5.4.4, for a user-specified number of generations n_G ∈ ℕ. Afterwards, it initializes the Hooke-Jeeves Generalized Pattern Search (GPS) algorithm, described in Section 5.2.2, using the continuous independent variables of the particle with the lowest cost function value. If the optimization problem has continuous and discrete independent variables, then, for the GPS algorithm, the discrete independent variables will be fixed at the values of the particle with the lowest cost function value.
We will now explain the hybrid algorithm, first for the case where all independent variables are continuous, and then for the case with mixed continuous and discrete independent variables. Throughout this section, we will denote the dimension of the continuous independent variables by n_c ∈ ℕ and the dimension of the discrete independent variables by n_d ∈ ℕ.

5.5.1 Hybrid Algorithm for Continuous Variables

We will now discuss the hybrid algorithm to solve problem P_c defined in (4.2). However, we require the constraint set X ⊂ ℝ^{n_c} defined in (4.1) to have finite lower and upper bounds l^i, u^i ∈ ℝ, for all i ∈ {1, . . . , n_c}.
First, we run the PSO algorithm 5.4.1, with user-specified initial iterate x₀ ∈ X, for a user-specified number of generations n_G ∈ ℕ on the mesh defined in (5.24). Afterwards, we run the GPS algorithm 5.1.8, where the initial iterate x₀ is equal to the location of the particle with the lowest cost function value, i.e.,

   x₀ ≜ p* ≜ arg min_{x ∈ {x_j(k) | j ∈ {1, . . . , n_P}, k ∈ {1, . . . , n_G}}} f(x),      (5.26)

where n_P ∈ ℕ denotes the number of particles and x_j(k), j ∈ {1, . . . , n_P}, k ∈ {1, . . . , n_G}, are as in Algorithm 5.4.1.
Since the PSO algorithm terminates after a finite number of iterations, all convergence results of the GPS algorithm hold. In particular, if the cost function is once continuously differentiable, then the hybrid algorithm constructs accumulation points that are feasible stationary points of problem (4.2) (see Theorem 5.1.13).


Since the PSO algorithm is a global optimization algorithm, the hybrid algorithm is, compared to the Hooke-Jeeves algorithm, less likely to be attracted
by a local minimum that is not global. Thus, the hybrid algorithm combines
the global features of the PSO algorithm with the provable convergence properties of the GPS algorithm.
If the cost function is discontinuous, then the hybrid algorithm is, compared
to the Hooke-Jeeves algorithm, less likely to jam at a discontinuity far from a
solution.
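Schematically, the hybrid scheme chains the two stages: the best PSO particle p* from (5.26) becomes the initial iterate of the pattern search. The refinement stage sketched below is a crude coordinate-search stand-in for the Hooke-Jeeves GPS algorithm, with names of our choosing; it is not GenOpt's implementation:

```python
def refine(f, x, step, r=2, n_reductions=4):
    """Crude pattern-search stand-in: try +/- step along each coordinate,
    accept any improvement, and divide the step size by r once no move
    along any coordinate helps, for a fixed number of step reductions."""
    x, fx = list(x), f(x)
    for _ in range(n_reductions):
        improved = True
        while improved:
            improved = False
            for j in range(len(x)):
                for d in (step, -step):
                    y = list(x)
                    y[j] += d
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
        step /= float(r)
    return x, fx
```

In the full hybrid, x would be the best particle p* returned by the PSO stage, and any discrete variables would stay fixed at their values in p*.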

5.5.2 Hybrid Algorithm for Continuous and Discrete Variables

For problem P_cd defined in (4.6) with continuous and discrete independent variables, we run the PSO algorithm 5.4.1, with user-specified initial iterate x₀ ∈ X ≜ X_c × X_d ⊂ ℝ^{n_c} × ℤ^{n_d}, for a user-specified number of generations n_G ∈ ℕ, where the continuous independent variables are restricted to the mesh defined in (5.24). We require the constraint set X_c ⊂ ℝ^{n_c} defined in (4.5b) to have finite lower and upper bounds l^i, u^i ∈ ℝ, for all i ∈ {1, . . . , n_c}.
Afterwards, we run the GPS algorithm 5.1.8, where the initial iterate x₀ ∈ X_c is equal to p*_c ∈ X_c, which we define as the continuous independent variables of the particle with the lowest cost function value, i.e., p* ≜ (p*_c, p*_d) ∈ X_c × X_d, where p* is defined in (5.26). In the GPS algorithm, we fix the discrete components at p*_d ∈ X_d for all iterations. Thus, we use the GPS algorithm to refine the continuous components of the independent variables, and we fix the discrete components of the independent variables.

5.5.3 Keywords

For this algorithm, the command file (see page 86) can contain continuous and discrete independent variables. It must contain at least one continuous parameter.
The specification of the Algorithm section of the GenOpt command file is as follows. Note that the first entries are as for the PSO algorithm on page 40 and the last entries are as for the GPS implementation of the Hooke-Jeeves algorithm on page 25.
Algorithm {
   Main                      = GPSPSOCCHJ;
   NeighborhoodTopology      = gbest | lbest | vonNeumann;
   NeighborhoodSize          = Integer; // 0 < NeighborhoodSize
   NumberOfParticle          = Integer;
   NumberOfGeneration        = Integer;
   Seed                      = Integer;
   CognitiveAcceleration     = Double;  // 0 < CognitiveAcceleration
   SocialAcceleration        = Double;  // 0 < SocialAcceleration
   MaxVelocityGainContinuous = Double;
   MaxVelocityDiscrete       = Double;  // 0 < MaxVelocityDiscrete
   ConstrictionGain          = Double;  // 0 < ConstrictionGain <= 1
   MeshSizeDivider           = Integer; // 1 < MeshSizeDivider
   InitialMeshSizeExponent   = Integer; // 0 <= InitialMeshSizeExponent
   MeshSizeExponentIncrement = Integer; // 0 < MeshSizeExponentIncrement
   NumberOfStepReduction     = Integer; // 0 < NumberOfStepReduction
}

The entries are defined as follows:

Main The name of the main algorithm.
NeighborhoodTopology This entry defines which neighborhood topology is used.
NeighborhoodSize This entry is equal to l in (5.18). For the gbest neighborhood topology, the value of NeighborhoodSize is ignored.
NumberOfParticle This is equal to the variable nP ∈ N.
NumberOfGeneration This is equal to the variable nG ∈ N in Algorithm 5.4.1.
Seed This value is used to initialize the random number generator.
CognitiveAcceleration This is equal to the variable c1 ∈ R+ used by the PSO algorithm.
SocialAcceleration This is equal to the variable c2 ∈ R+ used by the PSO algorithm.
MaxVelocityGainContinuous This is equal to the variable λ ∈ R+ in (5.20d) and in (5.21d). If MaxVelocityGainContinuous is set to zero or to a negative value, then no velocity clamping is used, and hence vij(k + 1) = v̂ij(k + 1), for all k ∈ N, all i ∈ {1, . . . , nP} and all j ∈ {1, . . . , nc}.
MaxVelocityDiscrete This is equal to the variable vmax ∈ R+ in (5.22b).
ConstrictionGain This is equal to κ ∈ (0, 1] in (5.21e).
MeshSizeDivider This is equal to r ∈ N, with r > 1, used by the PSO algorithm in (5.23) and used by the GPS algorithm to compute Δk ≜ 1/r^(sk) (see equation (5.6a)). A common value is r = 2.
InitialMeshSizeExponent This is equal to s0 ∈ N used by the PSO algorithm in (5.23) and used by the GPS algorithm in (5.6a). A common value is s0 = 0.
MeshSizeExponentIncrement The value for tk ∈ N (fixed for all k ∈ N) used by the GPS algorithm in (5.6a). A common value is tk = 1.
NumberOfStepReduction The maximum number of step reductions before the GPS algorithm stops. Thus, if we use the notation m ≜ NumberOfStepReduction, then for the last iterations we have Δk = 1/r^(s0 + m tk). A common value is m = 4.
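To illustrate the interplay of these settings (an illustration, not part of GenOpt), the following sketch computes the mesh-size sequence Δk = 1/r^(s0 + k·tk) that results from the common values r = 2, s0 = 0, tk = 1 and m = 4:

```python
# Illustrative sketch: GPS mesh sizes Delta_k = 1 / r**(s0 + k*t_k) for
# k = 0, ..., m step reductions, using the common values named in the text.
r, s0, t_k, m = 2, 0, 1, 4

mesh_sizes = [1.0 / r ** (s0 + k * t_k) for k in range(m + 1)]
print(mesh_sizes)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```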


5.6 Simplex Algorithm of Nelder and Mead with the Extension of O'Neill

The Simplex algorithm of Nelder and Mead is a derivative-free optimization algorithm. It can be used to seek a solution of problem Pc defined in (4.2) and problem Pcg defined in (4.3), with constraints on the dependent parameters implemented as described in Section 8. The number of independent parameters n must be larger than 1.
The Simplex algorithm constructs an n-dimensional simplex in the space that is spanned by the independent parameters. At each of the (n + 1) vertices of the simplex, the value of the cost function is evaluated. In each iteration step, the point with the highest value of the cost function is replaced by another point. The algorithm consists of three main operations: (a) point reflection, (b) contraction of the simplex, and (c) expansion of the simplex.
Despite the well-known fact that the Simplex algorithm can fail to converge to a stationary point [Kel99b, Tor89, Kel99a, Wri96, McK98, LRWW98], both in practice and in theory, particularly if the dimension of the independent variables is large, say bigger than 10 [Tor89], it is an often-used algorithm. Several improvements to the Simplex algorithm, or algorithms that were motivated by the Simplex algorithm, exist; see for example [Kel99b, Tor89, Kel99a, Tse99]. However, in GenOpt, we use the original Nelder-Mead algorithm [NM65] with the extension of O'Neill [ON71]. Optionally, the implemented algorithm allows using a modified stopping criterion.
We will now explain the different steps of the Simplex algorithm.

5.6.1 Main Operations


The notation defined below is used in describing the main operations. The operations are illustrated in Fig. 5.3, where for simplicity a two-dimensional simplex is shown.
We now introduce some notation and definitions.
1. We will denote by I ≜ {1, . . . , n + 1} the set of all vertex indices.
2. We will denote by l ∈ I the smallest index in I such that

   l = arg min_{i∈I} f(xi).   (5.27a)

Hence, f(xl) ≤ f(xi), for all i ∈ I.
3. We will denote by h ∈ I the smallest index in I such that

   h = arg max_{i∈I} f(xi).   (5.27b)

Hence, f(xh) ≥ f(xi), for all i ∈ I.



[Figure omitted: five panels showing simplex operations in the (x1, x2) plane, with points xh, xc, x*, x** and xl: (a) Reflection. (b) Expansion. (c) Partial inside contraction. (d) Partial outside contraction. (e) Total contraction.]

Figure 5.3 : Simplex operations.


4. Let xi, for i ∈ I, denote the simplex vertices, and let h be as in (5.27b). We will denote by xc ∈ Rn the centroid of the simplex, defined as

   xc ≜ (1/n) Σ_{i=1, i≠h}^{n+1} xi.   (5.27c)

Next, we introduce the three main operations.

Reflection Let h ∈ I be as in (5.27b) and let xc be as in (5.27c). The reflection of xh ∈ Rn to a point denoted as x* ∈ Rn is defined as

   x* ≜ (1 + α) xc − α xh,   (5.28a)

where α ∈ R, with α > 0, is called the reflection coefficient.

Expansion of the simplex Let x* ∈ Rn be as in (5.28a) and xc be as in (5.27c). The expansion of x* ∈ Rn to a point denoted as x** ∈ Rn is defined as

   x** ≜ γ x* + (1 − γ) xc,   (5.28b)

where γ ∈ R, with γ > 1, is called the expansion coefficient.

Contraction of the simplex Let h ∈ I be as in (5.27b) and xc be as in (5.27c). The contraction of xh ∈ Rn to a point denoted as x** ∈ Rn is defined as

   x** ≜ β xh + (1 − β) xc,   (5.28c)

where β ∈ R, with 0 < β < 1, is called the contraction coefficient.
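The three main operations can be sketched in code as follows. This is an illustration only; the coefficient defaults α = 1, γ = 2 and β = 1/2 are conventional choices, not values prescribed by this section.

```python
def centroid(vertices, h):
    # Centroid x_c of all vertices except the worst vertex (index h), eq. (5.27c).
    n = len(vertices) - 1
    dim = len(vertices[0])
    return [sum(v[j] for i, v in enumerate(vertices) if i != h) / n
            for j in range(dim)]

def reflect(x_h, x_c, alpha=1.0):
    # x* = (1 + alpha) x_c - alpha x_h, eq. (5.28a), with alpha > 0.
    return [(1.0 + alpha) * c - alpha * h for h, c in zip(x_h, x_c)]

def expand(x_star, x_c, gamma=2.0):
    # x** = gamma x* + (1 - gamma) x_c, eq. (5.28b), with gamma > 1.
    return [gamma * s + (1.0 - gamma) * c for s, c in zip(x_star, x_c)]

def contract(x_h, x_c, beta=0.5):
    # x** = beta x_h + (1 - beta) x_c, eq. (5.28c), with 0 < beta < 1.
    return [beta * h + (1.0 - beta) * c for h, c in zip(x_h, x_c)]
```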

5.6.2 Basic Algorithm

In this section, we describe the basic Nelder and Mead algorithm [NM65]. The extension of O'Neill and the modified restart criterion are discussed later. The algorithm is as follows:
1. Initialization: Given an initial iterate x1 ∈ Rn, a scalar c, with c = 1 in the initialization, a vector s ∈ Rn with user-specified step sizes for each independent parameter, and the set of unit coordinate vectors {ei}, i ∈ {1, . . . , n}, construct an initial simplex with vertices, for i ∈ {1, . . . , n},

   xi+1 = x1 + c si ei.   (5.29)

Compute f(xi), for i ∈ I.
2. Reflection: Reflect the worst point, that is, compute x* as in (5.28a).
3. Test whether we got the best point: If f(x*) < f(xl), expand the simplex using (5.28b) since further improvement in this direction is likely. If f(x**) < f(xl), then xh is replaced by x**, otherwise xh is replaced by x*, and the procedure is restarted from 2.

[Figure omitted: contour plot showing the sequence of iterates, numbered up to 17, in the (x1, x2) plane.]

Figure 5.4 : Sequence of iterates generated by the Simplex algorithm.
4. If it turned out under 3 that f(x*) ≥ f(xl), then we check whether the new point x* is the worst of all points: If f(x*) > f(xi), for all i ∈ I, with i ≠ h, we contract the simplex (see 5); otherwise we replace xh by x* and go to 2.
5. For the contraction, we first check whether we should try a partial outside contraction or a partial inside contraction: If f(x*) ≥ f(xh), then we try a partial inside contraction. To do so, we leave our indices as is and apply (5.28c). Otherwise, we try a partial outside contraction. This is done by replacing xh by x* and applying (5.28c). After the partial inside or the partial outside contraction, we continue at 6.
6. If f(x**) ≥ f(xh)³, we do a total contraction of the simplex by replacing xi → (xi + xl)/2, for all i ∈ I. Otherwise, we replace xh by x**. In both cases, we continue from 2.

³ Nelder and Mead [NM65] use the strict inequality f(x**) > f(xh). However, if the user writes the cost function value with only a few representative digits to a text file, then the function looks like a step function if slow convergence is achieved. In such cases, f(x**) might sometimes be equal to f(xh). Experimentally, it has been shown advantageous to perform a total contraction rather than continuing with a reflection. Therefore, the strict inequality has been changed to a weak inequality.


Fig. 5.4 shows a contour plot of a cost function f : R² → R with a sequence of iterates generated by the Simplex algorithm. The sequence starts with constructing an initial simplex x1, x2, x3. x1 has the highest function value and is therefore reflected, which generates x4. x4 is the best point in the set {x1, x2, x3, x4}. Thus, it is further expanded, which generates x5. x2, x3 and x5 now span the new simplex. In this simplex, x3 is the vertex with the highest function value and hence goes over to x6 and further to x7. The process of reflection and expansion is continued twice more, which leads to the simplex spanned by x7, x9 and x11. x7 goes over to x12, which turns out to be the worst point. Hence, we do a partial inside contraction, which generates x13. x13 is better than x7, so we use the simplex spanned by x9, x11 and x13 for the next reflection. The last steps of the optimization are not shown for clarity.

5.6.3 Stopping Criteria


The first criterion is a test of the variance of the function values at the vertices of the simplex: if

   (1/n) Σ_{i=1}^{n+1} ( f(xi) − (1/(n+1)) Σ_{i=1}^{n+1} f(xi) )² < ε²,   (5.30)

then the original implementation of the algorithm stops. Nelder and Mead have chosen this stopping criterion based on the statistical problem of finding the minimum of a sum-of-squares surface. In this problem, the curvature near the minimum yields information about the unknown parameters. A slight curvature indicates a high sampling variance of the estimate. Nelder and Mead argue that in such cases, there is no reason for finding the minimum point with high accuracy. However, if the curvature is marked, then the sampling variance is low and a higher accuracy in determining the optimal parameter set is desirable.
Note that the stopping criterion (5.30) requires the variance of the function values at the simplex vertices to be smaller than a prescribed limit. However, if f(·) has large discontinuities, which has been observed in building energy optimization problems [WW03], then the test (5.30) may never be satisfied. For this reason, among others, we do not recommend using this algorithm if the cost function has large discontinuities.
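The variance test (5.30) can be sketched as follows (an illustration; the function name is not GenOpt's):

```python
def variance_test(f_values, eps):
    # Stopping test (5.30): the sample variance (with 1/n normalization) of the
    # cost function values at the n+1 simplex vertices must fall below eps**2.
    n = len(f_values) - 1
    mean = sum(f_values) / (n + 1)
    return sum((f - mean) ** 2 for f in f_values) / n < eps ** 2
```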


5.6.4 O'Neill's Modification

O'Neill modified the termination criterion by adding a further condition [ON71]. He checks whether any orthogonal step, each starting from the best vertex of the current simplex, leads to a further improvement of the cost function. He therefore sets c = 0.001 and tests if

   f(xl) < f(x)   (5.31a)

for all x defined by

   x ≜ xl + c si ei,   i ∈ {1, . . . , n},   (5.31b)

where xl denotes the best known point, and si and ei are as in (5.29).
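This optimality test can be sketched as follows (an illustration; the names are not GenOpt's, and only the probe direction written in (5.31b) is shown):

```python
def oneill_check(f, x_l, s, c=0.001):
    # Test (5.31): from the best vertex x_l, take the step c*s_i along each
    # coordinate direction e_i and require f(x_l) < f(x) for every probe x.
    for i in range(len(x_l)):
        x = list(x_l)
        x[i] += c * s[i]          # x = x_l + c s_i e_i, eq. (5.31b)
        if not f(x_l) < f(x):     # (5.31a) violated: an improvement was found
            return False, x       # the search restarts from this point
    return True, list(x_l)
```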

5.6.5 Modification of Stopping Criteria

In GenOpt, (5.31) has been modified. It has been observed that users sometimes write the cost function value with only a few representative digits to the output file. In such cases, (5.31a) is not satisfied if the write statement in the simulation program truncates digits so that the difference f(xl) − f(x), where f(·) denotes the value that is read from the simulation output file, is zero. To overcome this numerical problem, (5.31b) has been modified to

   x = xl + exp(j) c si ei,   i ∈ {1, . . . , n},   (5.31c)

where for each direction i ∈ {1, . . . , n}, the counter j ∈ N is set to zero for the first trial and increased by one as long as f(xl) = f(x).
If (5.31a) fails for any direction, then x computed by (5.31c) is the new starting point and a new simplex with side lengths (c si), i ∈ {1, . . . , n}, is constructed. The point x that failed (5.31a) is then used as the initial point x1 in (5.29).
Numerical experiments showed that during slow convergence the algorithm
was restarted too frequently.
Fig. 5.5(a) shows a sequence of iterates where the algorithm was restarted
too frequently. The iterates in the figure are part of the iteration sequence near
the minimum of the test function shown in Fig. 5.5(b). The algorithm gets close
to the minimum with appropriately large steps. The last of these steps can be
seen at the right of the figure. After this step, the stopping criterion (5.30)
was satisfied which led to a restart check, followed by a new construction of
the simplex. From there on, the convergence was very slow due to the small
step size. After each step, the stopping criterion was satisfied again which led
to a new test of the optimality condition (5.31a), followed by a reconstruction
of the simplex. This check is very costly in terms of function evaluations and,
furthermore, the restart with a new simplex does not allow increasing the step
size, though we are heading locally in the right direction.


[Figure omitted: (a) sequence of iterates in the neighborhood of the minimum; (b) 2-dimensional test function 2D1.]

Figure 5.5 : Nelder-Mead trajectory.


O'Neill's modification prevents both excessive checking of the optimality condition and excessive reconstruction of the initial simplex. This is done by checking for convergence only after a predetermined number of steps (e.g., after five iterations). However, the performance of the algorithm depends strongly on this number. As an extreme case, a few test runs were done where convergence was checked after each step, as in Fig. 5.5(a). It turned out that in some cases no convergence was reached within a moderate number of function evaluations if ε in (5.30) is chosen too large, e.g., ε = 10⁻³ (see Tab. 5.1).
To make the algorithm more robust, it is modified based on the following arguments:
1. If the simplex is moving in the same direction in the last two steps, then the search is not interrupted by checking for optimality, since we are making steady progress in the moving direction.
2. If we do not have a partial inside or total contraction immediately behind us, then it is likely that the minimum lies in the direction currently being explored. Hence, we do not interrupt the search with a restart.
These considerations have led to two criteria that both have to be satisfied to permit the convergence check according to (5.30), which might be followed by a check for optimality.
First, it is checked whether we have done a partial inside contraction or a total contraction. If so, we check whether the direction in which the simplex is moving has changed by an angle of at least π/2 in the latest two steps. To do so, we


introduce the center of the simplex, defined by

   xm ≜ (1/(n+1)) Σ_{i=1}^{n+1} xi,   (5.32)

where xi, i ∈ {1, . . . , n + 1}, are the simplex vertices. We also introduce the normalized direction of the simplex between two steps,

   dk ≜ (xm,k − xm,k−1) / ‖xm,k − xm,k−1‖,   (5.33)

where k ∈ N is the current iteration number.
We determine how much the simplex has changed its direction dk between two steps by computing the inner product ⟨dk−1, dk⟩. The inner product is equal to the cosine of the angle between dk−1 and dk. If

   cos ϕk = ⟨dk−1, dk⟩ ≤ 0,   (5.34)

then the moving direction of the simplex has changed by at least π/2. Hence, the simplex has changed the exploration direction. Therefore, a minimum might be achieved, and we need to test the variance of the vertices (5.30), possibly followed by a test of (5.31a).
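The direction-change test (5.32)-(5.34) can be sketched as follows (an illustration only):

```python
import math

def simplex_center(vertices):
    # Center x_m of the simplex, eq. (5.32): mean over all n+1 vertices.
    n1 = len(vertices)
    dim = len(vertices[0])
    return [sum(v[j] for v in vertices) / n1 for j in range(dim)]

def move_direction(xm_curr, xm_prev):
    # Normalized moving direction d_k, eq. (5.33).
    diff = [a - b for a, b in zip(xm_curr, xm_prev)]
    norm = math.sqrt(sum(d * d for d in diff))
    return [d / norm for d in diff]

def direction_changed(d_prev, d_curr):
    # Eq. (5.34): cos(phi_k) = <d_{k-1}, d_k> <= 0 means the direction turned
    # by at least pi/2, so the convergence check may be performed.
    return sum(a * b for a, b in zip(d_prev, d_curr)) <= 0.0
```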
Besides the above modification, a further modification was tested: In some cases, a reconstruction of the simplex after a failed check (5.31a) leads to slow convergence. Therefore, the algorithm was modified so that it continues at point 2 on page 47 without reconstructing the simplex after failing the test (5.31a). However, reconstructing the simplex led in most of the benchmark tests to faster convergence. Therefore, this modification is no longer used in the algorithm.

5.6.6 Benchmark Tests


Tab. 5.1 shows the number of function evaluations, and Fig. 5.6 shows the relative number of function evaluations compared to the original implementation, for several test cases. The different functions and the parameter settings are given in the Appendix. The only numerical parameter that was changed for the different optimizations is the accuracy, ε.
It turned out that modifying the stopping criterion is effective in most cases, particularly if a new simplex is constructed after the check (5.31a) failed. Therefore, the following two versions of the Simplex algorithm are implemented in GenOpt:
1. The base algorithm of Nelder and Mead, including the extension of O'Neill. After failing (5.31a), the simplex is always reconstructed with the new step size.
2. The base algorithm of Nelder and Mead, including the extension of O'Neill, but with the modified stopping criterion as explained above. That is, the simplex is only reconstructed if its moving direction changed and if we have an inside or total contraction behind us.


                                     Accuracy ε = 10⁻³                     Accuracy ε = 10⁻⁵
Test function                  Rosen-  2D1   Quad     Quad       Rosen-  2D1   Quad     Quad
                               brock         with I   with Q     brock         with I   with Q
                                             matrix   matrix                   matrix   matrix
Original, with reconstruction    137    120    3061     1075       139    109    1066     1165
Original, no reconstruction      136    110    1436     1356       139    109    1433     1253
Modified, with reconstruction    145    112    1296     1015       152    111    1060     1185
Modified, no reconstruction      155    120    1371     1347       152    109    1359     1312

Table 5.1 : Comparison of the number of function evaluations for different implementations of the simplex algorithm. See Appendix for the definition of the functions.


[Figure omitted: bar chart of the relative number of function evaluations, compared to the original implementation (with reconstruction), for the test functions Rosenbrock, 2D1, Quad with I matrix and Quad with Q matrix, at ε = 10⁻³ and ε = 10⁻⁵, for the variants: original but no reconstruction; modified stopping criterion with reconstruction; modified stopping criterion, no reconstruction.]

Figure 5.6 : Comparison of the benchmark tests.


5.6.7 Keywords
For the Simplex algorithm, the command file (see page 86) must only contain continuous parameters.
To invoke the Simplex algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main                    = NelderMeadONeill;
   Accuracy                = Double;   // 0 < Accuracy
   StepSizeFactor          = Double;   // 0 < StepSizeFactor
   BlockRestartCheck       = Integer;  // 0 <= BlockRestartCheck
   ModifyStoppingCriterion = Boolean;
}

The keywords have the following meaning:

Main The name of the main algorithm.
Accuracy The accuracy that has to be reached before the optimality condition is checked. Accuracy is defined as equal to ε of (5.30), page 49.
StepSizeFactor A factor that multiplies the step size of each parameter for (a) testing the optimality condition and (b) reconstructing the simplex. StepSizeFactor is equal to c in (5.29) and (5.31c).
BlockRestartCheck Number that indicates for how many main iterations the restart criterion is not checked. If zero, restart might be checked after each main iteration.
ModifyStoppingCriterion Flag indicating whether the stopping criterion should be modified. If true, then the optimality check (5.30) is done only if both of the following conditions are satisfied: (a) in the last step, either a partial inside contraction or a total contraction was done, and (b) the moving direction of the simplex has changed by an angle ϕk of at least π/2, where ϕk is computed using (5.34).

[Figure omitted: interval Xi with points x0,i < x1,i < x2,i < x3,i, i ∈ {0, 1, 2, . . .}, and the nested interval Xi+1 with points x0,(i+1), x1,(i+1), x2,(i+1), x3,(i+1), each plotted against f(x).]

Figure 6.1 : Interval division.

Algorithms for One-Dimensional Optimization

6.1 Interval Division Algorithms

Interval division algorithms can be used to minimize a function f : R → R (i.e., a function that depends on one independent parameter only) over a user-specified interval. The algorithms do not require derivatives, and they require only one function evaluation per interval division, except for the initialization.
First, we explain a master algorithm for the interval division algorithms. The master algorithm is then used to implement two commonly used interval division algorithms: the Golden Section search and the Fibonacci Division.

6.1.1 General Interval Division

We now describe the ideas behind the interval division methods. For given x0, x3 ∈ R, with x0 < x3, let X ≜ [x0, x3]. Suppose we want to minimize f(·) on X, and suppose that f : R → R has a unique minimizer x* ∈ X. For some s ∈ (0, 1), let

   x1 ≜ x0 + s (x3 − x0),   (6.1)
   x2 ≜ x1 + s (x3 − x1).   (6.2)

If f(x1) ≤ f(x2), then x* ∈ [x0, x2]. Hence, we can eliminate the interval (x2, x3] and restrict our search to [x0, x2]. Similarly, if f(x1) > f(x2), then x* ∈ [x1, x3] and we can eliminate [x0, x1). Thus, we reduced the initial interval to a new interval that contains the minimizer x*.


Let i ∈ N be the iteration number. We want to nest the sequence of intervals

   [x0,(i+1), x3,(i+1)] ⊂ [x0,i, x3,i],   i ∈ {0, 1, 2, . . .},   (6.3)

such that we have to evaluate f(·) in each step at one new point only. To do so, we assign the new bounds of the interval such that either [x0,(i+1), x3,(i+1)] = [x0,i, x2,i], or [x0,(i+1), x3,(i+1)] = [x1,i, x3,i], depending on which interval has to be eliminated. By doing so, we have to evaluate only one new point in the interval. It remains to decide where to locate the new point. The Golden Section and the Fibonacci Division differ in this decision.
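One bracket-elimination step can be sketched as follows (an illustration only):

```python
def eliminate(f, x0, x1, x2, x3):
    # One bracket-elimination step: keep [x0, x2] if f(x1) <= f(x2),
    # otherwise keep [x1, x3], per the rule following eqs. (6.1)-(6.2).
    if f(x1) <= f(x2):
        return x0, x2     # minimizer lies in [x0, x2]
    return x1, x3         # minimizer lies in [x1, x3]
```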

6.1.2 Golden Section Interval Division

Suppose we have three points x0 < x1 < x3 in X ⊂ R such that for some q ∈ (0, 1), to be determined later,

   |x0 − x1| / |x0 − x3| = q.   (6.4a)

Hence,

   |x1 − x3| / |x0 − x3| = 1 − q.   (6.4b)

Suppose that x2 is located somewhere between x1 and x3, and define the ratio

   w ≜ |x1 − x2| / |x0 − x3|.   (6.5)

Depending on which interval is eliminated, the interval in the next iteration step will either be of length (q + w) |x0 − x3| or (1 − q) |x0 − x3|. We select the location of x2 such that the two intervals are of the same length. Hence,

   q + w = 1 − q.   (6.6a)

Now we determine the fraction q. Since we apply the process of interval division recursively, we know by scale similarity that

   w / (1 − q) = q.   (6.6b)

Combining (6.6a) and (6.6b) leads to

   q² − 3q + 1 = 0,   (6.7a)

with solutions

   q1,2 = (3 ± √5) / 2.   (6.7b)

Since q < 1 by (6.4a), the solution of interest is

   q = (3 − √5) / 2 ≈ 0.382.   (6.7c)


The fractional distances q ≈ 0.382 and 1 − q ≈ 0.618 correspond to the so-called Golden Section, which gives this algorithm its name.
Note that the interval is reduced in each step by the fraction 1 − q, i.e., we have linear convergence. In the m-th iteration, we have

   |x0,m − x2,m| = |x1,m − x3,m| = |x0,(m+1) − x3,(m+1)| = (1 − q)^(m+1) |x0,0 − x3,0|.   (6.8)

Hence, the required number of iterations, m, to reduce the initial interval of uncertainty |x0,0 − x3,0| to at least a fraction r, defined as

   r ≜ |x1,m − x3,m| / |x0,0 − x3,0| = |x0,m − x2,m| / |x0,0 − x3,0|,   (6.9)

is given by

   m = ln r / ln(1 − q) − 1.   (6.10)
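As a numeric illustration (not part of GenOpt), the following computes q from (6.7c) and evaluates (6.10) for a target fraction r = 0.01, rounding up since the number of iterations is an integer:

```python
import math

q = (3.0 - math.sqrt(5.0)) / 2.0                       # eq. (6.7c), ~0.382
r = 0.01                                               # target fraction, chosen for illustration
m = math.ceil(math.log(r) / math.log(1.0 - q) - 1.0)   # eq. (6.10), rounded up
print(round(q, 3), m)  # 0.382 9
```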

6.1.3 Fibonacci Division

Another way to divide an interval such that we need one function evaluation per iteration can be constructed as follows: Given an initial interval [x0,i, x3,i], i = 0, we divide it into three segments symmetrically around its midpoint. Let d1,i < d2,i < d3,i denote the distances of the segment endpoints, measured from x0,i. Then, by symmetry, d3,i = d1,i + d2,i. By the bracket elimination procedure explained above, we know that we are eliminating a segment of length d1,i. Therefore, our new interval is of length d3,(i+1) = d2,i. By symmetry we also have d3,(i+1) = d1,(i+1) + d2,(i+1). Hence, if we construct our segment lengths such that d3,(i+1) = d1,(i+1) + d2,(i+1) = d2,i, we can reuse one known point. Such a construction can be done by using Fibonacci numbers, which are defined recursively by

   F0 ≜ F1 ≜ 1,   (6.11a)
   Fi ≜ Fi−1 + Fi−2,   i ∈ {2, 3, . . .}.   (6.11b)

The first few numbers of the Fibonacci sequence are {1, 1, 2, 3, 5, 8, 13, 21, . . .}. The lengths of the intervals d1,i and d2,i, respectively, are then given by

   d1,i = Fm−i / Fm−i+2,   d2,i = Fm−i+1 / Fm−i+2,   i ∈ {0, 1, . . . , m},   (6.12)

where m > 0 describes how many iterations will be done. Note that m must be known prior to the first interval division. Hence, the algorithm must be stopped after m iterations.
The reduction of the length of the uncertainty interval per iteration is given by

   d3,(i+1) / d3,i = d2,i / (d1,i + d2,i)
                   = (Fm−i+1 / Fm−i+2) / (Fm−i / Fm−i+2 + Fm−i+1 / Fm−i+2)
                   = Fm−i+1 / Fm−i+2.   (6.13)


After m iterations, we have

   d3,m / d3,0 = (d3,m / d3,(m−1)) (d3,(m−1) / d3,(m−2)) · · · (d3,2 / d3,1) (d3,1 / d3,0)
               = (F2 / F3) (F3 / F4) · · · (Fm / Fm+1) (Fm+1 / Fm+2) = 2 / Fm+2.   (6.14)

The required number of iterations m to reduce the initial interval d3,0 to at least a fraction r, defined by (6.9), can again be obtained by expansion from

   r = d3,(m+1) / d3,0 = d2,m / d3,0 = (d3,(m+1) / d3,m) (d3,m / d3,(m−1)) · · · (d3,2 / d3,1) (d3,1 / d3,0)
     = (F1 / F2) (F2 / F3) · · · (Fm / Fm+1) (Fm+1 / Fm+2) = 1 / Fm+2.   (6.15)

Hence, m is given by

   m = arg min_{m ∈ N} { m | r ≥ 1/Fm+2 }.   (6.16)
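The following sketch (illustration only, with m = 8 chosen arbitrarily) computes the fractions (6.12) and verifies that the per-iteration reductions (6.13) telescope to 2/Fm+2 as in (6.14):

```python
def fib(k):
    # Fibonacci numbers with F0 = F1 = 1, eqs. (6.11a)-(6.11b).
    a, b = 1, 1
    for _ in range(k):
        a, b = b, a + b
    return a

m = 8                                  # number of iterations, chosen for illustration
d1 = [fib(m - i) / fib(m - i + 2) for i in range(m + 1)]      # eq. (6.12)
d2 = [fib(m - i + 1) / fib(m - i + 2) for i in range(m + 1)]

reduction = 1.0
for i in range(m):                     # per-iteration factors of eq. (6.13)
    reduction *= fib(m - i + 1) / fib(m - i + 2)
print(reduction, 2.0 / fib(m + 2))     # telescopes to 2 / F_{m+2}, eq. (6.14)
```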

6.1.4 Comparison of Efficiency

The Golden Section is more efficient than the Fibonacci Division. Comparing the reduction of the interval of uncertainty, |x0,m − x3,m|, in the limiting case m → ∞, we obtain

   lim_{m→∞} |x0,m − x3,m|_GS / |x0,m − x3,m|_F = lim_{m→∞} (Fm+2 / 2) (1 − q)^m ≈ 0.95.   (6.17)
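This limit can be checked numerically (illustration only); with F0 = F1 = 1, the product is already close to 0.947 for moderate m, which the text rounds to 0.95:

```python
import math

def fib(k):
    # Fibonacci numbers with F0 = F1 = 1, as in eq. (6.11).
    a, b = 1, 1
    for _ in range(k):
        a, b = b, a + b
    return a

q = (3.0 - math.sqrt(5.0)) / 2.0
for m in (10, 20, 30):
    print(m, fib(m + 2) / 2.0 * (1.0 - q) ** m)   # each value is ~0.9472
```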

6.1.5 Master Algorithm for Interval Division


The following master algorithm explains the steps of the interval division
algorithm.


Algorithm 6.1.1 (Model Interval Division Algorithm)

Data:    x0, x3.
         Procedure that returns ri, defined as ri ≜ |x0,i − x2,i| / |x0,0 − x3,0|.
Step 0:  Initialize
            Δx = x3 − x0,
            x2 = x0 + r1 Δx,
            x1 = x0 + r2 Δx,
            f1 = f(x1), f2 = f(x2), and
            i = 2.
Step 1:  Iterate.
            Replace i by i + 1.
            If (f2 < f1)
               Set x0 = x1, x1 = x2,
                   f1 = f2,
                   x2 = x3 − ri Δx, and
                   f2 = f(x2).
            else
               Set x3 = x2, x2 = x1,
                   f2 = f1,
                   x1 = x0 + ri Δx,
                   f1 = f(x1).
Step 2:  Stop or go to Step 1.
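Specializing the master algorithm to the Golden Section, where ri = (1 − q)^i with q = (3 − √5)/2, gives the following sketch (the function name and the fixed iteration count are illustrative, not part of GenOpt):

```python
import math

def golden_section_minimize(f, x0, x3, iterations=30):
    # Sketch of Algorithm 6.1.1 with r_i = (1 - q)**i, q = (3 - sqrt(5))/2.
    q = (3.0 - math.sqrt(5.0)) / 2.0
    r = lambda i: (1.0 - q) ** i
    dx = x3 - x0                        # Step 0: initialize
    x2 = x0 + r(1) * dx
    x1 = x0 + r(2) * dx
    f1, f2 = f(x1), f(x2)
    for i in range(3, iterations + 1):  # Step 1: iterate
        if f2 < f1:                     # minimizer lies in [x1, x3]
            x0, x1, f1 = x1, x2, f2
            x2 = x3 - r(i) * dx
            f2 = f(x2)
        else:                           # minimizer lies in [x0, x2]
            x3, x2, f2 = x2, x1, f1
            x1 = x0 + r(i) * dx
            f1 = f(x1)
    return x1 if f1 < f2 else x2        # Step 2: stop
```

For example, golden_section_minimize(lambda x: (x - 1.0)**2, 0.0, 3.0) returns a point close to the minimizer x* = 1.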

6.1.6 Keywords
For the Golden Section and the Fibonacci Division algorithms, the command file (see page 86) must contain only one continuous parameter.
To invoke the Golden Section or the Fibonacci Division algorithm, the Algorithm section of the GenOpt command file must have the following form:

Algorithm {
   Main = GoldenSection | Fibonacci;
   [ AbsDiffFunction   = Double;   // 0 < AbsDiffFunction
   |
     IntervalReduction = Double;   // 0 < IntervalReduction
   ]
}

The keywords have the following meaning:

Main The name of the main algorithm.
The following two keywords are optional. If neither of them is specified, then the algorithm stops after MaxIte function evaluations (i.e., after MaxIte − 2 iterations), where MaxIte is specified in the section OptimizationSettings. If both of them are specified, an error occurs.


AbsDiffFunction The absolute difference defined as

Δf ≜ | min{f(x_0), f(x_3)} − min{f(x_1), f(x_2)} |.   (6.18)

If Δf is lower than AbsDiffFunction, the search stops successfully.


Note: Since the maximum number of interval reductions must be known
for the initialization of the Fibonacci algorithm, this keyword can be
used only for the Golden Section algorithm. It must not be specified for
the Fibonacci algorithm.
IntervalReduction The required maximum fraction, r, of the end interval
length relative to the initial interval length (see equation (6.9)).
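For instance, a command file entry that runs the Golden Section algorithm until the interval of uncertainty is reduced to 1% of the initial interval length could read as follows (the value 0.01 is only an example):

```
Algorithm {
   Main              = GoldenSection ;
   IntervalReduction = 0.01 ;
}
```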


7  Algorithms for Parametric Runs

The algorithms for parametric runs described here can be used to determine how sensitive a function is with respect to a change in the independent variables.
They can also be used to do a parametric sweep of a function over a set of
parameters. The algorithm described in Section 7.1 varies one parameter at
a time while holding all other parameters fixed at the value specified by the
keyword Ini. The algorithm described in Section 7.2, in contrast, constructs a
mesh in the space of the independent parameters, and evaluates the objective
function at each mesh point.

7.1  Parametric Runs by Single Variation

7.1.1 Algorithm Description


The Parametric algorithm allows doing parametric runs where one parameter at a time is varied and all other parameters are fixed at their initial values
(specified by the keyword Ini).
Each parameter must have a lower and upper bound. For the logarithmic
scale, the lower and upper bounds must be bigger than zero. To allow negative
increments, the lower bound can be larger than the upper bound. The absolute
value of the keyword Step defines in how many intervals each coordinate axis
will be divided. If Step < 0, then the spacing is logarithmic; otherwise it is
linear. Set Step = 0 to keep the parameter always fixed at the value specified
by Ini.
This algorithm can also be used with discrete parameters. This allows, for
example, using a string to specify a window construction.
The spacing is computed as follows: For simplicity, the explanation is done for one parameter. Let l ≜ Min, u ≜ Max and m ≜ |Step|, where Min, Max and Step are specified in the command file.
If Step < 0, we compute, for i ∈ {0, ..., m},

p   = (1/m) log10(u/l),   (7.1a)
x_i = l 10^(p i).   (7.1b)

If Step > 0, we compute, for i ∈ {0, ..., m},

x_i = l + (i/m) (u − l).   (7.1c)
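The spacing rules (7.1a)-(7.1c) can be sketched in stand-alone Java (not part of GenOpt; Step = 0, which keeps the parameter fixed at Ini, is not handled in this sketch):

```java
// Computes the points visited for one parameter, following (7.1a)-(7.1c):
// Step < 0 gives logarithmic spacing, Step > 0 linear spacing.
// Step = 0 (parameter fixed at Ini) is not handled in this sketch.
public class Spacing {

    static double[] points(double min, double max, int step) {
        int m = Math.abs(step);
        double[] x = new double[m + 1];
        if (step < 0) {                        // logarithmic, (7.1a)-(7.1b)
            double p = Math.log10(max / min) / m;
            for (int i = 0; i <= m; i++)
                x[i] = min * Math.pow(10.0, p * i);
        } else {                               // linear, (7.1c)
            for (int i = 0; i <= m; i++)
                x[i] = min + (double) i / m * (max - min);
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(points(10.0, 1000.0, -2)));
        System.out.println(java.util.Arrays.toString(points(2.0, 20.0, 1)));
    }
}
```

For Min = 10, Max = 1000 and Step = −2 this returns 10, 100, 1000; for Min = 2, Max = 20 and Step = 1 it returns 2, 20, matching Example 7.1.1 below.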

Example 7.1.1 (Parametric run with logarithmic and linear spacing)


Suppose the parameter specification is of the form

Vary{
   Parameter{ Name = x1; Ini = 5; Step = -2; Min = 10; Max = 1000; }
   Parameter{ Name = x2; Ini = 3; Step = 1;  Min = 2;  Max = 20;   }
}

and the cost function takes two arguments, x_1, x_2 ∈ R. Then, the cost function will be evaluated at the points

(x_1, x_2) ∈ {(10, 3), (100, 3), (1000, 3), (5, 2), (5, 20)}.

7.1.2 Keywords
For this algorithm, the command file (see page 86) can contain continuous
and discrete parameters.
The Parametric algorithm is invoked by the following specification in the command file:

Algorithm {
   Main        = Parametric ;
   StopAtError = true | false ;
}

The keywords have the following meaning:


Main The name of the main algorithm.
StopAtError If true, then the parametric run stops if a simulation error
occurs. If false, then the parametric run does not stop if a simulation
error occurs. The failed function evaluation will be assigned the function
value zero. For information, an error message will be written to the user
interface and the optimization log file.

7.2  Parametric Runs on a Mesh

7.2.1 Algorithm Description


In contrast to the algorithm Parametric, the algorithm Mesh spans a multidimensional grid in the space of the independent parameters, and it evaluates
the objective function at each grid point.
Note that the number of function evaluations increases exponentially with the number of independent parameters. For example, a 5-dimensional grid with 2 intervals in each dimension requires 3^5 = 243 function evaluations, whereas a 10-dimensional grid would require 3^10 = 59049 function evaluations.
The values that each parameter can take on are computed in the same way as for the algorithm Parametric. Therefore, the specification of a Parameter is subject to the same constraints as for the algorithm Parametric, which is described above.

Example 7.2.1 (Parametric run on a mesh)


Suppose the parameter specification is of the form
Vary{
   Parameter{ Name = x1; Min = -10; Ini = 99; Max = 10;  Step = 1;  }
   Parameter{ Name = x2; Min = 1;   Ini = 99; Max = 100; Step = -2; }
}

and the cost function takes two arguments, x_1, x_2 ∈ R. Then, the cost function will be evaluated at the points

(x_1, x_2) ∈ {(-10, 1), (10, 1), (-10, 10), (10, 10), (-10, 100), (10, 100)}.

An alternative specification for x2 that uses a discrete parameter and gives the same result is

Parameter{
   Name   = x2 ;
   Ini    = "1" ;
   Values = "1, 10, 100" ;
}
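The mesh of Example 7.2.1 can be reproduced by forming the Cartesian product of the per-parameter value lists. The sketch below is stand-alone code, not GenOpt's implementation, and the loop ordering is chosen to match the point ordering of the example:

```java
import java.util.ArrayList;
import java.util.List;

// Builds the mesh of Example 7.2.1: x1 in {-10, 10} (linear, Step = 1)
// and x2 in {1, 10, 100} (logarithmic, Step = -2).
public class Mesh {

    static List<double[]> cartesian(double[] a, double[] b) {
        List<double[]> mesh = new ArrayList<>();
        for (double x2 : b)            // outer loop over the second parameter,
            for (double x1 : a)        // matching the ordering in Example 7.2.1
                mesh.add(new double[]{x1, x2});
        return mesh;
    }

    public static void main(String[] args) {
        double[] x1 = {-10.0, 10.0};
        double[] x2 = {1.0, 10.0, 100.0};
        for (double[] p : cartesian(x1, x2))
            System.out.println(p[0] + " " + p[1]);
    }
}
```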

7.2.2 Keywords
The Mesh algorithm is invoked by the following specification in the command file:

Algorithm {
   Main        = Mesh ;
   StopAtError = true | false ;
}

The keywords have the following meaning:


Main The name of the main algorithm.
StopAtError If true, then the parametric run stops if a simulation error
occurs. If false, then the parametric run does not stop if a simulation
error occurs. The failed function evaluation will be assigned the function
value zero. For information, an error message will be written to the user
interface and the optimization log file.


8  Constraints

For some optimization problems it is necessary to impose constraints on the independent variables and/or the dependent variables, as the following example shows.
Example 8.0.2 Suppose we want to minimize the heating energy of a building, and suppose that the normalized mass flow ṁ of the heating system is an independent variable, with constraints 0 ≤ ṁ ≤ 1. Without using constraints, the minimum energy consumption would be achieved for ṁ = 0, since then the heating system is switched off. To solve this problem, we can impose a constraint on a dependent variable. One possibility is to add a penalty term to the energy consumption. This could be such that every time a thermal comfort criterion (which is a dependent variable) is violated, a large positive number is added to the energy consumption. Thus, if ppd(x), with ppd: R^n → R, denotes the predicted percent of dissatisfied people (in percent), and if we require that ppd(x) ≤ 10%, we could use the inequality constraint g(x) ≜ ppd(x) − 10 ≤ 0.
In Section 8.1.1, the method that is used in GenOpt to implement box
constraints is described. In Section 8.2, penalty and barrier methods that can
be used to implement constraints on dependent variables are described. They
involve reformulating the cost function and, hence, are problem specific and
have to be implemented by the user.

8.1  Constraints on Independent Variables

8.1.1 Box Constraints


Box constraints are constant inequality constraints that define a feasible set as

X ≜ { x ∈ R^n | l^i ≤ x^i ≤ u^i, i ∈ {1, ..., n} },   (8.1)

where l^i < u^i for i ∈ {1, ..., n}.


In GenOpt, box constraints are either implemented directly in the optimization algorithm by setting f(x) = ∞ for infeasible iterates, or, for some algorithms, the independent variable x ∈ X is transformed to a new unconstrained variable, which we will denote in this section by t ∈ R^n.
Instead of optimizing the constrained variable x ∈ X, we optimize with respect to the unconstrained variable t ∈ R^n. The transformation ensures that all variables stay feasible during the iteration process. In GenOpt, the following transformations are used:
If l^i ≤ x^i, for some i ∈ {1, ..., n},

t^i = √(x^i − l^i),   (8.2a)
x^i = l^i + (t^i)².   (8.2b)


If l^i ≤ x^i ≤ u^i, for some i ∈ {1, ..., n},

t^i = arcsin √( (x^i − l^i) / (u^i − l^i) ),   (8.2c)
x^i = l^i + (u^i − l^i) sin² t^i.   (8.2d)

If x^i ≤ u^i, for some i ∈ {1, ..., n},

t^i = √(u^i − x^i),   (8.2e)
x^i = u^i − (t^i)².   (8.2f)
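The transformations (8.2c)-(8.2d) can be checked numerically. The sketch below (stand-alone code, not part of GenOpt) shows that the round trip x → t → x is the identity and that any t maps into the box:

```java
// Transformation (8.2c)-(8.2d) for a variable with lower bound l and
// upper bound u: t = arcsin(sqrt((x-l)/(u-l))), x = l + (u-l)*sin^2(t).
public class BoxConstraint {

    static double toUnconstrained(double x, double l, double u) {
        return Math.asin(Math.sqrt((x - l) / (u - l)));   // (8.2c)
    }

    static double toConstrained(double t, double l, double u) {
        double s = Math.sin(t);
        return l + (u - l) * s * s;                       // (8.2d)
    }

    public static void main(String[] args) {
        double l = 0.0, u = 1.0, x = 0.3;
        double t = toUnconstrained(x, l, u);
        System.out.println(toConstrained(t, l, u));       // recovers 0.3
    }
}
```

Because sin² is bounded by [0, 1], any unconstrained t maps back to a point inside [l, u], which is how the iterates stay feasible.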

8.1.2 Coupled Linear Constraints


In some cases the constraints have to be formulated in terms of a linear system of equations of the form

A x = b,   (8.3)

where A ∈ R^(m×n), x ∈ R^n, b ∈ R^m, and rank(A) = m.
There are various algorithms that take this kind of restriction into account. However, such restrictions are rare in building simulation and thus not implemented in GenOpt. If there is a need to impose such restrictions, they can be included by adding an appropriate optimization algorithm and retrieving the coefficients by using the methods offered in GenOpt's class Optimizer.

8.2  Constraints on Dependent Variables

We now discuss the situation where the constraints are non-linear and defined by

g(x) ≤ 0,   (8.4)

where g: R^n → R^m is once continuously differentiable. (8.4) also allows formulating equality constraints of the form

h(x) = 0,   (8.5)

for h: R^n → R^m, which can be implemented by using penalty functions. For example, one can define g^i(x) ≜ h^i(x)² for i ∈ {1, ..., m}. Then, since g^i(·) is non-negative, the only feasible value is g(·) = 0. Thus, we will only discuss the case of inequality constraints of the form (8.4).
Such a constraint can be taken into account by adding penalty or barrier functions to the cost function, which are multiplied by a positive weighting factor μ that is monotonically increased (for penalty functions) or monotonically decreased to zero (for barrier functions).
We now discuss the implementation of barrier and penalty functions.
We now discuss the implementation of barrier and penalty functions.

8.2.1 Barrier Functions


Barrier functions impose a punishment if the dependent variable gets close to the boundary of the feasible region. The closer the variable is to the boundary, the larger the value of the barrier function becomes.
To implement a barrier function for g(x) ≤ 0, where g: R^n → R^m is a continuously differentiable function whose elements are strictly monotone increasing, the cost function f: R^n → R can be modified to

f̃(x, μ) ≜ f(x) − μ Σ_{i=1}^m 1/g^i(x),   (8.6)

where f̃: R^n × R → R. The optimization algorithm is then applied to the new function f̃(x, μ). Note that (8.6) requires that x be in the interior of the feasible set.¹
A drawback of barrier functions is that the boundary of the feasible set cannot be reached. By selecting the weighting factor μ small, one can get close to the boundary. However, too small a weighting factor can cause the cost function to be ill-conditioned, which can cause problems for the optimization algorithm.
Moreover, if the variation of the iterates between successive iterations is too big, then the feasible boundary can be crossed. Such behavior must be prevented by the optimization algorithm, which can produce additional problems.
For barrier functions, one can start with a moderately large weighting factor μ_0 and let μ_i tend to zero during the optimization process. That is, one constructs a sequence

μ_0 > ... > μ_i > μ_{i+1} > ... > 0.   (8.7)

Section 8.2.3 shows how μ_i can be computed in the course of the optimization.
Barrier functions do not allow formulating equality constraints of the form (8.5).

8.2.2 Penalty Functions


In contrast to barrier functions, penalty functions allow crossing the boundary of the feasible set, and they allow the implementation of equality constraints of the form (8.5). Penalty functions add a positive term to the cost function if a constraint is violated.
To implement a penalty function for g(x) ≤ 0, where g: R^n → R^m is once continuously differentiable and each element is strictly monotone decreasing, the cost function f: R^n → R can be modified to

f̃(x, μ) ≜ f(x) + μ Σ_{i=1}^m max(0, g^i(x))²,   (8.8)

where f̃: R^n × R → R is once continuously differentiable in x. The optimization algorithm is then applied to the new function f̃(x, μ).
¹ I.e., x satisfies the strict inequality g(x) < 0.
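A penalty-augmented cost function of the form (8.8) can be sketched as follows; the cost function f, the constraint g, and the weighting factor μ are made up for this example:

```java
// Penalty function (8.8) for one constraint g(x) <= 0:
// fTilde(x, mu) = f(x) + mu * max(0, g(x))^2.
public class Penalty {

    static double f(double x) { return (x - 2.0) * (x - 2.0); } // example cost
    static double g(double x) { return x - 1.0; }               // constraint x <= 1

    static double fTilde(double x, double mu) {
        double v = Math.max(0.0, g(x));     // amount of constraint violation
        return f(x) + mu * v * v;
    }

    public static void main(String[] args) {
        // inside the feasible set the penalty term vanishes ...
        System.out.println(fTilde(0.5, 100.0));
        // ... outside, it grows with the violation and with mu
        System.out.println(fTilde(2.0, 100.0));
    }
}
```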


As for the barrier method, selecting the weighting factor μ is not trivial. Too small a value of μ produces too big a violation of the constraint. Hence, the boundary of the feasible set can be exceeded by an unacceptable amount. Too large a value of μ can lead to ill-conditioning of the cost function, which can cause numerical problems.
The weighting factors μ_i have to satisfy

0 < μ_0 < ... < μ_i < μ_{i+1} < ...,   (8.9)

with μ_i → ∞ as i → ∞. See Section 8.2.3 for how to adjust μ_i.

8.2.3 Implementation of Barrier and Penalty Functions


We now discuss how the weighting factors μ_i can be adjusted. For i ∈ N, let x*(μ_i) be defined as the solution

x*(μ_i) ≜ arg min_{x ∈ X} f̃(x, μ_i),   (8.10)

where f̃(x, μ_i) is as in (8.6) or (8.8), respectively. Then, we initialize i = 0, select an initial value μ_0 > 0 and compute x*(μ_0). Next, we select a μ_{i+1} that satisfies (8.7) (for barrier functions) or (8.9) (for penalty functions), compute x*(μ_{i+1}) using the initial iterate x*(μ_i), and increase the counter i to i + 1. This procedure is repeated until μ_i is sufficiently close to zero (for barrier functions) or sufficiently large (for penalty functions).
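The procedure above can be sketched for a penalty function as follows; the inner minimization is a crude grid search, and all functions and numbers are made up for the illustration:

```java
// Outer loop (8.9)/(8.10) for the penalty method: minimize f(x) = (x-2)^2
// subject to x <= 1. As mu increases, the minimizer of fTilde approaches
// the constrained solution x* = 1.
public class PenaltyLoop {

    static double fTilde(double x, double mu) {
        double v = Math.max(0.0, x - 1.0);           // g(x) = x - 1 <= 0
        return (x - 2.0) * (x - 2.0) + mu * v * v;
    }

    // crude inner solver: grid search, started from the previous iterate
    static double argMin(double start, double mu) {
        double best = start, fBest = fTilde(start, mu);
        for (double x = -1.0; x <= 3.0; x += 1e-4) {
            double fx = fTilde(x, mu);
            if (fx < fBest) { fBest = fx; best = x; }
        }
        return best;
    }

    public static void main(String[] args) {
        double x = 2.0;                               // initial iterate
        for (double mu = 1.0; mu <= 1e6; mu *= 10.0)  // mu_{i+1} = 10 mu_i
            x = argMin(x, mu);
        System.out.println(x);                        // approaches 1
    }
}
```

Each outer step reuses the previous minimizer as the initial iterate, exactly as described above; in a real application, GenOpt's optimization algorithm would take the place of the grid search.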
To recompute the weighting factors μ_i, users can request GenOpt to write a counter to the simulation input file, and then compute μ_i as a function of this counter. The value of this counter can be retrieved by setting the keyword WriteStepNumber in the optimization command file to true, and specifying the string %stepNumber% in the simulation input template file. GenOpt will replace the string %stepNumber% with the current counter value when it writes the simulation input file. The counter starts with the value 1 and its increment is 1.
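For instance, a simulation input template file could contain a line such as the following, where the variable name penaltyWeight is hypothetical; GenOpt replaces %stepNumber% before each simulation, so the simulation input can scale the weighting factor with the counter:

```
// excerpt of a hypothetical simulation input template file;
// GenOpt replaces %stepNumber% when it writes the simulation input
penaltyWeight = %stepNumber% ;
```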
Users who implement their own optimization algorithm in GenOpt can call
the method increaseStepNumber(...) in the class Optimizer to increase the
counter. If the keyword WriteStepNumber in the optimization command file is
set to true, the method calls the simulation to evaluate the cost function for
the new value of this counter. If WriteStepNumber is false, no new function
evaluation is performed by this method since the cost function does not depend
on this counter.


Figure 9.1: Interface between GenOpt and the simulation program that evaluates the cost function. [The figure shows GenOpt reading its initialization, command, configuration and simulation input template files, calling the simulation program, retrieving the simulation output, and writing log and output files. Legend: initialization - specification of file location (input files, output files, log files, etc.); command - specification of parameter names, initial values, upper/lower bounds, optimization algorithm, etc.; configuration - configuration of the simulation program (error indicators, start command, etc.); simulation input template - templates of simulation input files.]

9  Program

GenOpt is divided into a kernel part and an optimization part. The kernel
reads the input files, calls the simulation program, stores the results, writes
output files, etc. The optimization part contains the optimization algorithms.
It also contains classes of mathematical functions such as those used in linear
algebra.
Since there is a variety of simulation programs and optimization algorithms,
GenOpt has a simulation program interface and an optimization algorithm interface. The simulation program interface allows using any simulation software
to evaluate the cost function (see below for the requirements on the simulation
program), and allows implementing new optimization algorithms with little
effort.

9.1  Interface to the Simulation Program

Text files are used to exchange data with the simulation program and to
specify how to start the simulation program. This makes it possible to couple


any simulation program to GenOpt without requiring code adaptation on either the GenOpt side or the simulation program side. The simulation program must satisfy the following requirements:
1. The simulation program must read its input from one or more text files,
must write the value of the cost function to a text file, and must write
error messages to a text file.
2. It must be possible to start the simulation program by a command, and the simulation program must terminate automatically. This means that the user does not have to open the input file manually and shut down the simulation program once the simulation is finished.
The simulation program may be a commercially available program or one
written by the user.
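To illustrate these requirements, the sketch below implements a minimal text-file based simulation program; the file names, the cost function, and the output label are made up for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;

// Minimal text-file based "simulation program" (a sketch, not part of
// GenOpt): reads x from an input file, writes "f(x) = <value>" to an
// output file. A program like this can be coupled to GenOpt without
// any code adaptation on either side.
public class MockSimulation {

    // formats the cost function value with full precision, labeled by
    // a string that GenOpt can search for in the output file
    static String costLine(double x) {
        double fx = (x - 1.0) * (x - 1.0);       // example cost function
        return String.format(Locale.ROOT, "f(x) = %.16E", fx);
    }

    public static void main(String[] args) throws IOException {
        Path in  = Path.of(args.length > 0 ? args[0] : "input.txt");
        Path out = Path.of(args.length > 1 ? args[1] : "output.txt");
        double x = Double.parseDouble(Files.readString(in).trim());
        Files.writeString(out, costLine(x) + System.lineSeparator());
    }
}
```

Such a program can be started by a command and terminates automatically (requirement 2), and it exchanges all data through text files (requirement 1).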

9.2  Interface to the Optimization Algorithm

The large variety of optimization algorithms led to the development of an open interface that allows easy implementation of optimization algorithms.
Users can implement their own algorithms and add them to the library of available optimization algorithms without having to adapt and recompile GenOpt.
To implement a new optimization algorithm, the optimization algorithm must
be written according to the guidelines of Section 9.4. Thus, GenOpt can not
only be used to do optimization with built-in algorithms, but it can also be
used as a framework for developing, testing and comparing optimization algorithms.
Fig. 9.2 shows GenOpt's program structure. The class Optimizer is the superclass of each optimization algorithm. It offers all the functions required for retrieval of parameters that specify the optimization settings, performing the evaluation of the cost function, and reporting results. For a listing of its methods, see http://SimulationResearch.lbl.gov or the Javadoc code documentation that comes with GenOpt's installation.

9.3  Package genopt.algorithm

The Java package genopt.algorithm consists of all classes that contain mathematical formulas that are used by the optimization algorithm. The following packages belong to genopt.algorithm.
genopt.algorithm This package contains all optimization algorithms. The
abstract class Optimizer, which must be inherited by each optimization
algorithm, is part of this package.
genopt.algorithm.util.gps contains a model Generalized Pattern Search
optimization algorithm.
genopt.algorithm.util.linesearch contains classes for doing a line search
along a given direction.

Figure 9.2: Implementation of optimization algorithms into GenOpt. [The figure shows the GenOpt kernel communicating with the external simulation program and with the optimization algorithms, each of which extends the superclass Optimizer and uses a shared set of utility classes. Legend: Utility Classes - shared library for commonly used methods, e.g., for linear algebra, optimality check, line search, etc.; Simulation Program - any simulation program with text-based I/O, e.g., EnergyPlus, SPARK, DOE-2, TRNSYS, etc.; Superclass Optimizer - offers methods to easily access GenOpt's kernel, e.g., for input retrieving, cost function evaluation, result reporting, error reporting, etc.]


genopt.algorithm.util.math contains classes for mathematical operations.


genopt.algorithm.util.optimality contains classes that can be used to
check whether a variable value is at a minimum point or not.
genopt.algorithm.util.pso contains a model Particle Swarm Optimization algorithm.
These packages are documented in the Javadoc source code documentation that comes with GenOpt.

9.4  Implementing a New Optimization Algorithm

To implement a new optimization algorithm, you must write a Java class that has the syntax shown in Fig. 9.3. The class must use the methods of
the abstract class Optimizer to evaluate the cost function and to report the
optimization steps. The methods of the Optimizer class are documented in
the Javadoc source code documentation.
Follow these steps to implement and use your own optimization algorithm:
1. Put the byte-code (ClassName.class) in the directory genopt/algorithm.
2. Set the value of the keyword Main in the Algorithm section of the optimization command file to the name of the optimization class (without
file extension).
3. Add any further keywords that the algorithm requires to the Algorithm
section. The keywords must be located after the entry Main of the optimization command file. The keywords must be in the same sequence as
they are called in the optimization code.
4. Call the method Optimizer.report(final Point, final boolean) after evaluating the cost function. Otherwise, the result will not be reported.
5. Call either the method Optimizer.increaseStepNumber() or the method Optimizer.increaseStepNumber(final Point) after the optimization algorithm converged to some point. These methods increase a counter that can be used to add penalty or barrier functions to the cost function. In particular, the methods Optimizer.increaseStepNumber() and Optimizer.increaseStepNumber(final Point) increase the variable stepNumber (see Section 8) by one.


package genopt.algorithm;

import java.io.IOException;
import genopt.GenOpt;
import genopt.lang.OptimizerException;
import genopt.io.InputFormatException;

public class ClassName extends Optimizer{

    public ClassName(GenOpt genOptData)
        throws InputFormatException, OptimizerException,
               IOException, Exception
    {
        // the second argument sets the mode that specifies whether
        // the default transformations for the box constraints should
        // be used or not; the call to the superclass constructor must
        // be the first statement of the constructor
        super(genOptData, xxxx);

        // remaining code of the constructor
    }

    public int run() throws OptimizerException, IOException
    {
        // the code of the optimization algorithm
    }

    // add any further methods and data members
}

Figure 9.3: Code snippet that specifies how to implement an optimization algorithm.


10  Installing and Running GenOpt

10.1  System Requirements

To run GenOpt and the GenOpt installation program, a Java 2 v1.6.0 runtime environment is required. GenOpt should run on any operating system that can run Java applications.
that can run Java applications.

10.2  Installing and uninstalling GenOpt

To install GenOpt, download the installation program genopt-install.jar from http://SimulationResearch.lbl.gov/GO. Then, either double-click on the file genopt-install.jar¹ or open a command shell, change to the directory that contains genopt-install.jar and type

java -jar genopt-install.jar

No environment variables need to be set to run GenOpt. (This is new since GenOpt 2.1.0.)
Note that Windows 7, depending on the permission of the user account, may not allow the GenOpt installation program to install GenOpt in C:\Program Files. In this situation, GenOpt can be installed in another directory and then moved to C:\Program Files.
To uninstall GenOpt, delete the directory in which GenOpt was installed.

10.3  Running GenOpt

10.3.1  Running GenOpt from the file explorer

To run GenOpt from the file explorer, double-click on the file genopt.jar.¹
This will start the graphical user interface. From the graphical user interface,
select File, Start... and select a GenOpt initialization file.

10.3.2  Running GenOpt from the command line

GenOpt can also be run as a console application, either with or without the graphical user interface. To run GenOpt as a console application with the graphical user interface, open a shell, change to the directory that contains genopt.jar and type
¹ Depending on your Java installation, the file extension jar may not be associated with Java. In this situation, please consult the instructions of your operating system for how to associate file extensions with programs.

Copyright (c) 1998-2011


The Regents of the University of California (through Lawrence Berkeley National Laboratory),
subject to receipt of any required approvals from U.S. Department of Energy. All rights reserved.

74

GenOpt
Generic Optimization Program
Version 3.1.0

Lawrence Berkeley National Laboratory


Building Technologies Department
Simulation Research Group

java -jar genopt.jar [initializationFile]

where [initializationFile] is an optional argument that can be replaced with the GenOpt initialization file (see example below). To start GenOpt without the graphical user interface, type

java -classpath genopt.jar genopt.GenOpt [initializationFile]


Figure 10.1 : Output of GenOpt on Mac OS X for the example file in the
directory example/quad/GPSHookeJeeves.
For instance, to run the example file provided with GenOpt that minimizes a quadratic function using the Hooke-Jeeves algorithm, type on Mac OS X

java -jar genopt.jar example/quad/GPSHookeJeeves/optMacOSX.ini

on Linux

java -jar genopt.jar example/quad/GPSHookeJeeves/optLinux.ini

and on Windows

java -jar genopt.jar example\quad\GPSHookeJeeves\optWinXP.ini

This should produce the window shown in Fig. 10.1.


11  Setting Up an Optimization Problem

We will now discuss how to set up an optimization problem.


First, define a cost function. The cost function is the function that needs
to be minimized. It must be evaluated by an external simulation program
that satisfies the requirements listed on page 70. To maximize a cost function,
change the sign of the cost function to turn the maximization problem into a
minimization problem.
Next, specify possible constraints on the independent variables or on dependent variables (dependent variables are values that are computed in the
simulation program). To do so, use the default scheme for box constraints
on the independent variables or add penalty or barrier functions to the cost
function as described in Chapter 8.
Next, make sure that the simulation program writes the cost function value
to the simulation output file. It is important that the cost function value is
written to the output file without truncating any digits (see Section 11.4). For
example, if the cost function is computed by a Fortran program in double
precision, it is recommended to use the E24.16 format in the write statement.
In the simulation output file, the cost function value must be indicated by
a string that stands in front of the cost function value (see page 82).
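The effect of writing full precision can be checked directly; the sketch below uses Java's %24.16E format, the analogue of Fortran's E24.16 edit descriptor, and shows that the value survives the round trip through its text representation:

```java
import java.util.Locale;

// Writes a double with 16 fraction digits (the analogue of Fortran's
// E24.16) and shows that the value survives a round trip through its
// text representation, i.e., no digits are truncated.
public class FullPrecision {

    static String write(double x) {
        return String.format(Locale.ROOT, "%24.16E", x);
    }

    public static void main(String[] args) {
        double x = 1.0 / 3.0;
        String s = write(x);
        System.out.println(s);
        System.out.println(Double.parseDouble(s.trim()) == x);  // true
    }
}
```

The format prints 17 significant decimal digits, which is enough to uniquely identify every double value, so parsing the text recovers the exact number.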
Then, specify the files described in Section 11.1 and, if required, implement
pre- and post-processing, as described in Section 11.3.

11.1  File Specification

This section defines the file syntax for GenOpt. The directory example of
the GenOpt installation contains several examples.
The following notation will be used to explain the syntax:
1. Text that is part of the file is written in fixed width fonts.
2. | stands for possible entries. Only one of the entries that are separated
by | is allowed.
3. [ ] indicates optional values.
4. The file syntax follows the Java convention. Hence,
(a) // indicates a comment on a single line,
(b) /* and */ enclose a comment,
(c) the equal sign, =, assigns values,
(d) a statement has to be terminated by a semi-colon, ;,
(e) curly braces, { }, enclose a whole section of statements, and
(f) the syntax is case sensitive.


The following basic types are used:

String           Any sequence of characters. If the sequence contains a
                 blank character, it has to be enclosed in apostrophes (").
                 If there are apostrophes within quoted text, they must be
                 specified by a leading backslash (i.e., \"). Similarly, a
                 backslash must be preceded by another backslash
                 (i.e., "c:\\go_prg").
StringReference  Any name of a variable that appears in the same section.
Integer          Any integer value.
Double           Any double value (including integer).
Boolean          Either true or false.

The syntax of the GenOpt files is structured into sections of parameters that belong to the same object. The sections have the form

ObjectKeyWord { Object }

where Object can either be another ObjectKeyWord or an assignment of the form

Parameter = Value ;

Some variables can be referenced. References have to be written in the form

Parameter = ObjectKeyWord1.ObjectKeyWord2.Value ;

where ObjectKeyWord1 refers to the root of the object hierarchy as specified in the corresponding file.
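As a hypothetical illustration of these conventions (all keywords below are invented for illustration and do not occur in GenOpt files), a file fragment could look like:

```
// A single-line comment.
/* A comment that
   spans two lines. */
MySection {
   MySubSection {
      MyInteger   = 10 ;
      MyString    = "some text with blanks" ;
      MyReference = MySection.MySubSection.MyInteger ;
   }
}
```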

11.1.1 Initialization File

The initialization file specifies


1. where the files of the current optimization problems are located,
2. which simulation files the user would like to have saved for later inspection,
3. what additional strings have to be passed to the command that starts
the simulation (such as the name of the simulation input file),
4. what number in the simulation output file is a cost function value,


5. whether, and if so how, the cost function value(s) have to be post-processed, and
6. which simulation program is being used.
The sections must be specified in the order shown below. The order of the
keywords in each section is arbitrary, as long as the numbers that follow some
keywords (such as File1) are in increasing order.
The initialization file syntax is
Simulation {
   Files {
      Template {
         File1 = String | StringReference ;
         [ Path1 = String | StringReference ; ]
         [ File2 = String | StringReference ;
         [ Path2 = String | StringReference ; ]
         [ ... ] ]
      }
      Input { // the number of input files must be equal to
              // the number of template files
         File1       = String | StringReference ;
         [ Path1     = String | StringReference ; ]
         [ SavePath1 = String | StringReference ; ]
         [ File2       = String | StringReference ;
         [ Path2     = String | StringReference ; ]
         [ SavePath2 = String | StringReference ; ]
         [ ... ] ]
      }
      Log {
         // The Log section has the same syntax as the Input section.
      }
      Output {
         // The Output section has the same syntax as the Input section.
      }
      Configuration {
         File1 = String | StringReference ;
         [ Path1 = String | StringReference ; ]
      }
   } // end of section Simulation.Files
   [ CallParameter {
      [ Prefix = String | StringReference ; ]
      [ Suffix = String | StringReference ; ]
   } ]
   [ ObjectiveFunctionLocation {
      Name1      = String ;
      Delimiter1 = String | StringReference ; | Function1 = String ;
      [ FirstCharacterAt1 = Integer ; ]
      [ Name2      = String ;
        Delimiter2 = String | StringReference ; | Function2 = String ;
        [ FirstCharacterAt2 = Integer ; ]
      ... ] ]
   } ]
} // end of section Simulation
Optimization {
   Files {
      Command {
         File1 = String | StringReference ;
         [ Path1 = String | StringReference ; ]
      }
   }
} // end of section Optimization


The sections have the following meaning:


Simulation.Files.Template GenOpt writes the value of the independent
variables to the simulation input files. To do so, GenOpt reads the simulation input template files, replaces each occurrence of %variableName%
by the numerical value of the corresponding variable, and the resulting file contents are written as the simulation input files. The string
%variableName% refers to the name of the variable as specified by the
entry Name in the optimization command file on page 86.
The independent variables can be written to several simulation input
files if required. To do so, specify as many Filei and Pathi assignments
as necessary (where i stands for a one-based counter of the file and path
name). Note that there must obviously be the same number of files and
paths in the Input section that follows this section.
If there are multiple simulation input template files, each file will be
written to the simulation input file whose keyword ends with the same
number.
The following rules are imposed:
1. Each variable name specified in the optimization command file must
occur in at least one simulation input template file or in at least one
function that is specified in the section ObjectiveFunctionLocation
below.
2. Multiple occurrences of the same variable name are allowed in the
same file and in the same function specification (as specified by the
keyword Functioni, i = 1, 2, . . .).
3. If the value WriteStepNumber in the section OptimizationSettings of the optimization command file is set to true, then rules 1 and 2 also apply to %stepNumber%. If WriteStepNumber is set to false, then %stepNumber% can occur, but it will be ignored.
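For illustration, suppose a simulation input template file contains the hypothetical line shown first below (the variable name w_win and the input-file keyword are invented for illustration), and the optimization command file defines a parameter with Name = w_win whose current value is 1.25. GenOpt then writes the second line to the corresponding simulation input file:

```
Template file:      WindowWidth = %w_win% ;
Written input file: WindowWidth = 1.25 ;
```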
Simulation.Files.Input The simulation input file is generated by GenOpt
based on the current parameter set and the corresponding simulation input template file, as explained in the previous paragraph. Obviously, the
number of simulation input files must be equal to the number of simulation input template files.
The section Input has an optional keyword, called SavePath. If SavePath is specified, then the corresponding input file will be copied after each simulation into the directory specified by SavePath. The copied file will have the same name, but with the simulation number added as prefix.
Simulation.Files.Log GenOpt scans the simulation log file for error messages. The optimization terminates if any of the strings specified by the variable ErrorMessage in the SimulationError section of the GenOpt configuration file is found. At least one simulation log file must be specified.
The section Log also has the optional keyword SavePath. It has the
same functionality as explained in the previous section.
Simulation.Files.Output GenOpt reads the cost function value from these
files as described in the item Simulation.ObjectiveFunctionLocation
below. The number of cost function values is arbitrary (but at least one
must be specified). The optimization algorithms minimize the first cost
function value. The other values can be used for post-processing of the
simulation output. They will also be reported to the output files and the
online chart.
GenOpt searches for the cost function value as follows:
1. After the first simulation, GenOpt searches for the first cost function value in the first output file as described below in the description of Simulation.ObjectiveFunctionLocation. If the first output file does not contain the first cost function value, then GenOpt
reads the second output file (if present) and so on until the last
output file is read. If GenOpt cannot find the cost function value
in any of the output files or function definitions, it will terminate
with an error. The same procedure is repeated with the second cost
function value, if present, until all cost function values have been
found.
2. In the following iterations, GenOpt will only read the file(s) where it
found the cost function value(s) after the first simulation. The files
that did not contain a cost function value after the first simulation
will not be read anymore.
This section also contains the optional keyword SavePath. If this keyword is specified, then GenOpt copies the output file. This is particularly
useful for doing parametric runs.
Simulation.Files.Configuration The entries in this section specify the
simulation configuration file, which contains information that is related
to the simulation program only, but not related to the optimization problem. The simulation configuration file is explained below.
Simulation.CallParameter Here, a prefix and suffix for the command that
starts the simulation program can be added. With these entries, any
additional information, such as the name of the weather file, can be
passed to the simulation program. To do so, one has to refer to either of
these entries in the argument of the keyword Command (see page 85).
Simulation.ObjectiveFunctionLocation This section specifies where the cost function values can be found in the simulation output files, and possibly how these values have to be post-processed before they are passed to the optimization algorithm.
To search for the cost function value, GenOpt reads the files Simulation.Output.File1,
Simulation.Output.File2, etc. one by one. Each file is parsed starting
at the last line and reading line-by-line towards the first line. GenOpt
assumes that the value that is written after the occurrence of the string
specified by Delimiteri (i = 1, 2, . . .) is the cost function value. Optionally, the entry FirstCharacterAti (i = 1, 2, . . .) can be used. If
FirstCharacterAti is greater than 0, then delimiter Delimiteri must
start at this position, otherwise it will be ignored.
For example, consider an output file that has the format

5,1.2345,11
6,12.345,22

Then, the entries

Delimiter1        = "5," ;
FirstCharacterAt1 = 0 ;

or, equivalently,

Delimiter1 = "5," ;

would cause GenOpt to return 22 as the objective function value, whereas the specification

Delimiter1        = "5," ;
FirstCharacterAt1 = 1 ;

would cause GenOpt to return 1.2345.
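The search just described can be sketched in Java. This is an illustration of the documented behavior, not GenOpt's actual implementation; the class name and the token-splitting rule are assumptions:

```java
import java.util.List;

public class DelimiterSearch {
    // Scans the output lines from the last line towards the first and
    // returns the number that follows the delimiter. If firstCharacterAt
    // is greater than 0, the delimiter must start at that 1-based column;
    // otherwise any occurrence of the delimiter counts.
    public static double find(List<String> lines, String delim, int firstCharacterAt) {
        for (int i = lines.size() - 1; i >= 0; i--) {
            String line = lines.get(i);
            int pos;
            if (firstCharacterAt > 0) {
                pos = line.startsWith(delim, firstCharacterAt - 1) ? firstCharacterAt - 1 : -1;
            } else {
                pos = line.indexOf(delim);
            }
            if (pos >= 0) {
                // Keep only the numerical token that follows the delimiter.
                String rest = line.substring(pos + delim.length()).trim();
                return Double.parseDouble(rest.split("[,\\s]")[0]);
            }
        }
        throw new IllegalArgumentException("Delimiter not found: " + delim);
    }

    public static void main(String[] args) {
        List<String> out = List.of("5,1.2345,11", "6,12.345,22");
        System.out.println(find(out, "5,", 0)); // 22.0   (found in the last line)
        System.out.println(find(out, "5,", 1)); // 1.2345 (found in the first line)
    }
}
```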


As an alternative to the entry Delimiteri, an entry Functioni can be specified to define how the cost function values should be post-processed. If
Functioni is specified, then FirstCharacterAti is ignored. See page 92
for an example that uses Functioni.
For convenience, the section ObjectiveFunctionLocation can optionally be specified in the initialization file, but its specification is required
in the configuration file. If this section is specified in both files, then the
specification in the initialization file will be used.
Specifying the section ObjectiveFunctionLocation in the initialization
file is of interest if a simulation program is used for different problems
that require different values of this section. Then, the same (simulation
program specific) configuration file can be used for all runs and the different settings can be specified in the (project dependent) initialization
file rather than in the configuration file.
Optimization.Files.Command This section specifies where the optimization command file is located. This file contains the mathematical information of the optimization. See page 86 for a description of this file.

11.1.2 Configuration File

The configuration file contains information related only to the simulation program used and not to the optimization problem. Hence, it has to be written only once for each simulation program and operating system. We recommend putting this file in the directory cfg so that it can be used for different optimization projects. Some configuration files are provided with the GenOpt installation.


The syntax is specified by

// Error messages of the simulation program.
SimulationError {
   ErrorMessage = String ;
   [ ErrorMessage = String ;
   [ ... ] ]
}
// Number format for writing simulation input files.
IO {
   NumberFormat = Float | Double ;
}
// Specifying how to start the simulation program.
SimulationStart {
   Command = String ;
   WriteInputFileExtension = Boolean ;
}
// Specifying the location of the
// cost function value in the simulation output file.
ObjectiveFunctionLocation {
   Name1      = String ;
   Delimiter1 = String | StringReference ; | Function1 = String ;
   [ Name2      = String ;
     Delimiter2 = String | StringReference ; | Function2 = String ;
   [ ... ] ]
}
The entries have the following meaning:


SimulationError The error messages that might be written by the simulation program must be assigned to the keyword ErrorMessage so that
GenOpt can check whether the simulation has completed successfully.
At least one entry for ErrorMessage must be given.
IO The keyword NumberFormat specifies in what format the independent parameters will be written to the simulation input file. The setting Double
is recommended, unless the simulation program cannot read this number
format.
SimulationStart The keyword Command specifies what string must be used
to start the simulation program. It is important that this command waits
until the simulation terminates (see the directory cfg for examples). The
value of the variable Command is treated in a special way: Any value of
the optimization initialization file can be automatically copied into the
value of Command. To do so, surround the reference to the corresponding keyword with percent signs. For example, a reference to the keyword Prefix of the initialization file looks like

%Simulation.CallParameter.Prefix%

If WriteInputFileExtension is set to false, then the value of the keyword Simulation.Files.Input.Filei (where i stands for 1, 2, 3, ...) is copied into Command with its file extension removed.
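As a hypothetical sketch (the program name mysim and its argument order are invented for illustration and are not taken from an actual configuration file; see the files in the directory cfg for working examples), a SimulationStart section could look like:

```
SimulationStart {
   Command = "mysim %Simulation.CallParameter.Prefix% %Simulation.Files.Input.File1%" ;
   WriteInputFileExtension = false ;
}
```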
ObjectiveFunctionLocation Note that this section can also be specified in the initialization file. The section in the configuration file is ignored if it is also specified in the initialization file. See page 82 for a description.

11.1.3 Command File

The command file specifies optimization-related settings such as the independent parameters, the stopping criteria and the optimization algorithm being
used. The sequence of the entries in all sections of the command file is arbitrary.
There are two different types of independent parameters, continuous parameters and discrete parameters. Continuous parameters can take on any
values, possibly constrained by a minimum and maximum value. Discrete parameters can take on only user-specified discrete values, to be specified in this
file.
Some algorithms require all parameters to be continuous, or all parameters to be discrete, or allow both continuous and discrete parameters. Please refer to the algorithm descriptions on pages 16-64.

a) Specification of a Continuous Parameter

The structure for a continuous parameter is


// Settings for a continuous parameter
Parameter {
   Name = String ;
   Ini  = Double ;
   Step = Double ;
   [ Min  = Double | SMALL ; ]
   [ Max  = Double | BIG ; ]
   [ Type = CONTINUOUS ; ]
}

The entries are:


Name The name of the independent variable. GenOpt searches the simulation input template files for this string surrounded by percent signs

and replaces each occurrence by its numerical value before it writes the
simulation input files.
Ini Initial value of the parameter.
Step Step size of the parameter. How this variable is used depends on
the optimization algorithm being used. See the optimization algorithm
descriptions for details.
Min Lower bound of the parameter. If the keyword is omitted or set to
SMALL, the parameter has no lower bound.
Max Upper bound of the parameter. If the keyword is omitted or set to BIG,
the parameter has no upper bound.
Type Optional keyword that specifies that this parameter is continuous. By
default, if neither Type nor Values (see below) are specified, then the
parameter is considered to be continuous and the Parameter section
must have the above format.
b) Specification of a Discrete Parameter
For discrete parameters you need to specify the set of admissible values.
Alternatively, if a parameter is spaced either linearly or logarithmically, specify
the minimum and maximum value of the parameter and the number of intervals.
First, we list the entry for the case of specifying the set of admissible values:
// Settings for a discrete parameter
Parameter {
   Name   = String ;
   Ini    = Integer ;
   Values = String ;
   [ Type = SET ; ]
}

The entries are:


Name As for the continuous parameter above.
Ini 1-based index of the initial value. For example, if Values specifies three
admissible values, then Ini can be either 1, 2, or 3.
Values Set of admissible values. The entry must be of the form
Values = "value1, value2, value3";
i.e., the values are separated by a comma, and the list is enclosed in
apostrophes ("). For value1, value2, etc., numbers and strings are
allowed.
If all entries of Values are numbers, then the result reports contain
the actual values of this entry. Otherwise, the result reports contain
the index of this value, i.e., 1 corresponds to value1, 2 corresponds to
value2, etc.
Type Optional keyword that specifies that this parameter is discrete. By default, if the entry Values is specified, a parameter is considered to be discrete, and the Parameter section must have the above format.

To obtain linear or logarithmic spacing between a minimum and a maximum value, the Parameter section can be specified as

// Settings for a discrete parameter, linearly or logarithmically spaced
Parameter {
   Name = String ;
   Ini  = Integer ;
   Type = SET ;
   Min  = Double ;
   Max  = Double ;
   Step = Integer ;
}

Name As for the continuous parameter above.
Ini 1-based index of the initial value. For example, if Step is set to +2 or to -2, then Ini can be set to any integer between 1 and 3.
Type This variable must be equal to SET.
Min Minimum value of the spacing.
Max Maximum value of the spacing.
Step Number of intervals. If Step < 0, then the spacing is logarithmic,
otherwise it is linear. Set Step = 0 to keep the parameter always fixed
on its minimum value.
The linear or logarithmic spacing is computed using (7.1) on page 62.
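Equation (7.1) is not repeated here; the following Java sketch shows how such spacing is typically computed. It is an assumption consistent with the description above, not a transcription of (7.1); the class name is invented for illustration:

```java
public class Spacing {
    // Returns the admissible value with 1-based index ini for a discrete
    // parameter spaced between min and max with |step| intervals;
    // step < 0 selects logarithmic spacing, step > 0 linear spacing,
    // and step == 0 keeps the parameter fixed at its minimum value.
    public static double value(double min, double max, int step, int ini) {
        if (step == 0) {
            return min;
        }
        double t = (double) (ini - 1) / Math.abs(step);
        if (step > 0) {
            return min + t * (max - min);    // linear spacing
        }
        return min * Math.pow(max / min, t); // logarithmic spacing
    }

    public static void main(String[] args) {
        // Step = 2 (or -2) yields three admissible values, ini = 1, 2, 3:
        System.out.println(value(1.0, 3.0, 2, 2));    // 2.0 (linear midpoint)
        System.out.println(value(1.0, 100.0, -2, 2)); // approx. 10.0 (logarithmic midpoint)
    }
}
```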

c) Specification of Input Function Objects

The specification of input function objects is optional. If any input function object is specified, then its name must appear either in another input
function object, in a simulation input template file, or in an output function
object. Otherwise, GenOpt terminates with an error message. See Section 11.3
on page 92 for an explanation of input and output function objects.
The syntax for input function objects is
// Input function object entry
Function {
   Name     = String ;
   Function = String ;
}
The entries are:

Name A unique name that is not used for any other input function object or for any independent parameter.
Function A function object (see Section 11.3 on page 92). The string must be enclosed by apostrophes (").
d) Structure of the Command File
Using the above structures of the Parameter section, the command file has the structure
// Settings of the independent parameters
Vary{
// Parameter entry
List any of the Parameter sections as described
in the Sections 11.1.3.a) and 11.1.3.b).
// Input function object
List any of the Function sections as described
in the Section 11.1.3.c).
}
// General settings for the optimization process
OptimizationSettings{
   MaxIte          = Integer;
   WriteStepNumber = Boolean;
   [ MaxEqualResults  = Integer; ]
   [ UnitsOfExecution = Integer; ]
}
// Specification of the optimization algorithm
Algorithm{
Main = String;
... // any other entries that are required
// by the optimization algorithm
}
The different sections are:
Vary This section contains the definitions of the independent parameters and the input function objects. See Sections 11.1.3.a), 11.1.3.b), and 11.1.3.c) for possible entries.
OptimizationSettings This section specifies general settings of the optimization. MaxIte is the maximum number of iterations. After MaxIte
main iterations, GenOpt terminates with an error message.
WriteStepNumber specifies whether the current step of the optimization
has to be written to the simulation input file or to a function object.


The step number can be used to calculate a penalty or barrier function (see Section 8.2 on page 66).
The optional parameter MaxEqualResults specifies how many times the
cost function value can be equal to a value that has previously been
obtained before GenOpt terminates. This setting is used to terminate
GenOpt if the cost function value is constant for several iterates (see
Section 11.4). The default value of MaxEqualResults is 5.
The optional parameter UnitsOfExecution specifies the maximum number of simulations that may run in parallel. If this parameter is not
specified or set to zero, then its value is set to the number of processors
of the computer that runs GenOpt. In general, this parameter need not
be specified.
Algorithm The setting of Main specifies which algorithm is invoked for doing
the optimization. Its value has to be equal to the class name that contains the algorithm. Note that additional parameters might be required
depending on the algorithm used (see Section 5 for the implemented
algorithms).

11.1.4 Log File

GenOpt writes a log file to the directory that contains the initialization
file. The name of the log file is GenOpt.log.
The GenOpt log file contains general information about the optimization
process. It also contains warnings and errors that occur during the optimization.

11.1.5 Output Files

In addition to GenOpt.log, GenOpt writes two output files to the directory where the optimization command file is located. (The location of the optimization command file is defined by the variable Optimization.Files.Command.Path1 in the optimization initialization file.)
The iterations are written to the output files OutputListingMain.txt and
OutputListingAll.txt. The file OutputListingMain.txt contains only the
main iteration steps, whereas OutputListingAll.txt contains all iteration
steps.
Each time the method genopt.algorithm.Optimizer.report() is called
from the optimization algorithm, the current trial is reported in either one of
the files.

11.2 Resolving Directory Names for Parallel Computing

To allow simulations to run in parallel, GenOpt creates a temporary directory for each simulation. This avoids different simulations writing to the same output or log files simultaneously. The simulations will be done in subdirectories of the directory that contains the optimization initialization file.
To explain which directories are created by GenOpt, suppose that GenOpt's optimization initialization file is /data/optMacOSX.ini. (For Windows, simply replace /data with C:\data and replace all forward slashes with backslashes.) Suppose that GenOpt's initialization file states
that the simulation input file is /data/input/in.txt, the simulation log file
is /data/log.txt, and the simulation output file is /data/output/out.txt.
Thus, in this example, the simulation will read its input from input/in.txt,
it will write its log messages to log.txt, and it will write its output to
output/out.txt, where the directories input and output are subdirectories
of the directory in which the simulation was started. Then, for the first simulation, GenOpt will proceed as follows:
1. It will create the simulation input file /data/tmp-genopt-run-1/input/in.txt
(including the temporary directory tmp-genopt-run-1/input).
2. It will change the working directory for the simulation to the directory
/data/tmp-genopt-run-1. Hence, if the simulation program writes to
the current directory, then it will write to /data/tmp-genopt-run-1.
3. GenOpt will read /data/tmp-genopt-run-1/log.txt to retrieve the
simulation log messages.
4. If no error has been found in the log file, then GenOpt will read the
simulation output file /data/tmp-genopt-run-1/output/out.txt.
5. GenOpt will delete the directory /data/tmp-genopt-run-1 and all its
subdirectories.
For the second simulation, the same steps will be repeated, but the temporary
directory will be /data/tmp-genopt-run-2.
To resolve the directory names, GenOpt uses the following rules. The rules
are applied to the directories of the simulation input files, simulation log files
and simulation output files. They are also applied to the value of the keyword
Command in the section SimulationStart of the optimization configuration file.
1. A period (.) is replaced by the path name of the optimization initialization file.
2. If the keywords Path1, Path2 etc. are not specified in the optimization
initialization file, then they will be set to the directory of the optimization
initialization file.


3. For the simulation input, the simulation log and the simulation output
files, the string tmp-genopt-run-#, where # is the number of the simulation, will be inserted between the name of the optimization initialization
file and the subdirectory name of the simulation input, log or output file.
4. When resolving the file names, a path separator (\ on Windows or /
on Mac OS X and Linux) will be appended if needed.
These rules work for situations in which the simulation program uses the
current directory, or subdirectories of the current directory, to read input and
write output, provided that the optimization configuration file is also in the
directory that contains the simulation input files.
For the declaration of the Command line in the GenOpt configuration file, we
recommend using the full directory name. For example, we recommend using
Command = "./simulate.sh
[linebreak added]
%Simulation.Files.Log.Path1%/%Simulation.Files.Log.File1%";
instead of
Command = "./simulate.sh ./%Simulation.Files.Log.File1%";
The first version ensures that the argument that is passed to simulate.sh
is the simulation log file in the working directory that is used by the current
simulation. However, in the second version, because of rule (1) the simulation
log file will be in the directory of GenOpt's configuration file, and thus different
simulations may write to the same simulation log file simultaneously, causing
unpredictable behavior.

11.3 Pre-Processing and Post-Processing

Some simulation programs do not have the capability to pre-process the independent variables, or to post-process values computed during the simulation. For such situations, GenOpt's input function objects and output function objects can be used.
a) Function Objects

Function objects are formulas whose arguments can be the independent variables, the keyword stepNumber, and, for post-processing, the result of the simulation. The following functions are implemented:

Function                Returns
add(x0, x1)             x0 + x1
add(x0, x1, x2)         x0 + x1 + x2
subtract(x0, x1)        x0 - x1
multiply(x0, x1)        x0 * x1
multiply(x0, x1, x2)    x0 * x1 * x2
divide(x0, x1)          x0 / x1
log10(x0)               log10(x0)

Furthermore, all functions that are defined in the class java.lang.StrictMath and whose arguments and return type are of type double can be accessed by typing their name (without the package and class name).
In addition, users can implement any other static method with arguments and return type double by adding the method to genopt/algorithm/util/math/Fun.java. The method must have the syntax

public static double methodName(double x0, double x1) {
    double r;
    // do any computations
    return r;
}

The number of arguments is arbitrary.
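As an illustration, a user-added method might look as follows. The method name mean and its formula are hypothetical examples, not functions that ship with GenOpt:

```java
// Sketch of a user-added method in the style required by
// genopt/algorithm/util/math/Fun.java. The name "mean" and its formula
// are made up for illustration; once compiled into genopt.jar, it could
// be called as "mean( %x1%, %x2% )" in a Function declaration.
public final class Fun {
    public static double mean(double x0, double x1) {
        double r = (x0 + x1) / 2.0;
        return r;
    }
}
```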
After adding new methods, compile the file and recreate the file genopt.jar. This can be done by changing, in a console window, to the directory where the file Makefile is located, and then typing

make jar

provided that your computer has a make system and a Java compiler. If your computer does not have a make system, you may be able to enter the following commands from a console window:

cd src
javac -Xlint:unchecked LISTOFJAVAFILES
jar cfm ../genopt.jar genopt/Manifest.txt LISTOFJAVAFILES genopt/img/* ../legal.html
cd ..
jar -i genopt.jar

where LISTOFJAVAFILES needs to be replaced with all Java files that are in the directory src and its subdirectories.
Next, we present an example for pre-processing and afterwards an example
for post-processing.

b) Pre-Processing
Example 11.3.1 Suppose we want to find the optimal window width and
height. Let w and h denote the window width and height, respectively. Suppose
we want the window height to be 1/2 times the window width, and the window
width must be between 1 and 2 meters. Then, we could specify in the command
file the section
Parameter{
   Name = w;
   Ini  = 1.5;
   Step = 0.05;
   Min  = 1;
   Max  = 2;
   Type = CONTINUOUS;
}
Function{
   Name     = h;
   Function = "multiply( %w%, 0.5 )";
}
Then, in the simulation input template files, GenOpt will replace all occurrences of %w% by the window width and all occurrences of %h% by 1/2 times the numerical value of %w%.
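The effect of this substitution can be pictured as a plain string replacement. The following sketch only illustrates the idea; it is not GenOpt's actual implementation, and the class name TemplateSketch is made up:

```java
// Sketch of the template substitution described above: %w% is replaced
// by the parameter value, and %h% by the value of the input function
// multiply( %w%, 0.5 ). Illustrative only; GenOpt's code differs.
public final class TemplateSketch {
    public static String substitute(String template, double w) {
        double h = w * 0.5; // input function object for h
        return template
            .replace("%w%", Double.toString(w))
            .replace("%h%", Double.toString(h));
    }
}
```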
GenOpt does not report values that are computed by input functions. To
report such values, a user needs to specify them in the section ObjectiveFunctionLocation,
as shown in Example 11.3.2 below.
c) Post-Processing

Example 11.3.2 Suppose we want to minimize the sum of annual heating and cooling energy consumption, which we will call total energy. Some simulation programs cannot add different output variables. For example, EnergyPlus [CLW+01] writes the heating and cooling energy consumption separately to the output file. In order to optimize the total energy, the simulation output must be post-processed.
To post-process the simulation results in GenOpt, we can proceed as follows:
Suppose the cost function delimiter (see Section 11.1.1) for the heating and
cooling energy are, respectively, Eheat= and Ecool=. In addition, suppose we
want to report the value of the variable h that has been computed by the input
function object in Example 11.3.1.
Then, in the optimization initialization file (see Section 11.1.1) we can
specify the section
ObjectiveFunctionLocation{
   Name1 = E_tot;  Function1  = "add( %E_heat%, %E_cool% )";
   Name2 = E_heat; Delimiter2 = "Eheat=";
   Name3 = E_cool; Delimiter3 = "Ecool=";
   Name4 = height; Function4  = "%h%";
}

This specification causes GenOpt to (i) substitute the value of h in Function4, (ii) read from the simulation output file(s) the numbers that occur after the strings Eheat= and Ecool=, (iii) substitute these numbers into the function add( %E_heat%, %E_cool% ), (iv) evaluate the functions Function1 and Function4, and (v) minimize the sum of heating and cooling energy.
As arguments of the functions defined above, we can use any name of an independent variable, of an input function object, or the keyword %stepNumber%.
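The delimiter-based reading of step (ii) can be sketched as follows. The class and method names are hypothetical helpers, not part of GenOpt's API; they only illustrate how a number following a delimiter string is extracted:

```java
// Sketch of reading the number that occurs after a cost function
// delimiter such as "Eheat=" in a simulation output file.
// Hypothetical helper, not GenOpt's implementation.
public final class DelimiterSketch {
    public static double readAfter(String output, String delimiter) {
        int i = output.indexOf(delimiter) + delimiter.length();
        int j = i;
        // scan over the characters that can form a floating point number
        while (j < output.length()
                && "+-.0123456789Ee".indexOf(output.charAt(j)) >= 0) {
            j++;
        }
        return Double.parseDouble(output.substring(i, j));
    }
}
```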

11.4  Truncation of Digits of the Cost Function Value

Figure 11.1: Function (11.1) with machine precision and with truncated digits. The upper line shows the cost function value with machine precision, and the lower line shows the cost function value with only two digits beyond the decimal point.
For x* ∈ Rⁿ and f: Rⁿ → R, assume there exists a scalar δ > 0 such that f(x) = f(x*) for all x ∈ B(x*, δ), where B(x*, δ) ≜ {x ∈ Rⁿ | ‖x − x*‖ < δ}. Obviously, in B(x*, δ), an optimization algorithm can fail because iterates in B(x*, δ) contain no information about descent directions outside of B(x*, δ). Furthermore, in the absence of convexity of f(·), the optimality of x* cannot be ascertained in general.
Such situations can be generated if the simulation program writes the cost function value to the output file with only a few digits. Fig. 11.1 illustrates that truncating digits can cause problems particularly in domains of f(·) where the slope of f(·) is flat. In Fig. 11.1, we show the function

    f(x) ≜ 0.1 x − 0.1 x² + 0.04 x⁴.    (11.1)


The upper line is the exact value of f(·), and the lower line is the rounded value of f(·) such that it has only two digits beyond the decimal point. If the optimization algorithm makes changes in x of the size of 0.2, then it may fail for 0.25 < x < 1, which is far from the minimum. In this interval, no useful information about the descent of f(·) can be obtained. Thus, the cost function must be written to the output file without truncating digits.
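The plateau effect of (11.1) can be reproduced in a few lines. The rounding below mimics a simulation program that writes only two digits beyond the decimal point; this is an illustration, not GenOpt code:

```java
// Function (11.1) with full precision, and a version that mimics an
// output file carrying only two digits beyond the decimal point.
public final class TruncationSketch {
    public static double f(double x) {
        return 0.1 * x - 0.1 * x * x + 0.04 * Math.pow(x, 4);
    }
    public static double fTruncated(double x) {
        return Math.round(f(x) * 100.0) / 100.0;
    }
}
```

For example, fTruncated returns the same value for x = 0.55 and x = 0.8 although f differs there, so an optimizer comparing truncated values sees a flat function.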
To detect such cases, the optimization algorithm can cause GenOpt to check whether a cost function value is equal to a previous cost function value. If the same cost function value is obtained more than a user-specified number of times, then GenOpt terminates with an error message. The maximum number of equal cost function values is specified by the parameter MaxEqualResults in the command file (see page 86). GenOpt writes a message to the user interface and to the log file if a cost function value is equal to a previous function value.

12  Conclusions

In system optimization it is not possible to apply a general optimization algorithm that works efficiently on all problems. What algorithm should be used depends on the properties of the cost function, such as the number of independent parameters, the continuity of the cost function and its derivatives, and the existence of local minima. Thus, a variety of optimization algorithms is needed. To address this need, GenOpt has a library with different optimization algorithms, as well as an optimization algorithm interface that users can use to implement their own optimization algorithm if desired.
The fact that analytical properties of the cost function are unavailable for the class of optimization problems that GenOpt has been developed for makes it possible to separate optimization and function evaluation. Therefore, GenOpt has a simulation program interface that allows coupling any program that exchanges input and output using text files. Hence, users are not restricted to using a special program for evaluating the cost function. Rather, they can use the simulation program they are already using for their system design and development. Thus, the system can be optimized with little additional effort.
This open environment not only allows coupling any simulation program and implementing special purpose algorithms, but it also allows sharing algorithms among users. This makes it possible to extend the algorithm library and thus extend GenOpt's applicability.

13  Acknowledgments

The development of GenOpt was sponsored by grants from the Swiss Academy
of Engineering Sciences (SATW), the Swiss National Energy Fund (NEFF) and
the Swiss National Science Foundation (SNF) and is supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Building
Technology Programs of the U.S. Department of Energy, under Contract No.
DE-AC02-05CH11231. I would like to thank these institutions for their generous support.

14  Legal

14.1  Copyright Notice
GenOpt Copyright (c) 1998-2011, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of
any required approvals from the U.S. Dept. of Energy). All rights reserved.
If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Technology Transfer Department at TTD@lbl.gov.
NOTICE. This software was developed under partial funding from the
U.S. Department of Energy. As such, the U.S. Government has been granted
for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable,
worldwide license in the Software to reproduce, prepare derivative works, and
perform publicly and display publicly. Beginning five (5) years after the date
permission to assert copyright is obtained from the U.S. Department of Energy,
and subject to any subsequent five (5) year renewals, the U.S. Government
is granted for itself and others acting on its behalf a paid-up, nonexclusive,
irrevocable, worldwide license in the Software to reproduce, prepare derivative
works, distribute copies to the public, perform publicly and display publicly,
and to permit others to do so.

14.2  License agreement

GenOpt Copyright (c) 1998-2011, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the University of California, Lawrence Berkeley National Laboratory, U.S. Dept. of Energy nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR


PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to Lawrence Berkeley National Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such enhancements or derivative works thereof, in binary and source code form.


A  Benchmark Tests

This section lists the settings used in the benchmark tests on page 52. The settings in OptimizationSettings and Algorithm are the same for all runs except for Accuracy, which is listed in the result chart on page 53. The common settings were:

OptimizationSettings{
   MaxIte          = 1500;
   WriteStepNumber = false;
}
Algorithm{
   Main                    = NelderMeadONeill;
   Accuracy                = see page 53;
   StepSizeFactor          = 0.001;
   BlockRestartCheck       = 5;
   ModifyStoppingCriterion = see page 53;
}

The benchmark functions and the Parameter settings in the Vary section are shown below.

A.1  Rosenbrock

The Rosenbrock function that is shown in Fig. A.1 is defined as

    f(x) ≜ 100 (x₂ − x₁²)² + (1 − x₁)²,    (A.1)

where x ∈ R². The minimum is at x* = (1, 1), with f(x*) = 0.


The section Vary of the optimization command file was set to

Vary{
   Parameter{
      Name = x1;   Min = SMALL;
      Ini  = -1.2; Max = BIG;
      Step = 1;
   }
   Parameter{
      Name = x2;   Min = SMALL;
      Ini  = 1;    Max = BIG;
      Step = 1;
   }
}
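Coded directly, function (A.1) reads as below; the class is just a convenience wrapper for checking values by hand:

```java
// Rosenbrock function (A.1): f(x) = 100 (x2 - x1^2)^2 + (1 - x1)^2.
public final class Rosenbrock {
    public static double f(double x1, double x2) {
        double a = x2 - x1 * x1;
        double b = 1.0 - x1;
        return 100.0 * a * a + b * b;
    }
}
```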


Figure A.1: Rosenbrock function.

A.2  Function 2D1

This function has only one minimum point. The function is defined as

    f(x) ≜ ∑_{i=1..3} fⁱ(x),    (A.2)

with

    b ≜ (1, 2)ᵀ,    Q ≜ [ 10  6 ;  6  8 ],    (A.3)

    f¹(x) ≜ 100 arctan( (2 − x₁)² + (2 − x₂)² ),
    f²(x) ≜ −50 arctan( (0.5 + x₁)² + (0.5 + x₂)² ),    (A.4)
    f³(x) ≜ ⟨b, x⟩ + (1/2) ⟨x, Q x⟩,    (A.5)

where x ∈ R². The function has a minimum at x* = (1.855340, 1.868832), with f(x*) = −12.681271. It has two regions where the gradient is very small (see Fig. A.2).

Figure A.2: Contour plot of ∂f(x)/∂x₁ = 0 and ∂f(x)/∂x₂ = 0, where f(x) is as in (A.2).
The section Vary of the optimization command file is

Vary{
   Parameter{
      Name = x0; Min = SMALL;
      Ini  = -3; Max = BIG;
      Step = 0.1;
   }
   Parameter{
      Name = x1; Min = SMALL;
      Ini  = -3; Max = BIG;
      Step = 0.1;
   }
}
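Transcribed directly, the function reads as below; evaluating it at the minimum point quoted above reproduces f(x*) ≈ −12.6813, which provides a check of the definitions (the class name is ours):

```java
// Function (A.2): f = f1 + f2 + f3 with b = (1, 2)^T and
// Q = [10 6; 6 8]. Transcription for hand checking only.
public final class Function2D1 {
    public static double f(double x1, double x2) {
        double f1 = 100.0 * Math.atan((2.0 - x1) * (2.0 - x1)
                                    + (2.0 - x2) * (2.0 - x2));
        double f2 = -50.0 * Math.atan((0.5 + x1) * (0.5 + x1)
                                    + (0.5 + x2) * (0.5 + x2));
        double f3 = (x1 + 2.0 * x2)                       // <b, x>
                  + 0.5 * (10.0 * x1 * x1                 // (1/2) <x, Q x>
                         + 12.0 * x1 * x2
                         + 8.0 * x2 * x2);
        return f1 + f2 + f3;
    }
}
```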

A.3  Function Quad

The function Quad is defined as

    f(x) ≜ ⟨b, x⟩ + (1/2) ⟨x, M x⟩,    (A.6)

where b, x ∈ R¹⁰, M ∈ R¹⁰ˣ¹⁰, and

    b ≜ (10, 10, ..., 10)ᵀ.    (A.7)

This function is used in the benchmark test with two different positive definite matrices M. In one test case, M is the identity matrix I, and in the other test case M is a matrix, called Q, with a large range of eigenvalues. The matrix Q has elements


  579.7818  227.6855   49.2126   60.3045  152.4101  207.2424    8.0917   33.6562  204.1312    3.7129
  227.6855  236.2505   16.7689   40.3592  179.8471   80.0880   64.8326   15.2262   92.2572   40.7367
   49.2126   16.7689   84.1037   71.0547   20.4327    5.1911   58.7067   36.1088   62.7296    7.3676
   60.3045   40.3592   71.0547  170.3128  140.0148    8.9436   26.7365  125.8567   62.3607   21.9523
  152.4101  179.8471   20.4327  140.0148  301.2494   45.5550   31.3547   95.8025  164.7464   40.1319
  207.2424   80.0880    5.1911    8.9436   45.5550  178.5194   22.9953   39.6349   88.1826   29.1089
    8.0917   64.8326   58.7067   26.7365   31.3547   22.9953  124.4208   43.5141   75.5865   32.2344
   33.6562   15.2262   36.1088  125.8567   95.8025   39.6349   43.5141  261.7592   86.8136   22.9873
  204.1312   92.2572   62.7296   62.3607  164.7464   88.1826   75.5865   86.8136  265.3525    1.6500
    3.7129   40.7367    7.3676   21.9523   40.1319   29.1089   32.2344   22.9873    1.6500   49.2499
The eigenvalues of Q are in the range of 1 to 1000.


The functions have minimum points x* at

  Matrix M:     I             Q
  x*_0         10      2235.1810
  x*_1         10      1102.4510
  x*_2         10       790.6100
  x*_3         10       605.2480
  x*_4         10        28.8760
  x*_5         10       228.7640
  x*_6         10       271.8830
  x*_7         10      3312.3890
  x*_8         10      2846.7870
  x*_9         10       718.1490
  f(x*)       500             0

Both test functions have been optimized with the same parameter settings.
The settings for the parameters x0 to x9 are all the same, and given by
Vary{
   Parameter{
      Name = x0; Min = SMALL;
      Ini  = 0;  Max = BIG;
      Step = 1;
   }
}


Bibliography

[AD03] Charles Audet and John E. Dennis, Jr. Analysis of generalized pattern searches. SIAM Journal on Optimization, 13(3):889–903, 2003.

[BP66] M. Bell and M. C. Pike. Remark on algorithm 178. Comm. ACM, 9:685–686, September 1966.

[CD01] A. Carlisle and G. Dozier. An off-the-shelf PSO. In Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, IN, 2001.

[CK02] Maurice Clerc and James Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58–73, February 2002.

[CLW+01] Drury B. Crawley, Linda K. Lawrie, Frederick C. Winkelmann, Walter F. Buhl, Y. Joe Huang, Curtis O. Pedersen, Richard K. Strand, Richard J. Liesen, Daniel E. Fisher, Michael J. Witte, and Jason Glazer. EnergyPlus: creating a new-generation building energy simulation program. Energy and Buildings, 33(4):443–457, 2001.

[DV68] R. De Vogelaere. Remark on algorithm 178. Comm. ACM, 11:498, July 1968.

[EK95] R. C. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. In Sixth International Symposium on Micro Machine and Human Science, pages 39–43, Nagoya, Japan, October 1995. IEEE.

[ES01] R. C. Eberhart and Y. Shi. Particle swarm optimization: Developments, applications and resources. In Proceedings of the 2001 Congress on Evolutionary Computation, volume 1, pages 84–86, Seoul, South Korea, 2001. IEEE.

[HJ61] R. Hooke and T. A. Jeeves. "Direct search" solution of numerical and statistical problems. Journal of the Association for Computing Machinery, 8(2):212–229, 1961.

[KDB76] S. A. Klein, J. A. Duffie, and W. A. Beckman. TRNSYS - A transient simulation program. ASHRAE Transactions, 82(1):623–633, 1976.

[KE95] J. Kennedy and R. C. Eberhart. Particle swarm optimization. In IEEE International Conference on Neural Networks, volume IV, pages 1942–1948, Perth, Australia, November 1995.

[KE97] J. Kennedy and R. C. Eberhart. A discrete binary version of the particle swarm algorithm. In Proc. of Systems, Man, and Cybernetics, volume 5, pages 4104–4108. IEEE, October 1997.

[Kel99a] C. T. Kelley. Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM Journal on Optimization, 10(1):43–55, 1999.

[Kel99b] C. T. Kelley. Iterative Methods for Optimization. Frontiers in Applied Mathematics. SIAM, 1999.

[KES01] James Kennedy, Russell C. Eberhart, and Yuhui Shi. Swarm Intelligence. Morgan Kaufmann Publishers, 2001.

[KLT03] Tamara G. Kolda, Robert Michael Lewis, and Virginia Torczon. Optimization by direct search: New perspectives on some classical and modern methods. SIAM Review, 45(3):385–482, 2003.

[KM02] J. Kennedy and R. Mendes. Population structure and particle swarm performance. In David B. Fogel, Mohamed A. El-Sharkawi, Xin Yao, Garry Greenwood, Hitoshi Iba, Paul Marrow, and Mark Shackleton, editors, Proceedings of the 2002 Congress on Evolutionary Computation CEC2002, pages 1671–1676. IEEE, 2002.

[LPV02a] E. C. Laskari, K. E. Parsopoulos, and M. N. Vrahatis. Particle swarm optimization for integer programming. In Proceedings of the 2002 Congress on Evolutionary Computation, volume 2, pages 1582–1587, Honolulu, HI, May 2002. IEEE.

[LPV02b] E. C. Laskari, K. E. Parsopoulos, and M. N. Vrahatis. Particle swarm optimization for minimax problems. In Proceedings of the 2002 Congress on Evolutionary Computation, volume 2, pages 1576–1581, Honolulu, HI, May 2002. IEEE.

[LRWW98] Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, and Paul E. Wright. Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1):112–147, 1998.

[McK98] K. I. M. McKinnon. Convergence of the Nelder-Mead simplex method to a nonstationary point. SIAM Journal on Optimization, 9(1):148–158, 1998.

[NM65] J. A. Nelder and R. Mead. A simplex method for function minimization. The Computer Journal, 7(4):308–313, January 1965.

[ON71] R. O'Neill. Algorithm AS 47: Function minimization using a simplex procedure. Appl. Stat., 20:338–345, 1971.

[PFTV93] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing, chapter 20. Cambridge University Press, 1993.

[Pol97] Elijah Polak. Optimization, Algorithms and Consistent Approximations, volume 124 of Applied Mathematical Sciences. Springer Verlag, 1997.

[PV02a] K. E. Parsopoulos and M. N. Vrahatis. Particle swarm optimization method for constrained optimization problems. In P. Sincak, J. Vascak, V. Kvasnicka, and J. Pospichal, editors, Intelligent Technologies - Theory and Applications: New Trends in Intelligent Technologies, volume 76 of Frontiers in Artificial Intelligence and Applications, pages 214–220. IOS Press, 2002. ISBN: 1-58603-256-9.

[PV02b] K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through Particle Swarm Optimization. Natural Computing, 1:235–306, 2002.

[PW03] Elijah Polak and Michael Wetter. Generalized pattern search algorithms with adaptive precision function evaluations. Technical Report LBNL-52629, Lawrence Berkeley National Laboratory, Berkeley, CA, 2003.

[SE98] Y. Shi and R. C. Eberhart. A modified particle swarm optimizer. In Evolutionary Computation, IEEE World Congress on Computational Intelligence, pages 69–73, Anchorage, AK, May 1998. IEEE.

[SE99] Y. Shi and R. C. Eberhart. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation, volume 3, pages 1945–1950, Carmel, IN, 1999. IEEE.

[Smi69] Lyle B. Smith. Remark on algorithm 178. Comm. ACM, 12:638, November 1969.

[Tor89] Virginia Torczon. Multi-Directional Search: A Direct Search Algorithm for Parallel Machines. PhD thesis, Rice University, Houston, TX, May 1989.

[Tse99] Paul Tseng. Fortified-descent simplicial search method: A general approach. SIAM Journal on Optimization, 10(1):269–288, 1999.

[vdBE01] F. van den Bergh and A. P. Engelbrecht. Effects of swarm size on cooperative particle swarm optimisers. In GECCO, San Francisco, CA, July 2001.

[WBB+93] F. C. Winkelmann, B. E. Birdsall, W. F. Buhl, K. L. Ellington, A. E. Erdem, J. J. Hirsch, and S. Gates. DOE-2 supplement, version 2.1E. Technical Report LBL-34947, Lawrence Berkeley National Laboratory, Berkeley, CA, USA, November 1993.

[WP03] Michael Wetter and Elijah Polak. A convergent optimization method using pattern search algorithms with adaptive precision simulation. In G. Augenbroe and J. Hensen, editors, Proc. of the 8-th IBPSA Conference, volume III, pages 1393–1400, Eindhoven, NL, August 2003.

[Wri96] M. H. Wright. Direct search methods: once scorned, now respectable. In D. F. Griffiths and G. A. Watson, editors, Numerical Analysis 1995, pages 191–208. Addison Wesley Longman (Harlow), 1996.

[WW03] Michael Wetter and Jonathan Wright. Comparison of a generalized pattern search and a genetic algorithm optimization method. In G. Augenbroe and J. Hensen, editors, Proc. of the 8-th IBPSA Conference, volume III, pages 1401–1408, Eindhoven, NL, August 2003.
