
Assoc. Prof. Dr.

Nildem Tayşi
University of Gaziantep

Optimization Methods
Textbook:
 S.S. Rao, Engineering Optimization: Theory and Practice, Wiley, 1996.
 References:
 Arora, J.S., Introduction to Optimum Design, Second Edition, Elsevier
Academic Press, San Diego, CA, 2004.
 Bhatti, M. A., Practical Optimization Methods with Mathematica
Applications, Springer-Verlag, 2000.
 Belegundu, A.D. and Chandrupatla, T.R., Optimization Concepts and
Applications in Engineering, Prentice Hall, New Jersey, 1999.

 http://nptel.iitm.ac.in/courses/Webcourse-contents/IISc-
BANG/OPTIMIZATION%20METHODS/New_index1.html

Optimization Methods
Goals:
 The goal of the course is to teach students the basic concepts of
optimization of engineering systems. Formulation of a problem as
an optimization problem is covered. Basic concepts of
optimization and optimality conditions are covered in class with
simple examples. Simplex method for linear optimization
problems is covered. Post-optimality analysis is discussed.
Numerical methods for solving unconstrained and constrained
optimization problems are presented and illustrated. Throughout
the course, students work on practical design optimization
projects.

Optimization Methods
Learning Objectives:
1. Introduction to the process of designing new systems or
improving existing systems.
2. Formulation of a problem as an optimization problem –
examples.
3. Graphical solution of optimization problems to illustrate basic
concepts.
4. Basic principles of optimum design with application to simple
problems: Optimality conditions.
5. Optimization of linear systems: Linear programming.
6. Optimization of nonlinear systems: Nonlinear programming.
7. Solution of optimization problems using programs.

Optimization Methods
Introduction
Optimization is defined as the process of finding the conditions that
give the minimum or maximum value of a function, where the
function represents the effort required or the desired benefit.
 Optimization: the act of obtaining the best result under the
given circumstances.
 Design, construction, and maintenance of engineering systems
involve decision making at both the managerial and the
technological level.
 Goals of such decisions :
 to minimize the effort required or
 to maximize the desired benefit

Optimization Methods
Historical Development
 The existence of optimization methods can be traced to the days of
Newton, Lagrange, and Cauchy.

 Development of differential calculus methods of optimization was
possible because of the contributions of Newton and Leibnitz to
calculus.

 Foundations of the calculus of variations, dealing with the
minimization of functionals, were laid by Bernoulli, Euler, Lagrange,
and Weierstrass.

Optimization Methods
Historical Development (contd.)

 The method of optimization for constrained problems, which
involves the inclusion of unknown multipliers, became known by
the name of its inventor, Lagrange.

 Cauchy made the first application of the steepest descent method
to solve unconstrained optimization problems.

Optimization Methods
Recent History

 High-speed digital computers made the implementation of
complex optimization procedures possible and stimulated further
research on newer methods.

 Massive literature on optimization techniques and the emergence of
several well-defined new areas in optimization theory followed.

Optimization Methods
Milestones
 Development of the simplex method by Dantzig in 1947 for linear
programming problems.

 The enunciation of the principle of optimality by Bellman in 1957 for
dynamic programming problems.

 Work by Kuhn and Tucker in 1951 on the necessary and sufficient
conditions for the optimal solution of programming problems laid the
foundation for later research in nonlinear programming.

Optimization Methods
Milestones (contd.)
 The contributions of Zoutendijk and Rosen to nonlinear programming
during the early 1960s.

 The work of Carroll, Fiacco, and McCormick allowed many difficult
problems to be solved by using the well-known techniques of
unconstrained optimization.

 Geometric programming was developed in the 1960s by Duffin, Zener,
and Peterson.

 Gomory did pioneering work in integer programming. Most real-world
applications fall under this category of problems.

 Dantzig, Charnes, and Cooper developed stochastic programming
techniques.

Optimization Methods
Milestones (contd.)
 The desire to optimize more than one objective or goal while
satisfying the physical limitations led to the development of multi-
objective programming methods, e.g. goal programming.

 The foundations of game theory were laid by von Neumann in 1928; it has
since been applied to solve several mathematical, economic and military
problems, and more recently to engineering design problems.

 Simulated annealing, evolutionary algorithms including genetic
algorithms, and neural network methods represent a new class of
mathematical programming techniques that have come into
prominence during the last decade.

Optimization Methods
Engineering applications of optimization.
 Design of structural units in construction, machinery, and in space
vehicles.
 Maximizing benefit/minimizing product costs in various
manufacturing and construction processes.
 Optimal path finding in road networks/freight handling processes.
 Optimal production planning, controlling and scheduling.
 Optimal allocation of resources or services among several activities to
maximize the benefit.

Optimization Methods
 Design of civil engineering structures such as frames, foundations,
bridges, towers, chimneys and dams for minimum cost.
 Design of minimum weight structures for earthquake, wind and other
types of random loading.
 Optimal plastic design of frame structures (e.g., to determine the
ultimate moment capacity for minimum weight of the frame).
 Design of water resources systems for obtaining maximum benefit.
 Design of optimum pipeline networks for process industry.
 Design of aircraft and aerospace structures for minimum weight.
 Finding the optimal trajectories of space vehicles.
 Optimum design of linkages, cams, gears, machine tools, and other
mechanical components.
 Selection of machining conditions in metal-cutting processes for
minimizing the product cost.
 Design of material handling equipment such as conveyors, trucks and
cranes for minimizing cost.

Optimization Methods
 Design of pumps, turbines and heat transfer equipment for maximum efficiency.

 Optimum design of electrical machinery such as motors, generators and transformers.


 Optimum design of electrical networks.
 Optimum design of control systems.
 Optimum design of chemical processing equipment and plants.
 Selection of a site for an industry.
 Planning of maintenance and replacement of equipment to reduce operating costs.
 Inventory control.
 Allocation of resources or services among several activities to maximize the benefit.
 Controlling the waiting and idle times in production lines to reduce the cost of
production.
 Planning the best strategy to obtain maximum profit in the presence of a competitor.
 Designing the shortest route to be taken by a salesperson to visit various cities in a
single tour.
 Optimal production planning, controlling and scheduling.
 Analysis of statistical data and building empirical models to obtain the most
accurate representation of the statistical phenomenon.
 However, the list is incomplete.

Optimization Methods
Art of Modeling : Model Building
 Development of an optimization model can be divided
into five major phases.
 Collection of data
 Problem definition and formulation
 Model development
 Model validation and evaluation of performance
 Model application and interpretation of results

Optimization Methods
Data collection
 Data collection
 may be time consuming but is the fundamental basis of
the model-building process
 extremely important phase of the model-building
process
 the availability and accuracy of data can have
considerable effect on the accuracy of the model and on
the ability to evaluate the model.

Optimization Methods
Problem Definition
 Problem definition and formulation, steps involved:
 identification of the decision variables;
 formulation of the model objective(s);
 the formulation of the model constraints.
 In performing these steps one must consider the following.
 Identify the important elements that the problem
consists of.
 Determine the number of independent variables, the
number of equations required to describe the system,
and the number of unknown parameters.
 Evaluate the structure and complexity of the model
 Select the degree of accuracy required of the model

Optimization Methods
Model development
 Model development includes:
 the mathematical description,
 parameter estimation,
 input development, and
 software development
 The model development phase is an iterative process
that may require returning to the model definition and
formulation phase.

Optimization Methods
Model Validation and Evaluation
 This phase checks the model as a whole.
 Model validation consists of validating the assumptions and parameters of
the model.
 The performance of the model is evaluated using standard performance
measures such as the root mean squared error (RMSE) and the R2 value.
 Sensitivity analysis is carried out to test the model inputs and parameters.
 This phase is also an iterative process and may require returning to the model
definition and formulation phase.
 One important aspect of this process is that in most cases the data used for
validation should be different from the data used in the formulation process.
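As a small illustration of these performance measures, the sketch below computes the RMSE and the R2 value for a made-up set of observed and model-predicted values; it assumes NumPy is available and is not part of the original lecture material.

```python
import numpy as np

# Hypothetical observed data and the corresponding model predictions
observed = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
predicted = np.array([2.0, 4.2, 5.9, 8.1, 9.8])

residuals = observed - predicted
rmse = np.sqrt(np.mean(residuals ** 2))              # root mean squared error

ss_res = np.sum(residuals ** 2)                      # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)   # total sum of squares
r2 = 1.0 - ss_res / ss_tot                           # coefficient of determination

print(f"RMSE = {rmse:.3f}, R2 = {r2:.3f}")
```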

Optimization Methods
Modeling Techniques
 Different modeling techniques are developed to meet the
requirement of different type of optimization problems. Major
categories of modeling approaches are:
 classical optimization techniques,
 linear programming,
 nonlinear programming,
 geometric programming,
 dynamic programming,
 integer programming,
 stochastic programming,
 evolutionary algorithms, etc.
 These approaches will be discussed in the subsequent modules.

Optimization Methods
Basic components of an
optimization problem
 An objective function expresses the main aim of the model
which is either to be minimized or maximized.
 A set of unknowns or variables which control the value of
the objective function.
 A set of constraints that allow the unknowns to take on
certain values but exclude others.

The optimization problem is then to find values of the variables that
minimize or maximize the objective function while satisfying the
constraints.

Optimization Methods
Objective Function
 As already defined, the objective function is the mathematical function one wants to
maximize or minimize, subject to certain constraints. Many optimization problems
have a single objective function (when they don't, they can often be reformulated so
that they do). The two interesting exceptions are:
 No objective function. The user does not particularly want to optimize anything, so there is
no reason to define an objective function. This is usually called a feasibility problem.

 Multiple objective functions. In practice, problems with multiple objectives are
reformulated as single-objective problems by either forming a weighted combination of the
different objectives or by treating some of the objectives as constraints.

Optimization Methods
Statement of an optimization problem

To find X = [x1, x2, ..., xn]T which maximizes f(X)

subject to the constraints

gi(X) ≤ 0,  i = 1, 2, ..., m
lj(X) = 0,  j = 1, 2, ..., p

Optimization Methods
Statement of an optimization problem

where
 X is an n-dimensional vector called the design vector
 f(X) is called the objective function, and
 gi(X) and lj(X) are known as inequality and equality constraints,
respectively.
 This type of problem is called a constrained optimization problem.
 Optimization problems can be defined without any constraints as
well. Such problems are called unconstrained optimization
problems.
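To make this standard form concrete, here is a minimal numerical sketch using SciPy's SLSQP solver. The objective and the constraints are invented purely for illustration and are not taken from the lecture; note that SciPy expects inequality constraints in the form fun(x) ≥ 0, so gi(X) ≤ 0 is passed with its sign flipped.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective to minimize: f(X) = (x1 - 1)^2 + (x2 - 2)^2
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Inequality constraint g(X) = x1 + x2 - 2 <= 0, passed as -g(X) >= 0
g = {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2.0)}

# Equality constraint l(X) = x1 - x2 = 0
l = {"type": "eq", "fun": lambda x: x[0] - x[1]}

result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=[g, l])
print(result.x, result.fun)   # optimum at roughly x1 = x2 = 1, f = 1
```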

Optimization Methods
Objective Function Surface
 If the locus of all points satisfying f(X) = a constant c is considered, it
can form a family of surfaces in the design space called the objective
function surfaces.
 When drawn with the constraint surfaces as shown in the figure we can
identify the optimum point (maxima).
 This is possible graphically only when the number of design variables is
two.
 When we have three or more design variables, because of the complexity of
the objective function surface, we have to solve the problem as a
mathematical problem, and this visualization is not possible.

Optimization Methods
Objective function surfaces to find the
optimum point (maxima)

Optimization Methods
Stationary points
 For a continuous and differentiable function f(x), a
stationary point x* is a point at which the first derivative
vanishes, i.e. f'(x) = 0 at x = x*, where x* belongs to the
domain of definition of f.
 A stationary point may be a minimum, a maximum or an
inflection point.

Stationary points

Figure showing the three types of stationary points (a) inflection point
(b) minimum (c) maximum

Relative and Global Optimum
• A function is said to have a relative or local minimum at x = x* if
f(x*) ≤ f(x* + h) for all sufficiently small positive and negative
values of h, i.e. in the near vicinity of the point x*.
• Similarly, a point x* is called a relative or local maximum if
f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero.
• A function is said to have a global or absolute minimum at x = x* if
f(x*) ≤ f(x) for all x in the domain over which f(x) is defined.
• Similarly, a function is said to have a global or absolute maximum at x
= x* if f(x*) ≥ f(x) for all x in the domain over which f(x) is
defined.

Relative and Global Optimum
…contd.

Fig. 2: A1, A2, A3 = relative maxima (A2 = global maximum); B1, B2 = relative
minima (B1 = global minimum). A second plot shows a case in which the relative
minimum is also the global optimum.

Functions of two variables
 The concept discussed for one variable functions may be
easily extended to functions of multiple variables.
 Functions of two variables are best illustrated by contour
maps, analogous to geographical maps.
 A contour is a line representing a constant value of f(x) as
shown in the following figure. From this we can identify
maxima, minima and points of inflection.

A contour plot

Necessary conditions
 As can be seen in the above contour map, perturbations from
points of local minima in any direction result in an increase in
the response function f(x), i.e.
 the slope of the function is zero at this point of local
minima.
 Similarly, at maxima and points of inflection, as the slope is
zero, the first derivatives of the function with respect to the
variables are zero.

Necessary conditions …contd.
 Which gives us ∂f/∂x1 = 0 and ∂f/∂x2 = 0 at the stationary points, i.e. the
gradient vector of f(X), ∇x f, at X = X* = [x1, x2], defined as follows,
must equal zero:

∇x f = [ ∂f/∂x1 (X*),  ∂f/∂x2 (X*) ]T = 0

This is the necessary condition.

Sufficient conditions
 Consider the following second-order derivatives:

∂²f/∂x1²,  ∂²f/∂x2²,  ∂²f/∂x1∂x2

 The Hessian matrix H is made using the above second-order derivatives:

    H = [ ∂²f/∂x1²      ∂²f/∂x1∂x2
          ∂²f/∂x1∂x2    ∂²f/∂x2²   ]   evaluated at [x1, x2]

Sufficient conditions …contd.
 The definiteness of H is then checked (for example, through the signs of
its leading principal minors or of its eigenvalues):
 if H is positive definite, then the point X = [x1, x2] is a
point of local minimum.
 if H is negative definite, then the point X = [x1, x2] is a
point of local maximum.
 if H is neither, then the point X = [x1, x2] is neither a
point of maximum nor of minimum.
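The short sketch below applies these necessary and sufficient conditions numerically to a made-up quadratic function (not one from the lecture); the gradient and the Hessian are written out by hand for this particular example, and NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical example: f(x1, x2) = x1**2 + 2*x2**2 - 2*x1*x2 - 4*x1
# Necessary condition: gradient = [2*x1 - 2*x2 - 4, -2*x1 + 4*x2] = 0,
# a pair of linear equations solved below for the stationary point.
A = np.array([[2.0, -2.0],
              [-2.0, 4.0]])
b = np.array([4.0, 0.0])
x_star = np.linalg.solve(A, b)
print("stationary point:", x_star)        # [4. 2.]

# Sufficient condition: inspect the Hessian (constant for a quadratic).
H = np.array([[2.0, -2.0],
              [-2.0, 4.0]])
eigvals = np.linalg.eigvalsh(H)
if np.all(eigvals > 0):
    print("H is positive definite -> local minimum")
elif np.all(eigvals < 0):
    print("H is negative definite -> local maximum")
else:
    print("H is indefinite -> neither a minimum nor a maximum")
```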

Variables and Constraints
 Variables
 These are essential. If there are no variables, we cannot define the
objective function and the problem constraints.

 Constraints
 Even though constraints are not essential, it has been argued that
almost all problems really do have constraints.
 In many practical problems, one cannot choose the design variable
arbitrarily. Design constraints are restrictions that must be satisfied to
produce an acceptable design.

Optimization Methods
Constraints (contd.)
 Constraints can be broadly classified as:

 Behavioral or functional constraints: these represent limitations on the
behavior and performance of the system.

 Geometric or side constraints: these represent physical limitations on
design variables such as availability, fabricability, and transportability.

Optimization Methods
Constraint Surfaces
 Consider the optimization problem presented earlier with only
inequality constraints gi(X) ≤ 0. The set of values of X that satisfy the
equation gi(X) = 0 forms a boundary surface in the design space
called a constraint surface.

 The constraint surface divides the design space into two regions:
one with gi(X) < 0 (feasible region) and the other in which gi(X) >
0 (infeasible region). The points lying on the hypersurface satisfy
gi(X) = 0.

Optimization Methods
The figure shows a hypothetical two-dimensional design space
where the feasible region is denoted by hatched lines.

(Figure labels: behavior constraints g1 and g2, side constraint g3 ≥ 0, the
feasible and infeasible regions, and examples of a free acceptable point, a
free unacceptable point, a bound acceptable point, and a bound
unacceptable point.)

Optimization Methods
Formulation of design problems as mathematical
programming problems

 The following steps summarize the procedure used to
formulate and solve mathematical programming problems.

1. Analyze the process to identify the process variables and specific
characteristics of interest, i.e. make a list of all variables.
2. Determine the criterion for optimization and specify the objective
function in terms of the above variables together with coefficients.

Optimization Methods
3. Develop via mathematical expressions a valid process model that relates the
input-output variables of the process and associated coefficients.
a) Include both equality and inequality constraints
b) Use well known physical principles
c) Identify the independent and dependent variables to get the number of
degrees of freedom
4. If the problem formulation is too large in scope:
a) break it up into manageable parts/ or
b) simplify the objective function and the model
5. Apply a suitable optimization technique to the mathematical statement of the
problem.
6. Examine the sensitivity of the result to changes in the coefficients in the
problem and the assumptions.

Optimization Methods
1. Introduction
• Mathematical optimization problem:

minimize f0(x)
subject to gi(x) ≤ bi,  i = 1, ..., m

• f0 : Rn → R: objective function
• x = (x1, ..., xn): design variables (unknowns of the problem,
they must be linearly independent)
• gi : Rn → R (i = 1, ..., m): inequality constraints

• The problem is a constrained optimization problem

1. Introduction
• If a point x* corresponds to the minimum value of the function f (x), the
same point also corresponds to the maximum value of the negative of
the function, -f (x). Thus optimization can be taken to mean
minimization since the maximum of a function can be found by seeking
the minimum of the negative of the same function.

1. Introduction
Constraints

• Behaviour constraints: constraints that represent limitations on
the behaviour or performance of the system are termed behaviour or
functional constraints.

• Side constraints: constraints that represent physical limitations on
design variables, such as manufacturing limitations.

1. Introduction
Constraint Surface
• For illustration purposes, consider an optimization problem with only
inequality constraints gj(X) ≤ 0. The set of values of X that satisfy
the equation gj(X) = 0 forms a hypersurface in the design space and
is called a constraint surface.

1. Introduction
Constraint Surface
• Note that this is an (n-1)-dimensional subspace, where n is the
number of design variables. The constraint surface divides the
design space into two regions: one in which gj(X) < 0 and the other
in which gj(X) > 0.

1. Introduction
Constraint Surface
• Thus the points lying on the hypersurface satisfy the constraint
gj(X) = 0 critically, whereas the points lying in the region where gj(X) > 0
are infeasible or unacceptable, and the points lying in the region
where gj(X) < 0 are feasible or acceptable.

1. Introduction
Constraint Surface
• In the figure below, a hypothetical two-dimensional design space is
depicted where the infeasible region is indicated by hatched lines. A
design point that lies on one or more than one constraint surface is
called a bound point, and the associated constraint is called an
active constraint.

1. Introduction
Constraint Surface
• Design points that do not lie on any constraint surface are known as
free points.

1. Introduction
Constraint Surface

Depending on whether a
particular design point belongs to
the acceptable or unacceptable
regions, it can be identified as one
of the following four types:

• Free and acceptable point

• Free and unacceptable point

• Bound and acceptable point

• Bound and unacceptable point

1. Introduction
• The conventional design procedures aim at finding an acceptable or
adequate design which merely satisfies the functional and other
requirements of the problem.

• In general, there will be more than one acceptable design, and the
purpose of optimization is to choose the best one of the many
acceptable designs available.

• Thus a criterion has to be chosen for comparing the different
alternative acceptable designs and for selecting the best one.

• The criterion with respect to which the design is optimized, when
expressed as a function of the design variables, is known as the
objective function.

1. Introduction
• In civil engineering, the objective is usually taken as the
minimization of the cost.

• In mechanical engineering, the maximization of the mechanical
efficiency is the obvious choice of an objective function.

• In aerospace structural design problems, the objective function for
minimization is generally taken as weight.

• In some situations, there may be more than one criterion to be
satisfied simultaneously. An optimization problem involving multiple
objective functions is known as a multiobjective programming
problem.

1. Introduction
• With multiple objectives there arises a possibility of conflict, and one
simple way to handle the problem is to construct an overall objective
function as a linear combination of the conflicting multiple objective
functions.

• Thus, if f1(X) and f2(X) denote two objective functions, construct a new
(overall) objective function for optimization as:

f(X) = α1 f1(X) + α2 f2(X)

where α1 and α2 are constants whose values indicate the relative
importance of one objective function to the other.
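A tiny numerical sketch of this weighted-sum construction is shown below, with two made-up conflicting objectives and arbitrarily chosen weights α1 = 0.7 and α2 = 0.3; SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize

# Two hypothetical, conflicting objectives
def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2      # prefers X near (1, 0)

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2      # prefers X near (0, 1)

alpha1, alpha2 = 0.7, 0.3                     # relative importance (arbitrary)

def f_overall(x):
    return alpha1 * f1(x) + alpha2 * f2(x)    # weighted combination

result = minimize(f_overall, x0=np.zeros(2))
print(result.x)   # compromise near (0.7, 0.3), between (1, 0) and (0, 1)
```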

1. Introduction
• The locus of all points satisfying f (X) = c = constant forms a
hypersurface in the design space, and for each value of c there
corresponds a different member of a family of surfaces. These surfaces,
called objective function surfaces, are shown in a hypothetical two-
dimensional design space in the figure below.

1. Introduction
• Once the objective function surfaces are drawn along with the constraint
surfaces, the optimum point can be determined without much difficulty.
• But the main problem is that as the number of design variables exceeds
two or three, the constraint and objective function surfaces become
complex even for visualization and the problem has to be solved purely
as a mathematical problem.

Example
Example:

Design a uniform column of tubular section to carry a compressive load P=2500 kgf
for minimum cost. The column is made up of a material that has a yield stress of 500
kgf/cm2, a modulus of elasticity (E) of 0.85 × 10⁶ kgf/cm2, and a density (ρ) of 0.0025 kgf/cm3.
The length of the column is 250 cm. The stress induced in this column should be less
than the buckling stress as well as the yield stress. The mean diameter of the column
is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the
range 0.2 to 0.8 cm are not available in the market. The cost of the column includes
material and construction costs and can be taken as 5W + 2d, where W is the weight
in kilograms force and d is the mean diameter of the column in centimeters.

Example
Example:

The design variables are the mean diameter (d) and the tube thickness (t):

X = [x1, x2] = [d, t]

The objective function to be minimized is given by:

f(X) = 5W + 2d = 5ρlπ d t + 2d = 9.82 x1 x2 + 2 x1

Example
• The behaviour constraints can be expressed as:

stress induced ≤ yield stress

stress induced ≤ buckling stress

• The induced stress is given by:

induced stress = σi = P / (π d t) = 2500 / (π x1 x2)

Example
• The buckling stress for a pin-connected column is given by:

buckling stress = σb = (Euler buckling load) / (cross-sectional area) = π²EI / (l² π d t)

where I is the second moment of area of the cross section of the column, given by:

I = (π/64)(do⁴ − di⁴)
  = (π/64)(do² + di²)(do + di)(do − di)
  = (π/64)[(d + t)² + (d − t)²][(d + t) + (d − t)][(d + t) − (d − t)]
  = (π/8) d t (d² + t²)
  = (π/8) x1 x2 (x1² + x2²)
Example
• Thus, the behaviour constraints can be restated as:
g1(X) = 2500 / (π x1 x2) − 500 ≤ 0

g2(X) = 2500 / (π x1 x2) − π² (0.85 × 10⁶)(x1² + x2²) / (8 (250)²) ≤ 0

• The side constraints are given by:

2 ≤ d ≤ 14
0.2 ≤ t ≤ 0.8

Example
• The side constraints can be expressed in standard form as:

g3(X) = −x1 + 2 ≤ 0
g4(X) = x1 − 14 ≤ 0
g5(X) = −x2 + 0.2 ≤ 0
g6(X) = x2 − 0.8 ≤ 0

Example
• For a graphical solution, the constraint surfaces are to be
plotted in a two dimensional design space where the two axes
represent the two design variables x1 and x2. To plot the first
constraint surface, we have:
g1(X) = 2500 / (π x1 x2) − 500 ≤ 0,   i.e.   x1 x2 ≥ 1.593

• Thus the curve x1 x2 = 1.593 represents the constraint surface
g1(X) = 0. This curve can be plotted by finding several points on
the curve. The points on the curve can be found by giving a
series of values to x1 and finding the corresponding values of x2
that satisfy the relation x1 x2 = 1.593, as shown in the table below:

x1: 2       4       6       8      10      12      14
x2: 0.7965  0.3983  0.2655  0.199  0.1593  0.1328  0.114

Example
• The infeasible region represented by g1(X)>0 or x1x2< 1.593 is
shown by hatched lines. These points are plotted and a curve P1Q1
passing through all these points is drawn as shown:
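A short sketch of how such boundary points can be generated and plotted is given below; it assumes NumPy and Matplotlib are available and simply tabulates x2 = 1.593 / x1 over the allowed range of x1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Points on the constraint surface g1(X) = 0, i.e. x1 * x2 = 1.593
x1 = np.linspace(2.0, 14.0, 7)    # mean diameter values d (cm)
x2 = 1.593 / x1                   # corresponding thickness values t (cm)
print(np.round(x2, 4))            # matches the tabulated values above

plt.plot(x1, x2, label="g1(X) = 0  (x1 x2 = 1.593)")
plt.xlabel("x1 = mean diameter d (cm)")
plt.ylabel("x2 = thickness t (cm)")
plt.legend()
plt.show()
```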

Example
• Similarly, the second constraint g2(X) ≤ 0 can be expressed as:

x1 x2 (x1² + x2²) ≥ 47.3

• The points lying on the constraint surface g2(X) = 0 can be obtained as
follows (these points are plotted as curve P2Q2):

x1: 2     4      6      8       10      12      14
x2: 2.41  0.716  0.219  0.0926  0.0473  0.0274  0.0172

Example
• The plotting of the side constraints is simple since they represent
straight lines.

• After plotting all six constraints, the feasible region is determined as
the bounded area ABCDEA.

Example
• Next, the contours of the objective function are to be plotted before
finding the optimum point. For this, we plot the curves given by:

f(X) = 9.82 x1 x2 + 2 x1 = c = constant

for a series of values of c. By giving different values to c, the contours
of f can be plotted with the help of the following points.

Example
• For f(X) = 9.82 x1 x2 + 2 x1 = 50.0:

x2: 0.1    0.2    0.3    0.4   0.5   0.6   0.7
x1: 16.77  12.62  10.10  8.44  7.24  6.33  5.64

• For f(X) = 9.82 x1 x2 + 2 x1 = 40.0:

x2: 0.1    0.2    0.3   0.4   0.5   0.6   0.7
x1: 13.40  10.10  8.08  6.75  5.79  5.06  4.51

• For f(X) = 9.82 x1 x2 + 2 x1 = 31.58 (passing through the corner point C):

x2: 0.1    0.2   0.3   0.4   0.5   0.6   0.7
x1: 10.57  7.96  6.38  5.33  4.57  4.00  3.56

• For f(X) = 9.82 x1 x2 + 2 x1 = 26.53 (passing through the corner point B):

x2: 0.1   0.2   0.3   0.4   0.5   0.6   0.7
x1: 8.88  6.69  5.36  4.48  3.84  3.36  2.99

Example
• These contours are shown in the figure below, and it can be seen
that the objective function cannot be reduced below a value of 26.53
(corresponding to point B) without violating some of the constraints.
Thus, the optimum solution is given by point B with d* = x1* = 5.44 cm
and t* = x2* = 0.293 cm, with fmin = 26.53.
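For comparison with the graphical result, the sketch below solves the same tubular-column problem numerically with SciPy (an illustration only, not part of the original lecture). The constants follow the formulation above, and the solver should land close to the graphical optimum d ≈ 5.44 cm, t ≈ 0.293 cm, f ≈ 26.5.

```python
import numpy as np
from scipy.optimize import minimize

# Cost: f(X) = 9.82*x1*x2 + 2*x1, with x1 = d (cm) and x2 = t (cm)
def cost(x):
    return 9.82 * x[0] * x[1] + 2.0 * x[0]

E, L, P, sigma_y = 0.85e6, 250.0, 2500.0, 500.0

# Behaviour constraints written as fun(x) >= 0 (SciPy's 'ineq' convention)
constraints = [
    # yield:    2500/(pi*x1*x2) <= 500
    {"type": "ineq", "fun": lambda x: sigma_y - P / (np.pi * x[0] * x[1])},
    # buckling: 2500/(pi*x1*x2) <= pi^2*E*(x1^2 + x2^2) / (8*L^2)
    {"type": "ineq",
     "fun": lambda x: np.pi ** 2 * E * (x[0] ** 2 + x[1] ** 2) / (8.0 * L ** 2)
                      - P / (np.pi * x[0] * x[1])},
]

# Side constraints as simple bounds: 2 <= d <= 14, 0.2 <= t <= 0.8
bounds = [(2.0, 14.0), (0.2, 0.8)]

res = minimize(cost, x0=[7.0, 0.4], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)   # expected near [5.44, 0.293], f about 26.5
```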

Classification of Optimization
Problems
Optimization problems can be classified based on the
type of constraints, nature of design variables,
physical structure of the problem, nature of the
equations involved, deterministic nature of the
variables, permissible value of the design variables,
separability of the functions and number of objective
functions. These classifications are briefly discussed
in this lecture.

Optimization Methods
Classification based on existence of constraints

 Constrained optimization problems: which are
subject to one or more constraints.

 Unconstrained optimization problems: in which no
constraints exist.

Optimization Methods
Classification based on the nature of the design
variables

There are two broad categories within this classification.

 First category: the objective is to find a set of design
parameters that make a prescribed function of these
parameters minimum or maximum subject to certain
constraints.
 For example, to find the minimum weight design of a strip footing
with two loads shown in the figure, subject to a limitation on the
maximum settlement of the structure.

Optimization Methods
The problem can be defined as follows

subject to the constraints

The length of the footing (l), the loads P1 and P2, and the distance between the loads are
assumed to be constant, and the required optimization is achieved by varying b and d.
Such problems are called parameter or static optimization problems.
Optimization Methods
Classification based on the nature of the design variables (contd.)

 Second category: the objective is to find a set of design
parameters, which are all continuous functions of some
other parameter, that minimizes an objective function
subject to a set of constraints.

 For example, if the cross-sectional dimensions of the
rectangular footing are allowed to vary along its length, as shown
in the following figure.

Optimization Methods

The problem can be defined as follows

subject to the constraints

The length of the footing (l), the loads P1 and P2, and the distance between the loads are
assumed to be constant, and the required optimization is achieved by varying b and d.
Such problems are called trajectory or dynamic optimization problems.
Classification based on the physical structure of the
problem
Based on the physical structure, optimization problems are
classified as optimal control and non-optimal control problems.
(i) An optimal control (OC) problem is a mathematical
programming problem involving a number of stages, where
each stage evolves from the preceding stage in a prescribed
manner.
 It is defined by two types of variables: the control or design
variables and state variables.

Optimization Methods
 The problem is to find a set of control or design variables such that the total
objective function (also known as the performance index, PI) over all stages
is minimized subject to a set of constraints on the control and state
variables. An OC problem can be stated as follows:

subject to the constraints:

 Where xi is the ith control variable, yi is the ith state variable, and f i is the
contribution of the ith stage to the total objective function. gj, hk, and qi are
the functions of xj, yj ; xk, yk and xi and yi, respectively, and l is the total
number of states. The control and state variables xi and yi can be vectors in
some cases.

(ii) The problems which are not optimal control problems are
called non-optimal control problems.
Optimization Methods
Classification based on the nature of the equations
involved

Based on the nature of expressions for the objective function and


the constraints, optimization problems can be classified as linear,
nonlinear, geometric and quadratic programming problems.

Optimization Methods
Classification based on the nature of the equations
involved (contd.)
(i) Linear programming problem
If the objective function and all the constraints are linear functions of the design
variables, the mathematical programming problem is called a linear programming
(LP) problem. It is often stated in the standard form: find X = (x1, ..., xn) which
minimizes f(X) = c1 x1 + c2 x2 + ... + cn xn
subject to
ai1 x1 + ai2 x2 + ... + ain xn = bi,  i = 1, ..., m,  and  xj ≥ 0,  j = 1, ..., n.
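A minimal sketch of solving such an LP numerically is given below, assuming SciPy is available; the cost coefficients and the single equality constraint are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical LP: minimize f(X) = 2*x1 + 3*x2
# subject to x1 + x2 = 4 and x1, x2 >= 0
c = np.array([2.0, 3.0])         # objective coefficients
A_eq = np.array([[1.0, 1.0]])    # equality-constraint coefficients
b_eq = np.array([4.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x, res.fun)            # expected: x = [4, 0], f = 8
```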

Optimization Methods
Classification based on the nature of the equations
involved (contd.)
(ii) Nonlinear programming problem
If any of the functions among the objective and constraint functions is
nonlinear, the problem is called a nonlinear programming (NLP) problem.
This is the most general form of a programming problem.

Optimization Methods
Classification based on the nature of the equations
involved (contd.)
(iii) Geometric programming problem
 A geometric programming (GMP) problem is one in which the objective
function and constraints are expressed as polynomials in X.
A polynomial with N terms can be expressed as

 Thus GMP problems can be expressed as follows: find X which
minimizes:
subject to:

Optimization Methods
Classification based on the nature of the equations
involved (contd.)
 where N0 and Nk denote the number of terms in the objective and kth constraint
function, respectively.
(iv) Quadratic programming problem
 A quadratic programming problem is the best behaved nonlinear programming
problem with a quadratic objective function and linear constraints and is concave
(for maximization problems). It is usually formulated as follows:

Subject to:

where c, qi, Qij, aij, and bj are constants.

Optimization Methods
Classification based on the permissible values of the
decision variables

Under this classification problems can be classified as integer and


real-valued programming problems

(i) Integer programming problem


 If some or all of the design variables of an optimization problem are restricted to
take only integer (or discrete) values, the problem is called an integer
programming problem.
(ii) Real-valued programming problem
 A real-valued problem is one in which one seeks to minimize or maximize a real
function by systematically choosing the values of real variables from within an
allowed set. When the allowed set contains only real values, it is called a real-
valued programming problem.

Optimization Methods
Classification based on deterministic nature of the
variables

Under this classification, optimization problems


can be classified as deterministic and stochastic
programming problems

(i) Deterministic programming problem


• In this type of problem all the design variables are deterministic.
(ii) Stochastic programming problem
 In this type of optimization problem some or all of the parameters
(design variables and/or pre-assigned parameters) are probabilistic
(non-deterministic or stochastic). An example is the estimation of the life
span of structures, which depends on probabilistic inputs such as the concrete
strength and the load capacity; a deterministic value of the life span is
not attainable.

Optimization Methods
Classification based on separability of the functions

Based on the separability of the objective and constraint functions


optimization problems can be classified as separable and non-
separable programming problems
(i) Separable programming problems
 In this type of problem the objective function and the constraints are separable.
A function is said to be separable if it can be expressed as the sum of n single-
variable functions, and a separable programming problem can be expressed in
standard form as:

subject to :

where bj is a constant.

Optimization Methods
Classification based on the number of objective
functions
Under this classification objective functions can be classified as
single and multiobjective programming problems.
(i) Single-objective programming problem in which there is only a single
objective.
(ii) Multi-objective programming problem
 A multiobjective programming problem can be stated as follows:

 where f1, f2, . . . f k denote the objective functions to be minimized simultaneously.


subject to :

Optimization Methods
Classical and Advanced
Techniques for
Optimization

Optimization Methods
Classical Optimization Techniques
 The classical optimization techniques are useful in finding the
optimum solution, i.e. the unconstrained maxima or minima, of continuous
and differentiable functions.
 These are analytical methods and make use of differential calculus in
locating the optimum solution.
 The classical methods have limited scope in practical applications, as
some practical problems involve objective functions that are not continuous
and/or differentiable.
 Yet, the study of these classical techniques of optimization forms a basis
for developing most of the numerical techniques that have evolved into
advanced techniques more suitable to today's practical problems.

Optimization Methods
Classical Optimization Techniques (contd.)
 These methods assume that the function is twice differentiable with respect to the
design variables and that the derivatives are continuous.

 Three main types of problems can be handled by the classical optimization
techniques:

 single-variable functions,

 multivariable functions with no constraints,

 multivariable functions with both equality and inequality constraints. In
problems with equality constraints the Lagrange multiplier method can be
used (a small sketch follows below). If the problem has inequality constraints,
the Kuhn-Tucker conditions can be used to identify the optimum solution.

 These methods lead to a set of nonlinear simultaneous equations that may be
difficult to solve. These methods of optimization will not be given here.
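As a small illustration of the Lagrange multiplier method mentioned above, the sketch below solves a made-up equality-constrained problem symbolically; SymPy is assumed to be available.

```python
import sympy as sp

# Hypothetical problem: minimize f = x**2 + y**2 subject to h = x + y - 1 = 0
x, y, lam = sp.symbols("x y lambda", real=True)
f = x ** 2 + y ** 2
h = x + y - 1

# Lagrangian L = f + lambda*h; stationarity requires all partials to vanish
L = f + lam * h
stationarity = [sp.diff(L, v) for v in (x, y, lam)]
solution = sp.solve(stationarity, (x, y, lam), dict=True)
print(solution)   # [{x: 1/2, y: 1/2, lambda: -1}]
```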

Optimization Methods
Numerical Methods of Optimization

 Linear programming: studies the case in which the objective function f is
linear and the set A is specified using only linear equalities and inequalities (A
is the design variable space).

 Integer programming: studies linear programs in which some or all variables
are constrained to take on integer values.

 Quadratic programming: allows the objective function to have quadratic
terms, while the set A must be specified with linear equalities and inequalities.

 Nonlinear programming: studies the general case in which the objective
function or the constraints or both contain nonlinear parts.

Optimization Methods
Numerical Methods of Optimization (contd.)
 Stochastic programming: studies the case in which some of the
constraints depend on random variables.
 Dynamic programming: studies the case in which the optimization
strategy is based on splitting the problem into smaller sub-problems.
 Combinatorial optimization: is concerned with problems where the set
of feasible solutions is discrete or can be reduced to a discrete one.
 Infinite-dimensional optimization: studies the case when the set of
feasible solutions is a subset of an infinite-dimensional space, such as a
space of functions.
 Constraint satisfaction: studies the case in which the objective function
f is constant (this is used in artificial intelligence, particularly in
automated reasoning).

Optimization Methods
Advanced Optimization Techniques
 Hill climbing: it is a graph search algorithm where the current path is
extended with a successor node which is closer to the solution than the
end of the current path.
 In simple hill climbing, the first closer node is chosen whereas in
steepest ascent hill climbing all successors are compared and the
closest to the solution is chosen. Both forms fail if there is no closer node.
This may happen if there are local maxima in the search space which are
not solutions.
 Hill climbing is used widely in artificial intelligence fields, for reaching a
goal state from a starting node. Choice of next node/ starting node can be
varied to give a number of related algorithms.
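A minimal sketch of the steepest-ascent variant described above is given below; the search space (integer points on a line), the neighbour rule, and the objective are all made-up choices for illustration.

```python
def hill_climb_steepest_ascent(objective, start, neighbors, max_iter=1000):
    """Steepest-ascent hill climbing: move to the best neighbour each step,
    stopping when no neighbour improves on the current node (a local maximum)."""
    current = start
    for _ in range(max_iter):
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current            # no closer node exists, so stop
        current = best
    return current

# Toy search space: integers, neighbours are x - 1 and x + 1,
# and the objective has its maximum at x = 7.
objective = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb_steepest_ascent(objective, start=0, neighbors=neighbors))  # 7
```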

Optimization Methods
Thank You

Optimization Methods
Simulated annealing
 The name and inspiration come from the annealing process in metallurgy,
a technique involving the heating and controlled cooling of a material to
increase the size of its crystals and reduce their defects.

 The heat causes the atoms to become unstuck from their initial positions (a
local minimum of the internal energy) and wander randomly through
states of higher energy;
 the slow cooling gives them more chances of finding configurations with
lower internal energy than the initial one.

 In the simulated annealing method, each point of the search space is
compared to a state of some physical system, and the function to be
minimized is interpreted as the internal energy of the system in that
state. Therefore the goal is to bring the system, from an arbitrary initial
state, to a state with the minimum possible energy.
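A compact sketch of this idea follows: worse moves are accepted with a probability that decays as the "temperature" is lowered. The energy function, the neighbourhood step, and the cooling schedule are all arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(energy, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000):
    """Minimize `energy` by random perturbation, accepting worse moves with
    probability exp(-delta / temperature); the temperature decays each step."""
    x, e, t = x0, energy(x0), t0
    for _ in range(n_iter):
        x_new = x + random.uniform(-step, step)   # random neighbouring state
        e_new = energy(x_new)
        delta = e_new - e
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, e = x_new, e_new                   # accept the move
        t *= cooling                              # slow cooling
    return x, e

# Toy energy landscape with several local minima; global minimum near x = -0.3
energy = lambda x: x ** 2 + 3.0 * math.sin(5.0 * x)
print(simulated_annealing(energy, x0=4.0))
```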

Optimization Methods
Genetic algorithms

 A genetic algorithm (GA) is a population-based search technique used to find
approximate solutions to optimization and search problems.

 Genetic algorithms are a particular class of evolutionary
algorithms that use techniques inspired by evolutionary biology,
such as inheritance, mutation, selection, and crossover (also called
recombination).

Optimization Methods
Genetic algorithms (contd.)
 Genetic algorithms are typically implemented as a computer simulation
in which a population of abstract representations (called chromosomes)
of candidate solutions (called individuals) to an optimization problem
evolves toward better solutions.

 The evolution starts from a population of completely random individuals
and occurs in generations.

 In each generation, the fitness of the whole population is evaluated,
multiple individuals are stochastically selected from the current
population (based on their fitness), and modified (mutated or
recombined) to form a new population.

 The new population is then used in the next iteration of the algorithm.
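The very small GA sketch below follows this description; the bit-string encoding, the one-max fitness function (counting 1-bits), and the parameter values are all arbitrary illustrative choices.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(ind):
    return sum(ind)                          # toy fitness: number of 1-bits

def select(population):
    # tournament selection: the fitter of two random individuals is kept
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, GENOME_LEN - 1)  # single-point recombination
    return p1[:cut] + p2[cut:]

def mutate(ind):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in ind]

# Start from a completely random population and evolve it in generations.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(best, fitness(best))                   # should approach all ones (fitness 20)
```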

Optimization Methods
Ant colony optimization
 In the real world, ants (initially) wander randomly, and upon finding food return to
their colony while laying down pheromone trails. If other ants find such a path,
they are likely not to keep traveling at random, but instead follow the trail laid by
earlier ants, returning and reinforcing it if they eventually find food

 Over time, however, the pheromone trail starts to evaporate, thus reducing its
attractive strength. The more time it takes for an ant to travel down the path and
back again, the more time the pheromones have to evaporate.

 A short path, by comparison, gets marched over faster, and thus the pheromone
density remains high

 Pheromone evaporation also has the advantage of avoiding convergence to a
locally optimal solution. If there were no evaporation at all, the paths chosen by
the first ants would tend to be excessively attractive to the following ones. In that
case, the exploration of the solution space would be constrained.

Optimization Methods
Ant colony optimization (contd.)
 Thus, when one ant finds a good (short) path from the colony to a food source,
other ants are more likely to follow that path, and such positive feedback
eventually leaves all the ants following a single path.

 The idea of the ant colony algorithm is to mimic this behavior with "simulated
ants" walking around the search space representing the problem to be solved.

 Ant colony optimization algorithms have been used to produce near-optimal
solutions to the traveling salesman problem.

 They have an advantage over simulated annealing and genetic algorithm
approaches when the graph may change dynamically. The ant colony algorithm can
be run continuously and can adapt to changes in real time.

 This is of interest in network routing and urban transportation systems.
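The toy sketch below mimics the pheromone mechanism described above on the simplest possible "network": two alternative paths of different lengths between the nest and a food source. The path lengths, the deposit rule, and the evaporation rate are illustrative assumptions, not a full ACO implementation.

```python
import random

# Two alternative paths from nest to food; the shorter one should end up
# carrying almost all of the pheromone after repeated trips.
path_lengths = {"short": 1.0, "long": 2.5}
pheromone = {"short": 1.0, "long": 1.0}   # equal attractiveness at the start
EVAPORATION = 0.1                         # fraction of pheromone lost per trip

for _ in range(200):
    # an ant chooses a path with probability proportional to its pheromone
    total = pheromone["short"] + pheromone["long"]
    choice = "short" if random.uniform(0.0, total) <= pheromone["short"] else "long"

    # evaporation weakens both trails, then the chosen trail is reinforced;
    # a shorter path is reinforced more strongly (deposit = 1 / length)
    for path in pheromone:
        pheromone[path] *= (1.0 - EVAPORATION)
    pheromone[choice] += 1.0 / path_lengths[choice]

print(pheromone)   # the "short" entry should dominate
```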

Optimization Methods