OA4201 Nonlinear Programming
January 9, 2013
Overview
• Today:
– Welcome; syllabus
– Nonlinear formulation example:
portfolio optimization
– Overview of some heuristic algorithms
– Intro to min‐max decomposition
• Deliverables for next week:
– Baseline survey by COB Friday
Course Logistics: Personnel/Contact
• Instructor: Prof. Emily Craparo
• Email: emcrapar@nps.edu
• Office: GL‐238
• Office hours: Thurs/Fri 1400‐1500 or by appt
• Course website: http://cle.nps.edu
– Repository for handouts, lecture notes;
computational assignment turn‐in.
Course Logistics: Homework
• No regular graded homework
• Practice problems and solutions distributed approximately weekly
• Small in‐class quizzes; similar (or identical)
to practice problems
– Grading scale: 0‐5
– I will give notice of an upcoming quiz
• Some take‐home
formulation/implementation assignments
Course Logistics: Exams & Grading
• Two midterm exams:
– Friday, February 1
– Friday, March 1
• One 2‐hour final exam:
– Thursday, March 21
Final exam: 30%
Midterms: 25% (each)
Homework quizzes and exercises: 20% (total)
Optimization Courses at NPS
• OA3201: Linear Programming (LP)
Introduction to modeling and solving problems with linear variables and
constraints. Introduction to Mixed‐Integer Programming (MIP) models.
• OA4202: Network Flows and Graphs
Formulation and solution of network problems. Network interdiction.
• OA4201: Nonlinear Programming (NLP)
Advanced formulation. Modeling and solving problems with nonlinear
objectives and/or constraints. Solving models by decomposition.
• OA4203: Optimization Seminar
Advanced topics in optimization.
Optimization Nomenclature (Review)
• Decision variables: Unknowns that are under our control.
• We want to determine the best values for the decision variables;
these values are the output of our model.
• Any term we have control over is a decision variable.
• A solution is a set of values for the decision variables.
• Parameters/data: Values that are fixed in the model, i.e., input.
• Parameter: An aspect of a model that is fixed.
• Data: Specific values for the parameters.
• Any term we have no control over is a parameter.
• Constraints: Restrictions imposed on the decision variables.
• A feasible solution satisfies all constraints.
• Objective function: A measure indicating the value of a solution.
• The optimal solution gives the best possible value of the objective function
while satisfying all of the constraints.
Optimization Modeling
Word problem → Mathematical formulation → Computer implementation → Post‐optimality analysis

Example formulation:
maximize z = x1 + x2 + y
s.t.  x1 + 2x2 ≤ 5
      y − x1² ≤ 0
      x1, x2 ≥ 0
      y ∈ {0, 1}
Nonlinear Optimization
• Relative to LP, NLP requires a greater
emphasis on analysis.
• Want to be able to answer questions such as:
– Can we expect the solver to give us a globally
optimal solution, or is a locally optimal solution
the best we can hope for?
– Can we verify that the solution returned by the
solver is optimal (or at least locally optimal)?
– How quickly can we expect the solver to return a
solution?
Example: Portfolio Optimization
• Lockheed Martin (LMT): Expected return = r1 = 1.05.
• Delta Airlines (DAL): Expected return = r2 = 1.08.
• Goal: maximize expected return of portfolio.
• Decision variables:
Portfolio Optimization: Maximize Return
• Formulation:
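One natural formulation (a sketch; the slide leaves it blank): let x1 and x2 be the fractions of wealth invested in LMT and DAL, with an assumed budget constraint x1 + x2 = 1. With only that constraint, maximizing expected return is a linear program:

```python
from scipy.optimize import linprog

# Expected returns from the slides: LMT r1 = 1.05, DAL r2 = 1.08.
r = [1.05, 1.08]

# Decision variables x1, x2: fraction of wealth in each stock (assumed).
# maximize r1*x1 + r2*x2  <=>  minimize -(r1*x1 + r2*x2)
# s.t. x1 + x2 = 1, x1, x2 >= 0
res = linprog(c=[-r[0], -r[1]],
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0, None), (0, None)])
x1, x2 = res.x
print(x1, x2, -res.fun)  # everything goes into the higher-return stock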
Portfolio Optimization: Mitigate Risk
• How to model risk?
r1:
r2:
Portfolio Optimization: Mitigate Risk
• Goal: minimize variance of portfolio value such that
expected value is at least 1.06.
• Formulation:
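A sketch of this formulation: minimize portfolio variance xᵀΣx subject to the budget constraint and an expected return of at least 1.06. The covariance matrix Σ is not given on the slide; the values below (σ1 = 0.1, σ2 = 0.2, correlation −0.5) are assumptions chosen to be consistent with the portfolio standard deviation of about .066 reported on the next slide.

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([1.05, 1.08])   # expected returns from the slides

# Covariance matrix is NOT given on the slide; illustrative values
# (sigma1 = 0.1, sigma2 = 0.2, correlation = -0.5), chosen to be
# consistent with the reported portfolio standard deviation of ~.066.
Sigma = np.array([[ 0.01, -0.01],
                  [-0.01,  0.04]])

res = minimize(lambda x: x @ Sigma @ x,                       # portfolio variance
               x0=np.array([0.5, 0.5]),
               constraints=[{'type': 'eq',   'fun': lambda x: x.sum() - 1.0},
                            {'type': 'ineq', 'fun': lambda x: r @ x - 1.06}],
               bounds=[(0.0, None), (0.0, None)])
print(res.x, float(np.sqrt(res.fun)))  # roughly (2/3, 1/3), std dev ~ 0.067
```

The return constraint binds at the optimum, so x = (2/3, 1/3) and the expected return is exactly 1.06.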
Portfolio Optimization: Mitigate Risk
• Standard deviation of portfolio value: .066
– Expected return: 1.06
• Standard deviation when risk is ignored: .2
– Expected return: 1.08
• Big reduction in risk for small decrease in
expected return.
• Negative correlation allows hedging.
• Results are obvious when only two stocks are
involved; much less so when hundreds are
involved.
Roadmap (Approximate)
• Decomposition methods
• Unconstrained optimization
– Optimality conditions, algorithms, convergence
• Constrained optimization
– Convexity
• Duality
Types of Algorithms
• Exact algorithms produce optimal solutions (or as
close to optimal as you want).
• Approximation algorithms produce near‐optimal solutions.
– Solution quality is guaranteed a priori for all
problem instances.
– Performance guarantees can be constant‐factor, or they can be a function of the problem size.
• Heuristics produce solutions of unknown quality.
– Bounds can sometimes be derived a posteriori.
Types of Algorithms
• Exact algorithms produce optimal solutions (or as
close to optimal as you want).
– Simplex algorithm for linear programming
– Branch and bound / simplex for integer linear
programming
– Gradient descent (for convex optimization)
• For many interesting problems, it is thought that all
exact algorithms require an exponential number of
operations.
– In practice, this means that we can only solve
small instances in a reasonable amount of time.
Types of Algorithms
• Approximation algorithms produce near‐optimal
solutions and run more quickly than exact
algorithms.
– For example, you may be able to show that a
greedy approach yields a solution that is at
least half as good as the optimal solution.
– “Near‐optimal” only means that the algorithm has a non‐trivial performance guarantee.
Types of Algorithms
• Heuristics produce solutions of unknown quality.
– Gradient descent (for nonconvex optimization)
– Greedy ordering of cities (for the traveling
salesperson problem)
• Basically, a heuristic is something that seems like a
good idea, but isn’t backed by theoretical analysis.
• Heuristics are often used as a last resort for very large
or difficult problems.
A given approach can be an exact algorithm for some
problem classes, an approximation algorithm for others,
and a heuristic for others. Example: greedy strategy.
Heuristic: Greedy Strategy
• Usually used for selection or ordering problems.
• Start with some initial solution (usually an empty set or list).
• Make the best possible incremental improvement
to the solution.
• Repeat until a complete solution is formed, or no further improvements are possible.
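The steps above can be sketched in Python for a simple selection problem; the runner data is invented for illustration:

```python
def greedy_team(runners, k=4):
    """Greedy selection: start with an empty team, repeatedly make the
    best incremental improvement (add the fastest remaining runner),
    and stop when the team is complete."""
    team, pool = [], list(runners)
    while len(team) < k and pool:
        best = min(pool, key=lambda r: r[1])  # (name, time in seconds)
        team.append(best)
        pool.remove(best)
    return team

# Hypothetical runners; for this unconstrained problem greedy is exact.
runners = [("A", 50.1), ("B", 48.7), ("C", 52.3), ("D", 49.5), ("E", 51.0)]
print(greedy_team(runners))  # [('B', 48.7), ('D', 49.5), ('A', 50.1), ('E', 51.0)]
```

For picking the fastest 4 runners with no side constraints, this greedy procedure is actually optimal, which previews the point below that the same strategy can be exact for some problem classes and only a heuristic for others.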
Greedy Strategy
• From a group of runners, construct the fastest 4‐person
relay team.
• From a group of runners, construct the fastest 4‐person relay team of a single gender.
Greedy Strategy as Approximation Alg.
• Example: maximize a nonnegative, nondecreasing
submodular function subject to multiple‐choice
constraints.
• Submodular function:
f(S ∪ {i, j}) − f(S ∪ {j}) ≤ f(S ∪ {i}) − f(S)
• Multiple‐choice constraints: items available for
selection belong to classes. Can select at most one
item from each class.
• Greedy strategy gives objective value at least half
of optimal objective value.
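As an illustration, here is a greedy sketch for a coverage function (which is nonnegative, nondecreasing, and submodular) under multiple‐choice constraints; the items, classes, and element sets below are invented for the example:

```python
def greedy_multichoice(items, classes):
    """Greedy for max coverage with multiple-choice constraints:
    at each step, among classes not yet used, add the item with the
    largest marginal gain in covered elements."""
    covered, chosen, used = set(), [], set()
    while True:
        best, gain, best_cls = None, 0, None
        for cls, names in classes.items():
            if cls in used:
                continue
            for name in names:
                g = len(items[name] - covered)  # marginal gain of this item
                if g > gain:
                    best, gain, best_cls = name, g, cls
        if best is None:
            return chosen, len(covered)
        chosen.append(best)
        covered |= items[best]
        used.add(best_cls)

# Hypothetical instance: two classes, pick at most one item from each.
items = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {1, 5, 6}}
classes = {"c1": ["a", "b"], "c2": ["c", "d"]}
chosen, value = greedy_multichoice(items, classes)
print(chosen, value)
```

On any such instance, the greedy objective value is guaranteed to be at least half the optimal value; on this tiny instance it happens to match the optimum.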
Heuristic: Local Search
• Start with some incumbent solution.
• Pick a neighborhood of the current solution.
– For example, solutions reachable via a single
pairwise exchange.
• Evaluate all solutions in the neighborhood.
– If no solution within neighborhood is better,
stop with a locally optimal solution.
– Otherwise, pick the best solution within
neighborhood and continue.
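A minimal sketch of this loop for a small traveling‐salesperson instance, using a pairwise‐exchange (swap two cities) neighborhood; the distance matrix is made up:

```python
import itertools

def tour_cost(t, dist):
    return sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

def local_search(tour, dist):
    """Pairwise-exchange local search: evaluate every neighbor obtained
    by swapping two positions; move to the best improving neighbor;
    stop when no neighbor is better (a locally optimal solution)."""
    while True:
        best, best_cost = None, tour_cost(tour, dist)
        for i, j in itertools.combinations(range(len(tour)), 2):
            nb = list(tour)
            nb[i], nb[j] = nb[j], nb[i]
            c = tour_cost(nb, dist)
            if c < best_cost:
                best, best_cost = nb, c
        if best is None:
            return tour, tour_cost(tour, dist)  # local optimum
        tour = best

# Hypothetical symmetric distance matrix for 4 cities.
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
tour, cost = local_search([0, 2, 1, 3], dist)
print(tour, cost)
```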
Heuristic: Tabu Search
• Also called extended local search.
• Pick a neighborhood of the current solution.
– For example, pairwise exchanges.
• Pick the best solution within neighborhood that
– is not “taboo” (need to decide restriction and
list length)
– has a good cost
– goes in a good direction
– avoids a bad direction
– etc.
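A sketch of the same idea, using swaps of two tour positions as moves and a fixed‐length tabu list of recent swaps; the instance and parameter values are illustrative:

```python
import itertools
from collections import deque

def tour_cost(t, dist):
    return sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

def tabu_search(tour, dist, iters=50, tabu_len=5):
    """Extended local search: always move to the best neighbor whose
    move is not taboo, even if it worsens the cost, and remember the
    best solution seen overall."""
    tabu = deque(maxlen=tabu_len)          # recently used swaps are taboo
    best, best_cost = tour, tour_cost(tour, dist)
    for _ in range(iters):
        cands = []
        for i, j in itertools.combinations(range(len(tour)), 2):
            if (i, j) in tabu:
                continue
            nb = list(tour)
            nb[i], nb[j] = nb[j], nb[i]
            cands.append((tour_cost(nb, dist), (i, j), nb))
        if not cands:
            break
        c, move, tour = min(cands, key=lambda cand: cand[0])
        tabu.append(move)
        if c < best_cost:
            best, best_cost = tour, c
    return best, best_cost

# Same hypothetical 4-city instance as in the local search sketch.
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
best, best_cost = tabu_search([0, 2, 1, 3], dist)
print(best, best_cost)
```

Unlike plain local search, the walk keeps moving after reaching a local optimum; the tabu list prevents it from immediately undoing its last few moves.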
Heuristic: Simulated Annealing
• Named after the metallurgical annealing process
(heating and cooling the metal).
• Pick an initial solution, x, with objective function
value f(x).
• Pick an initial temperature T.
• Pick a random neighbor, y, with objective value f(y). Then:
Δf = f(y) − f(x)
If Δf < 0, x ← y
Else x ← y with probability e^(−Δf/T), where T > 0
• If a sufficient number of iterations have passed, reduce T.
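The update rule above can be sketched as follows; the test function, neighbor rule, and cooling parameters are illustrative, not from the lecture:

```python
import math
import random

def anneal(x0, f, neighbor, T=1.0, cooling=0.95, steps_per_T=20,
           T_min=1e-3, seed=0):
    """Simulated annealing for minimization: always accept improving
    moves; accept a worsening move with probability exp(-delta_f/T);
    periodically reduce the temperature T."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    while T > T_min:
        for _ in range(steps_per_T):
            y = neighbor(x, rng)
            delta = f(y) - fx              # delta_f = f(y) - f(x)
            if delta < 0 or rng.random() < math.exp(-delta / T):
                x, fx = y, f(y)
                if fx < best_f:
                    best, best_f = x, fx
        T *= cooling                       # cool after a batch of iterations
    return best, best_f

# Illustrative 1-D multimodal objective with a local and a global minimum.
f = lambda x: x**4 - 3 * x**2 + x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, best_f = anneal(2.0, f, step)
print(best, best_f)
```

At high T the walk accepts many uphill moves and explores broadly; as T falls, it behaves more and more like pure descent.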
Heuristic: Genetic Algorithm
• Inspired by biological evolution.
• Pick an initial “population” of feasible solutions.
• “Breed” solutions to produce new ones.
– Break parent solutions at the same point,
recombine solutions to produce children.
• Introduce random “mutations” to explore new regions of the solution space.
• “Cull” weak solutions to maintain population size.
• Repeat as desired.
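The loop above, sketched for bit‐string solutions with a toy "count the ones" fitness; all parameter values are illustrative:

```python
import random

def genetic(fitness, n_bits=20, pop_size=30, gens=60, p_mut=0.02, seed=0):
    """Toy genetic algorithm: breed by one-point crossover (break both
    parents at the same point and recombine), mutate by random bit
    flips, cull by keeping the fittest pop_size individuals."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(1, n_bits)     # same break point for both parents
            child = [b ^ (rng.random() < p_mut) for b in p1[:cut] + p2[cut:]]
            children.append(child)
        # Cull: keep the fittest pop_size of parents plus children.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = genetic(sum)   # maximize the number of 1 bits
print(sum(best))
```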
Decomposition
• Decomposition‐based techniques are exact
algorithms designed to make very large or
complicated problems tractable by breaking
them into smaller pieces.
• An alternative to using a heuristic; not always
applicable (requires ingenuity to recognize
decomposable structure).
Example: Counterinsurgency
• Consider an adversary who allocates effort between 2
missions:
• We can influence the adversary’s payoff through our
counterinsurgency effort y:
Example: Counterinsurgency
• Full formulation:
• Observations:
Adversary’s Problem
[Figure: the adversary’s feasible region {u1 + u2 ≤ 1, u1, u2 ≥ 0}, a triangle in the (u1, u2) plane with vertices (0, 0), (1, 0), and (0, 1).]
• Possible optimal solutions (depending on y):
Adversary’s Problem
h(y) = max{ 1 − y, 1 − 2y, 0 }
[Figure: the adversary’s payoff at each extreme point plotted against y; h(y) is the upper envelope of these lines.]
Our goal:
Our Problem
[Figure: h(y), the piecewise‐linear upper envelope of the lines 1 − y, 1 − 2y, and 0.]
• Observations:
Transform Problem
min h(y)   s.t. 0 ≤ y ≤ 2
Some Terminology
• Master problem: the outer (min) optimization problem,
with any subset of the adversary’s extreme points
considered.
min z   (over y and z)
• Subproblem: the inner (max) optimization problem.
– When considering only the inner problem, decision
variables for the outer (min) problem are fixed.
max (1 − y)u1 + (1 − 2y)u2   (over u1, u2)
s.t. u1 + u2 ≤ 1
u1, u2 ≥ 0
Iterative Accumulation of Extreme Pts.
• Subproblem for a given y:
max (1 − y)u1 + (1 − 2y)u2   (over u1, u2)
s.t. u1 + u2 ≤ 1
u1, u2 ≥ 0
• “Guess” a solution y, solve Subproblem:
Iterative Accumulation of Extreme Pts.
• Solve the Master Problem using all extreme
points accumulated so far:
• Repeat…
Min/Max Decomposition Algorithm
1. “Guess” an initial solution y.
2. Solve the Subproblem using this y.
– Accumulate another extreme point.
– Update upper bound if possible. If the upper bound is updated,
set y*=y (i.e., save the current y as a possible optimal solution).
– If upper bound = lower bound, stop.
Return y* as the optimal solution.
3. Solve the Master Problem using all accumulated extreme points.
– Update lower bound.
– If upper bound = lower bound, stop.
Return y* as the optimal solution.
4. Return to Step 2. Solve the Subproblem using the optimal y from Step 3.
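A sketch of these steps on the counterinsurgency example from the preceding slides, assuming the subproblem payoff (1 − y)u1 + (1 − 2y)u2 over {u1 + u2 ≤ 1, u ≥ 0} and the bound 0 ≤ y ≤ 2 (the master problem is solved as a small LP):

```python
from scipy.optimize import linprog

def subproblem(y):
    """Adversary's problem for fixed y: a linear program, so the
    optimum is at an extreme point of {u1 + u2 <= 1, u >= 0}."""
    pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    vals = [(1 - y) * u1 + (1 - 2 * y) * u2 for u1, u2 in pts]
    k = max(range(len(pts)), key=lambda i: vals[i])
    return pts[k], vals[k]

def decompose(y=0.0, tol=1e-9):
    """Min/max decomposition: alternate subproblem (gives an upper
    bound and a new extreme point) and master problem (gives a lower
    bound and a new y) until the bounds meet."""
    pts, ub, lb, y_star = [], float("inf"), float("-inf"), y
    while ub - lb > tol:
        pt, val = subproblem(y)          # accumulate another extreme point
        pts.append(pt)
        if val < ub:                     # update upper bound; save candidate y
            ub, y_star = val, y
        if ub - lb <= tol:
            break
        # Master: min z s.t. z >= (1 - y)u1 + (1 - 2y)u2 for each
        # accumulated extreme point, 0 <= y <= 2. Variables (y, z).
        A = [[-(u1 + 2 * u2), -1.0] for u1, u2 in pts]
        b = [-(u1 + u2) for u1, u2 in pts]
        res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(0.0, 2.0), (None, None)])
        y, lb = res.x[0], res.x[1]       # update lower bound
    return y_star, ub

y_star, value = decompose()
print(y_star, value)   # optimal value 0 is attained for y in [1, 2]
```

Only the extreme points the subproblem actually generates ever enter the master problem, which is what keeps the master small even when the adversary's polytope has many vertices.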