OA4201 Exam 1 Review

January 30, 2013

Question 1
Consider the following function:
f(x) = x₁² + 3x₁x₂ + (x₂ − 1)²
Is this function convex? Explain.


Question 2
Consider minimizing the following function:
f(x) = x₁² + 3x₁x₂ + (x₂ − 1)²
For each of the following candidate solutions x*, use the first order necessary
condition, the second order necessary condition, and/or the second order
sufficient condition to evaluate x* as a potential local minimum.
• x* = [0 0]’
• x* = [6/5 -4/5]’
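
A quick numerical check of these conditions (a sketch in numpy; the gradient and the constant Hessian below are hand-derived for this f, not given on the slide):

```python
import numpy as np

def grad(x):
    # Hand-derived gradient of f(x) = x1^2 + 3*x1*x2 + (x2 - 1)^2
    return np.array([2*x[0] + 3*x[1], 3*x[0] + 2*(x[1] - 1)])

# f is quadratic, so its Hessian is this constant matrix.
H = np.array([[2.0, 3.0],
              [3.0, 2.0]])

for x in (np.array([0.0, 0.0]), np.array([6/5, -4/5])):
    print(x, "gradient:", grad(x), "Hessian eigenvalues:", np.linalg.eigvalsh(H))
    # First order necessary:   gradient = 0
    # Second order necessary:  all eigenvalues >= 0
    # Second order sufficient: gradient = 0 and all eigenvalues > 0
```

Because the Hessian here is constant, the same eigenvalue check also settles Question 1: f is convex exactly when its Hessian is positive semidefinite everywhere.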

What have we covered so far?


• Types of algorithms
– Exact algorithms
– Approximation algorithms
– Heuristics
• Decomposition methods
– Min/max decomposition
– Benders decomposition
• Unconstrained optimization
– Necessary and sufficient conditions for local optimality
• Convexity of functions and sets
– Definitions
– Relevance to nonlinear optimization
• Descent-based methods:
– Basic approach
– Descent directions
– Steepest descent algorithm


Types of Algorithms
• Exact algorithms produce optimal solutions (or as
close to optimal as you want).
• Approximation algorithms produce near-optimal
solutions.
– Solution quality is guaranteed a priori for all
problem instances.
– Performance guarantees can be constant-
factor, or they can be a function of the
problem size.
• Heuristics produce solutions of unknown quality.
– Bounds can sometimes be derived a posteriori.

Heuristics

• Greedy strategy
• Local search
• Simulated annealing
• Tabu search
• Genetic algorithm
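
As a reminder of the common flavor of these methods, here is a minimal local-search sketch (illustrative only; the objective and neighborhood functions are placeholders supplied by the caller, not anything defined in the course):

```python
def local_search(x0, objective, neighbors, max_iters=1000):
    """Generic local search for minimization: repeatedly move to an
    improving neighbor; stop at a local optimum or after max_iters."""
    x, fx = x0, objective(x0)
    for _ in range(max_iters):
        improving = [y for y in neighbors(x) if objective(y) < fx]
        if not improving:                # no improving neighbor: local optimum
            break
        x = min(improving, key=objective)  # greedy: take the best neighbor
        fx = objective(x)
    return x, fx
```

Simulated annealing and tabu search modify the acceptance rule of this loop to escape local optima; genetic algorithms instead maintain a population of solutions.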


Min/Max Decomposition
1. “Guess” an initial solution y.
2. Solve the Subproblem using this y.
– Accumulate another extreme point.
– Update upper bound if possible. If the upper bound is updated, set
y* = y (i.e., save the current y as a possible optimal solution).
– If upper bound = lower bound, stop.
Return y* as the optimal solution.
3. Solve the Master Problem using all accumulated extreme points.
– Update lower bound.
– If upper bound = lower bound, stop.
Return y* as the optimal solution.
4. Return to Step 2. Solve the Subproblem using the optimal y from Step 3.

Subproblem and Master Problem


• Subproblem: the inner (max) optimization problem.
– When considering only the inner problem, decision
variables for the outer (min) problem are fixed.
max (1  y)u1  (1  2 y )u2
u1 ,u2

s.t. u1  u2  1
u1 , u2  0
• Master problem: the outer (min) optimization problem,
with any subset of the inner problem’s extreme points.
min z
y,z
s.t. z ≥ (1 − y)u₁ˡ + (1 − 2y)u₂ˡ,  l = 1, …, k
     0 ≤ y ≤ 2


Benders Decomposition
• Benders decomposition is used for problems in which
variables can be separated into a vector x of “easy”
variables and a vector y of “hard” variables.
– Given values of the “hard” variables y, it is easy to
optimize with respect to the “easy” variables x.
• Consider the problem
z* = min f(y) + c'x
     x,y
s.t. Ax + By ≥ b
     x ≥ 0
     y ∈ Y

Benders Decomposition
• For a fixed y, the “easy” subproblem is:
h(y) = f(y) + min c'x
              x
s.t. Ax ≥ b − By
     x ≥ 0

• For fixed y, this is simply a linear program. Therefore, it
has a dual with the same optimal objective value:

h(y) = f(y) + max (b − By)'u
              u
s.t. A'u ≤ c
     u ≥ 0
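
For a specific instance, the dual subproblem is easy to solve with an off-the-shelf LP solver. A sketch (the data A, b, B, c and the trial y are placeholders you would supply):

```python
import numpy as np
from scipy.optimize import linprog

def dual_subproblem(A, b, B, c, y):
    """Solve  max (b - By)'u  s.t.  A'u <= c, u >= 0,
    stated as a minimization for scipy."""
    rhs = b - B @ y
    res = linprog(c=-rhs,                       # max rhs'u == min -rhs'u
                  A_ub=A.T, b_ub=c,             # A'u <= c
                  bounds=[(0, None)] * len(b))  # u >= 0
    return res.x, -res.fun                      # optimal u and (b - By)'u
```

Then h(y) is f(y) plus the returned value. (If the dual is unbounded, the primal subproblem is infeasible for that y and a feasibility cut is needed instead; that case is omitted in this sketch.)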


Consider the following problem:

min 3y₁ + 5y₂ + 2x₁ + x₂
y₁,y₂,x₁,x₂
s.t. x₁ + x₂ ≥ 1
     2x₁ + 3x₂ ≤ 12y₁
     x₂ + 3y₂ ≥ 1
     x₁ ≥ 0
     y₁, y₂ ∈ {0,1}

Let y1, y2 be the “hard” variables and x1, x2 be the “easy” variables.
a) Write the primal subproblem.

b) Write the dual subproblem.

c) Write the master problem.


Local and Global Optimality


• From Bertsekas (p. 4):
“A vector x* is an unconstrained local minimum of f if it
is no worse than its neighbors; that is, if there exists an
ε > 0 such that
f(x*) ≤ f(x)  ∀ x with ‖x − x*‖ < ε.
A vector x* is an unconstrained global minimum of f if
it is no worse than all other vectors; that is, if
f(x*) ≤ f(x)  ∀ x ∈ ℝⁿ.
The unconstrained local or global minimum x* is said
to be strict if the corresponding inequality above is
strict for x ≠ x*.”

Necessary and Sufficient Conditions


• Necessary conditions:
If A is a necessary condition for B, then:
– If A is true, B may or may not be true.
– If A is not true, B is definitely not true.

• Sufficient conditions:
If A is a sufficient condition for B, then:
– If A is true, B is definitely true.
– If A is not true, B may or may not be true.


Optimality Conditions
• First order necessary condition:
∇f(x*) = 0

• Second order necessary condition:
∇²f(x*) is positive semidefinite

• Second order sufficient condition:
∇f(x*) = 0 and ∇²f(x*) is positive definite

From Bertsekas Prop. 1.1.3 (p. 15):
If ∇f(x*) = 0 and ∇²f(x*) is positive definite, then
x* is a strict unconstrained local minimum of f.
• A matrix A is positive definite if
y'Ay > 0 for all y ≠ 0
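
In practice, a convenient numerical test for positive definiteness is attempting a Cholesky factorization, which exists exactly when a symmetric matrix is positive definite (a sketch):

```python
import numpy as np

def is_positive_definite(A):
    """Cholesky succeeds iff the symmetric matrix A is positive definite."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

# For positive SEMIdefiniteness, check the smallest eigenvalue instead:
def is_positive_semidefinite(A, tol=1e-10):
    return np.linalg.eigvalsh(A).min() >= -tol
```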

Convexity and Concavity


• When solving a convex optimization problem, we can
draw conclusions about the global quality of a
solution using only local information.
• The following conditions must hold:
– For minimization problems,
the objective function must be convex.
– For maximization problems,
the objective function must be concave.
– For both types of problems,
the feasible region must be convex.

Convex Functions
• A function f(x) is convex if:

f(αx̄ + (1 − α)x) ≤ αf(x̄) + (1 − α)f(x)  ∀ x̄, x, ∀ α ∈ [0,1]

• The Hessian matrix of a convex function is
positive semidefinite for all x.

Convex Sets
• A set S is convex if

αx̄ + (1 − α)x ∈ S  ∀ x̄, x ∈ S, ∀ α ∈ [0,1]


Why Convexity is Important


• Descent-based methods cannot be guaranteed to
find the optimal solution for a nonconvex
objective function:

Why Convexity is Important


• Descent-based methods cannot be guaranteed to
find the optimal solution for a nonconvex feasible
region:


Consequences of Convexity
• If f is convex,
every local minimum is a global minimum.
• If f is convex and x* is a stationary point,
then x* is a global minimum.
• If f is strictly convex,
then there exists at most one global minimum.

f(αx̄ + (1 − α)x) ≤ αf(x̄) + (1 − α)f(x)  ∀ x̄, x, ∀ α ∈ [0,1]

Descent Directions
• A direction d is a descent direction at a point x if
∇f(x)'d < 0.


Steepest Descent

• A popular approach: try to go “downhill.”


• Steepest descent method:
– Step 0: Pick x₀ ∈ ℝⁿ, set k = 0.
– Step 1: Set xₖ₊₁ = xₖ + αₖdₖ
• αₖ is the step size
• dₖ = −∇f(xₖ) is the descent direction
– Step 2: Evaluate xₖ₊₁.
If “good enough,” stop and return xₖ₊₁.
Else, set k = k + 1 and return to Step 1.
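
A minimal sketch of this loop in numpy (the backtracking step-size rule and the gradient-norm stopping test are standard choices filled in here, not specified on the slide):

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iters=10_000):
    """Steepest descent with a simple backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        d = -grad(x)                   # d_k = -grad f(x_k)
        if np.linalg.norm(d) < tol:    # "good enough": gradient nearly zero
            break
        alpha = 1.0                    # backtrack until f strictly decreases
        while f(x + alpha*d) >= f(x) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha*d                # x_{k+1} = x_k + alpha_k * d_k
    return x
```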

a) Write the gradient and Hessian of the following function:


f(x) = x₁⁴/4 + (2/3)x₁³ + x₂²

b) Consider minimizing the function from part (a):


min  f(x) = x₁⁴/4 + (2/3)x₁³ + x₂²
x∈ℝ²
Is this a convex optimization problem? Why or why not?


min  f(x) = x₁⁴/4 + (2/3)x₁³ + x₂²
x∈ℝ²
c) Evaluate each of the following candidate solutions x*. In particular, state whether or not
x* can be confirmed as a local minimum or a global minimum of f(x), and give a rationale
for your statement. If you’re sure that x* is not a local minimum of f(x), find a direction d
that is a descent direction at x*.
• x* = [0 0]’
• x* = [-2 0]’
• x* = [1 1]’
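
A sketch to back up this analysis numerically (the gradient and Hessian are hand-derived from the function above):

```python
import numpy as np

grad = lambda x: np.array([x[0]**3 + 2*x[0]**2, 2*x[1]])
hess = lambda x: np.array([[3*x[0]**2 + 4*x[0], 0.0],
                           [0.0, 2.0]])

for x in ([0.0, 0.0], [-2.0, 0.0], [1.0, 1.0]):
    x = np.array(x)
    print(x, "gradient:", grad(x),
          "Hessian eigenvalues:", np.linalg.eigvalsh(hess(x)))
    # If the gradient is nonzero, x* is not stationary, and d = -grad(x*)
    # is an immediate descent direction at x*.
```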
