OA4201 Lecture Notes 3

January 22, 2013

Overview
• Last time:
– Further details on min/max decomposition
– Benders decomposition
• Today:
– Intro to unconstrained optimization
• Reading: Bertsekas pp. 2–15 (minus
“Sensitivity,” pp. 12–13).
• Upcoming:
– Quiz on Practice Problem Sets 1 & 2 tomorrow.
– Lab tomorrow (2nd part of class).


Unconstrained Optimization

• In this part of the course, we’ll be trying to solve
problems of the form

      min f(x) over x ∈ ℝⁿ

• Why study unconstrained optimization?
– Many interesting problems are unconstrained.
• E.g., maximum likelihood estimation (see the sketch after this list).
– Methods we develop for unconstrained optimization
will be useful when we add constraints.
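
As a concrete illustration, here is a minimal sketch (assuming NumPy and SciPy are available; the data values are hypothetical) that fits a normal distribution by maximum likelihood, posed as an unconstrained minimization:

```python
# Minimal sketch: maximum likelihood estimation of a normal mean and
# standard deviation as an unconstrained minimization problem.
import numpy as np
from scipy.optimize import minimize

data = np.array([2.1, 1.9, 2.4, 2.0, 2.2])  # hypothetical sample

def neg_log_likelihood(theta):
    # Optimize over log(sigma) so the problem stays unconstrained.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.sum(0.5 * ((data - mu) / sigma) ** 2 + np.log(sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])  # BFGS by default
print(result.x)  # approximately [sample mean, log(sample std dev)]
```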

Local and Global Optimality


• From Bertsekas (p. 4):
“A vector x* is an unconstrained local minimum of f if it
is no worse than its neighbors; that is, if there exists an
ε > 0 such that

      f(x*) ≤ f(x), ∀ x with ‖x − x*‖ < ε.

A vector x* is an unconstrained global minimum of f if
it is no worse than all other vectors; that is, if

      f(x*) ≤ f(x), ∀ x ∈ ℝⁿ.

The unconstrained local or global minimum x* is said
to be strict if the corresponding inequality above is
strict for x ≠ x*.”


Necessary and Sufficient Conditions


• Necessary conditions:
If A is a necessary condition for B, then:
– If A is true, B may or may not be true.
– If A is not true, B is definitely not true.

• Sufficient conditions:
If A is a sufficient condition for B, then:
– If A is true, B is definitely true.
– If A is not true, B may or may not be true.

Optimality Conditions
• We will learn various necessary and sufficient
conditions for optimality of solutions.
• Ex: consider minimizing f(x) = x².


Optimality Conditions
• Suppose x* is a local minimum of f(x), i.e.,
f(x) ≥ f(x*) for all x “close” to x*.


Optimality Condition
• We have derived a condition: If x* is an
unconstrained local minimum of f(x), then f′(x*) = 0.
• Does this give us a necessary condition for local
optimality? A sufficient condition?

Note: This condition also holds in multiple dimensions:
If x* is a local minimum of f(x), then ∇f(x*) = 0.
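
A quick numerical illustration of the first-order condition (a sketch; the quadratic f below is a hypothetical example, and the gradient is approximated by central differences):

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    # Central-difference approximation of the gradient of f at x.
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Hypothetical example: f has a local minimum at [1, -3].
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 3) ** 2
print(grad_fd(f, [1.0, -3.0]))  # approximately [0, 0], as the condition requires
```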


Second-Order Conditions
• Can we use second-order information
(i.e., second derivatives and Hessians) to obtain
another condition?


Example
• Example: consider minimizing f(x) = x⁴.
Does x* = 0 satisfy the first- and second-order
necessary conditions?
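
A sketch of checking these conditions symbolically (assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = x**4
print(sp.diff(f, x).subs(x, 0))     # 0: first-order necessary condition holds
print(sp.diff(f, x, 2).subs(x, 0))  # 0: >= 0, so the second-order necessary condition holds too
```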

Example
• Example: consider minimizing f(x) = (x₁−1)² − (x₂−2)².
Does x* = [1 2]′ satisfy the first- and second-order
necessary conditions?
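
A sketch of the same check for this two-variable example (the gradient and Hessian below were computed by hand from f):

```python
import numpy as np

# The gradient of f at [1, 2] is [2(x1-1), -2(x2-2)] = [0, 0], so the
# first-order condition holds.  The Hessian is constant:
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])
print(np.linalg.eigvalsh(H))  # [-2, 2]: a negative eigenvalue, so H is not PSD
# The second-order necessary condition fails: x* = [1 2]' is a saddle point.
```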


Positive Semidefinite Matrices


• A matrix A is positive semidefinite (PSD) if
y′Ay ≥ 0 for all vectors y ∈ ℝⁿ.
• A matrix A is positive definite (PD) if
y′Ay > 0 for all vectors y ≠ 0.
• How to determine whether a symmetric matrix is
PSD or PD? (See handout and Bertsekas App. A.4):
– Use the definition.
– Use eigenvalues. This can be easy if you have software
that computes them for you; it’s also easy for diagonal
matrices (the diagonal elements are the eigenvalues).
See the sketch after this list.
– Use determinants.
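
For the eigenvalue approach, here is a minimal sketch (assumes NumPy; the tolerance and test matrices are illustrative):

```python
import numpy as np

def classify(A, tol=1e-10):
    # Classify a symmetric matrix by its eigenvalues.
    w = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    return "neither"

print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))   # positive semidefinite (eigenvalues 0 and 2)
print(classify(np.array([[2.0, 0.0], [0.0, -2.0]])))  # neither
```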

2nd Order Sufficient Condition


• From Bertsekas Prop. 1.1.3 (p. 15):
If ∇f(x*) = 0 and ∇²f(x*) is positive definite, then
x* is a strict unconstrained local minimum of f.


Example
• Consider f(x) = x². What can we say about x* = 0?

Example
• Consider f(x) = (x₁−3)² + (x₂+1)².
What can we say about x* = [3 −1]′?
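
A sketch applying the sufficient condition here (gradient and Hessian computed by hand from f):

```python
import numpy as np

# The gradient [2(x1-3), 2(x2+1)] vanishes at x* = [3, -1]; the Hessian is 2I.
H = 2 * np.eye(2)
print(np.linalg.eigvalsh(H))  # [2, 2]: both positive, so H is positive definite
# By the sufficient condition, x* = [3 -1]' is a strict local minimum
# (in fact the global minimum, since f is a sum of squares).
```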


Example
• Consider f(x) = x³. What can we say about x* = 0?

Example
• Consider f(x) = x⁴. What can we say about x* = 0?


Convexity and Concavity


• For certain types of NLPs, it’s possible to establish
global optimality using local conditions.
• In a convex optimization problem, every local
optimum is a global optimum.
• A convex optimization problem has the following
attributes:
– If minimizing, the objective function is convex.
– If maximizing, the objective function is concave.
– For both types of problems, the feasible region is
convex.

Convex Functions
• A function f(x) is convex if:

      f(λx̄ + (1−λ)x) ≤ λf(x̄) + (1−λ)f(x)  ∀ x̄, x, λ ∈ [0,1]

• The Hessian matrix of a twice-differentiable convex
function is positive semidefinite for all x.


Concave Functions
• A function f(x) is concave if:

      f(λx̄ + (1−λ)x) ≥ λf(x̄) + (1−λ)f(x)  ∀ x̄, x, λ ∈ [0,1]

• The Hessian matrix of a twice-differentiable concave
function is negative semidefinite for all x.

Convex Sets
• A set S is convex if

      λx̄ + (1−λ)x ∈ S  ∀ x̄, x ∈ S, λ ∈ [0,1]



What about these functions?


• Affine: f(x) = ax+b

• Exponential: f(x) = eax

• Logarithm: f(x) = ln(x)

• Sine: f(x) = sin(x)

• Absolute value: f(x) = |x|



Convexity-Preserving Operations
• Some operations preserve convexity, for example:
• Nonnegative weighted sum

• Composition (under certain conditions)

• Pointwise maximum

• Vector norm of an affine function


How to Show that a Function is Convex


• Use the definition (a numeric spot-check is sketched
after this list):

      f(λx̄ + (1−λ)x) ≤ λf(x̄) + (1−λ)f(x)  ∀ x̄, x, λ ∈ [0,1]

• Show that the Hessian is positive semidefinite:

• Show that the function can be obtained from other


convex functions via convexity-preserving operations.
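
A numeric spot-check of the definition can disprove convexity (one violated λ is a counterexample) but cannot prove it; a minimal sketch, assuming NumPy:

```python
import numpy as np

def convexity_spot_check(f, x, y, n=11):
    # Test f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y) on a grid of l values.
    # Passing is only evidence of convexity; failing proves nonconvexity.
    return all(f(l * x + (1 - l) * y) <= l * f(x) + (1 - l) * f(y) + 1e-12
               for l in np.linspace(0.0, 1.0, n))

print(convexity_spot_check(abs, -2.0, 3.0))      # True: |x| passes
print(convexity_spot_check(np.sin, 0.0, np.pi))  # False: sin(x) is not convex
```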

How to Show that a Function is Nonconvex


• Use the definition: find x̄, x, and λ ∈ [0,1] for which

      f(λx̄ + (1−λ)x) > λf(x̄) + (1−λ)f(x)

• Show that the Hessian is not positive semidefinite:



Gradient Methods
• To solve a minimization problem (sketched in code after these steps):
1. Start at some initial solution (a guess).
2. Find a feasible descent direction.
• I.e., a direction in which we can move and still
stay within the feasible region, and in which
our objective value improves.
3. Move some distance in that direction; arrive at
new solution.
4. Evaluate new solution; stop if it is “satisfactory.”
Otherwise, go back to Step 2.
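
A minimal sketch of these steps for the unconstrained case (fixed step size; the quadratic example and its hand-computed gradient are illustrative):

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    # Steps 1-4 above, with -grad(x) as the descent direction and a
    # small gradient norm as the "satisfactory" stopping test.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Example: f(x) = (x1-3)^2 + (x2+1)^2 with gradient [2(x1-3), 2(x2+1)].
print(gradient_descent(lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] + 1)]),
                       [0.0, 0.0]))  # approximately [3, -1]
```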

Why Convexity is Important


• Gradient-based methods cannot be guaranteed
to find the optimal solution for a nonconvex
objective function:


Why Convexity is Important


• Gradient methods cannot be guaranteed to find
the optimal solution for a nonconvex feasible
region:

Consequences of Convexity: Unconstrained Optimization

• If f is convex,
every local minimum is a global minimum.
• If f is convex and x* is a stationary point,
then x* is a global minimum.
• If f is strictly convex,
then there exists at most one global minimum.
(Strict convexity: f(λx̄ + (1−λ)x) < λf(x̄) + (1−λ)f(x)
∀ x̄ ≠ x, λ ∈ (0,1).)
