
Nonlinear Optimization
Lecture 1: Introduction
Professor Frank E. Curtis
Lehigh University
Spring 2015


Outline

Motivating Examples

Optimization

Linearity vs. Nonlinearity

Course Overview



Data fitting
Suppose we have 10 values of solar spectroscopy data $y_i$ taken at wavelengths $t_i$.

Physicists have been studying the solar spectrum since Newton used a prism to split white light into a spectrum of colors.

Spectroscopy data can be used to derive properties of distant stars, such as their distance, chemical composition, temperature, mass, etc.


Data fitting
For example, as a function of wavelength, the measurements of interest may resemble a noisy bell curve.

[Figure: noisy measurements $y_i$ plotted against the wavelengths $t_0, t_1, \dots, t_9$]

Data fitting
Suppose we want to find the bell curve that is closest to the data.

Suppose that the true value at $t_i$ is $y(t_i)$.

We want to determine $x = (x_1, \dots, x_4)$ in the function

$y(x, t) = x_1 + x_2 e^{-(t + x_3)^2 / x_4}$

with the idea that for some optimal $x^*$ we have $y(x^*, t_i) \approx y(t_i)$.

The difference (i.e., residual) between $y(x, t_i)$ and $y_i$ is given by

$r_i(x) := y(x, t_i) - y_i.$

We can define the closest curve to be the solution to

$\min_{x \in \mathbb{R}^4} \ \sum_{i=1}^{10} r_i^2(x) = \sum_{i=1}^{10} \left( x_1 + x_2 e^{-(t_i + x_3)^2 / x_4} - y_i \right)^2.$
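This is a nonlinear least-squares problem. As a rough illustration (not from the lecture), the sketch below fits the bell-curve model to synthetic data with SciPy; the data, starting point, and parameter values are illustrative assumptions.

```python
# Minimal sketch: fit y(x, t) = x_1 + x_2 * exp(-(t + x_3)^2 / x_4) to noisy data
# by nonlinear least squares. The data and starting point are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 9.0, 10)                          # wavelengths t_0, ..., t_9
rng = np.random.default_rng(0)
y = 1.0 + 2.0 * np.exp(-(t - 4.5)**2 / 3.0) \
    + 0.1 * rng.standard_normal(t.size)                # noisy "measurements" y_i

def residuals(x):
    # r_i(x) = y(x, t_i) - y_i
    return x[0] + x[1] * np.exp(-(t + x[2])**2 / x[3]) - y

x0 = np.array([0.0, 1.0, -4.0, 1.0])                   # illustrative initial guess
sol = least_squares(residuals, x0)                     # solves the least-squares problem
print(sol.x, 0.5 * np.sum(sol.fun**2))
```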


Optics

Suppose your view is from point z and there is an object at point y in the same room. You see point y through a reflection in a curved surface. Where on the curve do you see point y?


Optics
We have two points, y and z, and a curve $c(x) = 0$.
If a light source at y reflects off the curve and hits point z, then at what point on the curve did the reflection occur?

[Figure: the curve $c(x) = 0$ drawn in the $(x_1, x_2)$-plane]


Optics

Fermat's principle is that the light will travel to minimize distance.

We only want to consider x satisfying

$c(x) = 0.$

We want to find x as the solution to

$\min_{x \in \mathbb{R}^2} \ \|y - x\| + \|z - x\|.$

(Later, we will see from the optimality conditions of this problem that $x^*$ is the point at which the angles between the curve's normal vector and each of $(y - x^*)$ and $(z - x^*)$ are equal.)
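As a rough numerical sketch (not from the lecture), the reflection problem can be posed directly to a constrained solver; the unit-circle "mirror" and the points y and z below are illustrative assumptions.

```python
# Minimal sketch: find the reflection point on a curve c(x) = 0 by solving
# min ||y - x|| + ||z - x||  s.t.  c(x) = 0.
# The unit-circle mirror and the points y, z are illustrative choices.
import numpy as np
from scipy.optimize import minimize

y = np.array([2.0, 0.5])
z = np.array([0.5, 2.0])

def path_length(x):
    return np.linalg.norm(y - x) + np.linalg.norm(z - x)

def c(x):
    return x[0]**2 + x[1]**2 - 1.0              # curved surface: the unit circle

x0 = np.array([1.0, 1.0]) / np.sqrt(2.0)        # starting point on the circle
sol = minimize(path_length, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": c}])
print(sol.x)                                     # candidate reflection point
```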


Robotics

Suppose we want to minimize the time it takes a robot arm to travel from A to B.


Robotics

We can optimize over the arm length $\rho(t)$, the horizontal and vertical angles $(\theta(t), \phi(t))$, the controls (of motion) $(u_\rho(t), u_\theta(t), u_\phi(t))$, and the final time $t_f$.

We have equations of motion

$L \rho'' = u_\rho, \quad I_\theta \theta'' = u_\theta, \quad \text{and} \quad I_\phi \phi'' = u_\phi,$

where

$I_\theta = \frac{(L - \rho)^3 + \rho^3}{3} \sin^2(\phi) \quad \text{and} \quad I_\phi = \frac{(L - \rho)^3 + \rho^3}{3},$

along with appropriate boundary conditions.

We also have the bounds

$\rho(t) \in [0, L], \quad |\theta(t)| \le \pi, \quad 0 \le \phi(t) \le \pi, \quad |u_\rho| \le 1, \quad |u_\theta| \le 1, \quad \text{and} \quad |u_\phi| \le 1.$
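To make the model concrete, here is a minimal Python sketch of the model functions above; the total length $L$ and the sample inputs are illustrative assumptions, and a time-optimal control solver would discretize the dynamics on top of functions like these.

```python
# Minimal sketch: evaluate the robot-arm model functions from this slide.
# The total length L and the sample state/controls are illustrative values.
import numpy as np

L = 5.0

def inertias(rho, phi):
    # I_theta = ((L - rho)^3 + rho^3)/3 * sin^2(phi),  I_phi = ((L - rho)^3 + rho^3)/3
    base = ((L - rho)**3 + rho**3) / 3.0
    return base * np.sin(phi)**2, base

def accelerations(rho, phi, u_rho, u_theta, u_phi):
    # Solve L*rho'' = u_rho, I_theta*theta'' = u_theta, I_phi*phi'' = u_phi
    # for the second derivatives of the state.
    I_theta, I_phi = inertias(rho, phi)
    return u_rho / L, u_theta / I_theta, u_phi / I_phi

print(accelerations(rho=2.5, phi=np.pi / 4, u_rho=1.0, u_theta=-0.5, u_phi=0.25))
```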



Motivation

This course is motivated by the mathematical optimization problem to

minimize over x a function f of x subject to x being in a set $\mathcal{X}$,

or, more concisely,

$\min_x \ f(x) \ \ \text{s.t.} \ \ x \in \mathcal{X}.$

For the most part, we consider problems where $\mathcal{X}$ is implicitly defined, as in

$\min_x \ f(x) \ \ \text{s.t.} \ \ c_{\mathcal{E}}(x) = 0, \ c_{\mathcal{I}}(x) \ge 0.$

Goals to optimize are pervasive in the world, whether by humans or in nature.

Humans aim to optimize designs, decisions, processes, etc.

Nature aims to optimize time, energy, etc.

Solving models of these forms helps us make decisions and understand nature.
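As a rough illustration of this general form (with a made-up objective and constraints, not from the lecture), such a problem can be handed to an off-the-shelf solver:

```python
# Minimal sketch of the general form  min f(x)  s.t.  c_E(x) = 0,  c_I(x) >= 0,
# using an illustrative objective and constraints.
import numpy as np
from scipy.optimize import minimize

f   = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2     # objective f(x)
c_E = lambda x: x[0] + x[1] - 3.0                     # equality constraint, c_E(x) = 0
c_I = lambda x: x[0]                                  # inequality constraint, c_I(x) >= 0

sol = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "eq",   "fun": c_E},
                            {"type": "ineq", "fun": c_I}])
print(sol.x)
```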


Solving an optimization problem is difficult

Besides well-known classical examples such as

the shortest distance between two points is a straight line,¹

optimization problems typically do not have trivial and/or analytical solutions.

Weather prediction (data assimilation, nonlinear differential equations)

Portfolio optimization (budget/external limitations, data uncertainty)

Electricity grid optimization (system stability, uncertainty, network design)

...

¹ Even this axiom falters in the more general context of a curved space, such as the surface of the earth, where the (locally) shortest path follows a geodesic.

Classifications

Contemporary experts typically specialize in certain classes of problems:



Continuous

Linear vs. nonlinear

Convex vs. nonconvex

Conic

Discrete

Network

Stochastic

Global vs. local

Of course, a real-world problem often falls under several of these classifications at once.



A linear vs. a nonlinear world

Everyone loves linearity.

Linear equations are easy to solve.

Linear regression is easy and intuitive.

Linear differential equations are easy to solve.

Linear optimization is easy and fast.

That's all very nice, but as we have seen, the world is nonlinear.


Linear approximations
It would be nice to replace nonlinear functions with linear ones, but should we?

A poor linear approximation puts the minimizer at a particular point.

A better approximation puts it anywhere in an interval around that point.

Tipping the red line can make it at either extreme!
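As a tiny numeric illustration of that last point (not from the lecture; the interval and slopes below are made up): the minimizer of a linear approximation over an interval always sits at an endpoint, and flipping the sign of its slope moves it to the other extreme.

```python
# Minimal sketch: over an interval, a linear model is minimized at an endpoint,
# and "tipping" the line (changing the sign of its slope) flips the endpoint.
# The interval and slopes below are illustrative values.
a, b = 1.0, 3.0                                 # interval around the true minimizer

def argmin_linear(slope, lo, hi):
    return lo if slope > 0 else hi              # a linear function attains its min at an endpoint

for slope in (+0.1, -0.1):                      # two slightly "tipped" linear approximations
    print(f"slope {slope:+.1f} -> minimizer at x = {argmin_linear(slope, a, b)}")
```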


Handling nonlinearity

We should know nonlinear optimization, even if we don't use it.

What does it mean to have a solution?

How do we characterize solutions in general?

How might solutions be different if we use approximations?

What makes a nonlinear problem difficult?

What are the options for solving nonlinear problems?

What are the computational issues not seen in linear optimization?



Topics we will cover

We will study nonlinear equations and nonlinear optimization problems.

The optimization problems can be unconstrained or constrained.

Aside from one set of notes, we assume that the functions involved are smooth.

We will learn how to characterize and understand optimality conditions.

We will learn about modern optimization algorithms.

We will study the theoretical convergence properties of these algorithms.


Topics we will not cover

No one studies or knows how to solve the hardest types of optimization problems!
Nonlinear optimization (in this course) means we consider problems that are

Finite dimensional; i.e., $x \in \mathbb{R}^n$.

Deterministic; i.e., not stochastic/random.

Smooth (for the most part); i.e., gradients exist.

Continuous; i.e., no binary/integer variables.

We focus on algorithms, but also discuss convergence theory for these algorithms.


Topics you are expected to know already


Calculus:

Derivatives

Gradients

Hessians

Real Analysis:

Sequences (subsequences, boundedness, accumulation points, etc.)

Continuity and limits

Linear Algebra:

Vectors/matrices

Matrix properties (e.g., symmetric, positive (semi)definite, (non)singular, etc.)

Determinants, eigenvalues, and eigenvectors

Vector and matrix norms

Matrix factorizations

Condition numbers

Please see the mathematical background document on Course Site!
