
AOE 5734: Convex Optimization

Instructor: Mazen Farhood (farhood@vt.edu)

Text: Convex Optimization (Boyd & Vandenberghe)


http://www.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

Grading: Homework 20%, Take-Home Midterm 40%, Take-Home Final 40%

Homework: Weekly; assigned each Friday and due the following Friday by 4PM; students are encouraged to work in small groups, but each student must hand in his or her own work

Lecture 1: Introduction

Mathematical optimization

Least-squares & linear programming

Convex optimization

Nonlinear optimization

Mathematical Optimization

A mathematical optimization problem has the form

    minimize   f_0(x)
    subject to f_i(x) ≤ b_i,  i = 1, …, m

where x = (x_1, …, x_n) is the optimization variable, f_0 : R^n → R is the objective function, and f_1, …, f_m are the constraint functions

Mathematical Optimization

Linear program if objective and constraint functions are linear, i.e., for all x, y ∈ R^n and all α, β ∈ R,

    f_i(αx + βy) = αf_i(x) + βf_i(y)

Otherwise, nonlinear program

Mathematical Optimization

Focus: convex optimization problems, where the objective and constraint functions are convex, i.e.,

    f_i(αx + βy) ≤ αf_i(x) + βf_i(y)

for all x, y ∈ R^n and all α, β ∈ R with α + β = 1, α ≥ 0, β ≥ 0

Convex optimization is a generalization of linear programming (linear functions satisfy the inequality with equality, so every linear program is a convex problem)

Mathematical Optimization / Examples


Portfolio Optimization

[Figure: capital allocated among assets 1, …, n]

x describes the overall portfolio allocation, with x_i the amount invested in asset i

Constraints: limits on the amounts to be invested, nonnegative investments, minimum acceptable return

Objective function: a measure of the overall risk

Problem: choose the portfolio allocation that minimizes risk among all allowable allocations
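As an illustration (not from the slides), here is a minimal cvxpy sketch of one such problem; the data mu, Sigma, B, and r_min are synthetic, and the quadratic risk measure (return variance) is just one common choice:

```python
import numpy as np
import cvxpy as cp

# Illustrative data (assumed, not from the slides): 4 assets with
# expected returns mu and return covariance Sigma (must be PSD).
mu = np.array([0.05, 0.08, 0.03, 0.07])
F = np.array([[0.8, 0.2],
              [1.0, 0.4],
              [0.3, 0.1],
              [0.9, 0.5]])
Sigma = F @ F.T + 0.01 * np.eye(4)      # factor model => symmetric PSD

B = 1.0                                  # total capital (normalized)
r_min = 0.06                             # minimum acceptable expected return

x = cp.Variable(4)                       # x_i: fraction of capital in asset i
risk = cp.quad_form(x, Sigma)            # overall risk: portfolio return variance
constraints = [cp.sum(x) == B,           # invest all the capital
               x >= 0,                   # nonnegative investments
               mu @ x >= r_min]          # minimum acceptable return

cp.Problem(cp.Minimize(risk), constraints).solve()
print(np.round(x.value, 3))              # risk-minimizing allocation
```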

Mathematical Optimization / Examples


Data Fitting

Find a model, from a family of potential models, that best fits some observed data and prior information

Variables: parameters in the model

Constraints: prior information or limitations on parameters

Objective function: measure of misfit or prediction error between observed data and the values predicted by the model

Problem: find model parameter values that are consistent with prior information and give the smallest misfit

Numerous applications in engineering

Embedded real-time optimization

Mathematical Optimization / Solving

Effectiveness of solution methods (algorithms) varies and depends on:

Particular forms of the objective & constraint functions
Number of variables
Number of constraints
Special structure such as sparsity (each constraint depends on only a small number of variables)

Even when the objective & constraint functions are smooth (e.g., polynomials), the optimization problem may be difficult to solve

Effective and reliable algorithms exist for the following problem classes:

Least-squares problems (very widely known)
Linear programming (very widely known)
Convex optimization (less well-known)

Least-squares problems

    minimize ||Ax − b||_2^2

with A ∈ R^{k×n}, k ≥ n, and variable x ∈ R^n

No constraints

Analytical solution: x = (A^T A)^{−1} A^T b

Good algorithms with software implementations, plus high accuracy and high reliability

Solution time proportional to n^2 k, with a known constant

100s of variables, 1000s of terms: a few seconds (desktop)

Reliable enough for embedded optimization

If A is sparse, solution time is faster than n^2 k (tens of thousands of variables, hundreds of thousands of terms, about a minute)
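A minimal numpy sketch of the above (synthetic data), comparing the analytical normal-equations solution with numpy's factorization-based routine:

```python
import numpy as np

# Least-squares sketch: minimize ||Ax - b||_2^2 with A (k x n), k >= n.
rng = np.random.default_rng(0)
k, n = 1000, 100                      # 1000 terms, 100 variables
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Analytical solution x = (A^T A)^{-1} A^T b via the normal equations.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# In practice, a QR/SVD-based routine is preferred for numerical accuracy.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # the two agree on well-conditioned data
```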

Using least-squares

Mature technology

Basis for regression analysis, optimal control, and many parameter estimation and data fitting methods

Has statistical interpretations, e.g., as maximum likelihood estimation of a vector x given linear measurements corrupted by Gaussian measurement errors

Recognition is easy: (1) quadratic objective function; and (2) the associated quadratic form is positive semidefinite

In addition to the basic problem, we have weighted least-squares:

    minimize Σ_{i=1}^{k} w_i (a_i^T x − b_i)^2,  with weights w_i > 0

Another technique is regularization:

    minimize ||Ax − b||_2^2 + ρ Σ_{i=1}^{n} x_i^2,  with ρ > 0
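A short sketch (synthetic data) showing how both variations reduce to ordinary least-squares computations:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 200, 20
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Weighted least-squares: minimize sum_i w_i (a_i^T x - b_i)^2.
# Equivalent to ordinary least-squares after scaling each row by sqrt(w_i).
w = rng.uniform(0.5, 2.0, k)
sw = np.sqrt(w)
x_wls, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)

# Regularization (ridge): minimize ||Ax - b||^2 + rho * ||x||^2,
# solved via the modified normal equations (A^T A + rho I) x = A^T b.
rho = 0.1
x_reg = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)
print(x_wls[:3], x_reg[:3])
```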

Linear programming (LP)

    minimize   c^T x
    subject to a_i^T x ≤ b_i,  i = 1, …, m

No simple analytical formula for the solution

A variety of very effective solution methods, including Dantzig's simplex method and interior-point methods

Complexity in practice is on the order of n^2 m (assuming m ≥ n), but with a less well-characterized constant than for least-squares

Quite reliable methods (though not as reliable as those for least-squares problems)

100s of variables, 1000s of constraints: a matter of seconds
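For concreteness, a small LP in the form above solved with scipy.optimize.linprog; the data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# A small LP in the standard form of the slides:
#   minimize    c^T x
#   subject to  a_i^T x <= b_i, i = 1, ..., m
c = np.array([-1.0, -2.0])                 # linprog minimizes c^T x
A_ub = np.array([[1.0, 1.0],
                 [1.0, -1.0],
                 [-1.0, 0.0],
                 [0.0, -1.0]])
b_ub = np.array([4.0, 2.0, 0.0, 0.0])      # last two rows encode x >= 0 as -x <= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
print(res.x, res.fun)                      # optimizer and optimal value
```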

Linear programming

If the problem is sparse or has exploitable structure, we can solve instances with tens or hundreds of thousands of variables & constraints

Extremely large problems are still a challenge

Mature technology

Solvers can be (and are) embedded in many tools and applications

Using linear programming

Some applications lead to standard forms; in others, problems are transformed to equivalent linear programs

Chebyshev approximation problem:

    minimize max_{i=1,…,k} |a_i^T x − b_i|

which can be recast as the LP

    minimize   t
    subject to a_i^T x − t ≤ b_i,  −a_i^T x − t ≤ −b_i,  i = 1, …, k

with variables x ∈ R^n and t ∈ R

Recognizing an LP is more involved than recognizing a least-squares problem, but only a few tricks are used (recognition can be partially automated)
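A sketch of the Chebyshev reformulation above using scipy.optimize.linprog, with t stacked as an extra variable; the data are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev approximation, minimize max_i |a_i^T x - b_i|, recast as the LP
#   minimize    t
#   subject to  a_i^T x - t <= b_i,   -a_i^T x - t <= -b_i
# with variables x in R^n and t in R (t is the last component below).
rng = np.random.default_rng(2)
k, n = 100, 10
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

c = np.r_[np.zeros(n), 1.0]                       # objective: minimize t
A_ub = np.vstack([np.c_[A, -np.ones(k)],          #  a_i^T x - t <= b_i
                  np.c_[-A, -np.ones(k)]])        # -a_i^T x - t <= -b_i
b_ub = np.r_[b, -b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x, t = res.x[:n], res.x[-1]
print(t, np.max(np.abs(A @ x - b)))               # optimal t equals the max residual
```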

Convex optimization

    minimize   f_0(x)
    subject to f_i(x) ≤ b_i,  i = 1, …, m

where f_0, …, f_m are convex

Least-squares problems and linear programs are both special cases of the general convex optimization problem

Solving convex optimization problems

In general, no analytical formula for the solution

Very effective, reliable methods, e.g., interior-point methods

Interior-point methods can solve problems in a number of steps or iterations that is almost always in the range between 10 and 100

Each step requires on the order of max{n^3, n^2 m, F} operations, where F is the cost of evaluating the first and second derivatives of the objective and constraint functions

100s of variables, 1000s of constraints: a few tens of seconds

Exploiting structure such as sparsity, we can solve larger problems, with many thousands of variables & constraints

Not yet a mature technology: still an active research area, with no consensus on the best method
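As a small illustration (not from the slides), a constrained least-squares problem expressed with the cvxpy modeling package, which hands the problem to such a solver:

```python
import numpy as np
import cvxpy as cp

# A small convex problem (constrained least-squares) with synthetic data.
rng = np.random.default_rng(3)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective
constraints = [cp.norm(x, "inf") <= 0.5,             # convex constraints
               cp.sum(x) == 1]

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, x.value)
```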

Using convex optimization

With only a bit of exaggeration: if you formulate a practical problem as a convex optimization problem, then you have solved the original problem

Many more tricks for transforming problems into convex form than for transforming them into linear programs

Recognizing a convex optimization problem can be challenging

This course will give you the background needed to do this

Many problems can be solved via convex optimization

Nonlinear programming (NP)

Not linear but not known to be convex

No effective methods for solving general NP

Even simple-looking problems can be extremely challenging

So compromise is the way to go in this case

Local optimization

Locally optimal point, i.e., one that minimizes the objective function among feasible points near it

Well-developed, fast, widely applicable methods that can handle large-scale problems (they only require differentiability of the objective and constraint functions)

Used, for instance, to improve the performance of an engineering design

Disadvantages:

Requires an initial guess of the optimization variable (which can greatly affect the objective value of the local solution obtained)
Sensitive to algorithm parameter values
Little info on how far the local solution is from the globally optimal value

More art than technology: the art is in solving the problem, whereas in the convex case the art is in formulating the problem
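A tiny scipy sketch of the first disadvantage above: the same local method started from different initial guesses returns different local minima (the objective is an arbitrary nonconvex example):

```python
import numpy as np
from scipy.optimize import minimize

# A nonconvex 1-D objective with several local minima; a local method
# converges to different minimizers depending on the initial guess.
def f(x):
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

for x0 in [-3.0, 0.0, 3.0]:
    res = minimize(f, x0=[x0])                # default quasi-Newton (BFGS)
    print(f"start {x0:+.1f} -> x* = {res.x[0]:+.4f}, f(x*) = {res.fun:+.4f}")
```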

Global optimization

Worst-case complexity of methods grows exponentially with the problem sizes n and m

Used when the number of variables is small and long computing times are not critical

Applications: worst-case analysis or verification of a safety-critical system (if the worst-case values of the design parameters are acceptable, we can certify the system as safe or reliable)

Role of convex optimization in nonconvex problems

Initialization for local optimization

Find an approximate convex formulation; use the solution of the approximate problem as the starting point for a local optimization method

Bounds for global optimization

Bounds on the optimal value via convex relaxation of nonconvex constraints
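A standard textbook-style illustration (not from the slides): relaxing the Boolean constraint x_i ∈ {0, 1} to 0 ≤ x_i ≤ 1 turns a nonconvex Boolean LP into an ordinary LP whose optimal value lower-bounds the nonconvex optimum:

```python
import numpy as np
from scipy.optimize import linprog

# Convex relaxation sketch: the Boolean LP
#   minimize c^T x  s.t.  Ax <= b,  x_i in {0, 1}
# is nonconvex; relaxing to 0 <= x_i <= 1 gives an LP whose optimal
# value is a lower bound on the Boolean optimum.
rng = np.random.default_rng(4)
m, n = 15, 8
A = rng.standard_normal((m, n))
b = A @ (rng.random(n) > 0.5) + rng.random(m)   # guarantees a feasible Boolean point
c = rng.standard_normal(n)

res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1))
print("lower bound on Boolean optimum:", res.fun)
```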

Two problems

Problem (A): a channel communication problem

Problem (B): the design of a truss

Two problems

At first glance, (A) seems easier than (B)

(A) is as difficult as an optimization problem can be. The worst-case computational effort to achieve absolute accuracy 0.5 is approximately 2^n operations; for n = 256 (the alphabet of bytes), 2^256 ≈ 10^77

(B) is quite computationally tractable. Let k = 6 (6 loading scenarios) and m = 100 (a 100-bar construction); then we end up with 701 variables (about 2.7 times the number of variables of (A)). (B) can be reliably solved to 6 digits of accuracy in a couple of minutes

(A) is nonconvex whereas (B) is convex

Goals of the course

To recognize/formulate/solve convex optimization problems

To present applications in which convex optimization problems arise
