
Non-linear Programming Problem

Classical Optimization
Classical optimization theory uses differential
calculus to determine points of maxima and
minima for unconstrained and constrained
functions.
This chapter develops necessary and sufficient
conditions for determining maxima and
minima of unconstrained and constrained
functions.

Quadratic forms
Consider the function
f(X) = a11 x1² + a22 x2² + … + ann xn² + a12 x1x2 + … + a(n−1,n) x(n−1) xn
A function of this form is called a quadratic form in n variables.
It can be written as XᵀAX, where X = [x1, x2, …, xn]ᵀ is an n-vector and A = (aij) is an n × n symmetric matrix.

Quadratic forms
Then the quadratic form
Q(X) = XᵀAX = Σi aii xi² + 2 Σ(1≤i<j≤n) aij xi xj
(or the symmetric matrix A itself) is called
- Positive semidefinite if XᵀAX ≥ 0 for all X.
- Positive definite if XᵀAX > 0 for all X ≠ 0.
- Negative semidefinite if XᵀAX ≤ 0 for all X.
- Negative definite if XᵀAX < 0 for all X ≠ 0.
These properties of the quadratic form depend only on the symmetric matrix A, therefore we have the following tests.
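The XᵀAX form can be evaluated directly in a couple of lines. A minimal numeric sketch (the 2-variable matrix and test point below are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical example: Q(x) = x1^2 + 4*x1*x2 + 3*x2^2.
# In the symmetric matrix A, each off-diagonal entry is half
# the coefficient of the corresponding cross term.
A = np.array([[1.0, 2.0],
              [2.0, 3.0]])

def quadratic_form(A, x):
    """Evaluate Q(x) = x^T A x."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x)

print(quadratic_form(A, [1.0, 1.0]))  # 1 + 4 + 3 = 8
```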

1. Matrix minor test
A necessary and sufficient condition for the symmetric matrix A to be:
- Positive definite (positive semidefinite) is that all the n principal minors of A are > 0 (≥ 0).
- Negative definite (negative semidefinite) is that the kth principal minor of A has the sign of (−1)ᵏ, k = 1, 2, …, n (the kth principal minor of A is zero or has the sign of (−1)ᵏ, k = 1, 2, …, n).
- Indefinite, if none of the above cases holds.

2. Eigenvalue Test
Since the matrix A in XᵀAX is real and symmetric, its eigenvalues λi are all real. Then XᵀAX is
- Positive definite (positive semidefinite) if λi > 0 (λi ≥ 0), i = 1, 2, …, n.
- Negative definite (negative semidefinite) if λi < 0 (λi ≤ 0), i = 1, 2, …, n.
- Indefinite, if A has both positive and negative eigenvalues.
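The eigenvalue test is easy to mechanize with numpy. A sketch (the classifier below is an illustration; `tol` guards against floating-point noise):

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix are real
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

# Diagonal matrix of example (1): eigenvalues 1, 3, 5, all positive.
print(definiteness(np.diag([1.0, 3.0, 5.0])))  # positive definite
```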

Examples: Decide the definiteness of the following quadratic forms:
(1) x1² + 3x2² + 5x3²
(2) −7x1² − 10x2² − 7x3² + 4x1x2 − 2x1x3 + 4x2x3
(3) x1² − x2²

Local maximum / Local minimum
Let f(X) = f(x1, x2, …, xn) be a real-valued function of the n variables x1, x2, …, xn (we assume that f(X) is at least twice differentiable).
A point X0 is said to be a local maximum of f(X) if there exists an ε > 0 such that
f(X0 + h) ≤ f(X0) for all h with |hj| < ε.
Here
h = (h1, h2, …, hn)ᵀ, X0 = (x1⁰, x2⁰, …, xn⁰)ᵀ and X0 + h = (x1⁰ + h1, x2⁰ + h2, …, xn⁰ + hn)ᵀ.
A point X0 is said to be a local minimum of f(X) if there exists an ε > 0 such that
f(X0 + h) ≥ f(X0) for all h with |hj| < ε.

Absolute maximum / Absolute minimum
X0 is called an absolute maximum or global maximum of f(X) if
f(X) ≤ f(X0) for all X.
X0 is called an absolute minimum or global minimum of f(X) if
f(X) ≥ f(X0) for all X.
Taylor series expansion for functions of several variables:
f(X0 + h) − f(X0) = ∇f(X0) h + ½ hᵀ H h + …

Theorem
A necessary condition for X0 to be an optimum point of f(X) is that ∇f(X0) = 0 (that is, all the first-order partial derivatives ∂f/∂xi are zero at X0).

Definition: A point X0 for which ∇f(X0) = 0 is called a stationary point of f(X) (a potential candidate for a local maximum or local minimum).

Theorem
Let X0 be a stationary point of f(X). A sufficient condition for X0 to be a
- local minimum of f(X) is that the Hessian matrix H(X0) is positive definite;
- local maximum of f(X) is that the Hessian matrix H(X0) is negative definite.

Hessian matrix

H(X) = [ ∂²f/∂x1²     ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn
         ∂²f/∂x2∂x1   ∂²f/∂x2²     …   ∂²f/∂x2∂xn
         …            …            …   …
         ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   …   ∂²f/∂xn²   ]

Example 1
Examine the following function for extreme points:
f(x1, x2, x3) = x1 + 2x3 + x2x3 − x1² − x2² − x3²
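Because f is quadratic, ∇f = 0 is a linear system, so the stationary point and the Hessian test can be carried out with numpy. A sketch (the matrix M below encodes ∇f = 0 written as M x = b):

```python
import numpy as np

# f(x1,x2,x3) = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2
# grad f = (1 - 2x1, x3 - 2x2, 2 + x2 - 2x3) = 0  <=>  M x = b:
M = np.array([[2.0,  0.0,  0.0],
              [0.0,  2.0, -1.0],
              [0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 2.0])

x0 = np.linalg.solve(M, b)       # stationary point
H = -M                           # Hessian of f (constant for a quadratic)

print(x0)                        # [0.5, 2/3, 4/3]
print(np.linalg.eigvalsh(H))     # all negative -> H negative definite -> local max
```

Since every eigenvalue of H is negative, the stationary point (1/2, 2/3, 4/3) is a local maximum.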

Definition:
A function f(X) = f(x1, x2, …, xn) of n variables is said to be convex if for each pair of points X, Y on the graph, the line segment joining these two points lies entirely above or on the graph, i.e.
f((1−λ)X + λY) ≤ (1−λ) f(X) + λ f(Y)
for all λ with 0 ≤ λ ≤ 1.

f is said to be strictly convex if for each pair of points X, Y on the graph,
f((1−λ)X + λY) < (1−λ) f(X) + λ f(Y)
for all λ such that 0 < λ < 1.
f is called concave (strictly concave) if −f is convex (strictly convex).

Convexity test for a function of one variable
A function of one variable f(x) is
- convex if d²f/dx² ≥ 0
- concave if d²f/dx² ≤ 0

Convexity test for functions of 2 variables

quantity          | convex | strictly convex | concave | strictly concave
fxx fyy − (fxy)²  |  ≥ 0   |      > 0        |   ≥ 0   |      > 0
fxx               |  ≥ 0   |      > 0        |   ≤ 0   |      < 0
fyy               |  ≥ 0   |      > 0        |   ≤ 0   |      < 0

Results
(1) The sum of two convex functions is convex.
(2) Let f(X) = XᵀAX. Then f(X) is convex if A is positive semidefinite.
Example: Show that the following function is convex:
f(x1, x2) = −2x1x2 + x1² + 2x2²
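The convexity of the example can be checked through its (constant) Hessian. A short numpy sketch:

```python
import numpy as np

# Hessian of f(x1,x2) = -2*x1*x2 + x1**2 + 2*x2**2
# (constant, since f is quadratic): fxx = 2, fxy = -2, fyy = 4.
H = np.array([[ 2.0, -2.0],
              [-2.0,  4.0]])

w = np.linalg.eigvalsh(H)
print(w)  # both eigenvalues positive -> H positive definite -> f strictly convex
```

Equivalently, by the 2-variable test: fxx·fyy − (fxy)² = 8 − 4 = 4 > 0 and fxx = 2 > 0.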

Constrained Optimization Problems

Karush–Kuhn–Tucker (KKT) conditions:
Consider the problem
maximize z = f(X) = f(x1, x2, …, xn)
subject to g(X) ≤ 0, i.e.
g1(X) ≤ 0
g2(X) ≤ 0
…
gm(X) ≤ 0
(the non-negativity restrictions, if any, are included in the above).

We define the Lagrangian function
L(X, S, λ) = f(X) − λ[g(X) + S²]
where s1², s2², …, sm² are the non-negative slack variables added to g1(X) ≤ 0, …, gm(X) ≤ 0 to make them equalities. Therefore
L(X, S, λ) = f(X) − [λ1{g1(X) + s1²} + λ2{g2(X) + s2²} + … + λm{gm(X) + sm²}]

KKT conditions
The KKT necessary conditions for optimality are given by
∂L/∂X = ∇f(X) − λ∇g(X) = 0
∂L/∂si = −2λi si = 0, i = 1, 2, …, m
∂L/∂λ = −(g(X) + S²) = 0

KKT conditions
The KKT necessary conditions for the maximization problem are:
λ ≥ 0
∇f(X) − λ∇g(X) = 0
λi gi(X) = 0, i = 1, 2, …, m
gi(X) ≤ 0, i = 1, 2, …, m
These conditions apply to the minimization case as well, except that λ ≤ 0.

KKT conditions
In scalar notation, this is given by
λi ≥ 0, i = 1, 2, …, m
∂f/∂xj − λ1 ∂g1/∂xj − λ2 ∂g2/∂xj − … − λm ∂gm/∂xj = 0, j = 1, 2, …, n
λi gi(X) = 0, i = 1, 2, …, m
gi(X) ≤ 0, i = 1, 2, …, m

IMPORTANT: The KKT conditions can be satisfied at a local minimum (or maximum), at a global minimum (or maximum), as well as at a saddle point.

We can use the KKT conditions to characterize all the stationary points of the problem, and then perform some additional testing to determine the optimal solutions of the problem.

Sufficiency of the KKT conditions:

Sense of optimization | Objective function | Solution space
maximization          | concave            | convex set
minimization          | convex             | convex set

Example: Use the KKT conditions to find the optimal solution for the following problem:
maximize f(x1, x2) = x1 + 2x2 − x2³
subject to x1 + x2 ≤ 1
x1 ≥ 0
x2 ≥ 0

Solution: Here there are three constraints:
g1(x1, x2) = x1 + x2 − 1 ≤ 0
g2(x1, x2) = −x1 ≤ 0
g3(x1, x2) = −x2 ≤ 0
The KKT necessary conditions for the maximization problem are:
λi ≥ 0, ∇f(X) − λ∇g(X) = 0
λi gi(X) = 0
gi(X) ≤ 0, i = 1, 2, 3

Hence the KKT conditions become
∂f/∂x1 − λ1 ∂g1/∂x1 − λ2 ∂g2/∂x1 − λ3 ∂g3/∂x1 = 0
∂f/∂x2 − λ1 ∂g1/∂x2 − λ2 ∂g2/∂x2 − λ3 ∂g3/∂x2 = 0
λ1 g1(x1, x2) = 0
λ2 g2(x1, x2) = 0
λ3 g3(x1, x2) = 0
g1(x1, x2) ≤ 0
g2(x1, x2) ≤ 0
g3(x1, x2) ≤ 0
and λ1 ≥ 0, λ2 ≥ 0, λ3 ≥ 0
(Note: f is concave and the gi are convex, so for this maximization problem these KKT conditions are sufficient at the optimum point.)

i.e.
1 − λ1 + λ2 = 0          (1)
2 − 3x2² − λ1 + λ3 = 0   (2)
λ1(x1 + x2 − 1) = 0      (3)
λ2 x1 = 0                (4)
λ3 x2 = 0                (5)
x1 + x2 − 1 ≤ 0          (6)
x1 ≥ 0                   (7)
x2 ≥ 0                   (8)
λ1 ≥ 0                   (9)
λ2 ≥ 0                   (10)
λ3 ≥ 0                   (11)

(1) gives λ1 = 1 + λ2 ≥ 1 > 0 (using λ2 ≥ 0).
Hence (3) gives x1 + x2 = 1.  (12)
Thus x1 and x2 cannot both be zero; let x1 > 0. Then (4) gives λ2 = 0, and therefore λ1 = 1.
If now x2 = 0, then (2) gives 2 − λ1 + λ3 = 0, i.e. λ3 = −1 < 0, which is not possible.
Therefore x2 > 0; hence (5) gives λ3 = 0, and then (2) gives
x2² = 1/3, i.e. x2 = 1/√3.
And so x1 = 1 − 1/√3. Therefore
Max f = (1 − 1/√3) + 2/√3 − 1/(3√3) = 1 + 2/(3√3).
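The hand-derived optimum can be cross-checked numerically. A sketch using scipy's SLSQP solver (a general NLP routine, used here only for verification; SLSQP minimizes, so we negate f):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: -(x[0] + 2*x[1] - x[1]**3)                      # negated objective
cons = ({'type': 'ineq', 'fun': lambda x: 1 - x[0] - x[1]},)  # x1 + x2 <= 1

res = minimize(f, x0=[0.5, 0.5], method='SLSQP',
               bounds=[(0, None), (0, None)], constraints=cons)

print(res.x)     # approx [1 - 1/sqrt(3), 1/sqrt(3)]
print(-res.fun)  # approx 1 + 2/(3*sqrt(3)) ~ 1.385
```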

Example: Use the KKT conditions to derive an optimal solution for the following problem:
minimize f(x1, x2) = x1² + x2
subject to x1² + x2² ≤ 9
x1 + x2 ≤ 1
Solution: Here there are two constraints:
g1(x1, x2) = x1² + x2² − 9 ≤ 0
g2(x1, x2) = x1 + x2 − 1 ≤ 0

Thus the KKT conditions are:
λ1 ≤ 0, λ2 ≤ 0 (as it is a minimization problem)   (1)
2x1 − 2λ1x1 − λ2 = 0
1 − 2λ1x2 − λ2 = 0                                 (2)
λ1(x1² + x2² − 9) = 0                              (3)
λ2(x1 + x2 − 1) = 0                                (4)
x1² + x2² ≤ 9
x1 + x2 ≤ 1

Now λ1 = 0 (from (2)) gives λ2 = 1 — not possible, since λ2 ≤ 0.
Hence λ1 ≠ 0, and so (3) gives
x1² + x2² = 9   (5)
Assume λ2 = 0. So the 1st equation of (2) gives
2x1(1 − λ1) = 0.
Since λ1 ≤ 0 we have 1 − λ1 > 0, so we get x1 = 0.
From (5), we get x2 = ±3.
The 2nd equation of (2) says (with λ1 < 0, λ2 = 0) that x2 = −3.
Thus the optimal solution is:
x1 = 0, x2 = −3, λ1 = −1/6, λ2 = 0
The optimal value is:
z = −3
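Again the KKT answer can be verified numerically. A sketch with scipy's SLSQP (the problem is convex, so the solver's local minimum is the global one):

```python
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]
cons = ({'type': 'ineq', 'fun': lambda x: 9 - x[0]**2 - x[1]**2},  # x1^2 + x2^2 <= 9
        {'type': 'ineq', 'fun': lambda x: 1 - x[0] - x[1]})        # x1 + x2 <= 1

res = minimize(f, x0=[0.0, 0.0], method='SLSQP', constraints=cons)

print(res.x, res.fun)  # approx [0, -3], -3
```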

Use the KKT conditions to find an optimal solution of the following problem:

Maximize f(x) = 20x1 + 10x2
subject to x1² + x2² ≤ 1
x1 + 2x2 ≤ 2
x1 ≥ 0, x2 ≥ 0

The maximum of f occurs at x1 = 2/√5, x2 = 1/√5.
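A numerical check of the stated answer, again sketched with scipy's SLSQP (only the circle constraint is active at the optimum):

```python
from scipy.optimize import minimize

f = lambda x: -(20*x[0] + 10*x[1])                                 # negated objective
cons = ({'type': 'ineq', 'fun': lambda x: 1 - x[0]**2 - x[1]**2},  # x1^2 + x2^2 <= 1
        {'type': 'ineq', 'fun': lambda x: 2 - x[0] - 2*x[1]})      # x1 + 2*x2 <= 2

res = minimize(f, x0=[0.5, 0.5], method='SLSQP',
               bounds=[(0, None), (0, None)], constraints=cons)

print(res.x)     # approx [2/sqrt(5), 1/sqrt(5)] ~ [0.894, 0.447]
print(-res.fun)  # approx 10*sqrt(5) ~ 22.36
```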

Quadratic Programming


Quadratic Programming
A quadratic programming problem is a non-linear programming problem of the form

Maximize z = CX + XᵀDX
subject to AX ≤ b, X ≥ 0

where X = [x1, x2, …, xn]ᵀ, b = [b1, b2, …, bm]ᵀ, C = [c1, c2, …, cn],

A = [ a11 a12 … a1n        D = [ d11 d12 … d1n
      a21 a22 … a2n              d21 d22 … d2n
      …                          …
      am1 am2 … amn ]            dn1 dn2 … dnn ]

Quadratic Programming
We assume that the n × n matrix D is symmetric and negative definite.
This means that the objective function is strictly concave.
Since the constraints are linear, the feasible region is a convex set.

Quadratic Programming
In scalar notation, the quadratic programming problem reads:

Maximize z = Σ(j=1..n) cj xj + Σ(j=1..n) djj xj² + 2 Σ(1≤i<j≤n) dij xi xj
subject to
a11 x1 + a12 x2 + … + a1n xn ≤ b1
a21 x1 + a22 x2 + … + a2n xn ≤ b2
…
am1 x1 + am2 x2 + … + amn xn ≤ bm
x1, x2, …, xn ≥ 0

Wolfe's Method to solve a Quadratic Programming Problem:
The solution to this problem is based on the KKT conditions. Since the objective function is strictly concave and the solution space is convex, the KKT conditions are also sufficient for an optimum.
Since there are m + n constraints, we have m + n Lagrange multipliers; the first m of them are denoted by λ1, λ2, …, λm and the last n of them are denoted by μ1, μ2, …, μn.

Wolfe's Method
The KKT (necessary) conditions are:
1. λ1, λ2, …, λm, μ1, μ2, …, μn ≥ 0
2. cj + 2 Σ(i=1..n) dij xi − Σ(i=1..m) λi aij + μj = 0, j = 1, 2, …, n
3. λi (Σ(j=1..n) aij xj − bi) = 0, i = 1, 2, …, m
   μj xj = 0, j = 1, 2, …, n
4. Σ(j=1..n) aij xj ≤ bi, i = 1, 2, …, m and xj ≥ 0, j = 1, 2, …, n

Wolfe's Method
Denoting the (non-negative) slack variable of the ith constraint
Σ(j=1..n) aij xj ≤ bi
by si, the 3rd condition can be written in an equivalent form as:
3. λi si = 0, i = 1, 2, …, m
   μj xj = 0, j = 1, 2, …, n
(Referred to as the "Restricted Basis" conditions.)

Wolfe's Method
Also condition(s) (2) can be rewritten as:
−2 Σ(i=1..n) dij xi + Σ(i=1..m) λi aij − μj = cj, j = 1, 2, …, n
and condition(s) (4) can be rewritten as:
4. Σ(j=1..n) aij xj + si = bi, i = 1, 2, …, m
   xj ≥ 0, j = 1, 2, …, n

Wolfe's Method
Thus we have to find a solution of the following m + n linear equations in the 2n + m unknowns xj, λi, μj (plus the m slacks si):
−2 Σ(i=1..n) dij xi + Σ(i=1..m) λi aij − μj = cj, j = 1, 2, …, n
Σ(j=1..n) aij xj + si = bi, i = 1, 2, …, m
satisfying
λi si = 0, i = 1, 2, …, m
μj xj = 0, j = 1, 2, …, n
λi ≥ 0, si ≥ 0, i = 1, 2, …, m
μj ≥ 0, xj ≥ 0, j = 1, 2, …, n

Wolfe's Method
Since we are interested only in a feasible solution of the above system of linear equations, we use the Phase-I method to find such a feasible solution. By the sufficiency of the KKT conditions, it will automatically be the optimum solution of the given quadratic programming problem.
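For a tiny problem, Wolfe's system can also be solved by brute force: enumerate the restricted-basis (complementarity) patterns and solve each resulting linear system. A sketch for a hypothetical one-variable QP (maximize 4x − x², x ≤ 3, x ≥ 0, i.e. C = [4], D = [[−1]], A = [[1]], b = [3]) — not the slides' modified-simplex method, just an illustration of the same system:

```python
import numpy as np
from itertools import product

# Wolfe's system for this QP:  2x + lam - mu = 4;  x + s = 3;
# with  lam*s = 0,  mu*x = 0  and all variables >= 0.
# Unknowns ordered (x, lam, mu, s).
solutions = []
for lam_zero, mu_zero in product([True, False], repeat=2):
    rows = [[2.0, 1.0, -1.0, 0.0],     # 2x + lam - mu = 4
            [1.0, 0.0, 0.0, 1.0]]      # x + s = 3
    rhs = [4.0, 3.0]
    rows.append([0, 1, 0, 0] if lam_zero else [0, 0, 0, 1])  # lam = 0 or s = 0
    rhs.append(0.0)
    rows.append([0, 0, 1, 0] if mu_zero else [1, 0, 0, 0])   # mu = 0 or x = 0
    rhs.append(0.0)
    try:
        sol = np.linalg.solve(np.array(rows, dtype=float), np.array(rhs))
    except np.linalg.LinAlgError:
        continue                        # inconsistent complementarity pattern
    if np.all(sol >= -1e-9):            # keep only non-negative solutions
        solutions.append(dict(zip(("x", "lam", "mu", "s"), sol)))

print(solutions)   # single feasible pattern: x = 2, lam = mu = 0, s = 1
```

Here the constraint is inactive at the optimum (the unconstrained maximum x = 2 already satisfies x ≤ 3), so λ = 0 and the slack s = 1.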

Example-1:

Maximize z = 8x1 + 4x2 − x1² − x2²
subject to x1 + x2 ≤ 2
x1, x2 ≥ 0.

Denoting the Lagrange multipliers by λ1, μ1, and μ2, the KKT conditions are:
1. λ1, μ1, μ2 ≥ 0
2. 8 − 2x1 − λ1 + μ1 = 0
   4 − 2x2 − λ1 + μ2 = 0
i.e. 2x1 + λ1 − μ1 = 8
     2x2 + λ1 − μ2 = 4
3. x1 + x2 + s1 = 2, λ1s1 = μ1x1 = μ2x2 = 0
All variables ≥ 0.

Introducing artificial variables R1, R2, we thus have to
Minimize r = R1 + R2
subject to the constraints
2x1 + λ1 − μ1 + R1 = 8
2x2 + λ1 − μ2 + R2 = 4
x1 + x2 + S1 = 2
λ1S1 = μ1x1 = μ2x2 = 0
All variables ≥ 0. (We solve by the "Modified Simplex" algorithm.)

The Phase-I (restricted-basis) iterations are:

Basic | x1  x2  λ1  μ1  μ2  R1  R2  S1 | Sol
r     |  2   2   2  −1  −1   0   0   0 |  12
R1    |  2   0   1  −1   0   1   0   0 |   8
R2    |  0   2   1   0  −1   0   1   0 |   4
S1    |  1   1   0   0   0   0   0   1 |   2

Since λ1S1 = 0 and S1 is in the basis, λ1 cannot enter. So x1 enters and, by the minimum-ratio test, S1 leaves the basis.

Basic | x1  x2  λ1  μ1  μ2  R1  R2  S1 | Sol
r     |  0   0   2  −1  −1   0   0  −2 |   8
R1    |  0  −2   1  −1   0   1   0  −2 |   4
R2    |  0   2   1   0  −1   0   1   0 |   4
x1    |  1   1   0   0   0   0   0   1 |   2

Now S1 is non-basic, so λ1 may enter; by the minimum-ratio test R1 leaves.

Basic | x1  x2  λ1  μ1  μ2  R1  R2  S1 | Sol
r     |  0   4   0   1  −1  −2   0   2 |   0
λ1    |  0  −2   1  −1   0   1   0  −2 |   4
R2    |  0   4   0   1  −1  −1   1   2 |   0
x1    |  1   1   0   0   0   0   0   1 |   2

Since r = 0, Phase I ends (R2 stays in the basis at zero level).

Thus we have got the feasible solution
x1 = 2, x2 = 0, λ1 = 4, μ1 = 0, μ2 = 0
and the optimal value is: z = 12.
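Wolfe's answer for Example-1 can be confirmed by solving the QP directly. A sketch with scipy's SLSQP (a general solver, used here only as a cross-check):

```python
from scipy.optimize import minimize

f = lambda x: -(8*x[0] + 4*x[1] - x[0]**2 - x[1]**2)          # negated objective
cons = ({'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]},)  # x1 + x2 <= 2

res = minimize(f, x0=[1.0, 1.0], method='SLSQP',
               bounds=[(0, None), (0, None)], constraints=cons)

print(res.x, -res.fun)  # approx [2, 0], 12
```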

Example-2
Maximize z = 8x1 − x1² + 2x2 + x3
subject to
x1 + 3x2 + 2x3 ≤ 12
x1, x2, x3 ≥ 0

Example-2: Solution
Denoting the Lagrange multipliers by λ1, μ1, μ2, and μ3, the KKT conditions are:
1. λ1, μ1, μ2, μ3 ≥ 0
2. 8 − 2x1 − λ1 + μ1 = 0
   2 − 3λ1 + μ2 = 0
   1 − 2λ1 + μ3 = 0
i.e. 2x1 + λ1 − μ1 = 8
     3λ1 − μ2 = 2
     2λ1 − μ3 = 1
3. x1 + 3x2 + 2x3 + S1 = 12,
   λ1S1 = 0, μ1x1 = μ2x2 = μ3x3 = 0
All variables ≥ 0.
Solving this by the "Modified Simplex" algorithm, the optimal solution is:
x1 = 11/3, x2 = 25/9, x3 = 0
and the optimal z = 193/9.

Basic | x1  x2  x3  λ1  μ1  μ2  μ3  R1  R2  R3  S1 | Sol
r     |  2   0   0   6  −1  −1  −1   0   0   0   0 |  11
R1    |  2   0   0   1  −1   0   0   1   0   0   0 |   8
R2    |  0   0   0   3   0  −1   0   0   1   0   0 |   2
R3    |  0   0   0   2   0   0  −1   0   0   1   0 |   1
S1    |  1   3   2   0   0   0   0   0   0   0   1 |  12

Since λ1S1 = 0 and S1 is in the basis, λ1 cannot enter.
So we allow x1 to enter the basis, and of course by the minimum-ratio test R1 leaves the basis.

Basic | x1  x2  x3   λ1   μ1  μ2  μ3   R1  R2  R3  S1 | Sol
r     |  0   0   0    5    0  −1  −1   −1   0   0   0 |   3
x1    |  1   0   0  1/2 −1/2   0   0  1/2   0   0   0 |   4
R2    |  0   0   0    3    0  −1   0    0   1   0   0 |   2
R3    |  0   0   0    2    0   0  −1    0   0   1   0 |   1
S1    |  0   3   2 −1/2  1/2   0   0 −1/2   0   0   1 |   8

Since λ1S1 = 0 and S1 is still in the basis, λ1 cannot enter.
So we allow x2 to enter the basis, and of course by the minimum-ratio test S1 leaves the basis.

Basic | x1  x2   x3    λ1   μ1  μ2  μ3    R1  R2  R3   S1 | Sol
r     |  0   0    0     5    0  −1  −1    −1   0   0    0 |   3
x1    |  1   0    0   1/2 −1/2   0   0   1/2   0   0    0 |   4
R2    |  0   0    0     3    0  −1   0     0   1   0    0 |   2
R3    |  0   0    0     2    0   0  −1     0   0   1    0 |   1
x2    |  0   1  2/3  −1/6  1/6   0   0  −1/6   0   0  1/3 | 8/3

As S1 is not in the basis, now λ1 enters the basis.
And by the minimum-ratio test R3 leaves the basis.

Basic | x1  x2   x3  λ1    μ1   μ2     μ3    R1   R2    R3   S1 | Sol
r     |  0   0    0   0     0   −1    3/2    −1    0  −5/2    0 |  1/2
x1    |  1   0    0   0  −1/2    0    1/4   1/2    0  −1/4    0 | 15/4
R2    |  0   0    0   0     0   −1    3/2     0    1  −3/2    0 |  1/2
λ1    |  0   0    0   1     0    0   −1/2     0    0   1/2    0 |  1/2
x2    |  0   1  2/3   0   1/6    0  −1/12  −1/6    0  1/12  1/3 | 11/4

Now μ3 enters the basis (x3 is non-basic, so μ3x3 = 0 is not violated).
And by the minimum-ratio test R2 leaves the basis.

Basic | x1  x2   x3  λ1    μ1     μ2  μ3    R1    R2  R3   S1 | Sol
r     |  0   0    0   0     0      0   0    −1    −1  −1    0 |    0
x1    |  1   0    0   0  −1/2    1/6   0   1/2  −1/6   0    0 | 11/3
μ3    |  0   0    0   0     0   −2/3   1     0   2/3  −1    0 |  1/3
λ1    |  0   0    0   1     0   −1/3   0     0   1/3   0    0 |  2/3
x2    |  0   1  2/3   0   1/6  −1/18   0  −1/6  1/18   0  1/3 | 25/9

This is the end of Phase I.
Thus the optimal solution is:
x1 = 11/3, x2 = 25/9, x3 = 0
Thus the optimal value is: z = 193/9.

Example-3

Maximize z = 6x1 + 3x2 − 2x1² − 4x1x2 − 3x2²
subject to
x1 + x2 ≤ 1
2x1 + 3x2 ≤ 4
x1, x2 ≥ 0

Denoting the Lagrange multipliers by λ1, λ2, μ1, and μ2, the KKT conditions are:

1. λ1, λ2, μ1, μ2 ≥ 0
2. 6 − 4x1 − 4x2 − λ1 − 2λ2 + μ1 = 0
   3 − 4x1 − 6x2 − λ1 − 3λ2 + μ2 = 0
i.e. 4x1 + 4x2 + λ1 + 2λ2 − μ1 = 6
     4x1 + 6x2 + λ1 + 3λ2 − μ2 = 3

3. x1 + x2 + S1 = 1,
   2x1 + 3x2 + S2 = 4,
   λ1S1 = λ2S2 = 0
   μ1x1 = μ2x2 = 0
and all variables ≥ 0.
Solving this by the "Modified Simplex" algorithm, the optimal solution is:
x1 = 1, x2 = 0 and the optimal z = 4.

Basic | x1   x2    λ1    λ2   μ1    μ2  R1    R2   S1  S2 | Sol
r     |  8   10     2     5   −1    −1   0     0    0   0 |   9
R1    |  4    4     1     2   −1     0   1     0    0   0 |   6
R2    |  4    6     1     3    0    −1   0     1    0   0 |   3
S1    |  1    1     0     0    0     0   0     0    1   0 |   1
S2    |  2    3     0     0    0     0   0     0    0   1 |   4

Since S1 and S2 are in the basis, λ1 and λ2 cannot enter; x2 enters and, by the minimum-ratio test, R2 leaves the basis.

Basic | x1   x2    λ1    λ2   μ1    μ2  R1    R2   S1  S2 | Sol
r     | 4/3   0   1/3     0   −1   2/3   0  −5/3    0   0 |   4
R1    | 4/3   0   1/3     0   −1   2/3   1  −2/3    0   0 |   4
x2    | 2/3   1   1/6   1/2    0  −1/6   0   1/6    0   0 | 1/2
S1    | 1/3   0  −1/6  −1/2    0   1/6   0  −1/6    1   0 | 1/2
S2    |  0    0  −1/2  −3/2    0   1/2   0  −1/2    0   1 | 5/2

Next x1 enters (λ1, λ2 remain blocked while S1, S2 are basic) and, by the minimum-ratio test, x2 leaves the basis.

Basic | x1    x2    λ1    λ2   μ1    μ2  R1    R2   S1  S2 | Sol
r     |  0    −2     0    −1   −1     1   0    −2    0   0 |   3
R1    |  0    −2     0    −1   −1     1   1    −1    0   0 |   3
x1    |  1   3/2   1/4   3/4    0  −1/4   0   1/4    0   0 | 3/4
S1    |  0  −1/2  −1/4  −3/4    0   1/4   0  −1/4    1   0 | 1/4
S2    |  0     0  −1/2  −3/2    0   1/2   0  −1/2    0   1 | 5/2

Now x2 is non-basic, so μ2 may enter; by the minimum-ratio test S1 leaves the basis.

Basic | x1  x2   λ1   λ2   μ1  μ2  R1    R2   S1  S2 | Sol
r     |  0   0    1    2   −1   0   0    −1   −4   0 |   2
R1    |  0   0    1    2   −1   0   1     0   −4   0 |   2
x1    |  1   1    0    0    0   0   0     0    1   0 |   1
μ2    |  0  −2   −1   −3    0   1   0    −1    4   0 |   1
S2    |  0   1    0    0    0   0   0     0   −2   1 |   2

Since S1 is now non-basic, λ1 enters; by the minimum-ratio test R1 leaves, and r drops to 0.

Basic | x1  x2   λ1  λ2   μ1  μ2  R1    R2   S1  S2 | Sol
r     |  0   0    0   0    0   0  −1    −1    0   0 |   0
λ1    |  0   0    1   2   −1   0   1     0   −4   0 |   2
x1    |  1   1    0   0    0   0   0     0    1   0 |   1
μ2    |  0  −2    0  −1   −1   0   1    −1    0   0 |   3
S2    |  0   1    0   0    0   0   0     0   −2   1 |   2

Thus the optimum solution is:
x1 = 1, x2 = 0, λ1 = 2, λ2 = 0, μ1 = 0, μ2 = 3
And the optimal value is: z = 4.

Solve the following quadratic programming problem:

Maximize z = 20x1 + 50x2 − 20x1² + 18x1x2 − 5x2²
subject to
x1 + x2 ≤ 6
x1 + 4x2 ≤ 18
x1, x2 ≥ 0

Using the Excel Solver, the optimal solution is x1 = 2, x2 = 4, with Max z = 224.
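The Excel Solver answer can also be reproduced in Python. A sketch with scipy's SLSQP (both constraints are active at the optimum (2, 4)):

```python
from scipy.optimize import minimize

f = lambda x: -(20*x[0] + 50*x[1]
                - 20*x[0]**2 + 18*x[0]*x[1] - 5*x[1]**2)      # negated objective
cons = ({'type': 'ineq', 'fun': lambda x: 6 - x[0] - x[1]},   # x1 + x2 <= 6
        {'type': 'ineq', 'fun': lambda x: 18 - x[0] - 4*x[1]})  # x1 + 4x2 <= 18

res = minimize(f, x0=[1.0, 1.0], method='SLSQP',
               bounds=[(0, None), (0, None)], constraints=cons)

print(res.x, -res.fun)  # approx [2, 4], 224
```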

Remark:
If the problem is a minimization problem, say Minimize z, we convert it into a maximization problem: Maximize −z.
