Quadratic Programming

Optimization of Chemical Processes

Submitted to:
Dr. Rajeev Kumar Dohare

Submitted by:
Avlesh Meena (2014uch1576)
Amit Kumar Singh (2014uch1577)
Pranshu Pareek (2014uch1579)
Harshit Agarwal (2014uch1580)
Shruti Trivedi (2014uch1583)

Quadratic Programming: Introduction

A linearly constrained optimization problem with a quadratic
objective function is called a quadratic program (QP).

Because of its many applications, quadratic programming is often
viewed as a discipline in and of itself. More importantly, though,
it forms the basis of several general nonlinear programming
algorithms.

We begin this section by examining the Karush-Kuhn-Tucker (KKT)
conditions for the QP and see that they turn out to be a set of
linear equalities and complementarity constraints.

A quadratic programming (QP) problem is an optimization problem in
which a quadratic objective function of n variables is minimized
subject to m linear inequality or equality constraints.
A convex QP is the simplest form of a nonlinear
programming problem with inequality constraints.
A number of practical optimization problems, such as
constrained least squares and optimal control of linear systems
with quadratic cost functions and linear constraints, are
naturally posed as QP problems.

Generic Problem

Minimize:    f(x) = c^T x + (1/2) x^T Q x
Subject to:  A x = b                              -(a)
             x >= 0

where c is a vector of constant coefficients, A is an (m x n)
matrix, and Q is a symmetric matrix.
The vector x can contain slack variables, so the equality
constraints may contain some constraints that were originally
inequalities but have been converted to equalities by inserting
slacks.
Codes for quadratic programming allow arbitrary upper and lower
bounds on x; we assume x >= 0 only for simplicity.
If the equality constraints are independent, then the KTC are the
necessary conditions for an optimal solution of the QP. In
addition, if Q is positive semidefinite, the QP objective
function is convex.

Because the feasible region of a QP is defined by linear
constraints, it is always convex, so the QP is then a convex
programming problem, and any local solution is a global
solution.
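Convexity of the objective hinges on Q being positive semidefinite, which can be checked numerically from its eigenvalues. A minimal NumPy sketch (the function name and tolerance are our own, not from the text):

```python
import numpy as np

def is_positive_semidefinite(Q, tol=1e-10):
    """The QP objective c^T x + (1/2) x^T Q x is convex iff Q is PSD."""
    Q = np.asarray(Q, dtype=float)
    # Symmetrize to guard against round-off asymmetry before the eigen-solve.
    Q_sym = 0.5 * (Q + Q.T)
    return bool(np.all(np.linalg.eigvalsh(Q_sym) >= -tol))

# Q from the sample problem later in this text: f = -8x1 - 16x2 + x1^2 + 4x2^2
print(is_positive_semidefinite([[2.0, 0.0], [0.0, 8.0]]))   # True: convex QP
print(is_positive_semidefinite([[1.0, 0.0], [0.0, -1.0]]))  # False: indefinite
```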

Also, the KTC are then sufficient conditions for a minimum, and a
solution meeting these conditions yields the global optimum.

If Q is not positive semidefinite, the problem may have an
unbounded solution or local minima.
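A one-variable sketch of the unbounded case: with Q negative definite the objective decreases without bound along a ray, so no finite minimum exists (the numbers below are our own illustration, not from the text):

```python
import numpy as np

# f(x) = c^T x + (1/2) x^T Q x with Q = [-2], c = 0:  f(x) = -x^2.
Q = np.array([[-2.0]])
c = np.array([0.0])

def f(x):
    return float(c @ x + 0.5 * x @ Q @ x)

# Moving out along the ray x = t, the objective keeps falling.
values = [f(np.array([t])) for t in (1.0, 10.0, 100.0)]
print(values)  # [-1.0, -100.0, -10000.0]
```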

To write the KTC, start with the Lagrangian function

    L = c^T x + (1/2) x^T Q x + λ^T (A x - b) - u^T x

Then equate the gradient of L (with respect to x) to zero
(note that λ^T (A x - b) = (A x - b)^T λ = (x^T A^T - b^T) λ, and
u^T x = x^T u).
Then the KTC reduce to the following set of equations:

    c + Q x + A^T λ - u = 0        -(1)
    A x - b = 0                    -(2)
    x >= 0 ;  u >= 0               -(3)
    u^T x = 0                      -(4)

where the u_i and λ_j are the Lagrange multipliers.


If Q is positive semidefinite, any set of variables (x*, u*, λ*)
that satisfies (1) to (4) is an optimal solution to (a).
Some QP solvers use these KTC directly by finding a solution
satisfying the equations.
The conditions are linear except for (4), which is called a
complementary slackness condition. Applied to the non-negativity
conditions in (a), complementary slackness implies that at least
one of each pair of variables (u_i, x_i) must be zero.
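Conditions (1)-(4) are easy to check mechanically at a candidate point. A small sketch, assuming the standard form (a) above; the one-variable QP at the bottom is invented purely for illustration:

```python
import numpy as np

def kkt_residuals(x, u, lam, c, Q, A, b):
    """Residuals of KTC (1)-(4) for: min c^T x + (1/2) x^T Q x, Ax = b, x >= 0."""
    stationarity = c + Q @ x + A.T @ lam - u                 # (1)
    primal       = A @ x - b                                 # (2)
    signs_ok     = bool(np.all(x >= 0) and np.all(u >= 0))   # (3)
    complement   = float(u @ x)                              # (4), should be 0
    return stationarity, primal, signs_ok, complement

# Illustrative QP: min -2x + x^2  s.t.  x = 1, x >= 0  (optimum x* = 1)
c = np.array([-2.0]); Q = np.array([[2.0]])
A = np.array([[1.0]]); b = np.array([1.0])
stat, prim, ok, comp = kkt_residuals(np.array([1.0]), np.array([0.0]),
                                     np.array([0.0]), c, Q, A, b)
print(stat, prim, ok, comp)  # [0.] [0.] True 0.0
```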

Hence, a feasible solution to the KTC can be found by starting
with an infeasible complementary solution to the linear
constraints (1)-(3), and using LP pivot operations to minimize
the sum of infeasibilities while maintaining complementarity.
Because (1) and (2) have n and m constraints respectively, the
effect is roughly equivalent to solving an LP with (n + m) rows.

Sample problem
Question:
Minimize:    f(x) = -8x1 - 16x2 + x1^2 + 4x2^2
Subject to:  x1 + x2 <= 5
             x1 <= 3
             x1 >= 0
             x2 >= 0
Solution: The data and variable definitions are given below. As
can be seen, the Q matrix is positive definite, so the KKT
conditions are necessary and sufficient for a global optimum.

cT = (-8, -16)

Q = | 2  0 |
    | 0  8 |

b = (5, 3)T

A = | 1  1 |
    | 1  0 |

x = (x1, x2), y = (y1, y2), λ = (λ1, λ2), v = (v1, v2)
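The problem is small enough to check the reported optimum by brute force over a grid of the feasible region (reading the first two constraints as x1 + x2 <= 5 and x1 <= 3, consistent with the slack variables v1 and v2 introduced below). This is not a QP algorithm, just a sanity check:

```python
import numpy as np

def f(x1, x2):
    return -8*x1 - 16*x2 + x1**2 + 4*x2**2

best = (float("inf"), None, None)
for x1 in np.linspace(0.0, 3.0, 301):        # enforces 0 <= x1 <= 3
    for x2 in np.linspace(0.0, 5.0, 501):    # enforces x2 >= 0
        if x1 + x2 <= 5.0 + 1e-12:           # enforces x1 + x2 <= 5
            val = f(x1, x2)
            if val < best[0]:
                best = (val, x1, x2)

print(best)  # approximately (-31.0, 3.0, 2.0)
```

The grid minimizer lands at (3, 2) with objective -31, matching the KKT solution worked out next.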

Linear constraints are:

    2x1       + λ1 + λ2 - y1           = 8
          8x2 + λ1      - y2           = 16
    x1 + x2                  + v1      = 5
    x1                            + v2 = 3
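These four equations can be verified directly at the optimum reported in the table below, (x1, x2) = (3, 2), with multipliers λ = (0, 2) and all of the y and v variables zero (values read off the final simplex basis):

```python
# Candidate optimum and multipliers: x = (3, 2), λ = (0, 2), y = v = (0, 0).
x1, x2 = 3.0, 2.0
lam1, lam2 = 0.0, 2.0
y1, y2 = 0.0, 0.0
v1, v2 = 0.0, 0.0

eqs = [
    2*x1 + lam1 + lam2 - y1 - 8.0,   # first stationarity equation
    8*x2 + lam1 - y2 - 16.0,         # second stationarity equation
    x1 + x2 + v1 - 5.0,              # slack form of x1 + x2 <= 5
    x1 + v2 - 3.0,                   # slack form of x1 <= 3
]
comp = [lam1*v1, lam2*v2, x1*y1, x2*y2]  # complementarity products
print(eqs, comp)  # all residuals and products are exactly zero
```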

To create the appropriate linear program, we add artificial
variables to each constraint and minimize their sum.

Minimize:    a1 + a2 + a3 + a4
Subject to:
    2x1       + λ1 + λ2 - y1           + a1 = 8
          8x2 + λ1      - y2           + a2 = 16
    x1 + x2                  + v1      + a3 = 5
    x1                            + v2 + a4 = 3

all variables >= 0, plus the complementarity conditions
    λ1 v1 = 0,  λ2 v2 = 0,  x1 y1 = 0,  x2 y2 = 0

Applying the modified simplex technique to this example yields
the sequence of iterations given in the table below. The optimal
solution to the original problem is (x1*, x2*) = (3, 2).
Iteration  Basic Variables     Solution       Objective  Entering  Leaving
                                              Value      Variable  Variable
1          (a1, a2, a3, a4)    (8, 16, 5, 3)  32         x2        a2
2          (a1, x2, a3, a4)    (8, 2, 3, 3)   14         x1        a3
3          (a1, x2, x1, a4)    (2, 2, 3, 0)   2          λ1        a4
4          (a1, x2, x1, λ1)    (2, 2, 3, 0)   2          λ2        a1
5          (λ2, x2, x1, λ1)    (2, 2, 3, 0)   0          ----      -----
