
IEOR E4007 Handout 8

Prof. Iyengar September 25th, 2002

Lecture 4
Linear Programming

Brief recap

Duality

Farkas lemma

Slopes and duality

Economic interpretations of dual vectors

Sensitivity analysis: b, c, A

Changes in constraint structure



Our story so far ...

Defined a linear program and its variations


Saw some examples in class and several more in homework

Defined the standard form LP

Established an algebraic representation of the extreme points of a standard form LP

Used this algebraic representation to develop a simple algorithm (the simplex method) for solving LPs

What is to come

We have a neat method for solving LPs, but in reality model parameters
are never known exactly. What if the parameters change a little bit? Do we
have to solve the LP all over again? The answer lies in sensitivity
analysis.
Before we come to sensitivity analysis we will study duality theory.
What do we do when the number of constraints changes, or is too large?
Is there an algorithm that converges in polynomial time?

Duality

Consider the standard form LP:

    min  c^T x,
    s.t. Ax = b,
         x ≥ 0,

where x, c ∈ R^n, b ∈ R^m, A ∈ R^{m×n}.

Suppose we want a lower bound l for this LP. Why?

Suppose we only want an ε-optimal solution: stop if c^T x ≤ (1 + ε) l
leads to primal-dual algorithms
the best lower bound will play a role in sensitivity analysis.

Construction of the dual
Let u ∈ R^m and s ∈ R^n_+, i.e. s_i ≥ 0 for all i = 1, . . . , n.
Then all x feasible for the above LP satisfy:

    u^T Ax + s^T x ≥ u^T b + s^T 0 = u^T b   (why? Ax = b, and s ≥ 0, x ≥ 0)

i.e.

    (A^T u + s)^T x ≥ b^T u

Suppose (u, s) are chosen so that A^T u + s = c. Then, for all feasible x:
(A^T u + s)^T x = c^T x ≥ b^T u. Thus, b^T u is a lower bound.
The optimal lower bound is the value of the dual LP

    max  b^T u,              max  b^T u,
    s.t. A^T u + s = c,  ≡   s.t. A^T u ≤ c
         s ≥ 0.
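The construction can be checked numerically with scipy.optimize.linprog: solve a standard-form primal and its dual max b^T u s.t. A^T u ≤ c, and compare the optimal values. This is a sketch with illustrative data, not data from the notes:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative standard-form LP: min c^T x  s.t.  Ax = b, x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Dual: max b^T u  s.t.  A^T u <= c, u free.  linprog minimizes, so negate b.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

print(primal.fun, -dual.fun)   # the two optimal values coincide
```

Here the primal optimum is x* = (3, 1, 0) with value 5, and the dual optimum u* = (0, 1) attains the same value, as strong duality (proved in the appendix) guarantees.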

Rules for constructing duals

for a constraint in the primal we introduce a variable in the dual

for a variable in the primal we introduce a constraint in the dual

The relation between the primal-dual variables and constraints is
summarized below

    PRIMAL (max)                   (min) DUAL

    objective     c^T x            b^T u             objective

                  a_i x ≤ b_i      u_i ≥ 0
    constraints   a_i x ≥ b_i      u_i ≤ 0           variables
                  a_i x = b_i      u_i free

                  x_j ≥ 0          u^T a^j ≥ c_j
    variables     x_j ≤ 0          u^T a^j ≤ c_j     constraints
                  x_j free         u^T a^j = c_j

Example:

    max  3x1 − 2x2 + x3            min  4u1 + u2
    s.t. 2x1 + 4x2 + 5x3 ≤ 4,      s.t. u1 ≥ 0,
         x2 + 7x3 = 1,                  u2 free,
         x1 ≥ 0,                        2u1 ≥ 3,
         x2 ≤ 0,                        4u1 + u2 ≤ −2,
         x3 free,                       5u1 + 7u2 = 1.

It is easy to check that the dual of the dual is the primal.

Let p* denote the optimal value of the primal and d* denote the optimal
value of the dual.

Properties of duals

Weak duality: for primal feasible x and dual feasible u,

    u^T b ≤ d* ≤ p* ≤ c^T x

Strong duality: if p* is finite then d* = p*.

The above properties imply that only four possibilities exist:

1. Primal and dual both have finite optimal values, and p* = d*
2. Primal infeasible, dual infeasible
3. Primal unbounded, dual infeasible
4. Primal infeasible, dual unbounded

Complementary slackness: let x, u be primal, dual feasible. Then x, u
are primal, dual optimal if and only if

    u_i (a_i x − b_i) = 0,   for all i,
    (c_j − u^T a^j) x_j = 0, for all j

Proof: define

    ν_i = u_i (a_i x − b_i),
    μ_j = (c_j − u^T a^j) x_j.

From the table it follows that ν_i ≥ 0 and μ_j ≥ 0. Also,

    Σ_i ν_i + Σ_j μ_j = Σ_i u_i (a_i x − b_i) + Σ_j (c_j − u^T a^j) x_j
                      = c^T x − u^T b ≥ 0

By strong duality, if x and u are optimal then c^T x = u^T b, which implies
that ν_i = μ_j = 0 for all i, j (a sum of nonnegative terms vanishes only
if every term does).
Conversely, if ν_i = μ_j = 0 for all i, j then c^T x = u^T b. But
u^T b ≤ d* = p* ≤ c^T x, therefore x and u are both optimal.
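Complementary slackness is easy to observe numerically: solve a primal-dual pair and check that every product u_i(a_i x − b_i) and (c_j − u^T a^j)x_j vanishes at optimality. A sketch with illustrative data (not from the notes):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative pair: min c^T x  s.t.  Ax >= b, x >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog uses A_ub x <= b_ub, so flip the signs of the >= rows.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual: max b^T u  s.t.  A^T u <= c, u >= 0  (negate for linprog's min).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

x, u = primal.x, dual.x
print(u * (A @ x - b))      # u_i (a_i x - b_i): all zero at optimality
print((c - A.T @ u) * x)    # (c_j - u^T a^j) x_j: all zero at optimality
```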

Application of duality: Farkas lemma

Farkas lemma: let A ∈ R^{m×n} and b ∈ R^m. Then exactly one of the
following two alternatives holds:

(a) there exists x ≥ 0 such that Ax = b

(b) there exists u such that u^T A ≥ 0 and u^T b < 0.

Proof: suppose (a) holds. Then for all u such that u^T A ≥ 0,

    u^T b = u^T Ax ≥ 0.  Thus (b) cannot hold.

Now suppose (a) does not hold. Consider the pair of dual linear programs

    max  0^T x          min  u^T b,
    s.t. Ax = b,        s.t. u^T A ≥ 0
         x ≥ 0

The first is infeasible, so the dual is either infeasible or unbounded. But
u = 0 is feasible for the dual, therefore it is unbounded. So there exists a
feasible u, i.e. u^T A ≥ 0, such that u^T b < 0.
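The proof is constructive, and the two alternatives can be separated by an LP solver: first try the feasibility system (a); if it is infeasible, minimize u^T b over the cone u^T A ≥ 0 (boxed, since any certificate can be rescaled) to produce a certificate for (b). A sketch, with hypothetical data:

```python
import numpy as np
from scipy.optimize import linprog

def farkas(A, b):
    """Return ('a', x) with x >= 0, Ax = b, or ('b', u) with u^T A >= 0, u^T b < 0."""
    m, n = A.shape
    # Alternative (a): feasibility LP with a zero objective.
    res = linprog(np.zeros(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    if res.status == 0:
        return 'a', res.x
    # Alternative (b): min u^T b  s.t.  -A^T u <= 0; box u in [-1, 1]^m so the
    # LP stays bounded (certificates are scale-invariant).
    res = linprog(b, A_ub=-A.T, b_ub=np.zeros(n), bounds=[(-1, 1)] * m)
    return 'b', res.x

A = np.array([[1.0, 0.0], [0.0, 1.0]])
print(farkas(A, np.array([1.0, 2.0])))    # alternative (a): x = b works
print(farkas(A, np.array([-1.0, 2.0])))   # alternative (b): a certificate such as u = (1, 0)
```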

Farkas lemma and asset pricing

the market trades for one period with n assets, and the current price
vector of the assets is p
there are m possible states of nature
one share of asset i pays r_si dollars in state s
let P be the m × n matrix describing the payoffs of each asset in each
state of nature:

        ( r_11  . . .  r_1n )
    P = (  ⋮     ⋱      ⋮   )
        ( r_m1  . . .  r_mn )

a portfolio x ∈ R^n costs p^T x
the wealth in state s is Σ_{j=1}^n r_sj x_j; thus the wealth vector is w = Px

Problem: what are the allowable price vectors p?

No-arbitrage condition: if you always make a profit then you should have
paid for it, i.e. if w = Px ≥ 0 then p^T x ≥ 0.

Farkas lemma to the rescue: apply the lemma with A = P^T and b = p. The
no-arbitrage condition says there is no x with x^T P^T ≥ 0 and p^T x < 0,
i.e. alternative (b) cannot hold. So (a) must hold, i.e. there exists q ≥ 0
such that

    p = P^T q,   i.e.   p_i = Σ_{s=1}^m q_s r_si.

The price vector p is a linear combination of the payoff vectors
r_s = (r_s1, . . . , r_sn)^T deflated by the state prices
q = (q_1, . . . , q_s, . . . , q_m). q is called the Arrow-Debreu price
vector. If there is a risk-free asset, q (suitably normalized) is a
probability that makes the price a martingale.
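Finding the state prices is itself a feasibility LP: look for q ≥ 0 with P^T q = p. A sketch on a hypothetical two-state, two-asset market (all numbers are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 2.0],    # payoffs of the two assets in state 1
              [1.0, 0.5]])   # payoffs in state 2
p = np.array([1.0, 1.25])    # candidate current prices

# Alternative (a) of the Farkas lemma: state prices q >= 0 with P^T q = p.
res = linprog(np.zeros(2), A_eq=P.T, b_eq=p, bounds=[(0, None)] * 2)
print(res.status == 0, res.x)   # q exists, so these prices admit no arbitrage
```

For this data the (unique) state-price vector is q = (0.5, 0.5); a price vector outside the cone {P^T q : q ≥ 0} would make the LP infeasible, exposing an arbitrage.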

Marginal price interpretation

Consider the following min-cost LP

    min  20x1 + 15x2,           (cost)
    s.t. 0.3x1 + 0.4x2 ≥ 2      (gasoline demand)
         0.4x1 + 0.2x2 ≥ 1.5    (jet fuel demand)
         x1 ≥ 0
         x2 ≥ 0

Suppose there are two markets: one for crude and another for finished
products. The costs of Saudi crude and Venezuelan crude are 20 and 15 resp.
What is the fair price of finished goods?
Let u1 be the fair price of gasoline and u2 the fair price of jet fuel.
In the crude market, 1 unit of Saudi oil costs 20. In the finished-goods
market, 1 unit of Saudi crude is equivalent to 0.3 units of gasoline and 0.4
units of jet fuel, i.e. its value is 0.3u1 + 0.4u2.
If 0.3u1 + 0.4u2 > 20 then more and more crude will be converted into
finished products (of course ignoring other costs), so there is no
equilibrium. Therefore 0.3u1 + 0.4u2 ≤ 20.
If 0.3u1 + 0.4u2 < 20 then Saudi crude is never used, i.e. x1 = 0, i.e.
(0.3u1 + 0.4u2 − 20)x1 = 0.
Thus the fair prices satisfy the dual constraints and complementary
slackness, i.e. the optimal dual variables are the fair prices.
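The fair prices can be read off by solving the dual of this LP directly. A sketch with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([20.0, 15.0])     # crude costs (Saudi, Venezuelan)
A = np.array([[0.3, 0.4],      # gasoline yields
              [0.4, 0.2]])     # jet fuel yields
b = np.array([2.0, 1.5])       # demands

# Primal: min c^T x  s.t.  Ax >= b, x >= 0 (flip signs for linprog's <= form).
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual: max b^T u  s.t.  A^T u <= c, u >= 0: the fair finished-goods prices.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.x, dual.x)   # x* = (2, 3.5), fair prices u* = (20, 35)
```

Both crudes are used at the optimum, so both dual constraints hold with equality: 0.3·20 + 0.4·35 = 20 and 0.4·20 + 0.2·35 = 15.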

Dual variables as constraint prices

consider the following LP:

    max  c^T x,
    s.t. a_i^T x ≤ b_i,  i = 1, . . . , m,
         x ≥ 0

think of x as levels of production activities and b as raw-material
supplies.

one unit of activity j consumes a_ij units of raw material i
let u be the prices of the raw materials; then the total cost of 1 unit of
activity j is Σ_{i=1}^m u_i a_ij. Therefore, the cost of x is

    Σ_{j=1}^n Σ_{i=1}^m u_i a_ij x_j = u^T Ax

and the total value of the raw materials is

    Σ_{i=1}^m u_i b_i = u^T b

Form the simple budget-constrained production problem,

    max  Σ_{i=1}^n c_i x_i,
    s.t. u^T Ax ≤ u^T b,
         x ≥ 0

If the prices are u = u*, the optimal dual vector, then the optimal
solution of this problem is the solution of the original LP.
Thus, the dual variables are the constraint prices.
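This collapse of m constraints into a single budget constraint can be verified numerically: priced at the optimal duals, the aggregated problem has the same optimal value as the original LP. A sketch with illustrative data (not from the notes):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative production LP: max c^T x  s.t.  Ax <= b, x >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

orig = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# Dual: min b^T u  s.t.  A^T u >= c, u >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)
u = dual.x

# Budget-constrained problem: the single constraint u*^T A x <= u*^T b.
budget = linprog(-c, A_ub=(u @ A).reshape(1, -1), b_ub=[u @ b],
                 bounds=[(0, None)] * 2)
print(-orig.fun, -budget.fun)   # same optimal value
```

For this data both problems have optimal value 7 (the budget problem may have other optimal solutions, but its value matches).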

Slopes and duality

Consider a simple LP:

    F(b) = max  c^T x,
           s.t. Ax ≤ b,
                x_j ≥ 0, j = 1, . . . , p

where x ∈ R^n, b ∈ R^m and A ∈ R^{m×n}. Let F(b) ∈ R be the optimal
value of the LP when the RHS is b.

Want to evaluate the partial derivatives u_i = ∂F(b)/∂b_i, i.e. the marginal
increase in F(b) as b_i is increased.

Now, u_i ≥ 0 (why? increasing b_i can only enlarge the feasible region of a
≤ constraint). More generally

    Primal constraint   ≤         ≥         =
    min problem         u_i ≤ 0   u_i ≥ 0   u_i free
    max problem         u_i ≥ 0   u_i ≤ 0   u_i free

Let x* be the optimal solution of the LP; then the slopes corresponding to
slack constraints are zero (why? a slack constraint stays inactive under a
small change in b_i), i.e.

    u_i (a_i^T x* − b_i) = 0.

x = x* + ε e_j (i.e. x_j = x*_j + ε, x_k = x*_k for k ≠ j) is feasible for

    F(b + ε a^j) = max  c^T x,
                   s.t. Ax ≤ b + ε a^j,
                        x_j ≥ 0, j = 1, . . . , p

therefore, for small ε,

    c^T x = c^T x* + ε c_j ≤ F(b + ε a^j) ≈ F(b) + ε Σ_{i=1}^m u_i a_ij = c^T x* + ε u^T a^j

comparing the first and last terms we get

    ε (c_j − u^T a^j) ≤ 0

what does this imply ...

x_j is free for j > p; therefore ε can be ±, i.e. c_j = u^T a^j.
if x*_j > 0 for j ≤ p, then again ε can be ±, i.e. c_j = u^T a^j
if x*_j = 0 for j ≤ p, then ε has to be +, therefore c_j ≤ u^T a^j

collecting everything we have the following constraints on u:

    Feasibility :      u_i ≥ 0,  i = 1, . . . , m
                       u^T a^j ≥ c_j,  j = 1, . . . , p,
                       u^T a^j = c_j,  j = p + 1, . . . , n.

    Comp. slackness :  u_i (a_i^T x* − b_i) = 0,  i = 1, . . . , m
                       x*_j (u^T a^j − c_j) = 0,  j = 1, . . . , n

by the complementary slackness theorem, u must be the optimal dual
vector ... thus

Optimal solutions of the dual LP are the slopes of the primal LP
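This slope interpretation can be checked by finite differences: perturb each b_i by a small ε and compare (F(b + ε e_i) − F(b))/ε against the dual prices. A sketch using the oil-mixing LP from the marginal-price slide (a min-cost problem with ≥ constraints, so here the slopes are the non-negative dual prices):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([20.0, 15.0])
A = np.array([[0.3, 0.4], [0.4, 0.2]])

def F(b):
    """Optimal cost as a function of the demand vector b."""
    return linprog(c, A_ub=-A, b_ub=-np.asarray(b), bounds=[(0, None)] * 2).fun

b = np.array([2.0, 1.5])
eps = 1e-4
slopes = [(F(b + eps * e) - F(b)) / eps for e in np.eye(2)]
print(slopes)   # approximately the optimal dual vector (20, 35)
```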



Piece-wise constant slopes

Story so far: at any b, the partial derivatives of F(b) are given by u*, the
optimal solution of the dual LP.

The dual feasible region has only a finite number of extreme points, so as
one changes b only a finite number of slopes are possible.
Therefore, the slope u_i is a piece-wise constant function of b, i.e. the slope
remains constant over a certain interval.

let b1, b2 ∈ R^m and u1, u2 be the corresponding slopes. Then (for the
max problem)

    F(b2) − F(b1) = F(b2) − u1^T b1
                  ≤ u1^T b2 − u1^T b1    (u1 is dual feasible but may not be optimal for b2)

thus, F(b2) ≤ F(b1) + u1^T (b2 − b1), i.e. F is concave in b and the
numerical value of the slope decreases as b increases. More generally, as
b_i increases:

    Constraint     ≤                  ≥
    max problem    u_i ≥ 0, |u_i| ↓   u_i ≤ 0, |u_i| ↑
    min problem    u_i ≤ 0, |u_i| ↓   u_i ≥ 0, |u_i| ↑

how does one find the range over which u* is constant?

Sensitivity with respect to b

Easiest to answer for a standard form LP:

    min  c^T x,
    s.t. Ax = b,
         x ≥ 0.

Let B be the optimal basis; then the optimal dual vector is u^T = c_B^T A_B^{-1}.

The slopes are u* as long as the basis B remains optimal.

Example: simple crude oil mixing LP:

    min  20x1 + 15x2,           (cost)
    s.t. 0.3x1 + 0.4x2 ≥ 2      (gasoline demand)
         0.4x1 + 0.2x2 ≥ 1.5    (jet fuel demand)
         x1 ≤ 9                 (Saudi avail)
         x2 ≤ 6                 (Venezuelan avail)
         x1 ≥ 0, x2 ≥ 0

Convert to standard form (s1, s2 are surplus variables, s3, s4 are slacks):

    min  (20  15  0  0  0  0) z,

         ( 0.3  0.4  −1   0  0  0 )       ( 2   )
         ( 0.4  0.2   0  −1  0  0 )       ( 1.5 )
    s.t. ( 1    0     0   0  1  0 )  z =  ( 9   ),
         ( 0    1     0   0  0  1 )       ( 6   )

         z = (x1, x2, s1, s2, s3, s4)^T ≥ 0

optimal vector z* = (2.0, 3.5, 0, 0, 7.0, 2.5)^T
optimal dual vector u* = (20, 35, 0, 0)^T
optimal basis B = {x1, x2, s3, s4}.
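The basis computation can be reproduced with a few lines of numpy (a sketch of the calculation, not course-supplied code; note the −1 entries in the surplus columns):

```python
import numpy as np

# Standard-form data for the oil-mixing example.
A = np.array([[0.3, 0.4, -1, 0, 0, 0],
              [0.4, 0.2, 0, -1, 0, 0],
              [1.0, 0.0, 0, 0, 1, 0],
              [0.0, 1.0, 0, 0, 0, 1]])
c = np.array([20.0, 15.0, 0, 0, 0, 0])
b = np.array([2.0, 1.5, 9.0, 6.0])

B = [0, 1, 4, 5]                 # basis {x1, x2, s3, s4}
AB = A[:, B]
xB = np.linalg.solve(AB, b)      # basic variables: (2, 3.5, 7, 2.5)
u = c[B] @ np.linalg.inv(AB)     # u^T = c_B^T A_B^{-1} = (20, 35, 0, 0)
print(xB, u)
```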

Geometrical view of sensitivity


[Figure: feasible region of the LP in the (x1, x2) plane; the optimal point x* lies at the intersection of the gasoline and jet fuel demand constraints.]

The basis B = {x1, x2, s3, s4} remains optimal as long as x* is at the
intersection of constraints 1 and 2.

Sensitivity with respect to b1

[Figure: the gasoline demand constraint shifts as b1 varies; the optimal vertex stays at the intersection of constraints 1 and 2 for b1 between 1.125 and 2.625.]

Sensitivity with respect to b2

[Figure: the jet fuel demand constraint shifts as b2 varies; the basis stays optimal for b2 between 1 and 2.6667.]

Sensitivity with respect to b3

[Figure: the Saudi availability constraint shifts as b3 varies; it is slack at the optimum.]

Sensitivity with respect to b4

[Figure: the Venezuelan availability constraint shifts as b4 varies; it is slack at the optimum.]

Algebraic view of sensitivity

the basis B is optimal if and only if

    x_B feasible, i.e. x_B = A_B^{-1} b ≥ 0
    reduced costs c̄^T = c^T − c_B^T A_B^{-1} A ≥ 0

Suppose b̂_i = b_i + δ and b̂_j = b_j, j ≠ i. Then B is still optimal if

    A_B^{-1} (b + δ e_i) = x_B + δ A_B^{-1} e_i ≥ 0.

let β^i = (β_1i, . . . , β_mi)^T be the i-th column of A_B^{-1}; then B is
optimal if

    x_B(j) + δ β_ji ≥ 0,  j = 1, . . . , m

i.e.

    max_{j : β_ji > 0} ( −x_B(j) / β_ji )  ≤  δ  ≤  min_{j : β_ji < 0} ( −x_B(j) / β_ji )

Continue with the example: B = {1, 2, 5, 6}, therefore

               ( −2   4  0  0 )
    A_B^{-1} = (  4  −3  0  0 )
               (  2  −4  1  0 )
               ( −4   3  0  1 )

bounds for δ1:

    x_B + δ1 β^1 = (2.0, 3.5, 7.0, 2.5)^T + δ1 (−2, 4, 2, −4)^T ≥ 0
                                          ⟹  −0.875 ≤ δ1 ≤ 0.625

Other bounds:

    −0.5 ≤ δ2 ≤ 1.1667,   −7 ≤ δ3,   −2.5 ≤ δ4
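The ratio test above is mechanical, so it is worth scripting. A sketch in numpy, using the example's basic solution and basis inverse (with the signs restored):

```python
import numpy as np

ABinv = np.array([[-2.0, 4, 0, 0],
                  [4, -3, 0, 0],
                  [2, -4, 1, 0],
                  [-4, 3, 0, 1]])
xB = np.array([2.0, 3.5, 7.0, 2.5])

def rhs_range(i):
    """Allowed perturbation delta of b_i that keeps the basis optimal."""
    beta = ABinv[:, i]
    lo = max((-xB[j] / beta[j] for j in range(4) if beta[j] > 0), default=-np.inf)
    hi = min((-xB[j] / beta[j] for j in range(4) if beta[j] < 0), default=np.inf)
    return lo, hi

for i in range(4):
    print(i + 1, rhs_range(i))
```

This reproduces the four ranges on the slide: (−0.875, 0.625), (−0.5, 1.1667), (−7, ∞), (−2.5, ∞).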

What good is sensitivity ?

How sensitive is the cost to the supply of Saudi oil?

The constraint on Saudi oil has the dual variable u3 = 0 and RHS range
[2, ∞). Thus, as long as the Saudi oil supply remains in this range the cost
is independent of the supply.

How sensitive is the cost to the demand for gasoline?

The constraint for gasoline has dual variable u1 = 20 and RHS range
[1.125, 2.625]. So the cost increases at the rate 20 in the range
[1.125, 2.625]. Below 1.125 the cost increases at a rate at most 20 and
above 2.625 the cost increases at a rate at least 20. (why? F(b) is convex
in b for a min problem, so the slopes are nondecreasing)

Suppose the demand for gasoline increases to 2.5. What is the increase in
cost?
2.5 ∈ [1.125, 2.625], therefore the additional cost is given by
20 × (2.5 − 2) = 10.

What will the reduction in cost be if the demand for jet fuel fell to 0.5?
b2 = 0.5 ∉ [1, 2.6667] so we cannot answer exactly, but we can say that the
reduction will be at most u2 × (1.5 − 0.5) = 35 × (1.5 − 0.5) = 35 units.
The actual reduction is 17.5 units (below b2 = 1 the constraint becomes
slack and the cost stops changing).

Sensitivity with respect to c

Let F(c) be the value of the crude oil LP as a function of c.

Suppose the new objective is ĉ_j = c_j + δ, ĉ_i = c_i for i ≠ j.
If x* remains optimal, F(c + δ e_j) = F(c) + δ x*_j
For what range of δ is x* still optimal?

[Figure: the vertex x* remains optimal while the perturbed objective vector stays between the normals of the two active constraints.]

Primal feasibility is unaffected. Focus only on the optimality condition:
reduced costs non-negative (min problem)

    ĉ_k − ĉ_B^T A_B^{-1} a^k ≥ 0,  for all k

Two cases:

Case I: j ∉ B. Then ĉ_B = c_B.

    (c_j + δ) − c_B^T A_B^{-1} a^j = c̄_j + δ ≥ 0   ⟺   δ ≥ −c̄_j

Case II: j ∈ B.
Suppose j is the l-th basic variable. Then ĉ_B = c_B + δ e_l.
New optimality conditions:

    (c_B + δ e_l)^T A_B^{-1} A ≤ c^T + δ e_j^T.

Let q_l^T be the l-th row of A_B^{-1} A_N (this is available in the tableau). Then the
optimality conditions are:

    δ q_lk ≤ c̄_k,   k ∈ N

Example: B = {x1, x2, s3, s4} and c̄ = (0, 0, 20, 35, 0, 0)^T.

Add δ1 to c1. x1 is basic, so Case II applies. Row 1 of A_B^{-1} A is

    q_1 = (1, 0, 2, −4, 0, 0)

So we get the following bounds on δ1:

    −8.75 ≤ δ1 ≤ 10

i.e.

    11.25 ≤ c1 ≤ 30

A similar analysis gives

    −5 ≤ δ2 ≤ 11.667,  i.e.  10 ≤ c2 ≤ 26.667
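These cost ranges also reduce to a ratio test over the nonbasic columns, which is easy to script. A numpy sketch for the example:

```python
import numpy as np

A = np.array([[0.3, 0.4, -1, 0, 0, 0],
              [0.4, 0.2, 0, -1, 0, 0],
              [1.0, 0.0, 0, 0, 1, 0],
              [0.0, 1.0, 0, 0, 0, 1]])
c = np.array([20.0, 15.0, 0, 0, 0, 0])
B, N = [0, 1, 4, 5], [2, 3]

ABinv = np.linalg.inv(A[:, B])
cbar = c - c[B] @ ABinv @ A   # reduced costs: (0, 0, 20, 35, 0, 0)
Q = ABinv @ A                 # the rows q_l^T live here

def cost_range(l):
    """Allowed perturbation delta of the cost of the l-th basic variable."""
    lo = max((cbar[k] / Q[l, k] for k in N if Q[l, k] < 0), default=-np.inf)
    hi = min((cbar[k] / Q[l, k] for k in N if Q[l, k] > 0), default=np.inf)
    return lo, hi

print(cost_range(0))   # delta_1 in [-8.75, 10]
print(cost_range(1))   # delta_2 in [-5, 11.667]
```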

Changes in the A matrix

The analysis is harder than in the past two cases.

Suppose â_i = a_i + q and â_j = a_j for j ≠ i, i.e. only the i-th row
gets perturbed. Then

    Â = A + e_i q^T

Matrix inversion lemma: suppose C is an invertible matrix. Then

    (C + w v^T)^{-1} = C^{-1} − (C^{-1} w)(C^{-T} v)^T / (1 + v^T C^{-1} w)

Using the lemma with C = A_B, w = e_i and v = q_B (the restriction of q to
the basic columns) we get

    (Â_B)^{-1} = A_B^{-1} − (A_B^{-1} e_i)(A_B^{-T} q_B)^T / (1 + q_B^T A_B^{-1} e_i)

The basis B remains optimal if and only if

1. the basic variables are non-negative, i.e.

    Â_B^{-1} b = A_B^{-1} b − ( (A_B^{-T} q_B)^T b / (1 + q_B^T A_B^{-1} e_i) ) A_B^{-1} e_i ≥ 0

2. the reduced costs are non-negative, i.e.

    c_j − c_B^T Â_B^{-1} â^j ≥ 0,  j ∈ N

⟺ for all j ∈ N,

    c_j − c_B^T A_B^{-1} (a^j + q_j e_i)
        + (c_B^T A_B^{-1} e_i) (A_B^{-T} q_B)^T (a^j + q_j e_i) / (1 + q_B^T A_B^{-1} e_i) ≥ 0

The range of perturbations q over which B stays optimal is hard to compute,
but for a given q one can check whether the basis remains optimal.
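The matrix inversion lemma (the Sherman-Morrison formula) is easy to verify numerically before trusting the update above. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably invertible
w, v = rng.standard_normal(4), rng.standard_normal(4)

Cinv = np.linalg.inv(C)
lhs = np.linalg.inv(C + np.outer(w, v))
# (C + w v^T)^{-1} = C^{-1} - (C^{-1} w)(v^T C^{-1}) / (1 + v^T C^{-1} w)
rhs = Cinv - np.outer(Cinv @ w, v @ Cinv) / (1 + v @ Cinv @ w)
print(np.allclose(lhs, rhs))
```

The practical point is cost: with the lemma, updating A_B^{-1} after a rank-one change is O(m^2) instead of the O(m^3) of a fresh inversion.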

Adding variables and constraints

Suppose we introduce a new variable x_{n+1} into our system:

    min  c^T x + c_{n+1} x_{n+1},
    s.t. Ax + a^{n+1} x_{n+1} = b,
         x ≥ 0, x_{n+1} ≥ 0.

Let x*, B be the optimal solution and basis of the original LP. We wish to
determine whether B is still optimal.

1. (x, x_{n+1}) = (x*, 0) is still a BFS
2. therefore, B is optimal if and only if

    c̄_{n+1} = c_{n+1} − c_B^T A_B^{-1} a^{n+1} ≥ 0

If c̄_{n+1} < 0 then B is not optimal. In order to find the new optimum we
start a simplex routine with B as the starting BFS.
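Pricing a new column only needs the optimal duals. A sketch for a hypothetical new crude added to the oil-mixing example (the yields and costs below are made-up numbers, and availability limits for the new crude are ignored):

```python
import numpy as np

u = np.array([20.0, 35.0, 0.0, 0.0])      # optimal duals of the oil LP
a_new = np.array([0.5, 0.3, 0.0, 0.0])    # hypothetical new column in standard form

for c_new in (25.0, 18.0):                # two candidate costs for the new crude
    cbar = c_new - u @ a_new              # reduced cost: c_{n+1} - c_B^T A_B^{-1} a^{n+1}
    print(c_new, cbar, "basis still optimal" if cbar >= 0 else "re-optimize")
```

Here u @ a_new = 20.5, so at cost 25 the new crude is not worth using (c̄ = 4.5 ≥ 0), while at cost 18 it would enter the basis (c̄ = −2.5 < 0).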

Suppose we add a new inequality a_{m+1}^T x ≤ b_{m+1}. Introduce a slack
x_{n+1}:

    min  c^T x + 0 · x_{n+1},

    s.t. ( A          0 ) (   x     )   ( b       )
         ( a_{m+1}^T  1 ) ( x_{n+1} ) = ( b_{m+1} ),

         x ≥ 0, x_{n+1} ≥ 0.

If a_{m+1}^T x* ≤ b_{m+1} then x* is optimal (why? x* remains feasible and
the feasible region has only shrunk).
Else take the new basis B̄ = B ∪ {x_{n+1}}. The corresponding basic solution
(x*, b_{m+1} − a_{m+1}^T x*) is infeasible.
Now,

    Ā_B^{-1} = ( A_B^{-1}                0 )      ū^T = c̄_B^T Ā_B^{-1} = [c_B^T A_B^{-1}, 0]
               ( −a_{m+1,B}^T A_B^{-1}   1 ),

This dual vector is a basic feasible solution for the dual problem. Solve the
dual to optimality to obtain the value of the LP.

Technical appendix: Strong duality

We will prove strong duality for the case of a standard form LP:

    max  c^T x,
    s.t. Ax = b,
         x ≥ 0.

The duality table gives:

    PRIMAL (max)                 (min) DUAL
    constraints  a_i x = b_i     u_i free          variables
    variables    x_j ≥ 0         u^T a^j ≥ c_j     constraints

Dual problem:

    min  b^T u,
    s.t. A^T u ≥ c.

Solve the primal LP using simplex. At optimality (of a max problem) the
reduced cost vector satisfies

    c̄^T = c^T − c_B^T A_B^{-1} A ≤ 0

Define u^T = c_B^T A_B^{-1}. Then c^T ≤ u^T A, i.e. A^T u ≥ c, so u is
dual feasible, and

    u^T b = c_B^T A_B^{-1} b = c_B^T x_B = c^T x* = p*

Since u^T b ≥ d* ≥ p*, we have our result.

Reading assignment

Rardin :
Sensitivity: Chapter 7
Bertsimas :
Chapter 4: Section 4.1 - Section 4.6
Chapter 5:
Luenberger :
Chapter 4
