
Chapter 1 Introduction to Linear Programming

15. Consider the linear programming problem

Maximize z = cᵀx

subject to

Ax ≤ b
x ≥ 0.

If x₁ and x₂ are feasible solutions, show that

x = ½x₁ + ½x₂

is a feasible solution.

16. Generalize the previous exercise to show that

x = rx₁ + sx₂

is a feasible solution if r + s = 1, r ≥ 0, and s ≥ 0.
1.3 GEOMETRY OF LINEAR PROGRAMMING PROBLEMS
In this section we consider the geometry of linear programming problems by first looking at the geometric interpretation of a single constraint, then at a set of constraints, and finally at the objective function. These ideas give rise to a geometric method for solving a linear programming problem that is successful only for problems with two or three variables. However, the geometric concepts that we discuss can be built into an algebraic algorithm that can effectively solve very large problems. After casting the geometric ideas in a linear algebra setting, we will present this algorithm, the simplex method, in Section 2.1.
Geometry of a Constraint
A single constraint of a linear programming problem in standard form, say the ith one,

aᵢ₁x₁ + aᵢ₂x₂ + ··· + aᵢₙxₙ ≤ bᵢ,

can be written as

aᵀx ≤ bᵢ,

where

aᵀ = [aᵢ₁  aᵢ₂  ···  aᵢₙ].

The set of points x = (x₁, x₂, ..., xₙ) in Rⁿ that satisfy this constraint is called a closed half-space. If the inequality is reversed, the set of points x = (x₁, x₂, ..., xₙ) in Rⁿ satisfying

aᵀx ≥ bᵢ

is also called a closed half-space.
EXAMPLE 1. Consider the constraint 2x + 3y ≤ 6 and the closed half-space

H = {(x, y) ∈ R² | 2x + 3y ≤ 6},

which consists of the points satisfying the constraint. Note that the points (3, 0) and (1, 1) satisfy the inequality and therefore are in H. Also, the points (3, 4) and (−1, 3) do not satisfy the inequality and therefore are not in H. Every point on the line 2x + 3y = 6 satisfies the constraint and thus lies in H. △
We can graph a closed half-space in R² by graphing the line and then using a test point to decide which side of the line is included in the half-space. A simple way to graph the line is to find its x- and y-intercepts. By setting y = 0 in the equation of the line and solving for x, we obtain the x-intercept and plot it on the x-axis. Similarly, by setting x = 0 in the equation of the line and solving for y, we obtain the y-intercept and plot it on the y-axis. We now connect the two points to sketch the graph of the line. To choose a test point, we check whether the origin is on the line. If it is not, we use it as a test point, checking whether the origin satisfies the inequality. If it does, the side of the line containing the origin (our test point) is the closed half-space H. If it does not, then the other side of the line is the closed half-space. If the origin is on the line, some other point not on the line must be selected as the test point. Some possible choices are (1, 0), (0, 1), or (1, 1).
EXAMPLE 1 (continued). We compute the x-intercept to be x = 3 and the y-intercept to be y = 2. These points have been plotted and the line connecting them has been drawn in Figure 1.2a. Since the origin does not lie on the line 2x + 3y = 6, we use the origin as the test point. The coordinates of the origin satisfy the inequality, so H lies below the line and contains the origin, as shown in Figure 1.2b. △
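The test-point procedure can be checked arithmetically. A minimal sketch (the helper name `in_half_space` is our own) evaluates the constraint of Example 1 at the points discussed above:

```python
def in_half_space(x, y):
    """Return True when (x, y) satisfies 2x + 3y <= 6, i.e. lies in H."""
    return 2 * x + 3 * y <= 6

# Points from Example 1; (0, 0) is the test point.
for p in [(3, 0), (1, 1), (0, 0)]:
    assert in_half_space(*p)
for p in [(3, 4), (-1, 3)]:
    assert not in_half_space(*p)

# Points on the line 2x + 3y = 6 itself belong to H as well.
assert in_half_space(3, 0) and in_half_space(0, 2)
```

Because the inequality is non-strict, the boundary line belongs to H, which is exactly what makes the half-space closed.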
FIGURE 1.2 Closed half-space in two dimensions.
FIGURE 1.3 Closed half-space in three dimensions.
EXAMPLE 2. The constraint in three variables, 4x + 2y + 5z ≤ 20, defines the closed half-space H in R³, where

H = {(x, y, z) ∈ R³ | 4x + 2y + 5z ≤ 20}.

We can graph H in R³ by graphing the plane 4x + 2y + 5z = 20 and checking a test point. To graph the plane, we graph the intersection of the plane with each of the coordinate planes. These intersections are lines in the coordinate planes. Thus, letting z = 0 yields the line 4x + 2y = 20 in the xy plane, which can be graphed as described in Example 1. Similarly, letting y = 0 yields the line 4x + 5z = 20 in the xz plane. Finally, setting x = 0 yields the line 2y + 5z = 20 in the yz plane. The graph of the plane containing these lines is shown in Figure 1.3. The origin does not lie on the plane and thus can be used as a test point. It satisfies the inequality, so the closed half-space contains the origin as shown in Figure 1.3. △
In more than three dimensions it is impossible to sketch a closed half-space. However, we can think about the geometry of closed half-spaces in any dimension and use the lower-dimension examples as models for our computations.

A typical constraint of a linear programming problem in canonical form has the equation

aᵀx = b.  (1)
Its graph in Rⁿ is a hyperplane. If this equation were an inequality, namely,

aᵀx ≤ b,

then the set of points satisfying the inequality would be a closed half-space. Thus, a hyperplane is the boundary of a closed half-space. Intuitively, it consists of the points that are in the half-space, but on its edge.
EXAMPLE 3. The equation 4x + 2y + 5z = 20 defines a hyperplane in R³. The graph of this hyperplane, which is really a plane in this case, is shown in Figure 1.3. The hyperplane H is the boundary of the closed half-space H₁ defined by the inequality 4x + 2y + 5z ≤ 20, considered in Example 2. The half-space H₁ extends below the hyperplane H and lies behind the page. We also see that H is the boundary of the closed half-space H₂ defined by the inequality 4x + 2y + 5z ≥ 20. The half-space H₂ extends above the hyperplane and reaches out of the page. △
The hyperplane H defined by (1) divides Rⁿ into the two closed half-spaces

H₁ = {x ∈ Rⁿ | aᵀx ≤ b}

and

H₂ = {x ∈ Rⁿ | aᵀx ≥ b}.

We also see that H₁ ∩ H₂ = H, the original hyperplane. In other words, a hyperplane is the intersection of two closed half-spaces.
Recall from Section 1.2 that a feasible solution to a linear programming problem is a point in Rⁿ that satisfies all the constraints of the problem. It then follows that the set of feasible solutions is the intersection of all the closed half-spaces determined by the constraints. Specifically, the set of solutions to an inequality (≤ or ≥) constraint is a single closed half-space, whereas the set of solutions to an equality constraint is the intersection of two closed half-spaces.
EXAMPLE 4. Sketch the set of all feasible solutions satisfying the set of inequalities

2x + 3y ≤ 6
−x + 2y ≤ 4
x ≥ 0
y ≥ 0.
Solution. The set of solutions to the first inequality, 2x + 3y ≤ 6, is shown as the shaded region in Figure 1.4a, and the set of solutions to the second inequality, −x + 2y ≤ 4, forms the shaded region in Figure 1.4b. In determining these regions, we have used the origin as a test point. The regions satisfying the third and fourth constraints are shown in Figures 1.4c and 1.4d, respectively. The point (1, 1) was used as a test point to determine these regions. The intersection of the regions in Figures 1.4a through 1.4d is shown in Figure 1.4e; it is the set of all feasible solutions to the given set of constraints. △
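The same membership test extends to a system of constraints: a point is feasible exactly when it satisfies every inequality simultaneously. A sketch for the system of Example 4 (the helper name `is_feasible` is our own):

```python
def is_feasible(x, y):
    """Check all four constraints of Example 4 at once."""
    return (2 * x + 3 * y <= 6 and
            -x + 2 * y <= 4 and
            x >= 0 and y >= 0)

assert is_feasible(1, 1)       # a point of the shaded intersection
assert is_feasible(0, 0)       # the origin satisfies every constraint
assert not is_feasible(3, 1)   # violates 2x + 3y <= 6
assert not is_feasible(-1, 0)  # violates x >= 0
```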
FIGURE 1.5 Set of all feasible solutions (three dimensions).
EXAMPLE 5. Using the same technique as in the previous example, we sketch the region in R³ defined by the inequalities

x ≥ 0
y ≥ 0
z ≥ 0
5x + 3y + 5z ≤ 15
10x + 4y + 5z ≤ 20.

The first three inequalities limit us to the first octant of R³. The other two inequalities define certain closed half-spaces. The region, which is the intersection of these two half-spaces in the first octant, is shown in Figure 1.5.
Geometry of the Objective Function
The objective function of any linear programming problem can be written as cᵀx. If k is a constant, then the graph of the equation

cᵀx = k

is a hyperplane. Assume that we have a linear programming problem that asks for a maximum value of the objective function. In solving this problem, we are searching for points x in the set of feasible solutions for which the value of k is as large as possible. Geometrically, we are looking for a hyperplane that intersects the set of feasible solutions and for which k is a maximum. For a fixed c, the value of k is proportional to the distance from the origin to the hyperplane. We can think of starting with very large values of k and then decreasing them until we find a hyperplane that just touches the set of feasible solutions.
EXAMPLE 6. Consider the linear programming problem

Maximize z = 4x + 3y

subject to

x + y ≤ 4
5x + 3y ≤ 15
x ≥ 0, y ≥ 0.

The set of feasible solutions (the shaded region) and the hyperplanes

z = 9, z = 12, z = 27/2, and z = 15

are shown in Figure 1.6. Note that it appears that the maximum value of the objective function is 27/2, which is obtained when x = 3/2 and y = 5/2. This conjecture will be verified in a later section. △
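The conjectured maximum can be checked by evaluating z = 4x + 3y at the corner points of the region, anticipating the extreme-point result of Section 1.4; exact fractions avoid any rounding:

```python
from fractions import Fraction

def z(x, y):
    return 4 * x + 3 * y

# Corner points of the region x + y <= 4, 5x + 3y <= 15, x >= 0, y >= 0;
# (3/2, 5/2) is the intersection of the lines x + y = 4 and 5x + 3y = 15.
corners = [(0, 0), (3, 0), (0, 4),
           (Fraction(3, 2), Fraction(5, 2))]

best = max(corners, key=lambda p: z(*p))
assert best == (Fraction(3, 2), Fraction(5, 2))
assert z(*best) == Fraction(27, 2)
```

The other corners give z = 0, 12, and 12, so 27/2 is indeed the largest value among them.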
A linear programming problem may not have a solution if the set of feasible solutions is unbounded. In the following example, we are asked to maximize the value of the objective function, but we discover that no such maximum exists.
FIGURE 1.6 Objective function hyperplanes (two dimensions).
EXAMPLE 7. Consider the linear programming problem

Maximize z = 2x + 5y

subject to

−3x + 2y ≤ 6
x + 2y ≥ 2
x ≥ 0, y ≥ 0.

The graph of the set of feasible solutions is shown as the shaded region in Figure 1.7. We have also drawn the graphs of the hyperplanes

z = 6, z = 14, and z = 20.

We see that in each case there are points that lie to the right of the hyperplane and that are still in the set of feasible solutions. Evidently the value of the objective function can be made arbitrarily large. △
EXAMPLE 8. Consider a linear programming problem that has the same set of constraints as in Example 7. However, assume that it is a minimization problem with objective function

z = 3x + 5y.

We have drawn the graph of the set of feasible solutions in Figure 1.8 (the shaded region) and the hyperplanes

z = 6, z = 9, and z = 15.

It appears that the optimum value of the objective function is z = 5, which is obtained when x = 0, y = 1. Smaller values of the objective function, such as z = 3, yield graphs of hyperplanes that do not intersect the set of feasible solutions. You may want to sketch the hyperplanes corresponding to z = 3 and z = 5. △

FIGURE 1.8 Minimization (two dimensions).
Geometry of the Set of Feasible Solutions
We now explore the question of where in the set of feasible solutions we are likely to find a point at which the objective function takes on its optimal value. We first show that if x₁ and x₂ are two feasible solutions, then any point on the line segment joining these two points is also a feasible solution. The line segment joining x₁ and x₂ is defined as

{x ∈ Rⁿ | x = λx₁ + (1 − λ)x₂, 0 ≤ λ ≤ 1}.

Observe that, if λ = 0, we get x₂ and, if λ = 1, we get x₁. The points of the line segment at which 0 < λ < 1 are called the interior points of the line segment, and x₁ and x₂ are called its end points.

Now suppose that x₁ and x₂ are feasible solutions of a linear programming problem. If

aᵀx ≤ bᵢ

is a constraint of the problem, then we have

aᵀx₁ ≤ bᵢ and aᵀx₂ ≤ bᵢ.
For any point x = λx₁ + (1 − λ)x₂, 0 ≤ λ ≤ 1, on the line segment joining x₁ and x₂, we have

aᵀx = aᵀ(λx₁ + (1 − λ)x₂)
    = λaᵀx₁ + (1 − λ)aᵀx₂
    ≤ λbᵢ + (1 − λ)bᵢ
    = bᵢ.

Hence, x also satisfies the constraint. This result also holds if the inequality in the constraint is reversed or if the constraint is an equality. Thus, the line segment joining any two feasible solutions to a linear programming problem is contained in the set of feasible solutions.
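This argument can be spot-checked numerically. The sketch below restates the constraints of Example 4 in the form aᵀx ≤ b (rewriting x ≥ 0 and y ≥ 0 as −x ≤ 0 and −y ≤ 0, an equivalent form) and verifies feasibility along a sample of points on a segment:

```python
from fractions import Fraction

# Each row aT with its bound b, for the system of Example 4.
A = [(2, 3), (-1, 2), (-1, 0), (0, -1)]
b = [6, 4, 0, 0]

def feasible(p):
    return all(ax * p[0] + ay * p[1] <= bi for (ax, ay), bi in zip(A, b))

x1, x2 = (0, 2), (3, 0)          # two feasible solutions
assert feasible(x1) and feasible(x2)

# Every sampled point lam*x1 + (1 - lam)*x2 on the segment is feasible too.
for k in range(11):
    lam = Fraction(k, 10)
    p = (lam * x1[0] + (1 - lam) * x2[0],
         lam * x1[1] + (1 - lam) * x2[1])
    assert feasible(p)
```

A finite sample does not prove the result, of course; the algebraic argument above covers every λ in [0, 1].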
Consider now two feasible solutions x₁ and x₂ to a linear programming problem in standard form with objective function cᵀx. If the objective function has the same value k at x₁ and x₂, then, proceeding as above, we can easily show that it has the value k at any point on the line segment joining x₁ and x₂ (Exercise 32). Suppose that the value of the objective function is different at x₁ and x₂, say

cᵀx₁ < cᵀx₂.

If x = λx₁ + (1 − λ)x₂, 0 < λ < 1, is any interior point of the line segment joining x₁ and x₂, then

cᵀx = cᵀ(λx₁ + (1 − λ)x₂)
    = λcᵀx₁ + (1 − λ)cᵀx₂
    < λcᵀx₂ + (1 − λ)cᵀx₂
    = cᵀx₂.
That is, the value of the objective function at any interior point of the line segment is less than its value at one end point. In the same manner we may show that the value of the objective function at any interior point of the line segment is greater than its value at the other end point (verify). Summarizing, we conclude that, on a given line segment joining two feasible solutions to a linear programming problem, the objective function either is constant or attains a maximum at one end point and a minimum at the other. Thus, the property that a set contains the line segment joining any two points in it has strong implications for linear programming. The following definition gives a name to this property.
DEFINITION. A subset S of Rⁿ is called convex if for any two distinct points x₁ and x₂ in S the line segment joining x₁ and x₂ lies in S. That is, S is convex if, whenever x₁ and x₂ ∈ S, so does

x = λx₁ + (1 − λ)x₂ for 0 < λ < 1. △
EXAMPLE 9. The sets in R² in Figures 1.9 and 1.10 are convex. The sets in R² in Figure 1.11 are not convex. △

The following results help to identify convex sets.
THEOREM 1.1. A closed half-space is a convex set.

Proof. Let the half-space H₁ be defined by cᵀx ≤ k. Let x₁ and x₂ ∈ H₁ and consider x = λx₁ + (1 − λ)x₂, 0 ≤ λ ≤ 1. Then

cᵀx = cᵀ[λx₁ + (1 − λ)x₂]
    = λcᵀx₁ + (1 − λ)cᵀx₂.

Since λ ≥ 0 and 1 − λ ≥ 0, we obtain

cᵀx ≤ λk + (1 − λ)k = k.

Thus,

cᵀx ≤ k,

so that x ∈ H₁. △
THEOREM 1.2. A hyperplane is a convex set.

Proof. Exercise. △

THEOREM 1.3. The intersection of a finite collection of convex sets is convex.

Proof. Exercise. △

THEOREM 1.4. Let A be an m × n matrix, and let b be a vector in Rᵐ. The set of solutions to the system of linear equations Ax = b, if it is not empty, is a convex set.

Proof. Exercise. △
Convex sets are of two types: bounded and unbounded. To define a bounded convex set, we first need the concept of a rectangle. A rectangle in Rⁿ is a set

R = {x ∈ Rⁿ | aᵢ ≤ xᵢ ≤ bᵢ, i = 1, 2, ..., n},

where aᵢ < bᵢ, i = 1, 2, ..., n, are real numbers. A bounded convex set is one that can be enclosed in a rectangle in Rⁿ. An unbounded convex set cannot be so enclosed. The convex sets illustrated in Figure 1.9 are bounded; those in Figure 1.10 are unbounded.
1.3 EXERCISES
In Exercises 1-6 sketch the convex set formed by the intersection of the half-spaces determined by the given inequalities. Also indicate whether the convex set is bounded.
1. x + y ≤ 5
   2x + y ≤ 8
   x ≥ 0, y ≥ 0

2. x − y ≤ −2
   2x − y ≤ 0
   3x + y ≤ 6
   x ≥ 0, y ≥ 0

3. 4x + y ≥ 8
   3x + 2y ≥ 6
   x ≥ 0, y ≥ 0

4. 3x + y ≤ 6
   2x + 3y ≥ 4
   x ≥ 0, y ≥ 0

5. 2x + 5y + 5z ≤ 20
   4x + 2y + z ≤ 8
   x ≥ 0, y ≥ 0, z ≥ 0

6. 4x + 5y + 4z ≤ 20
   20x + 12y + 15z ≤ 60
   x ≥ 0, y ≥ 0, z ≥ 0
In Exercises 7-12 sketch the set of feasible solutions to the given set of inequalities.

7. −x + y ≤ 2
   2x + y ≤ 4
   x ≥ 0, y ≥ 0

8. x + y ≤ 3
   2x + y ≤ 4
   x ≥ 0, y ≥ 0

9. x + y ≥ 3
   −3x + 2y ≤ 6
   x ≥ 0, y ≥ 0

10. −x + y ≤ 2
    2x + y ≤ 2
    y ≤ 1
    x ≥ 0

11. 6x + 4y + 9z ≤ 36
    2x + 5y + 4z ≤ 20
    x ≥ 0, y ≥ 0, z ≥ 0

12. 12x + 6y + 16z ≤ 84
    8x + 5y + 12z ≤ 60
    x ≥ 0, y ≥ 0, z ≥ 0
In Exercises 13-16 (a) sketch the set of feasible solutions to the given linear programming problem, (b) draw the objective function hyperplanes z = cᵀx = k for the indicated values of k, and (c) conjecture the optimal value of z.
13. Maximize z = 3x + 4y
subject to
x + 3y ≤ 6
4x + 3y ≤ 12
x ≥ 0, y ≥ 0
k = 6, 8, 10, and 12.

14. Maximize z = 2x + 3y
subject to
x + y ≤ 4
3x + y ≤ 6
x + 3y ≤ 6
x ≥ 0, y ≥ 0
k = 4, 6, 8, and 10.

15. Maximize z = 3x + y
subject to
−2x + 3y ≤ 6
x + y ≤ 4
3x + y ≤ 6
x ≥ 0, y ≥ 0
k = 2, 6, 8, and 12.
16. Maximize z = 4x₁ + 8x₂ + x₃
subject to
8x₁ + 2x₂ + 5x₃ ≤ 68
5x₁ + 9x₂ + 7x₃ ≤ 120
13x₁ + 11x₂ + 43x₃ ≤ 250
x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0
k = 80, 90, 100, and 110.
In Exercises 17-24 determine whether the given set is convex.
25. Prove that Rⁿ is a convex set.

26. Prove that a subspace of Rⁿ is a convex set.

27. Show that a rectangle in Rⁿ is a convex set.

28. Let H₂ be the half-space in Rⁿ defined by cᵀx ≥ k. Show that H₂ is convex.

29. Show that a hyperplane H in Rⁿ is convex (Theorem 1.2).

30. Show that the intersection of a finite collection of convex sets is convex (Theorem 1.3).

31. Give two proofs of Theorem 1.4. One proof should use the definition of convex sets and the other should use Theorem 1.3.
32. Consider the linear programming problem

Maximize z = cᵀx

subject to

Ax ≤ b
x ≥ 0.

Let x₁ and x₂ be feasible solutions to the problem. Show that, if the objective function has the value k at both x₁ and x₂, then it has the value k at any point on the line segment joining x₁ and x₂.
33. Show that the set of all solutions to Ax ≤ b, if it is nonempty, is a convex set.

34. Show that the set of solutions to Ax ≥ b, if it is nonempty, is a convex set.

35. A function mapping Rⁿ into Rᵐ is called a linear transformation if f(u + v) = f(u) + f(v) for any u and v in Rⁿ, and f(ru) = rf(u) for any u in Rⁿ and r in R. Prove that if S is a convex set in Rⁿ and f is a linear transformation mapping Rⁿ into Rᵐ, then f(S) = {f(v) | v ∈ S} is a convex set.

A function f defined on a convex set S in Rⁿ is called a convex function if

f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂)

for 0 ≤ λ ≤ 1 and any x₁, x₂ ∈ S.
36. Show that a function f defined on a convex set S in Rⁿ is convex if the line segment joining any two points (x₁, f(x₁)) and (x₂, f(x₂)) does not lie below its graph. (See Figure 1.12.)

37. Show that the objective function z = cᵀx of a linear programming problem is a convex function.
FIGURE 1.12
1.4 THE EXTREME POINT THEOREM
We continue in this section toward our goal of understanding the geometry of a linear programming problem. We first combine the results of the last section to describe the geometry of a general linear programming problem and then introduce the concept of an extreme point, or corner point. These become candidates for solutions to the problem.
We now consider a general linear programming problem. The graph of each constraint defined by an inequality is a closed half-space. The graph of each constraint defined by an equality is a hyperplane, or intersection of two closed half-spaces. Thus, the set of all points that satisfy all the constraints of the linear programming problem is exactly the intersection of the closed half-spaces determined by the constraints. From the previous results we see that this set of points, if it is nonempty, is a convex set, because it is the intersection of a finite number of convex sets. In general, the intersection of a finite set of closed half-spaces is called a convex polyhedron, and thus, if it is not empty, the set of feasible solutions to a general linear programming problem is a convex polyhedron.

We now turn to describing the points at which an optimal solution to a general linear programming problem can occur. We first make the following definition.
DEFINITION. A point x ∈ Rⁿ is a convex combination of the points x₁, x₂, ..., x_r in Rⁿ if for some real numbers c₁, c₂, ..., c_r which satisfy

c₁ + c₂ + ··· + c_r = 1 and cᵢ ≥ 0, 1 ≤ i ≤ r,

we have

x = c₁x₁ + c₂x₂ + ··· + c_r x_r.
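The definition translates into a short computation. A minimal sketch (the points and weights below are chosen only for illustration):

```python
from fractions import Fraction

def convex_combination(points, coeffs):
    """Return sum of c_i * x_i after checking c_i >= 0 and sum c_i = 1."""
    assert all(c >= 0 for c in coeffs)
    assert sum(coeffs) == 1
    n = len(points[0])
    return tuple(sum(c * p[j] for c, p in zip(coeffs, points))
                 for j in range(n))

pts = [(0, 0), (3, 0), (0, 4)]
c = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
x = convex_combination(pts, c)
assert x == (Fraction(3, 4), 1)
```

With r = 2 and coefficients λ and 1 − λ this reduces to the line-segment formula of Section 1.3.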
THEOREM 1.5. The set of all convex combinations of a finite set of points in Rⁿ is a convex set.

Proof. Exercise. △
DEFINITION. A point u in a convex set S is called an extreme point of S if it is not an interior point of any line segment in S. That is, u is an extreme point of S if there are no distinct points x₁ and x₂ in S such that

u = λx₁ + (1 − λ)x₂, 0 < λ < 1.

EXAMPLE 1. The only extreme points of the convex set in Figure 1.13 are A, B, C, D, and E (verify). △
FIGURE 1.13
EXAMPLE 2. The extreme points of the convex sets shown in Figures 1.9 and 1.10 are given in the following table:

Figure   Extreme points
1.9a     A, B, C, D
1.9b     A, B, C
1.9c     The entire edge of the ellipse
1.9d     A, B, C, D
1.10a    A
1.10b    O
1.10c    None
1.10d    O

△
THEOREM 1.6. Let S be a convex set in Rⁿ. A point u in S is an extreme point of S if and only if u is not a convex combination of other points of S.

Proof. Exercise. △
Since the set of all feasible solutions to a general linear programming problem is a convex polyhedron, it contains an infinite number of points. An optimal solution to the problem occurs at one of these points. But how do we find the right point among an infinite number? The following theorem, whose proof we do not give, shows that, if a linear programming problem has an optimal solution, then an optimal solution occurs at an extreme point. Although an optimal solution can also occur at a feasible solution that is not an extreme point, from the geometry of the situation it suffices to consider only extreme points. In fact, there are only a finite number of extreme points, but this number may be very large. In the next chapter we show how to search through the extreme points in an orderly manner to find an optimal solution after a relatively small number of steps.
THEOREM 1.7 (EXTREME POINT THEOREM). Let S be the set of feasible solutions to a general linear programming problem.

1. If S is nonempty and bounded, then an optimal solution to the problem exists and occurs at an extreme point.
2. If S is nonempty and not bounded and if an optimal solution to the problem exists, then an optimal solution occurs at an extreme point.
3. If an optimal solution to the problem does not exist, then either S is empty or S is unbounded. △
EXAMPLE 3. Consider the linear programming problem of Example 1 in Section 1.1:

Maximize z = 120x + 100y

subject to

2x + 2y ≤ 8
5x + 3y ≤ 15
x ≥ 0, y ≥ 0.

The convex set of all feasible solutions is shown as the shaded region in Figure 1.14.
The extreme points of the convex set S are (0, 0), (3, 0), (0, 4), and (3/2, 5/2). Since S is nonempty and bounded, the objective function attains its maximum at an extreme point of S (Theorem 1.7). We can find which extreme point is the optimal solution by evaluating the objective function at each extreme point. This evaluation is shown in Table 1.3.

FIGURE 1.14

TABLE 1.3
Extreme point (x, y)   Value of objective function z = 120x + 100y
(0, 0)                 0
(3, 0)                 360
(0, 4)                 400
(3/2, 5/2)             430

The maximum value of z occurs at the extreme point (3/2, 5/2). Thus, an optimal solution is x = 3/2 and y = 5/2. In terms of the model this means that the lumber mill should saw 1500 board feet of finish-grade lumber and 2500 board feet of construction-grade lumber per day. These amounts will yield the maximum profit of $430 per day. △
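The evaluation in Table 1.3 can be reproduced directly; exact fractions represent the extreme point (3/2, 5/2):

```python
from fractions import Fraction

def z(x, y):
    return 120 * x + 100 * y

extreme_points = [(0, 0), (3, 0), (0, 4),
                  (Fraction(3, 2), Fraction(5, 2))]
values = {p: z(*p) for p in extreme_points}

assert values[(0, 0)] == 0
assert values[(3, 0)] == 360
assert values[(0, 4)] == 400
assert values[(Fraction(3, 2), Fraction(5, 2))] == 430
assert max(values.values()) == 430
```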
Some linear programming problems have no solution.
EXAMPLE 4. Consider the linear programming problem:

Maximize z = 2x + 5y

subject to

2x + 3y ≥ 12
3x + 4y ≤ 12
x ≥ 0, y ≥ 0.

The convex set of all feasible solutions consists of the points that lie in all four half-spaces defined by the constraints. The sketch of these half-spaces in Figure 1.15 shows that there are no such points. The set of feasible solutions is empty. This situation will arise when conflicting constraints are put on a problem. The assumptions for the model must be changed to yield a nonempty set of feasible solutions. △
FIGURE 1.16
We have already seen in Example 7 in Section 1.3 that a linear
programming problem with an unbounded convex set of feasible solutions
may have no finite optimal value for the objective function. On the other
hand, a linear programming problem with an unbounded convex set of
feasible solutions may have an optimal solution.
EXAMPLE 5. In our previous discussion of this example (Example 8, Section 1.3), it seemed that the optimum value of the objective function was z = 5, which occurs when x = 0 and y = 1. We shall now show that this is the case.

We divide the set of feasible solutions into two regions with the arbitrarily chosen hyperplane 3x + 5y = 15, as shown in Figure 1.16. All the points in region II satisfy 15 ≤ 3x + 5y, and all the points in region I satisfy 15 ≥ 3x + 5y. Thus, we need consider only the points in region I to solve the minimization problem, since it is only at those points that the objective function takes on values smaller than 15. Region I is closed and bounded and has a finite number of extreme points: (0, 3), (0, 1), (2, 0), (5, 0). Consequently, Theorem 1.7 applies. By evaluating the objective function at these four points, we find that the minimum value is z = 5, attained at (0, 1). Note that other choices for the dividing hyperplane are possible. △
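The four evaluations in Example 5 can be carried out directly:

```python
# Extreme points of region I (the part of the feasible set with 3x + 5y <= 15).
corners = [(0, 3), (0, 1), (2, 0), (5, 0)]

def z(x, y):
    return 3 * x + 5 * y

values = [z(*p) for p in corners]
assert values == [15, 5, 6, 15]

# The minimum over region I, hence over the whole feasible set, is 5 at (0, 1).
assert min(values) == 5
assert corners[values.index(min(values))] == (0, 1)
```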
EXAMPLE 6. Consider the linear programming problem

Maximize z = 2x + 3y

subject to

x + 3y ≤ 9
2x + 3y ≤ 12
x ≥ 0, y ≥ 0.
TABLE 1.4
Extreme point   Value of z = 2x + 3y
(0, 0)          0
(0, 3)          9
(6, 0)          12
(3, 2)          12
The convex set of all feasible solutions is shown in Figure 1.17. The extreme points and corresponding values of the objective function are given in Table 1.4. We see that both (6, 0) and (3, 2) are optimal solutions to the problem. The line segment joining these points is

(x, y) = λ(6, 0) + (1 − λ)(3, 2)
       = (6λ, 0) + (3 − 3λ, 2 − 2λ)
       = (3 + 3λ, 2 − 2λ) for 0 ≤ λ ≤ 1.

For any point (x, y) on this line segment we have

z = 2x + 3y = 2(3 + 3λ) + 3(2 − 2λ)
  = 6 + 6λ + 6 − 6λ
  = 12.

Any point on this line segment is an optimal solution. △
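That the objective function stays constant along the whole segment can be verified by sampling points on it:

```python
from fractions import Fraction

def z(x, y):
    return 2 * x + 3 * y

# Points on the segment joining the optimal extreme points (6, 0) and (3, 2):
# (x, y) = lam*(6, 0) + (1 - lam)*(3, 2) = (3 + 3*lam, 2 - 2*lam).
segment_values = [z(3 + 3 * lam, 2 - 2 * lam)
                  for lam in (Fraction(k, 10) for k in range(11))]

# The objective function equals 12 at every sampled point.
assert segment_values == [12] * 11
```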
1.4 EXERCISES
In Exercises 1-12 (a) find the extreme points of the set of feasible solutions for the given linear programming problem and (b) find the optimal solution(s).
1. Maximize z = x + 2y
subject to
3x + y ≤ 6
3x + 4y ≤ 12
x ≥ 0, y ≥ 0.

2. Minimize z = 5x − 3y
subject to
x + 2y ≤ 4
x + 3y ≥ 6
x ≥ 0, y ≥ 0.

3. Maximize z = 3x + y
subject to
−3x + y ≥ 6
3x + 5y ≤ 15
x ≥ 0, y ≥ 0.

4. Maximize z = 2x + 3y
subject to
3x + y ≤ 6
x + y ≤ 4
x + 2y ≤ 6
x ≥ 0, y ≥ 0.

5. Minimize z = 3x + 5y
subject to the same constraints as those in Exercise 4.

6. Maximize z = ½x +
subject to
x + 3y ≤ 6
x + y ≥ 4
x ≥ 0, y ≥ 0.

7. Maximize z = 2x + 5y
subject to
2x + y ≥ 2
x + y ≤ 8
x + y ≥ 3
2x + y ≤ 12
x ≥ 0, y ≥ 0.

8. Maximize z = 2x₁ + 4x₂
subject to
5x₁ + 3x₂ + 5x₃ ≤ 15
10x₁ + 8x₂ + 15x₃ ≤ 40
x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0.

9. Maximize z = 2x₁ + 4x₂ + 3x₃
subject to
x₁ + x₂ + x₃ ≤ 12
x₁ + 3x₂ + 3x₃ ≤ 24
3x₁ + 6x₂ + 4x₃ ≤ 90
x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0.

10. Minimize z = 2x₁ + 3x₂ + x₃
subject to the same constraints as those in Exercise 9.

11. Maximize z = 5x₁ + 2x₂ + 3x₃
subject to
x₁ + x₂ + x₃ = 1
2x₁ + 5x₂ + 3x₃ ≤ 4
4x₁ + x₂ + 3x₃ ≤ 2
x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0.

12. Minimize z = 2x₁ + x₃
subject to
x₁ + x₂ + x₃ = 1
2x₁ + x₂ + 2x₃ ≥ 3
x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0.

13. Prove Theorem 1.5.

14. Prove Theorem 1.6.
15. Show that a set S in Rⁿ is convex if and only if every convex combination of a finite number of points in S is in S.
16. Show that if the optimal value of the objective function of a linear program-
ming problem is attained at several extreme points, then it is also attained at
any convex combination of these extreme points.
1.5 BASIC SOLUTIONS
In this section we connect the geometric ideas of Sections 1.3 and 1.4 with the algebraic notions developed in Section 1.2. We have already seen the important role played by the extreme points of the set of feasible solutions in obtaining an optimal solution to a linear programming problem. However, the extreme points are difficult to compute geometrically when there are more than three variables in the problem. In this section we give an algebraic description of extreme points that will facilitate their computation. This description uses the concept of a basic solution to a linear programming problem. To lay the foundation for the definition of a basic solution, we shall now prove two very important general theorems about linear programming problems in canonical form.

Consider the linear programming problem in canonical form

Maximize z = cᵀx  (1)

subject to

Ax = b  (2)
x ≥ 0,  (3)

where A is an m × s matrix, c ∈ Rˢ, x ∈ Rˢ, and b ∈ Rᵐ. Let the columns of A be denoted by A₁, A₂, ..., Aₛ. We can then write (2) as

x₁A₁ + x₂A₂ + ··· + xₛAₛ = b.  (4)
We make two assumptions about the constraint matrix A. We assume that m < s and that there are m columns of A that are linearly independent. That is, the rank of A is m. These assumptions are true for a linear programming problem in canonical form that arose from a problem in standard form as given in Equations (4), (5), and (6) in Section 1.2.

This set of m columns, and indeed any set of m linearly independent columns of A, forms a basis for Rᵐ. We can always renumber the columns of A (by reordering the components of x) so that the last m columns of A are linearly independent. Let S be the convex set of all feasible solutions to the problem determined by (1), (2), and (3).
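To make these assumptions concrete, the sketch below uses, as a hypothetical example, the constraints of Example 3 of Section 1.4 put into canonical form with slack variables x₃ and x₄ (the slack-variable construction of Section 1.2), so that m = 2 and s = 4. Choosing two linearly independent columns of A and setting the other variables to zero solves for the remaining components:

```python
from fractions import Fraction

# Canonical form of the constraints 2x1 + 2x2 <= 8, 5x1 + 3x2 <= 15:
#   2x1 + 2x2 + x3      = 8
#   5x1 + 3x2      + x4 = 15
A = [[2, 2, 1, 0],
     [5, 3, 0, 1]]
b = [8, 15]

def basic_solution(cols):
    """Set the two variables outside `cols` to 0 and solve the remaining
    2x2 system by Cramer's rule; return None if the columns are dependent."""
    i, j = cols
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        return None
    xi = Fraction(b[0] * A[1][j] - A[0][j] * b[1], det)
    xj = Fraction(A[0][i] * b[1] - b[0] * A[1][i], det)
    x = [Fraction(0)] * 4
    x[i], x[j] = xi, xj
    return x

# The last two columns (the slack columns) are linearly independent, and
# the resulting solution is feasible: all components are nonnegative.
assert basic_solution((2, 3)) == [0, 0, 8, 15]

# Columns 1 and 4 also solve Ax = b, but the result is NOT feasible: x4 < 0.
assert basic_solution((0, 3)) == [4, 0, 0, -5]

# Columns 1 and 2 recover the optimal extreme point (3/2, 5/2) of Example 3.
assert basic_solution((0, 1)) == [Fraction(3, 2), Fraction(5, 2), 0, 0]
```

The correspondence suggested by the last line, between such solutions and extreme points, is exactly what Theorems 1.8 and 1.9 establish.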
THEOREM 1.8. Suppose that the last m columns of A, which we denote by A′₁, A′₂, ..., A′ₘ, are linearly independent and suppose that

x′₁A′₁ + x′₂A′₂ + ··· + x′ₘA′ₘ = b,  (5)

where x′ᵢ ≥ 0 for i = 1, 2, ..., m. Then the point

x = (0, 0, ..., 0, x′₁, x′₂, ..., x′ₘ)

is an extreme point of S.
Proof. We assumed x >= 0 in the statement of the theorem. Equation
(5) represents Ax = b, since the first s - m components of x are zero.
Thus, x is a feasible solution to the linear programming problem given by
(1), (2), and (3). Assume that x is not an extreme point of S. Then x lies in
the interior of a line segment in S. That is, there are points v and w in S,
both different from x, and a number λ, 0 < λ < 1, such that

    x = λv + (1 - λ)w.                                  (6)

Now

    v = (v_1, v_2, ..., v_{s-m}, v'_1, v'_2, ..., v'_m)

and

    w = (w_1, w_2, ..., w_{s-m}, w'_1, w'_2, ..., w'_m),

where all the components of v and w are nonnegative, since v and w are
feasible solutions. Substituting the expressions for x, v, and w into (6), we
have

    0 = λv_i + (1 - λ)w_i,      1 <= i <= s - m
                                                        (7)
    x'_j = λv'_j + (1 - λ)w'_j,  1 <= j <= m.

Since all the terms in (7) are nonnegative and λ and 1 - λ are positive, we
conclude that v_i = 0 and w_i = 0 for i = 1, 2, ..., s - m. Since v is a
feasible solution, we know that Av = b and, because the first s - m
components of v are zero, this equation can be written as

    v'_1 A'_1 + v'_2 A'_2 + ... + v'_m A'_m = b.        (8)

If we now subtract Equation (8) from Equation (5), we have

    (x'_1 - v'_1)A'_1 + (x'_2 - v'_2)A'_2 + ... + (x'_m - v'_m)A'_m = 0.

Since we assumed that A'_1, A'_2, ..., A'_m are linearly independent, we
conclude that

    x'_i = v'_i    for 1 <= i <= m,

and consequently that x = v. But we had assumed x ≠ v. This contradic-
tion implies that our assumption that x is not an extreme point of S is
false. Thus, x is an extreme point of S, as we wanted to show. ▲
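The construction in Theorem 1.8 can be illustrated numerically. In the sketch below the data are invented for illustration (they are not from the text): m = 2, s = 4, and the last two columns of A form the 2 x 2 identity, so equation (5) reads simply x'_1 = b_1, x'_2 = b_2.

```python
# Invented data for illustration: m = 2 equations, s = 4 unknowns.
# The last m = 2 columns of A are the identity columns, so they are
# linearly independent and equation (5) reads x'_1 = b_1, x'_2 = b_2.
A = [[1, 2, 1, 0],
     [3, 1, 0, 1]]
b = [2, 5]

x_prime = b[:]                          # solve (5) against the identity basis
assert all(xi >= 0 for xi in x_prime)   # hypothesis of Theorem 1.8

x = [0, 0] + x_prime                    # pad the s - m nonbasic positions with zeros
# Check feasibility: Ax = b and x >= 0, so x lies in S; by Theorem 1.8
# it is in fact an extreme point of S.
Ax = [sum(A[i][j] * x[j] for j in range(4)) for i in range(2)]
print(x, Ax == b)                       # [0, 0, 2, 5] True
```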
THEOREM 1.9. If x = (x_1, x_2, ..., x_s) is an extreme point of S, then the
columns of A that correspond to positive x_j form a linearly independent set of
vectors in R^m.
Proof. To simplify the notation, renumber the columns of A and the
components of x, if necessary, so that the last k components, denoted by
x'_1, x'_2, ..., x'_k, are positive. Thus, Equation (4) can be written as

    x'_1 A'_1 + x'_2 A'_2 + ... + x'_k A'_k = b.        (9)

We must show that A'_1, A'_2, ..., A'_k are linearly independent. Suppose they
are linearly dependent. This means that there are numbers c_j, 1 <= j <= k,
not all of which are zero, such that

    c_1 A'_1 + c_2 A'_2 + ... + c_k A'_k = 0.           (10)

Say that c_t ≠ 0. Multiply Equation (10) by a positive scalar d, and first add
the resulting equation to Equation (9), getting Equation (11), and second
subtract it from Equation (9), getting Equation (12). We now have

    (x'_1 + d c_1)A'_1 + (x'_2 + d c_2)A'_2 + ... + (x'_k + d c_k)A'_k = b   (11)
    (x'_1 - d c_1)A'_1 + (x'_2 - d c_2)A'_2 + ... + (x'_k - d c_k)A'_k = b.  (12)

Now consider the points in R^s,

    v = (0, 0, ..., 0, x'_1 + d c_1, x'_2 + d c_2, ..., x'_k + d c_k)

and

    w = (0, 0, ..., 0, x'_1 - d c_1, x'_2 - d c_2, ..., x'_k - d c_k).

Since d is any positive scalar, we may choose it so that

    0 < d < min { x'_j / |c_j| : c_j ≠ 0 }.

With this choice of d, the last k coordinates of both v and w are positive.
This fact together with Equations (11) and (12) implies that v and w are
feasible solutions. But we also have

    x = (1/2)v + (1/2)w,

contradicting the hypothesis that x is an extreme point of S. Thus our
assumption that the last k columns of A are linearly dependent is false;
they are linearly independent. ▲
COROLLARY 1.1. If x is an extreme point and x_{i1}, ..., x_{ir} are the r
positive components of x, then r <= m, and the set of columns A_{i1}, ..., A_{ir} can
be extended to a set of m linearly independent vectors in R^m by adjoining a
suitably chosen set of m - r columns of A.

Proof. Exercise. ▲
THEOREM 1.10. At most m components of any extreme point of S can be
positive. The rest must be zero.
Proof. Theorem 1.9 says that the columns of A corresponding to the
positive components of an extreme point x of the set S of feasible
solutions are linearly independent vectors in R^m. But there can be no
more than m linearly independent vectors in R^m. Therefore, at most m of
the components of x are nonzero. ▲
An important feature of the canonical form of a linear programming
problem is that the constraints Ax = b form a system of m equations in s
unknowns. Our assumption that there are m linearly independent column
vectors means that the rank of A is m, so the rows of A are also linearly
independent. That is, redundant equations do not occur in the system of
constraints. In Theorems 1.8 and 1.9 we showed the relationships between
the extreme points of the set S of feasible solutions and the linearly
independent columns of A. We now use information about solutions to a
system of equations to describe further the points of S. Note that S is just
the set of solutions to Ax = b with nonnegative components (x >= 0).
Consider a system of m equations in s unknowns (m < s) and write it
in matrix form as Ax = b. Assume that at least one set of m columns of A
is linearly independent. Choosing any set T of m linearly independent
columns of A (which is choosing a basis for R^m), set the s - m variables
corresponding to the remaining columns equal to zero. The equation
Ax = b may be written as

    x_1 A_1 + x_2 A_2 + ... + x_s A_s = b,              (13)

where A_i is the ith column of A. But we have set s - m of the variables
equal to zero. Let i_1, i_2, ..., i_m be the indices of the variables that were
not set to zero. They are also the indices of the columns of A in the set T.
Consequently, (13) reduces to

    x_{i1} A_{i1} + x_{i2} A_{i2} + ... + x_{im} A_{im} = b,

which is a system of m equations in m unknowns and has a unique
solution. (Why?) The values of the m variables obtained from solving this
system, along with the s - m zeros, form a vector x that is called a basic
solution to Ax = b.
EXAMPLE 1. Consider the system of three equations in six unknowns

                              [x1]
    [ 1  0  1  0  1  0 ]      [x2]      [b1]
    [ 0 -1 -1  0 -1 -1 ]      [x3]   =  [b2].
    [ 1  2  2  1  1  1 ]      [x4]      [b3]
                              [x5]
                              [x6]

Setting x2 = x3 = x5 = 0, we get the system of three equations in three
unknowns given by

    [ 1  0  0 ] [x1]     [b1]
    [ 0  0 -1 ] [x4]  =  [b2].
    [ 1  1  1 ] [x6]     [b3]

The columns of this coefficient matrix are linearly independent, and this
system has the solution x1 = b1, x4 = b2 + b3 - b1, x6 = -b2. Conse-
quently, a basic solution to the original system is

    x = (b1, 0, 0, b2 + b3 - b1, 0, -b2).

On the other hand, if we set x1 = x3 = x5 = 0, we obtain the system

    [ 0  0  0 ] [x2]     [b1]
    [-1  0 -1 ] [x4]  =  [b2].
    [ 2  1  1 ] [x6]     [b3]

Here the columns of the coefficient matrix are not linearly independent. In
fact, column 1 is the sum of columns 2 and 3. This system cannot be solved
if b1 ≠ 0. Consequently, this choice of variables does not lead to a basic
solution. △
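The computation in Example 1 can be checked mechanically once a concrete right-hand side is fixed. The sketch below is our own illustration (the helper function and the choice b = (1, 2, 3) are not from the text): it solves for the chosen basic variables by Gaussian elimination and pads the nonbasic positions with zeros.

```python
def basic_solution(A, b, basic_cols):
    """Solve for the variables in basic_cols by Gaussian elimination;
    all other variables are set to zero. Returns None if the chosen
    columns are linearly dependent (no basic solution)."""
    m = len(A)
    # Build the m x m basis matrix from the chosen columns, augmented with b.
    M = [[float(A[i][j]) for j in basic_cols] + [float(b[i])]
         for i in range(m)]
    for col in range(m):                    # forward elimination with row swaps
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            return None                     # dependent columns
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * len(A[0])
    for i, j in enumerate(basic_cols):
        x[j] = M[i][-1] / M[i][i]
    return x

A = [[1, 0, 1, 0, 1, 0],
     [0, -1, -1, 0, -1, -1],
     [1, 2, 2, 1, 1, 1]]
b = [1, 2, 3]    # a concrete choice of (b1, b2, b3), ours for illustration

# x2 = x3 = x5 = 0: basic columns 1, 4, 6 (0-indexed: 0, 3, 5).
print(basic_solution(A, b, (0, 3, 5)))   # [1.0, 0.0, 0.0, 4.0, 0.0, -2.0]
# x1 = x3 = x5 = 0: columns 2, 4, 6 are dependent -- no basic solution.
print(basic_solution(A, b, (1, 3, 5)))   # None
```

The first output matches the formula x = (b1, 0, 0, b2 + b3 - b1, 0, -b2); note x6 < 0, so this basic solution is not feasible.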
In any basic solution, the s - m variables that are set equal to zero are
called nonbasic variables, and the m variables solved for are called basic
variables. Although the term basic solution appears in all the literature
describing linear programming, it can be misleading. A basic solution is a
solution to the system Ax = b; it does not necessarily satisfy x >= 0, and
therefore it is not necessarily a feasible solution to the linear programming
problem given by (1), (2), and (3).

DEFINITION. A basic feasible solution to the linear programming prob-
lem given by (1), (2), and (3) is a basic solution that is also feasible.
THEOREM 1.11. For the linear programming problem determined by (1),
(2), and (3), every basic feasible solution is an extreme point, and, conversely,
every extreme point is a basic feasible solution.

Proof. Exercise. ▲

THEOREM 1.12. The problem determined by (1), (2), and (3) has a finite
number of basic feasible solutions.
Proof. The number of basic solutions to the problem is at most

    ( s )        s!         (  s  )
    (   )  =  ---------  =  (     ),
    ( m )     m!(s - m)!    ( s-m )

because s - m variables must be chosen to be set to zero, out of a total of
s variables. The number of basic feasible solutions may be smaller than
the number of basic solutions, since not all basic solutions need to be
feasible. ▲
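The count in Theorem 1.12 is easy to evaluate. For instance, for a system like the one in Example 1 (m = 3 equations, s = 6 unknowns) there are at most C(6, 3) = 20 candidate choices of basic variables; the small check below, using Python's math.comb, is our own illustration.

```python
from math import comb, factorial

s, m = 6, 3   # s unknowns, m equations, as in Example 1

count = comb(s, m)    # C(s, m) = s! / (m! (s - m)!)
print(count)          # 20

# The two ways of counting agree: choosing the m basic variables is the
# same as choosing the s - m nonbasic ones.
assert count == factorial(s) // (factorial(m) * factorial(s - m))
assert comb(s, m) == comb(s, s - m)
```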
We now examine the relationship between the set S of feasible solu-
tions to a standard linear programming problem given in Equations (4),
(5), and (6) in Section 1.2 and the set S' of feasible solutions to the
associated canonical linear programming problem given in Equations (12),
(13), and (14) in Section 1.2. We have already discussed the method of
adding slack variables to go from a point in S to a point in S'. Conversely,
we truncated variables to move from S' to S. More specifically, we have
the following theorem, whose proof we leave as an exercise.

THEOREM 1.13. Every extreme point of S yields an extreme point of S'
when slack variables are added. Conversely, every extreme point of S', when
truncated, yields an extreme point of S.

Proof. Exercise. ▲

THEOREM 1.14. The convex set S of all feasible solutions to a linear
programming problem in standard form has a finite number of extreme points.

Proof. Exercise. ▲
By combining the Extreme Point Theorem (Theorem 1.7) and Theorem
1.14 we can give a procedure for solving a standard linear programming
problem given by Equations (4), (5), and (6) in Section 1.2. First, set up the
associated canonical form of the problem. Then find all basic solutions and
eliminate those that are not feasible. Find those that are optimal among
the basic feasible solutions. Since the objective function does not change
between the standard and canonical forms of the problem, any optimal
solution to the canonical form, found as described above, is an optimal
solution to the standard problem.

In the situation under consideration, the number of basic solutions
to be examined is no more than

    ( m + n )
    (       ).
    (   n   )

Although this number is finite, it is still too large for actual problems. For
example, a moderate-size problem with m = 200 and n = 300 would have
about 10^144 basic solutions.
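The 10^144 figure can be confirmed directly with exact integer arithmetic (the check itself is ours):

```python
from math import comb, log10

m, n = 200, 300             # constraints and variables, as in the text's example
count = comb(m + n, n)      # C(500, 300) = C(500, 200) basic solutions at most
print(int(log10(count)))    # 144, i.e. about 10^144 candidate basic solutions
```

Examining that many candidates one by one is hopeless, which is the motivation for the simplex method introduced in Section 2.1.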
EXAMPLE 2. Consider the linear programming problem given in Exam-
ple 3 in Section 1.2. In this example we can select two of the four variables
x, y, u, v as nonbasic variables by setting them equal to zero and then
solve for the basic variables. If in

                     [x]
    [ 2  2  1  0 ]   [y]     [ 8 ]
    [ 5  3  0  1 ]   [u]  =  [ 15]
                     [v]

we set

    x = y = 0  (nonbasic variables),

then

    u = 8 and v = 15  (basic variables).

The vector [0 0 8 15]^T is a basic feasible solution to the canonical
form of the problem and hence an extreme point of S'. By truncating the
slack variables we get [0 0]^T, which is an extreme point of S and a
feasible solution to the standard form of the problem.

If instead we set

    x = v = 0  (nonbasic variables),

then

    y = 5 and u = -2  (basic variables).

The vector [0 5 -2 0]^T is a basic solution, but it is not feasible, since
u is negative.

In Table 1.5 we list all the basic solutions, indicate whether they are
feasible, and give the truncated vectors. The student should locate these
truncated vectors on Figure 1.14. Once we discard the basic solutions that
are not feasible, we select a basic feasible solution for which the objective
function is a maximum. Thus, we again obtain the optimal solution

    x = 3/2,  y = 5/2,  u = 0,  v = 0.

TABLE 1.5

                                          Value of          Truncated
    x    y    u    v   Type of solution   z = 120x + 100y   vector
    0    0    8    15  Basic feasible       0               (0, 0)
    0    4    0    3   Basic feasible     400               (0, 4)
    0    5   -2    0   Basic, not feasible  --              (0, 5)
    4    0    0   -5   Basic, not feasible  --              (4, 0)
    3    0    2    0   Basic feasible     360               (3, 0)
    3/2  5/2  0    0   Basic feasible     430               (3/2, 5/2)
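Table 1.5 can be reproduced by brute force: for each of the C(4, 2) = 6 choices of two basic variables, solve the resulting 2 x 2 system and test feasibility. The enumeration code below is our own sketch (using Cramer's rule for the 2 x 2 solves); the data are those of Example 2.

```python
from itertools import combinations

# Canonical form of Example 2: variables (x, y, u, v).
A = [[2, 2, 1, 0],
     [5, 3, 0, 1]]
b = [8, 15]
c = [120, 100, 0, 0]          # z = 120x + 100y (slack variables cost nothing)

best = None
for i, j in combinations(range(4), 2):        # indices of the two basic variables
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        continue                              # dependent columns: no basic solution
    x = [0.0] * 4                             # nonbasic variables stay zero
    x[i] = (b[0] * A[1][j] - b[1] * A[0][j]) / det   # Cramer's rule
    x[j] = (A[0][i] * b[1] - A[1][i] * b[0]) / det
    feasible = all(v >= 0 for v in x)
    z = sum(cv * xv for cv, xv in zip(c, x))
    print(x, "feasible" if feasible else "not feasible")
    if feasible and (best is None or z > best[1]):
        best = (x, z)

print(best)    # ([1.5, 2.5, 0.0, 0.0], 430.0)
```

The six printed vectors are exactly the rows of Table 1.5, and the best feasible one recovers the optimal solution x = 3/2, y = 5/2 with z = 430.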
1.5 EXERCISES
1. Suppose the canonical form of a linear programming problem is given by the
   constraint matrix A and resource vector b, where

       A = [ 2  1  0  0  0 ]   and   b = [ 3 ]
           [ 4  0  3  0  1 ]             [ 6 ].

   Determine which of the following points is
   (i) a feasible solution to the linear programming problem.
   (ii) an extreme point of the set of feasible solutions.
   (iii) a basic solution.
   (iv) a basic feasible solution.
   For each basic feasible solution x given below, list the basic variables.
       (a)-(e): [five candidate vectors in R^5; illegible in the source scan]
In Exercises 2 and 3, set up a linear programming model for the situation
described. Sketch the set of feasible solutions and find an optimal solution by
examining the extreme points.
2. The Savory Potato Chip Company makes pizza-flavored and chili-flavored
potato chips. These chips must go through three main processes: frying,
flavoring, and packing. Each kilogram of pizza-flavored chips takes 3 min to fry,
5 min to flavor, and 2 min to pack. Each kilogram of chili-flavored chips takes 3
min to fry, 4 min to flavor, and 3 min to pack. The net profit on each kilogram
of pizza chips is $0.12, whereas the net profit on each kilogram of chili chips is
$0.10. The fryer is available 4 hr each day, the flavorer is available 8 hr each
day, and the packer is available 6 hr each day. Maximize the net profit with
your model.
3. Sugary Donut Bakers, Inc., is known for its excellent glazed doughnuts. The
firm also bakes doughnuts, which are then dipped in powdered sugar. It makes
a profit of $0.07 per glazed doughnut and $0.05 per powdered sugar doughnut.
The three main operations are baking, dipping (for the powdered sugar
doughnuts only), and glazing (for the glazed doughnuts only). Each day the
plant has the capacity to bake 1400 doughnuts, dip 1200 doughnuts, and glaze
1000 doughnuts. The manager has instructed that at least 600 glazed dough-
nuts must be made each day. Maximize the total profit with your model.
4. For Exercise 2, write the linear programming problem in canonical form,
compute the values of the slack variables for an optimal solution, and give a
physical interpretation for these values. Also identify the basic variables of the
optimal solution.
5. For Exercise 3, write the linear programming problem in canonical form,
compute the values of the slack variables for an optimal solution, and give a
physical interpretation for these values. Also identify the basic variables of the
optimal solution.
6. Consider the system of equations Ax = b, where

       A = [ 2  3  4  0  4 ]   and   b = [ 2 ]
           [ 1  0  0 -2  1 ]             [ 0 ].

   Determine whether each of the following 5-tuples is a basic solution to the
   system.
   (a) (1, 0, 1, 0, 0)      (b) (0, 2, -1, 0, 0)
   (c) (2, -2, 3, 0, -2)    (d) (0, 0, 1/2, 0, 0)
7. Consider the system of equations Ax = b, where

       A = [ 2  3  1  0  0 ]             [ 1 ]
           [-1  1  0  2  1 ]   and   b = [ 1 ]
           [ 0  6  1  0  3 ]             [ 4 ].

   Determine which of the following 5-tuples are basic solutions to the system.
   Give reasons.
   (a) (1, 0, -1, 1, 0)
   (b) (0, 2, -5, 0, -1)
   (c) (0, 0, 1, 0, 1)
8. Consider the linear programming problem

       Maximize z = 3x + 2y

   subject to

       2x - y <= 6
       2x + y <= 10
       x >= 0, y >= 0.

   (a) Transform this problem to a problem in canonical form.
   (b) For each extreme point of the new problem, identify the basic variables.
   (c) Solve the problem geometrically.
9. Consider the linear programming problem

       Maximize z = 4x1 + 2x2 + 7x3

   subject to

       2x1 -  x2 + 4x3 <= 18
       4x1 + 2x2 + 5x3 <= 10
       x1 >= 0, x2 >= 0, x3 >= 0.

   (a) Transform this problem to a problem in canonical form.
   (b) For each extreme point of the new problem, identify the basic variables.
   (c) Which of the extreme points are optimal solutions to the problem?
10. Consider the linear programming problem in standard form

        Maximize z = c^T x

    subject to

        Ax <= b
        x >= 0.

    Show that the constraints Ax <= b may be written as

        (i)  [A  I] [ x  ]  =  b
                    [ x' ]

    or as

        (ii)  Ax + Ix' = b,

    where x' is a vector of slack variables.
11. Prove Corollary 1.1. (Hint: Use Theorem 0.13.)
12. Prove for a linear programming problem in canonical form that a point in the
    set of feasible solutions is an extreme point if and only if it is a basic feasible
    solution (Theorem 1.11). (Hint: Use Theorem 0.13.)
13. Prove that the set of feasible solutions to a linear programming problem in
    standard form has a finite number of extreme points (Theorem 1.14).
Further Reading
Chvátal, Vašek. Linear Programming. Freeman, New York, 1980.
Grünbaum, B. Convex Polytopes. Wiley-Interscience, New York, 1967.
Hadley, G. Linear Algebra. Addison-Wesley, Reading, MA, 1961.
Murty, Katta G. Linear Programming. Wiley, New York, 1983.
Taha, Hamdy A. Operations Research: An Introduction, third ed., Macmillan, New York, 1982.