
An Additive Algorithm for Solving Linear Programs with Zero-One Variables

Author(s): Egon Balas, Fred Glover, Stanley Zionts


Source: Operations Research, Vol. 13, No. 4 (Jul. - Aug., 1965), pp. 517-549
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/167850

AN ADDITIVE ALGORITHM FOR SOLVING LINEAR PROGRAMS WITH ZERO-ONE VARIABLES†

Egon Balas

Centre of Mathematical Statistics, Rumanian Academy, Bucharest

(Received March 2, 1964)

An algorithm is proposed for solving linear programs with variables constrained to take only one of the values 0 or 1. It starts by setting all the n variables equal to 0, and consists of a systematic procedure of successively assigning to certain variables the value 1, in such a way that after trying a (small) part of all the 2^n possible combinations, one obtains either an optimal solution, or evidence of the fact that no feasible solution exists. The only operations required under the algorithm are additions and subtractions; thus round-off errors are excluded. Problems involving up to 15 variables can be solved with this algorithm by hand in not more than 3-4 hours. An extension of the algorithm to integer linear programming and to nonlinear programming is available, but not dealt with in this article.

IT IS well known that important classes of economic (and not only economic) problems find their mathematical models in linear programs with integer variables. Prominent among these problems are those that correspond to linear programs with variables taking only one of the values 0 or 1. A rapidly growing literature on the subject describes many practical instances of such problems in a large variety of fields.‡
At present, several methods are available for solving linear programs of the type discussed here.§ Best known among them are R. E. Gomory's algorithms[13,14] for solving linear programs with integer variables. They use the dual simplex method and impose the integer conditions by adding
† Paper presented at the International Symposium on Mathematical Programming, July 1964, London.
‡ See references 1-4; reference 5 (pp. 650-656, 695-700); reference 6; reference 7 (pp. 194-202); reference 8 (pp. 535-550); references 9-11.
§ See references 12-27; reference 5 (pp. 700-712); reference 7 (pp. 160-194); reference 8 (pp. 514-535); reference 28 (pp. 190-205).

automatically generated new constraints to the original constraint set. These are satisfied by any integer solution to the latter, but not by the solution reached at the stage of their introduction. The cutting-plane approach has also been used by E. M. L. Beale[17] and R. E. Gomory[14] to develop algorithms for solving the mixed case, when some but not all of the variables are bound to be integers. The procedures of references 6 and 24 also belong to this family.
Another type of algorithm for integer (and mixed integer) linear programs, developed by A. H. Land and A. G. Doig,[18] also starts with a noninteger optimal solution and then finds the optimal integer (or mixed-integer) solution through systematic parallel shifts of the objective-function hyperplane. The methods of references 21, 22, 25, and 26 also come under this heading.
A different approach to the problem was initiated by R. Fortet[29] on the lines of Boolean algebra, and continued by P. Camion[30] with the introduction of Galois fields. On these lines, P. L. Ivanescu[23] developed an algorithm for solving discrete polynomial programs.
The algorithm proposed in this paper† represents a combinatorial approach to the problem of solving discrete-variable linear programs in general, and linear programs with zero-one variables in particular. As an abbreviated enumeration procedure, it is kindred in conception to combinatorial methods developed in related areas (see, for instance, references 33 and 34). This algorithm is first of all a direct method for solving linear programs with zero-one variables, and for this particular type of problem it seems to work very efficiently. It has also been extended[32] to linear programs with integer variables, and, as a method of approximation, to nonlinear programs of a more general type than those usually dealt with.

BASIC IDEAS AND OUTLINE OF THE ADDITIVE ALGORITHM


THE GENERAL form of a linear program with zero-one variables may be stated as follows:
Find x' minimizing (maximizing)
z' = c'x', (1')
subject to A'x' ≷ b', (2')
x_j' = 0 or 1. (j ∈ N) (3')
† An initial version of the additive algorithm was presented at the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963 (see reference 9). A brief note on this algorithm was also published in reference 31, and another one on its extensions in reference 32. For an interesting graph-theoretical interpretation of the additive algorithm and a comparison with some later developments see reference 35.

where x' = (x_j') is an n-component column vector, c' = (c_j') is a given n-component row vector, A' = (a_ij') is a given q×n matrix, and b' = (b_i') is a given q-component column vector, while {1, ..., q} = Q, {1, ..., n} = N.
However, we wish to consider the problem in a slightly different form, namely with all the constraints being inequalities of the same form ≤, and all the coefficients of the objective function (to be minimized) being nonnegative. Any problem of the type (1'), (2'), (3') can be brought to this form by the following operations:
(a). Replacing all equations by two inequalities.
(b). Multiplying by -1 all inequalities of the form ≥.
(c). Setting
x_j = x_j' for c_j' > 0 when minimizing, and for c_j' < 0 when maximizing;
x_j = 1 - x_j' for c_j' < 0 when minimizing, and for c_j' > 0 when maximizing.
Following this and the introduction of an m-component nonnegative slack vector y, the problem may be restated thus:
Find x such that
z = cx = min, (1)
subject to Ax + y = b, (2)
x_j = 0 or 1, (j ∈ N) (3)
y ≥ 0, (4)
where c ≥ 0 and where x, c, A, and b are to be obtained from x', c', A', and b' through the above-described transformation. The dimension of x and c remains n. Let b be m-dimensional, with {1, ..., m} = M (m ≥ q).
The problem (1), (2), (3), and (4) will be labeled P.
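In modern terms the reduction to the form P is purely mechanical. The following minimal Python sketch (ours, with illustrative names; Python is used for all the sketches in this article) performs step (c) on a minimization problem whose constraints have already been brought to the form A'x' ≤ b' by steps (a) and (b):

    def standardize(c_prime, A_prime, b_prime):
        """Substitute x_j = 1 - x_j' wherever c_j' < 0, so that the new
        objective coefficients are all nonnegative (step (c) above).
        Steps (a) and (b) -- splitting equations into two inequalities
        and multiplying >= rows by -1 -- are assumed already done."""
        n, m = len(c_prime), len(b_prime)
        flipped = [j for j in range(n) if c_prime[j] < 0]
        c = [abs(cj) for cj in c_prime]
        A = [row[:] for row in A_prime]
        b = b_prime[:]
        for j in flipped:
            for i in range(m):
                b[i] -= A[i][j]     # a_ij * (1 - x_j): constant moves to b
                A[i][j] = -A[i][j]  # and the coefficient changes sign
        return c, A, b, flipped     # x_j' = 1 - x_j for j in flipped

An optimal x of P is translated back by setting x_j' = 1 - x_j for the flipped indices and x_j' = x_j for the rest; the optimal value of (1') then differs from that of (1) by the constant sum of the c_j' over the flipped indices.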
Let a_j stand for the jth column of A.
An (n+m)-dimensional vector u = (x, y) will be called a solution if it satisfies (2) and (3); a feasible solution, if it satisfies (2), (3), and (4); and an optimal (feasible) solution, if it satisfies (1), (2), (3), and (4).
Let us denote by P^s the linear program defined by (1), (2), and (4), and the constraints
x_j ≥ 0, (j ∈ N) (3a)
x_j = 1, (j ∈ J_s) (3b_s)
where J_s is a subset of N. Let J_0 = ∅, and thus P^0 be the problem defined by (1), (2), (3a), and (4).
The fundamental idea underlying our algorithm runs on the following lines. We start with the ordinary linear program P^0 with u^0 = (x^0, y^0) = (0, b), which is obviously a dual-feasible solution to P^0 (because c ≥ 0). The corresponding basis consists of the unit matrix I^(m) = (e_i) (i = 1, ..., m), e_i being the ith unit vector. For some i_1 such that y_{i_1}^0 < 0 we choose, according to a certain rule, a vector a_{j_1} such that a_{i_1 j_1} < 0, to be introduced into the basis. But instead of introducing a_{j_1} in place of a vector e_i in the basis, as would be the case in the dual simplex method, we add to P^0 the constraint x_{j_1} = 1, in the slightly modified form -x_{j_1} + y_{m+1} = -1, where y_{m+1} is an artificial variable. Thus we obtain the problem P^1 as defined above, with J_1 = {j_1}, i.e., the problem consisting of (1), (2), (3a), (4), and the additional constraint
x_{j_1} = 1. (3b_1)
It is easy to see that the set x_j = 0 (j ∈ N), y_i = b_i (i ∈ M), y_{m+1} = -1, is a dual-feasible solution to P^1. In the extended basis I^(m+1) = (e_i) (i = 1, ..., m+1), the (m+1)st unit vector e_{m+1} corresponds to y_{m+1}. It is in the place of this unit vector e_{m+1} that we introduce a_{j_1}, and thus x_{j_1} takes the value 1 in the new solution to P^1 that obviously remains dual-feasible. As the artificial variable y_{m+1}, which becomes 0, does not play any role henceforth, it may be abandoned and the new solution may be written u^1 = (x^1, y^1) = (x_1^1, ..., x_n^1, y_1^1, ..., y_m^1).

Given the particularly simple form of the additional constraint, the pivot around the element -1 consists in fact of the algebraic addition b - a_{j_1}. Thus, the new dual-feasible solution u^1 = (x^1, y^1) to P^1 is
x_{j_1}^1 = 1, x_j^1 = 0 (j ∈ N - {j_1}), y_i^1 = b_i - a_{i j_1}. (i ∈ M)

As the operations to be carried out at each iteration consist solely of such additions and subtractions, we call the algorithm additive.
If the solution vector u^1 still has negative components, then according to the above-mentioned rules we choose another vector a_{j_2} to be introduced into the basis, and we add to P^1 the new constraint x_{j_2} = 1, in the form -x_{j_2} + y_{m+2} = -1, y_{m+2} being another artificial variable. This yields the problem P^2, consisting of (1), (2), (3a), and (4) and the additional constraint set (3b_2), made up of x_{j_1} = 1, x_{j_2} = 1. The set x_{j_1} = 1, x_j = 0 [j ∈ (N - {j_1})], y_i = b_i - a_{i j_1} (i ∈ M), y_{m+2} = -1, is a dual-feasible solution to P^2. The vector a_{j_2} is now introduced in place of e_{m+2}, and x_{j_2} takes the value 1 in the new solution to P^2, which obviously remains dual-feasible.
As the artificial variable y_{m+2} does not play any role henceforth (remaining all the time equal to 0), it may be dropped (as was the case for y_{m+1}), and the new dual-feasible solution to P^2 is u^2 = (x^2, y^2), where
x_{j_1}^2 = x_{j_2}^2 = 1, x_j^2 = 0 [j ∈ (N - {j_1, j_2})], y_i^2 = y_i^1 - a_{i j_2}. (i ∈ M)

This procedure is repeated until either a solution u^s is reached with all components nonnegative, or evidence is obtained that such a solution to P^s does not exist.
If a nonnegative vector u^s = (x^s, y^s) is obtained, it is an optimal (feasible) solution to P^s. Such a solution may or may not be optimal for P, but it is always a feasible solution to P.
The procedure is then started again from a solution u^p (p < s) chosen according to certain rules, with other rules governing the choice of vectors to be introduced into the basis, until either another feasible solution u^t such that z_t < z_s is obtained (z_p being the value of z for u^p) (p = 0, 1, ...), or evidence is obtained of the absence of such solutions.
The sequence u^q (q = 0, 1, ...) converges towards an optimal solution.
This procedure might also be called a pseudo-dual algorithm because, as in the dual simplex method, it starts with a dual-feasible solution and then successively approaches the primal-feasible 'region,' safeguarding at all times the property of dual-feasibility. However, a real dual simplex iteration never takes place; the dual simplex criterion for choosing the vector to enter the basis is not used, nor are any of the vectors e_i (i = 1, ..., m) ever 'eliminated' from the basis in the sense of being replaced by another vector. All changes in the basis occur through adding new unit-rows and unit-columns or dropping them and, what is most important, the coefficient matrix A remains unchanged. As the new unit-rows and unit-columns introduced at each iteration never play any role in a further iteration, they need neither be retained nor explicitly written out.
The nature of the additive algorithm will best be pointed out by a comparison with what would have to be done if we were to try all the existing solutions. As the variables of P can only take the values 0 or 1, the set U = {u} of all solutions to (2) and (3) is of course finite, and the number of elements in U is 2^n, where n is the number of variables x_j. The process of trying all the 2^n possible combinations (i.e., solutions) is illustrated for a problem with 5 variables in Fig. 1. The figure has 5 'levels.' Starting from the top, at each 'level' we assign in turn the values 0 and 1 to the variable bearing the number of the level. All points along a line running down to the left represent one and the same solution, while all points along a line running down to the right represent different solutions. By continuing this procedure until all the variables have been assigned the values 0 and 1, we obtain the whole set of 2^5 = 32 solutions.
What our additive algorithm amounts to is, in fact, a set of rules according to which one may obtain an optimal solution (if such a solution exists) by following some branches of the tree in Fig. 1, while neglecting most of them. Starting from a situation in which all n variables are equal to 0, our algorithm consists of a systematic procedure of assigning the value 1 to some of the variables in such a way that after trying a small part of all the 2^n possible combinations, one obtains either an optimal solution, or evidence of the fact that no feasible solution exists. This is achieved

Figure 1. [The tree of all 2^5 = 32 solutions of the five-variable example, one level per variable; the thick branches are those actually followed by the algorithm.]

through a set of rules determining at each iteration (a) a subset of variables that are candidates for being assigned the value 1; (b) the variable to be chosen among the candidates. At certain stages of the procedure it becomes clear that either an optimal solution has been obtained, or there is no optimal solution with value 1 for all the variables that had been assigned this value. The procedure is then stopped and started again from a previous stage. In other words, the rules of the algorithm identify those branches of the solution tree that may be abandoned because they cannot lead to a feasible solution better than the one already obtained.
Thus, in Fig. 1, which illustrates our first numerical example presented in the final section, only the thick lines are to be followed and the corresponding solutions to be tested. This means that, instead of all 2^5 = 32 solutions, only the following 3 had to be tried:
u^0 with x_j^0 = 0, (j = 1, ..., 5)
u^1 with x_j^1 = 1 (j = 3), x_j^1 = 0, (j = 1, 2, 4, 5)
u^2 with x_j^2 = 1 (j = 2, 3), x_j^2 = 0, (j = 1, 4, 5)
and this latter solution has been found to be optimal. The 'stop signals'

of the algorithm (represented by the circles at the end of the thick lines) make sure that no feasible solution 'better' than the one obtained exists beyond them.
Of course, under such circumstances, the efficiency of the algorithm depends largely on the efficiency of these 'stop signals,' i.e., on the number of branches that need not be followed. As will be shown later in greater detail, in most cases the algorithm succeeds in reducing the subset of solutions to be tested to a relatively small fraction of the complete set.

SOME DEFINITIONS AND NOTATIONS


LET US consider problem P.
As each constraint of the set (2) contains exactly one component of y, a solution u^p = (x^p, y^p) is uniquely determined by the set J_p = {j | j ∈ N, x_j^p = 1}. For, if
x_j^p = 1 (j ∈ J_p), x_j^p = 0 [j ∈ (N - J_p)], (5)
then y_i^p = b_i - Σ_{j∈J_p} a_ij. (i ∈ M) (6)
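Relations (5) and (6) mean that a solution is fully described by the index set J_p alone; in code (a small helper of ours, with A stored as a list of rows as in the earlier sketch) the slacks are recovered as:

    def slacks(b, A, J):
        """y_i = b_i - sum_{j in J} a_ij, as in (6)."""
        return [b[i] - sum(A[i][j] for j in J) for i in range(len(b))]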

As already shown, the additive algorithm generates a sequence of solutions. We shall denote the sth term of this sequence by
u^s = u(j_1, ..., j_r) = (x^s, y^s), (7)
where {j_1, ..., j_r} = J_s = {j | j ∈ N, x_j^s = 1}, (8)
while z_s will represent the value of the form (1) for u^s.
The sequence starts with u^0, for which J_0 = ∅, i.e., x^0 = 0, y^0 = b, and z_0 = 0. Of course, J_0 ⊂ J_p for any p ≠ 0.
The set of values taken by the objective function for the feasible solutions obtained until iteration s will be denoted
Z_s = {z_p | p ≤ s, u^p ≥ 0}. (9)
If this set is not void, its smallest element will be called the ceiling for u^s. If it is void, the rôle of the ceiling will be performed by ∞. Thus, we shall denote the ceiling for u^s
z*(s) = ∞, if Z_s = ∅; z*(s) = min_{z_p∈Z_s} z_p, if Z_s ≠ ∅. (10)

At each iteration s+1, the new vector to be introduced into the basis will be chosen from a subset of {a_j | j ∈ N}, called the set of improving vectors for the solution u^s. We shall denote by N_s the corresponding set of indices j (of course, N_s ⊆ N), and we shall define it more precisely below.†
† Throughout this paper the symbol ⊆ will be used for inclusion, while ⊂ will stand for strict inclusion.

We now define certain values which will serve as a criterion for the choice of the vector to be introduced into the basis. Thus, for each solution u^s and for each j ∈ N_s, we define the values
v_j^s = Σ_{i∈M_j^s} (y_i^s - a_ij), (j ∈ N_s; M_j^s ≠ ∅)
v_j^s = 0, (j ∈ N_s; M_j^s = ∅) (11)
where M_j^s = {i | y_i^s - a_ij < 0}. (12)
The meaning of these values is obvious: v_j^s is the sum of the negative components of the solution vector u^{s+1} which can be obtained from the solution vector u^s by setting J_{s+1} = J_s ∪ {j}.
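A direct transcription of (11)-(12), again a sketch of ours reusing the layout of the earlier helpers, reads:

    def v(y, A, j):
        """v_j per (11)-(12): the sum of the negative components of
        y - a_j, i.e. the infeasibility that would remain after setting
        x_j = 1 (it is 0 when the set M_j of (12) is empty)."""
        diffs = [yi - A[i][j] for i, yi in enumerate(y)]
        return sum(d for d in diffs if d < 0)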
As already said, the values v_j^s are to serve as a criterion for the choice of the new vector to be introduced into the basis. This criterion has been found efficient; however, it must be emphasized that it is empirically chosen, being neither compelling nor essential for the additive algorithm, which may do as well with some other criterion. The choice of the criterion for introducing a new vector into the basis may of course heavily affect the efficiency of the algorithm, but has no influence on its finiteness.
Under the additive algorithm, the values v_j^k assigned to a certain solution u^k are successively cancelled in the subsequent iterations according to certain rules. Let C_k^s (k ≤ s) stand for the set of those j for which the values v_j^k assigned to the solution u^k have been cancelled before the solution u^s has been obtained. (C_k^k = ∅ by definition.)
The set of those j for which the values v_j^p assigned to any one of the solutions u^p such that p < s and J_p ⊂ J_s have been cancelled before obtaining the solution u^s will be denoted
C^s = ∪_{p|J_p⊂J_s} C_p^s. (13)
We shall now define for the solution u^s the set of those j ∈ (N - C^s) such that, if a_j were introduced into the basis so as to yield J_{s+1} = J_s ∪ {j}, the value of the objective function would hit the ceiling for u^s:
D_s = {j | j ∈ (N - C^s), c_j ≥ z*(s) - z_s}. (14)
Further, the set of those j ∈ [N - (C^s ∪ D_s)] will be defined such that, if a_j were introduced into the basis so as to yield J_{s+1} = J_s ∪ {j}, no negative y_i^s would be increased in value:
E_s = {j | j ∈ [N - (C^s ∪ D_s)], y_i^s < 0 ⟹ a_ij ≥ 0}. (15)
We are now in a position to give a proper definition to the set of improving vectors for a solution u^s. It is the set of those a_j for which j belongs to
N_s = N - (C^s ∪ D_s ∪ E_s). (16)
Obviously, N_s = ∅ for any feasible solution u^s.
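Under the same conventions, the sets (14)-(16) for the current solution can be sketched as follows (the names are ours; C^s is passed in as the set of already-cancelled indices, and `ceiling` is z*(s), taken as float('inf') when no feasible solution has been found yet):

    def improving_indices(y, c, z, ceiling, cancelled, A, n):
        """N_s per (14)-(16)."""
        candidates = set(range(n)) - cancelled                # N - C^s
        D = {j for j in candidates if c[j] >= ceiling - z}    # (14)
        E = {j for j in candidates - D
             if all(A[i][j] >= 0                              # (15)
                    for i, yi in enumerate(y) if yi < 0)}
        return candidates - D - E                             # (16)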

Similarly to D_s, we define for the pair of solutions u^k and u^s (k < s) the set of those j ∈ (N_k - C_k^s) such that, if a_j were introduced into the basis so as to yield J_{s+1} = J_k ∪ {j}, then z_{s+1} would hit the ceiling for u^s:
D_k^s = {j | j ∈ (N_k - C_k^s), c_j ≥ z*(s) - z_k}. (17)
Finally, given a pair of solutions u^s and u^k, such that u^s = u(j_1, ..., j_r), u^k = u(j_1, ..., j_{r-h}) (1 ≤ h ≤ r; J_k ⊂ J_s), we define the set of improving vectors for the solution u^k left after iteration s. It is the set of those a_j for which j belongs to
N_k^s = N_k - (C_k^s ∪ D_k^s). (18)
The sets defined above by (16) and (18) will play a central role in our algorithm. Whenever a solution u^s is reached, only the improving vectors for that solution are considered for introduction into the basis. Whenever the set of improving vectors for a solution u^s is found to be void, this is to be interpreted as a 'stop signal,' which means that there is no feasible solution u^t such that J_s ⊂ J_t and z_t < z*(s). In such cases we have to take up our procedure from a previous solution u^k, to be identified according to certain rules, and for any such solution only the set of improving vectors for that solution u^k left after iteration s is to be considered for introduction into the basis.

STATEMENT OF THE ADDITIVE ALGORITHM


WE START with the dual-feasible solution u^0, for which
x^0 = 0, y^0 = b and z_0 = 0. (19)
Let us suppose that after s iterations we have obtained the solution u^s = u(j_1, ..., j_r), for which
x_j^s = 1 (j ∈ J_s), x_j^s = 0 [j ∈ (N - J_s)], (20)
y_i^s = b_i - Σ_{j∈J_s} a_ij, (i ∈ M)
and z_s = Σ_{j∈J_s} c_j. (21)

The following procedure is then to be adopted†:
Step 1. Check y_i^s (i ∈ M).
1a. If y_i^s ≥ 0 (i ∈ M), set z_s = z*(s). Form the sets D_k^s as defined by (17) for all k < s, cancel all v_j^k for j ∈ D_k^s, k < s, and pass to step 5.
If this happens to be the case for u^0, then u^0 is an optimal solution and the algorithm has ended.
1b. If there exists i_1 such that y_{i_1}^s < 0, pass to
† The algorithm can be followed on the flow chart presented in Fig. 2.

Step 2. Identify the improving vectors for the solution u^s by forming the set N_s as defined by (16).
2a. If N_s = ∅, i.e., there are no improving vectors for u^s, pass to step 5.
2b. If N_s ≠ ∅, pass to
Step 3. Check the relations
Σ_{j∈N_s} a⁻_ij ≤ y_i^s, (i | y_i^s < 0) (22)
where the a⁻_ij are the negative elements of A.
3a. If there exists i_1 ∈ M for which (22) does not hold, pass to step 5.
3c. If all relations (22) hold, and there exists a subset M8 of M such
that the relations (22) hold as equalities for iEM8,pass to
Step 4. Check the relation
ZE F8 Ci < Z*() -Z (24)
where F8 is the set of those jeNs, for which aj <0 for at least one ijM8.
4a. If (24) holds, cancel vjf for all jEFS (without computing their
numerical values). Set J,+?=J8UF8, compute the value of the objective
function
Zs+1 =Zs+ Eij Fe Cj (25)
and of the slack variables
!+lI= _ EjEFS ai1 (iEM) (26)
for the new solution u1+, and pass to the next iteration (i.e., start again
with step 1).
4b. If (24) does not hold, cancel vf for all jENs (without computing
their numerical values) and pass to
Step 5. Identify the improving vectors for the solutions u^k (k | J_k ⊂ J_s) left after iteration s, i.e., check the sets N_k^s (k | J_k ⊂ J_s) as defined by (18), in the decreasing order of the numbers k, until either a number k_1 is found such that J_{k_1} ⊂ J_s and N_{k_1}^s ≠ ∅, or N_k^s is found to be void for any k such that J_k ⊂ J_s.
5a. If N_k^s = ∅ for all k such that J_k ⊂ J_s, i.e., there are no improving vectors for any u^k (k | J_k ⊂ J_s), the algorithm has come to an end. In this case, if Z_s = ∅, P has no feasible solution. If Z_s ≠ ∅, then the u^q for which z_q = z*(s) is an optimal solution and z*(s) is the minimum attained by the objective function.
If (23) or (28) holds for more than one j = j_{s+1} (let J_max be the set of those j for which it holds), choose j_{s+1} so that c_{j_{s+1}} = min_{j∈J_max} c_j; and if this relation too holds for more than one j = j_{s+1}, then choose any one of them as j_{s+1}.

Fig. 2. Flow chart of the additive algorithm.

5b. If N_k^s ≠ ∅ for k = k_1, pass to
Step 6. Check the relations
Σ_{j∈N_k^s} a⁻_ij ≤ y_i^k, (i | y_i^k < 0) (27)
for k = k_1.
6a. If any one of the relations (27) does not hold for k = k_1, cancel v_j^{k_1} for all j ∈ N_{k_1}^s and repeat step 5 for k < k_1, writing k_2 instead of k_1 in steps

5 and 6. Whenever step 5 is repeated for k < k_a, write k_{a+1} instead of k_a in steps 5 and 6.
If (27) does not hold for any k such that N_k^s ≠ ∅, the algorithm has ended, with the same conclusion as for 5a.
6b. If for k = k_β all relations (27) hold as strict inequalities, choose j_{s+1} so that
v_{j_{s+1}}^{k_β} = max_{j∈N_{k_β}^s} v_j^{k_β}, (28)
cancel v_{j_{s+1}}^{k_β} and pass to step 8.


6c. If for k = k_β all relations (27) hold, and there exists a subset M_{k_β}^s of M such that the relations (27) hold as equalities for i ∈ M_{k_β}^s, pass to
Step 7. Check the relation
Σ_{j∈F_{k_β}^s} c_j < z*(s) - z_{k_β}, (29)
where F_{k_β}^s is the set of those j ∈ N_{k_β}^s for which a_ij < 0 for at least one i ∈ M_{k_β}^s.
7a. If (29) holds, cancel v_j^{k_β} for all j ∈ F_{k_β}^s. Set J_{s+1} = J_{k_β} ∪ F_{k_β}^s, compute the value of the objective function
z_{s+1} = z_{k_β} + Σ_{j∈F_{k_β}^s} c_j (30)
and of the slack variables
y_i^{s+1} = y_i^{k_β} - Σ_{j∈F_{k_β}^s} a_ij (i ∈ M) (31)
for the new solution u^{s+1} and pass to the next iteration (i.e., start again with step 1).
7b. If (29) does not hold, cancel the values v_j^{k_β} for all j ∈ N_{k_β}^s and repeat step 5 for k < k_β. If no k < k_β exists, i.e., if k_β = 0, the algorithm has ended with the same conclusion as for 5a.
Step 8. Set J_{s+1} = J_p ∪ {j_{s+1}} and compute the value of the objective function and of the slack variables for the new solution u^{s+1}, according to the formulas
z_{s+1} = z_p + c_{j_{s+1}} = Σ_{j∈J_{s+1}} c_j (32)
and
y_i^{s+1} = y_i^p - a_{i j_{s+1}} = b_i - Σ_{j∈J_{s+1}} a_ij, (i ∈ M) (33)
where p is defined by the last value cancelled, v_{j_{s+1}}^p. Pass to the next iteration (i.e., start again with step 1).
The algorithm terminates when a solution u^s has been reached, for which (α) situation 5a obtains; or (β) situation 6a obtains and (27) does not hold for any k such that N_k^s ≠ ∅; or (γ) situation 7b obtains and k_β = 0.
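To make the mechanics concrete, the following self-contained Python sketch carries out the same kind of implicit enumeration. It is ours, not a transcription of the paper's bookkeeping: steps 4-7 and the cancellation sets C_k^s are replaced by plain recursion over subtrees. It does, however, apply the same stop signals -- the ceiling test (14), the improving-vector test (15)-(16), and the attainability test (22) -- and it uses the criterion (23) for choosing the branching variable:

    import math

    def balas_sketch(c, A, b):
        """Implicit enumeration in the spirit of the additive algorithm
        (a sketch under the stated simplifications): minimize c.x subject
        to A.x <= b, x_j in {0, 1}, assuming c >= 0.  Like the algorithm
        itself, it uses only additions, subtractions, and comparisons."""
        n, m = len(c), len(b)
        best = {"z": math.inf, "J": None}

        def v(y, j):  # v_j of (11)-(12)
            return sum(t for t in (y[i] - A[i][j] for i in range(m)) if t < 0)

        def search(J, y, z, free):
            if all(yi >= 0 for yi in y):        # feasible: new ceiling (10)
                if z < best["z"]:
                    best["z"], best["J"] = z, sorted(J)
                return
            cand = [j for j in free if z + c[j] < best["z"]]  # ceiling test (14)
            neg = [i for i in range(m) if y[i] < 0]
            N = [j for j in cand
                 if any(A[i][j] < 0 for i in neg)]            # tests (15)-(16)
            if not N:
                return                          # no improving vectors: stop signal
            for i in neg:                       # attainability test (22)
                if sum(min(A[i][j], 0) for j in N) > y[i]:
                    return
            N.sort(key=lambda j: v(y, j), reverse=True)       # criterion (23)
            for k, j in enumerate(N):
                # child k sets x_j = 1 and fixes the j' tried before it at 0;
                # candidates outside N stay free: they may become improving
                child_free = [jj for jj in cand if jj != j and jj not in N[:k]]
                search(J | {j}, [y[i] - A[i][j] for i in range(m)],
                       z + c[j], child_free)

        search(set(), list(b), 0, list(range(n)))
        return best

If no feasible solution exists, the returned value stays at infinity; otherwise the returned index set J describes an optimal solution via (5)-(6).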
REMARK I. The additive algorithm described above yields one optimal solution (if such a solution exists). But if we set > instead of ≥ in (14) and (17), and ≤ instead of < in (24) and (29), we obtain another version of the same algorithm yielding all the existing optimal solutions.
REMARK II. On the other hand, the number of solutions to be examined

under the algorithm may be further reduced at the cost of a relatively small additional computational effort at each iteration, if we change steps 3b, 3c and 6b, 6c as follows:
3b. If all relations (22) hold, check the relations
Σ_{j∈N_s} a⁻_ij - ā⁻_ij ≤ y_i^s, (i | y_i^s < 0) (22a)
where ā⁻_ij = max_{j∈N_s} a⁻_ij; if all relations (22a) hold, compute the values v_j^s for all j ∈ N_s, choose j_{s+1} so that
v_{j_{s+1}}^s = max_{j∈N_s} v_j^s, (23)
cancel v_{j_{s+1}}^s and pass to step 8.
3c. If all relations (22) hold, and there exists a subset M' of M such that relations (22a) do not hold for i ∈ M', pass to step 4.
Completely analogous changes are to be made in steps 6b, 6c.
The application of the algorithm is facilitated through the use of a
tableau of the type presented with the numerical examples of the final
section.

FINITENESS PROOFS
THE ADDITIVE algorithm yields a sequence of solutions u^0, u^1, .... We shall say that a solution u^k is abandoned if we are instructed by the algorithm either to check N_p^s (0 ≤ p < k ≤ s), or to stop (termination).
A central feature of our algorithm, which is at the same time instrumental in proving its convergence, may be expressed by Theorem 1.
THEOREM 1. If a solution u^k is abandoned under the additive algorithm, then no feasible solution u^t exists such that J_k ⊂ J_t and z_t < z*(s).
A solution may be abandoned as a consequence of one of the following steps of the algorithm: 1a, 2a, 3a, 4b, 5, 6a, and 7b. In order to prove that the above theorem holds for all these situations, we shall need two lemmas.
First, let us consider the sets C_p^s. Under the rules of our algorithm, there are three circumstances in which a value v_{j*}^p assigned to a solution u^p may have been cancelled until the iteration s, i.e., there are three reasons for j* to belong to a set C_p^s:
(a) Relation (27) does not hold for k = p (i.e., j* has been cancelled under step 6a).
(b) A solution u^q (p < q ≤ s) has been obtained from u^p by introducing a_{j*} into the basis (i.e., j* has been cancelled under one of the steps 3b, 4a, 6b, 7a).
(c) If a_{j*} were to be introduced into the basis, z would hit the ceiling for u^s (i.e., j* has been cancelled under one of the steps 1a, 4b, or 7b).
Let us denote the subsets of indices j* corresponding to those v_{j*}^p that have been cancelled until iteration s for reasons a, b, and c, respectively, by C_p^s(a), C_p^s(b), and C_p^s(c), so that
C_p^s = C_p^s(a) ∪ C_p^s(b) ∪ C_p^s(c),
and let us denote by C^s(a), C^s(b), and C^s(c), respectively, the corresponding subsets of C^s.
We have
LEMMA 1. Given a solution u^s, if there exists a feasible solution u^t such that J_s ⊂ J_t and z_t < z*(s), then
(J_t - J_s) ∩ C^s = ∅. (34)
Proof. Relation (34) is obviously satisfied for any solution u^s such that C^s = ∅, and this holds in any case for u^0, as J_0 = C^0 = ∅.
Thus we are entitled to suppose that (34) holds for a sequence of solutions u^p (p = 0, 1, ...), composed of at least one term.
We shall prove that if (34) holds for u^p (p = 0, 1, ..., n), then it also holds for u^{n+1}.
Let us denote u^n = u(j_1, ..., j_r). Two situations are then possible:
(1) J_n ⊂ J_{n+1}.
In this case either J_{n+1} = J_n ∪ {j_{n+1}} and C^{n+1} = C^n ∪ {j_{n+1}}, or J_{n+1} = J_n ∪ F_n and C^{n+1} = C^n ∪ F_n. In both cases (34) obviously holds for u^{n+1}.
(2) J_n ⊄ J_{n+1}.
Let us denote J_k = {j_1, ..., j_{r-h}}, h being the smallest number (1 ≤ h ≤ r) for which J_k ⊂ J_{n+1}.
According to the definition of C^{n+1},
C^{n+1} = [∪_{p|J_p⊂J_k} C_p^{n+1}] ∪ C_k^{n+1}. (35)
As for u^k (34) is supposed to hold, and as J_k ⊂ J_{n+1} and z*(k) ≥ z*(n+1), we have
(J_t - J_{n+1}) ∩ C^k = ∅ (36)
for any u^t ≥ 0 such that J_{n+1} ⊂ J_t and z_t < z*(n+1).
On the other hand, as J_k ⊂ J_{n+1}, we have C_p^{n+1}(a) = ∅ (p | J_p ⊂ J_k), and C_p^{n+1}(b) = C_p^k(b) (p | J_p ⊂ J_k), so that
C_p^{n+1} - C_p^k = C_p^{n+1}(c) - C_p^k(c) (p | J_p ⊂ J_k) (37)
and
C_k^{n+1} = C_k^{n+1}(c) ∪ C_k^{n+1}(b). (38)
Further, by the definition of C^{n+1}(c),
(J_t - J_{n+1}) ∩ {∪_{p|J_p⊂J_k} [C_p^{n+1}(c) - C_p^k(c)]} = ∅. (39)
Thus, from the above relations it follows that
(J_t - J_{n+1}) ∩ C^{n+1} ⊆ (J_t - J_{n+1}) ∩ C_k^{n+1}(b). (40)


Now, there are two possibilities: either J_{n+1} = J_k ∪ {j_{n+1}}, or J_{n+1} = J_k ∪ F_k.
Let us suppose that the first of these two situations holds, and thus C_k^{n+1}(b) = C_k^n(b) ∪ {j_{n+1}}. (A perfectly analogous reasoning is valid for the second situation.)
We have
(J_t - J_{n+1}) ∩ C_k^{n+1} = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ [C_k^n(b) ∪ {j_{n+1}}] (41)
or, as
(J_t - {j_{n+1}}) ∩ {j_{n+1}} = ∅, (42)
(J_t - J_{n+1}) ∩ C_k^{n+1} = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ C_k^n(b). (43)
We shall now show that
(J_t - J_k) ∩ C_k^n(b) = ∅. (44)
Let us suppose the contrary, i.e., that there exists j_{f_1} such that
j_{f_1} ∈ [(J_t - J_k) ∩ C_k^n(b)], (45)
and let us denote J_{q_1} = J_k ∪ {j_{f_1}}.
Obviously, J_t is not identical with J_{q_1}, as from j_{f_1} ∈ C_k^n(b) it follows that q_1 ≤ n, i.e., z*(q_1) ≥ z*(n), while z_t < z*(n). Thus, if (45) holds, there must exist j_{f_2} such that
j_{f_2} ∈ [(J_t - J_{q_1}) ∩ C_{q_1}^n(b)]. (46)
For, if not, then
(J_t - J_{q_1}) ⊆ [N - C_{q_1}^n(b)] ⊆ C^{q_1} ∪ D_{q_1} ∪ E_{q_1} ∪ [N_{q_1} - C_{q_1}^n(b)] (47)
must hold. But, as k ≤ q_1 ≤ n, and as j_{f_1} ∈ C_k^n(b) implies J_{q_1} ⊂ J_{n+1}, we have C_{q_1}^n = C_{q_1}^{n+1}, and C_{q_1}^{n+1}(b) = C_{q_1}^n(b). Thus
N_{q_1} - C_{q_1}^n(b) = C_{q_1}^{n+1}(a) ∪ C_{q_1}^{n+1}(c). (48)
On the other hand, from the definition of D_{q_1}, C_{q_1}^{n+1}(a), and C_{q_1}^{n+1}(c), and from the fact that (34) is supposed to hold for q_1 as q_1 ≤ n and J_{q_1} ⊆ J_n, it follows that
(J_t - J_{q_1}) ∩ [C^{q_1} ∪ D_{q_1} ∪ C_{q_1}^{n+1}(a) ∪ C_{q_1}^{n+1}(c)] = ∅ (49)
and, by the definition of E_{q_1},
J_t - J_{q_1} ⊄ E_{q_1}. (50)
Therefore (47) is impossible, i.e., (46) must hold if (45) holds.
Now, if C_{q_1}^n(b) = ∅, we have a contradiction and the validity of (44) is proved. If not, then a reasoning perfectly analogous to the above one shows that, if
j_{f_i} ∈ [(J_t - J_{q_{i-1}}) ∩ C_{q_{i-1}}^n(b)] (51)
exists, then
j_{f_{i+1}} ∈ [(J_t - J_{q_i}) ∩ C_{q_i}^n(b)] (52)

must also exist (i = 2, 3, ...), J_{q_{i-1}} and J_{q_i} being defined analogously to J_{q_1}. As the number of sets C_{q_i}^n(b) ≠ ∅ is finite, this sequence of implications obviously ends in a contradiction that proves the validity of (44).
Relation (34) is thus shown to hold also in case (2), and this completes the proof of Lemma 1.
LEMMA 2. Given two solutions u^s = u(j_1, ..., j_r) and u^k = u(j_1, ..., j_{r-h}) (1 ≤ h ≤ r; J_k ⊂ J_s), such that N_p = C_p^{s+1} (k < p ≤ s), if there exists a feasible solution u^t such that J_k ⊂ J_t and z_t < z*(s), then
(J_t - J_k) ∩ C_k^s = ∅. (53)
Proof. J_k ⊂ J_s guarantees that C_k^s(a) = ∅, while
(J_t - J_k) ∩ C_k^s(c) = ∅ (54)
follows from the definition of C_k^s(c). Thus (53) becomes
(J_t - J_k) ∩ C_k^s(b) = ∅, (55)
which can be proved in exactly the same way as (44).
Proof of Theorem 1. For solutions abandoned under one of the steps 1a, 4b, or 7b, the theorem is obvious. We shall now prove it for the other cases:
2a. In this situation, as N_k = ∅, the existence of a feasible solution u^t such that J_k ⊂ J_t, z_t < z*(k), would imply
J_t - J_k ⊆ N - N_k = C^k ∪ D_k ∪ E_k, (56)
which is impossible in view of Lemma 2 and the definitions of D_k and E_k.

3a. If (22) does not hold, then, in view of the definition of E_k, a feasible solution u^t with the required properties could only exist if
(J_t - J_k) ∩ (C^k ∪ D_k) ≠ ∅, (57)
which cannot hold according to Lemma 1 and by the definition of D_k.
5. If N_k^s = ∅ for some k, the existence of u^t with the required properties would imply
J_t - J_k ⊆ N - N_k^s. (58)
But in view of the definitions of D_k, D_k^s, and C_k^s(c), no element of any of these sets can be contained in J_t - J_k. Further, C_k^s(a) = ∅, because the checking of N_k^s for a certain k under step 5 always precedes the possible cancellation of values v_j^k for the same k under step 6a.
On the other hand, the checking of N_k^s for a certain k under step 5 can never occur before N_p = C_p^{s+1} has been established for all p such that k < p ≤ s. Therefore, according to Lemma 2, (J_t - J_k) ∩ C_k^s is also void and (58) becomes
J_t - J_k ⊆ E_k, (59)
which is obviously impossible.

6a. If (27) does not hold for a certain k, then a solution u^t such as required in the theorem could only exist if
(J_t - J_k) ∩ [N - (N_k^s ∪ E_k)] ≠ ∅. (60)
But this cannot hold, as was shown at the preceding point.
Theorem 1 is thus proved.
Now we can formulate the following
CONVERGENCE THEOREM 2. In a finite number of iterations, the additive algorithm yields either an optimal feasible solution, or the conclusion that the problem has no feasible solution at all.
Proof. From Theorem 1 it follows that the termination of the algorithm (which means that all solutions tested by the algorithm have been abandoned) yields either an optimal feasible solution, or the conclusion that no feasible solution to the problem exists. We shall now show that the additive algorithm terminates in a finite number of steps.
(a) In a finite number of steps, each iteration yields a new solution or else the algorithm terminates. Repetition of certain steps during one and the same iteration may arise in one of the situations 6a or 7b. In both situations step 5, which requires the checking of N_k^s (k | J_k ⊂ J_s), is to be repeated for k < k_a, where k_a is a number for which the checking had already taken place. Thus there can only be a finite number of such repetitions.
(b) No solution can be obtained twice under the additive algorithm. Let us suppose the contrary, i.e., let u^s = u(j_1, ..., j_r) and u^t = u(k_1, ..., k_r) be two solutions obtained under the additive algorithm, such that u^s = u^t, t > s. We cannot have j_i = k_i (i = 1, ..., r), because in that case j_r ∈ C_q^{t-1} (where q is defined either by J_t = J_q ∪ {j_r} or by J_t = J_q ∪ F_q^{t-1}), and this excludes j_r ∈ J_t. Let (j_a, k_a) be the first pair of indices (j_i, k_i) such that j_a ≠ k_a, and let us denote u^w = u(j_1, ..., j_{a-1}). Then j_a ∈ C_w^{t-1} ⊆ C^{t-1}, and thus j_a ∉ J_t.
This completes the proof of the Convergence Theorem.

SOME REMARKS ON THE EFFICIENCY OF THE ALGORITHM

UNLIKE MOST of the known algorithms for solving linear programs with integer variables, the additive algorithm attacks directly the linear program with zero-one variables, and does not require the solution of the corresponding ordinary linear program (without the zero-one constraints).
As has been shown, the only operations required under the algorithm described above are additions and subtractions. Thus any possibility of round-off errors is excluded.
The additive algorithm does not impose a heavy burden on the storage system of a computer. Most of the partial results may be dropped shortly after they have been obtained.
At this moment, experience with the additive algorithm is at its very beginning. So far it consists only of solving by hand about a dozen problems with up to 15 variables, and of partially solving, also by hand, one single large problem. The results are very encouraging, but of course this experience is insufficient for a firm judgment on the efficiency of the algorithm, especially for larger problems. We shall summarize the above experience and comment on it, but all our conjectures in this section should be regarded as tentative, in view of their scanty experimental basis.
The data of 11 problems solved by hand with the additive algorithm are given in Tableau I.

TABLEAU I

Problem no. | Number of zero-one variables | Number of constraints (inequalities) | Number of iterations
1           |  5 |  2 |  6
2(a)        |  5 |  3 |  3
3           |  7 |  5 | 11
4(a)(b)     |  9 |  4 | 31
5           | 10 |  4 | 12
6(a)        | 10 |  7 |  5
7           | 11 |  6 | 21
8           | 11 |  7 |  8
9(a)(c)     | 12 |  6 | 39
10          | 14 |  9 | 23
11          | 15 | 12 | 22

(a) Problems given as numerical examples in the final section.
(b) A problem with no feasible solution.
(c) A problem with only one feasible solution.

The time needed for solving these problems† was on the average 7 minutes for an iteration, i.e., 1-3 hours for each of the problems 3, 4, 5, 7, 8, 10, and 11. Problems 1, 2, and 6 were solved in less than an hour, while somewhat more than 4 hours were used to solve problem 9.
Anyone who has tried to solve by hand a problem of the type and size discussed here with the cutting-plane technique for integer programming problems (which in this case has to be combined with bounded-variables procedures) knows that it takes several times that amount of time, not to speak about difficulties generated by round-off. Moreover, the solution of the ordinary linear programs (without the zero-one constraints) corresponding to the above problems would also require an amount of computations considerably larger than that needed for solving the zero-one problems by the additive algorithm.
† By hand calculations by a person having no special experience with the additive algorithm.

The only large problem on which the additive algorithm has so far been tried was a problem in forest management,[10] with a linear objective function in zero-one variables subject to a set of linear constraints, and to another set of conditional ('logical') constraints.‡ The latter set has been replaced by an equivalent set of linear constraints including additional zero-one variables, so that finally a zero-one linear programming problem emerged with 40 variables and 22 constraints. The number of nonzero elements in the final coefficient matrix was 140, of which 115 were 1 or -1. This problem was specially chosen for studying the applicability of the algorithm to the given type of forest-management problems, and it was so structured that the optimal solution could be known beforehand. After 35 iterations were made by hand (the average time of an iteration being about 20 minutes), the optimal feasible solution was approximated within 1.4 per cent. We note, however, that this approximation cannot be regarded as a sign that termination of our algorithm was also near, and we further note that our coefficient matrix was relatively sparse, with many 0 elements.
This experiment showed, among other things, the great advantages of
the additive character of the algorithm. For instance, an error discovered
at the 19th iteration could be traced back to the 3rd iteration and corrected
throughout the subsequent solutions in about 2 hours' time.
The following considerations concerning the dependence of the amount of computations on the size and other characteristics of the problem are based partly on common-sense examination of the mechanism of the algorithm, partly on the experience summarized above.
Let n be the number of variables and m the number of constraints.
(a) Amount of computations needed for one iteration. The number of operations needed for checking relations (22) and (27) (steps 3 and 6 respectively), and for computing the values v_j^s (step 3b), depends linearly on m×n. The number of operations needed to form or check the sets N_s and N_k^s (steps 2, 5) depends linearly on n, while the number of operations needed to compute and check a new solution (steps 8, 4a, 7a, and 1a) depends linearly on m. Thus the total amount of computations needed for one iteration depends linearly on a quantity μ situated somewhere between min[m, n] and m×n.
(b) Number of iterations needed to solve the problem. This obviously depends first of all on n. As to m, its increase enhances the efficiency of some of the stop signals (steps 3a and 6a) and thus tends to reduce the number of iterations. (This is an important advantage of the algorithm.)
The crucial thing about the efficiency of the algorithm is to know how the number of iterations depends on n. The combinatorial nature of the algorithm does not necessarily imply an exponential relation for, while the
‡ Expressing relations of the type ∨, ⟹, and ∧.

set of all solutions from which an optimal feasible one has to be selected grows exponentially with n, the efficiency of some of the stop signals may perhaps grow even faster with n. So far the experience summarized above does not seem to indicate an exponential relation. While in the three smallest problems of Tableau I (1, 2, and 3) the number of iterations was 0.6-1.6 times n, in the two largest problems (10 and 11) it was 1.5-1.6 times n. In the largest problem so far solved (11), 22 solutions out of a total of 2^15 = 32,768 had to be tested. But of course this experience is insufficient, and computer experience with a considerable number of larger problems will be needed to elucidate this question in the absence of an analytic proof.
The number of iterations also depends on other characteristics of the problem. In cases where an optimal solution exists in which only a few variables take the value 1, the relatively large size of the sets D_s makes the stop signals especially efficient and thus assures a rapid convergence of the algorithm (see, for instance, problem 6 of Tableau I, example 2 in the final section, where only 5 out of 2^10 = 1,024 solutions had to be tested). On the other hand, if the problem has no feasible solution, all sets D_s are void and the efficiency of the stop signals is reduced. The same holds to a lesser extent for problems with very few feasible solutions. But it should be noted that even in the case of such 'ill-behaved' problems the number of iterations does not become unreasonably large. Thus, problem 4 of Tableau I (example 3 in the final section), with 9 variables and 4 constraints, having no feasible solution, was 'solved' (i.e., the absence of a feasible solution was established) in 31 iterations, while in the worst of cases met until now, problem 9 of Tableau I (example 4 in the final section), with 12 variables and 6 constraints, having only one feasible solution, was solved by testing 39 out of 2^12 = 4,096 solutions.

NUMERICAL EXAMPLES
Example 1. (Problem 2 of Tableau I.)
Let us consider the following problem:
-5x_1' + 7x_2' + 10x_3' - 3x_4' + x_5' = min,
-x_1' - 3x_2' + 5x_3' - x_4' - 4x_5' ≥ 0,
-2x_1' - 6x_2' + 3x_3' - 2x_4' - 2x_5' ≤ -4,
-x_2' + 2x_3' + x_4' - x_5' ≥ 2,
x_j' = 0 or 1. (j = 1, ..., 5)
Multiplying by -1 the two inequalities of the form ≥ and setting
x_j = x_j', (j = 2, 3, 5)
x_j = 1 - x_j', (j = 1, 4)

allows us to restate our problem in the following form, corresponding to P in the second section:
5x_1 + 7x_2 + 10x_3 + 3x_4 + x_5 = min,
-x_1 + 3x_2 - 5x_3 - x_4 + 4x_5 + y_1 = -2,
2x_1 - 6x_2 + 3x_3 + 2x_4 - 2x_5 + y_2 = 0,
x_2 - 2x_3 + x_4 + x_5 + y_3 = -1,
x_j = 0 or 1. (j = 1, ..., 5)
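Since n = 5 here, the result can also be checked independently by brute force over all 2^5 = 32 zero-one vectors (a check of ours, with the data transcribed from the restated problem above):

    from itertools import product

    c = [5, 7, 10, 3, 1]
    A = [[-1,  3, -5, -1,  4],
         [ 2, -6,  3,  2, -2],
         [ 0,  1, -2,  1,  1]]
    b = [-2, 0, -1]

    best = min((x for x in product((0, 1), repeat=5)
                if all(sum(a * xi for a, xi in zip(row, x)) <= bi
                       for row, bi in zip(A, b))),
               key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
    print(best, sum(ci * xi for ci, xi in zip(c, best)))
    # prints (0, 1, 1, 0, 0) 17

This agrees with the solution u^2 = u(3, 2) and the value z_2 = 17 obtained below; undoing the substitution x_1 = 1 - x_1', x_4 = 1 - x_4' gives c'x' = 17 - 8 = 9 for the original problem.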

TABLEAU II

Row no. | s | J_s  | z_s |                    |  i=1 |  i=2 |  i=3 | C^s | D_s | E_s | F_s
1       | 0 | ∅    |  0  | y_i^0              |  -2  |   0  |  -1  |  ∅  |  ∅  | 2,5 |
2       |   |      |     | Σ_{j∈N_0} a⁻_ij    |  -7  |      |  -2  |
3       |   | j=1  |     | y_i^0 - a_i1       |  -1  |  -2  |  -1  | v_1^0 = -4 (4)
4       |   | j=3  |     | y_i^0 - a_i3       |   3  |  -3  |   1  | v_3^0 = -3 (1)
5       |   | j=4  |     | y_i^0 - a_i4       |  -1  |  -2  |  -2  | v_4^0 = -5 (5)
6       | 1 | 3    | 10  | y_i^1              |   3  |  -3  |   1  |  3  |  ∅  | 1,4 |
7       |   |      |     | Σ_{j∈N_1} a⁻_ij    |      |  -8  |      |
8       |   | j=2  |     | y_i^1 - a_i2       |   0  |   3  |   0  | v_2^1 = 0 (2)
9       |   | j=5  |     | y_i^1 - a_i5       |  -1  |  -1  |   0  | v_5^1 = -2 (3)
10      | 2 | 3,2  | 17* | y_i^2              |   0  |   3  |   0  |
11      |   |      |     | Σ_{j∈N_1^2} a⁻_ij  |      |  -2  |      |
12      |   |      |     | Σ_{j∈N_0^2} a⁻_ij  |  -2  |      |   0  |

In view of the form of Tableau II, the computations are easier to follow if we work with A^T instead of A (T being the symbol of transposition):

A^T =
-1  2  0
 3 -6  1
-5  3 -2
-1  2  1
 4 -2  1

We start with the initial solution u^0 = (0, b). The data of this solution, J_0 = ∅, z_0 = 0, and y_1^0 = b_1 = -2, y_2^0 = b_2 = 0, y_3^0 = b_3 = -1, are shown on the left-hand side of row 1 of Tableau II.
To make this illustration easier to follow on Tableau II, cancellation of the values v_j^s is not marked through crossing out, but through numbering in the order of cancellation. (Of course, this is not necessary in current work with the algorithm.)
The solution of this problem is also illustrated by Fig. 1.
Iteration 1
Step 1. y_i^0 < 0 for i = 1, 3; so we are in situation 1b.
Step 2. N_0 = N - (C^0 ∪ D_0 ∪ E_0). C^0 = ∅, D_0 = ∅, E_0 = {2, 5} (see row 1 of the Tableau).
N_0 = {1, 2, 3, 4, 5} - {2, 5} = {1, 3, 4}; so we are in case 2b.
Step 3. We check the relations (22) for i = 1, 3 (see row 2 of the Tableau):
Σ_{j∈N_0} a⁻_1j = a_11 + a_13 + a_14 = -1 - 5 - 1 = -7 < y_1^0 = -2,
Σ_{j∈N_0} a⁻_3j = a_33 = -2 < y_3^0 = -1.
As all relations hold as strict inequalities, we are in case 3b.
Computation of the values v_j^0 as defined by (11), (12), for j ∈ N_0, i.e., for j = 1, 3, 4, is shown in rows 3, 4, 5 of the Tableau:
v_1^0 = Σ_{i∈M_1^0} (y_i^0 - a_i1) = (-2+1) + (0-2) + (-1-0) = -4,
v_3^0 = Σ_{i∈M_3^0} (y_i^0 - a_i3) = (0-3) = -3,
v_4^0 = Σ_{i∈M_4^0} (y_i^0 - a_i4) = (-2+1) + (0-2) + (-1-1) = -5.
We have v_3^0 = max_{j∈N_0} {v_j^0} = -3; we cancel v_3^0 and pass to
Step 8. J_1 = J_0 ∪ {3} = {3},
z_1 = z_0 + c_3 = 0 + 10 = 10,
y_1^1 = y_1^0 - a_13 = -2 + 5 = 3,
y_2^1 = y_2^0 - a_23 = 0 - 3 = -3,
y_3^1 = y_3^0 - a_33 = -1 + 2 = 1
(see row 6 of Tableau II).
Iteration 2
Step 1. y_i^1 < 0 for i = 2, so we are in situation 1b.
Step 2. N_1 = N - (C^1 ∪ D_1 ∪ E_1). C^1 = {3}, D_1 = ∅, E_1 = {1, 4} (see row 6 of the Tableau).
N_1 = {1, 2, 3, 4, 5} - ({3} ∪ {1, 4}) = {2, 5}, so we are in case 2b.
Step 3. Checking of the relations (22) for i = 2 is shown in row 7 of the Tableau:
Σ_{j∈N_1} a⁻_2j = -6 - 2 = -8 < y_2^1 = -3.
Thus we are in case 3b. Computation of v_j^1 for j ∈ N_1 is shown in rows 8 and 9 of Tableau II:
v_2^1 = 0, (M_2^1 = ∅)
v_5^1 = Σ_{i∈M_5^1} (y_i^1 - a_i5) = (3-4) + (-3+2) = -2.

We have v_2^1 = max_{j∈N_1} {v_2^1, v_5^1} = 0; we cancel v_2^1 and pass to
Step 8. J_2 = J_1 ∪ {2} = {3, 2},
z_2 = z_1 + c_2 = 10 + 7 = 17,
y_1^2 = y_1^1 - a_12 = 3 - 3 = 0,
y_2^2 = y_2^1 - a_22 = -3 + 6 = 3,
y_3^2 = y_3^1 - a_32 = 1 - 1 = 0
(see row 10 of Tableau II).


Iteration 3
Step 1. y_i^2 ≥ 0 for all i ∈ M, so we are in case 1a. We set z_2 = 17 = z*(2), and we form the sets D_k^2 as defined by (17) for k = 1, 0:
N_1 = {2, 5}, C_1^2 = {2}; N_1 - C_1^2 = {5}; c_5 = 1 < 17 - 10, so D_1^2 = ∅;
N_0 = {1, 3, 4}, C_0^2 = {3}; N_0 - C_0^2 = {1, 4}; c_1 = 5 < 17 - 0, c_4 = 3 < 17 - 0, so D_0^2 = ∅.
We pass to
Step 5. We check the set N_k^2 as defined by (18) for k = 1:
N_1^2 = N_1 - (C_1^2 ∪ D_1^2) = {2, 5} - {2} = {5}, so we are in situation 5b.
Step 6. We check the relations (27) for i = 2 (row 11 of Tableau II):
Σ_{j∈N_1^2} a⁻_2j = -2 > y_2^1 = -3.
As (27) does not hold, we are in case 6a. We cancel v_j^1 for j ∈ N_1^2, i.e., v_5^1, and return to
Step 5. We check the set N_k^2 for k = 0:
N_0^2 = N_0 - (C_0^2 ∪ D_0^2) = {1, 3, 4} - {3} = {1, 4}, so we are again in case 5b.
Step 6. We check (27) for i = 1, 3 (row 12 of the Tableau):
Σ_{j∈N_0^2} a⁻_1j = -1 - 1 = -2 = y_1^0 = -2,
Σ_{j∈N_0^2} a⁻_3j = 0 > y_3^0 = -1.
As (27) does not hold for i = 3, we are again in case 6a, and we cancel v_j^0 for j ∈ N_0^2, i.e., v_1^0 and v_4^0.
As (27) does not hold for any k such that N_k^2 ≠ ∅, the algorithm has terminated. The optimal solution obtained for the restated problem is u^2 = u(3, 2), with
x_2^2 = x_3^2 = 1, x_1^2 = x_4^2 = x_5^2 = 0, y_1^2 = 0, y_2^2 = 3, y_3^2 = 0,
and z_2 = 17.
The corresponding optimal solution to the initial problem is
x_1' = x_2' = x_3' = x_4' = 1, x_5' = 0,
the value of the objective function being c'x' = 9.



Example 2. (Problem 6 of Tableau I.)
We shall consider the following problem:
10x_1' - 7x_2' + x_3' - 12x_4' + 2x_5' + 8x_6' - 3x_7' - x_8' + 5x_9' + 3x_10' = max,
3x_1' + 12x_2' - 8x_3' - x_4' - 7x_9' + 2x_10' ≥ -8,
x_2' + 10x_3' + 5x_5' - x_6' + 7x_7' + x_8' ≤ 13,
-5x_1' - 3x_2' + x_3' - 2x_4' - x_10' = -6,
-4x_3' + 2x_4' - 5x_6' - x_7' + 9x_8' - 2x_9' ≥ -8,
-9x_2' + 12x_4' - 7x_5' + 6x_6' - 2x_8' - 15x_9' - 3x_10' ≥ -12,
8x_1' + 5x_2' - 2x_3' - 7x_4' + x_5' - 5x_7' + 10x_9' ≤ 16,
x_j' = 0 or 1. (j = 1, ..., 10)
After replacing the equation in the above set through two inequalities, multiplying by -1 the objective function and all inequalities of the form ≥, and setting
x_j = x_j', (j = 2, 4, 7, 8)
x_j = 1 - x_j', (j = 1, 3, 5, 6, 9, 10)
we obtain the restated form of the problem, corresponding to P in the second section, where
c = (c_1, ..., c_10) = (10, 7, 1, 12, 2, 8, 3, 1, 5, 3)
and
A^T =
  3    0    5   -5    0    0   -8
-12    1   -3    3    0    9    5
 -8  -10   -1    1   -4    0    2
  1    0   -2    2   -2  -12   -7
  0   -5    0    0    0   -7   -1
  0    1    0    0   -5    6    0
  0    7    0    0    1    0   -5
  0    1    0    0   -9    2    0
 -7    0    0    0   -2  -15  -10
  2    0    1   -1    0   -3    0

The application of the algorithm may be followed on Tableau III. Numbering of the v_j^s shows again the order of their cancellation.
The optimal solution is u^2 = u(9, 3), with
y_1^2 = 13, y_2^2 = 9, y_3^2 = y_4^2 = 0, y_5^2 = 3, y_6^2 = 8, y_7^2 = 7, and z_2 = 6.
The corresponding optimal solution to the initial problem is thus
x_1' = x_5' = x_6' = x_10' = 1, x_2' = x_3' = x_4' = x_7' = x_8' = x_9' = 0;
the value of the objective function being c'x' = 23.
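As a quick consistency check (ours; b is read off row 1 of Tableau III, and the columns a_9 and a_3 off the rows of A^T above), the slacks of u^2 = u(9, 3) can be recomputed directly:

    b  = [-2, -1, -1, 1, -3, -7, -1]
    a9 = [-7,  0,  0, 0, -2, -15, -10]
    a3 = [-8, -10, -1, 1, -4,  0,   2]
    y = [bi - u - w for bi, u, w in zip(b, a9, a3)]
    print(y)  # [13, 9, 0, 0, 3, 8, 7] -- all nonnegative, and z = 5 + 1 = 6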



TABLEAU III

Row no. | s, J_s, z_s |                     | i = 1, ..., 7 (a dot marks an entry not needed) | C^s; D_s; E_s
1  | s=0, J_0 = ∅, z_0 = 0 | y_i^0            | -2, -1, -1, 1, -3, -7, -1 | ∅; ∅; ∅
2  |                       | Σ_{j∈N_0} a⁻_ij  | -27, -15, -6, ., -22, -37, -31 |
3  | j=1  | y_i^0 - a_i1   | -5, -1, -6, 6, -3, -7, 7    | v_1^0 = -22 (3)
4  | j=2  | y_i^0 - a_i2   | 10, -2, 2, -2, -3, -16, -6  | v_2^0 = -29 (4)
5  | j=3  | y_i^0 - a_i3   | 6, 9, 0, 0, 1, -7, -3       | v_3^0 = -10 (15)
6  | j=4  | y_i^0 - a_i4   | -3, -1, 1, -1, -1, 5, 6     | v_4^0 = -6 (5)
7  | j=5  | y_i^0 - a_i5   | -2, 4, -1, 1, -3, 0, 0      | v_5^0 = -6 (12)
8  | j=6  | y_i^0 - a_i6   | -2, -2, -1, 1, 2, -13, -1   | v_6^0 = -19 (6)
9  | j=7  | y_i^0 - a_i7   | -2, -8, -1, 1, -4, -7, 4    | v_7^0 = -22 (16)
10 | j=8  | y_i^0 - a_i8   | -2, -2, -1, 1, 6, -9, -1    | v_8^0 = -15 (17)
11 | j=9  | y_i^0 - a_i9   | 5, -1, -1, 1, -1, 8, 9      | v_9^0 = -3 (1)
12 | j=10 | y_i^0 - a_i10  | -4, -1, -2, 2, -3, -4, -1   | v_10^0 = -15 (18)
13 | s=1, J_1 = {9}, z_1 = 5 | y_i^1          | 5, -1, -1, 1, -1, 8, 9 | {9}; ∅; {1, 7, 10}
14 |                       | Σ_{j∈N_1} a⁻_ij  | ., -15, -6, ., -20, ., . |
15 | j=2  | y_i^1 - a_i2   | 17, -2, 2, -2, -1, -1, 4    | v_2^1 = -6 (7)
16 | j=3  | y_i^1 - a_i3   | 13, 9, 0, 0, 3, 8, 7        | v_3^1 = 0 (2)
17 | j=4  | y_i^1 - a_i4   | 4, -1, 1, -1, 1, 20, 16     | v_4^1 = -2 (8)
18 | j=5  | y_i^1 - a_i5   | 5, 4, -1, 1, -1, 15, 10     | v_5^1 = -2 (9)
19 | j=6  | y_i^1 - a_i6   | 5, -2, -1, 1, 4, 2, 9       | v_6^1 = -3 (10)
20 | j=8  | y_i^1 - a_i8   | 5, -2, -1, 1, 8, 6, 9       | v_8^1 = -3 (11)
21 | s=2, J_2 = {9, 3}, z_2 = 6* | y_i^2      | 13, 9, 0, 0, 3, 8, 7 |
22 |                       | Σ_{j∈N_0^2} a⁻_ij | -8, -15, -1, ., -13, -10, -6 |
23 | s=3, J_3 = {5}, z_3 = 2 | y_i^3          | -2, 4, -1, 1, -3, 0, 0 | {1, 2, 4, 5, 6, 9}; ∅; {7, 10}
24 |                       | Σ_{j∈N_3} a⁻_ij  | -8, ., -1, ., -13, ., . |
25 | j=3  | y_i^3 - a_i3   | 6, 14, 0, 0, 1, 0, -2       | v_3^3 = -2 (13)
26 | j=8  | y_i^3 - a_i8   | -2, 5, -1, 1, 6, -2, 0      | v_8^3 = -5 (14)
27 | s=4, J_4 = {5, 3}, z_4 = 3 | y_i^4       | 6, 14, 0, 0, 1, 0, -2 | {1, 2, 3, 4, 5, 6, 9}; {7, 10}; {8}
28 |                       | Σ_{j∈N_3^4} a⁻_ij | ., ., ., ., -9, ., . |
29 |                       | Σ_{j∈N_0^4} a⁻_ij | -8, -10, -1, ., -13, -3, -5 |

Example 3. (Problem 4 of Tableau I.)
The following is an ill-behaved problem (with no feasible solution):
4x_1 + 2x_2 + x_3 + 5x_4 + 3x_5 + 6x_6 + x_7 + 2x_8 + 3x_9 = min,
3x_1 + 5x_2 - 2x_3 - x_4 - x_6 - 4x_8 + 2x_9 ≤ -1,
6x_1 - 2x_2 - 2x_4 + 2x_5 - 4x_6 + 3x_7 ≤ -3,
5x_2 + 3x_4 - 3x_5 + 6x_6 - x_7 - 2x_9 ≤ 2,
-5x_1 - 4x_2 + x_3 + 5x_6 - 2x_7 + x_8 - x_9 ≤ -8.

We do not reproduce the computations in detail, but the application of the algorithm may be followed in Tableau IV, which shows the sequence of solutions and of the steps used at each iteration.

TABLEAU IV

s  | J_s                    | Sequence of steps || s  | J_s           | Sequence of steps
0  | ∅                      | A    || 16 | 3, 9          | A
1  | 4                      | A    || 17 | 3, 9, 6       | A
2  | 4, 9                   | A    || 18 | 3, 9, 6, 7    | B
3  | 4, 9, 8                | A    || 19 | 3, 9, 6, 7, 2 | C(0)
4  | 4, 9, 8, 7             | A    || 20 | 3, 9, 6,      | C(2)
5  | 4, 9, 8, 7, 2          | B    || 21 | 3, 7          | B
6  | 4, 9, 8, 7, 2, 3, 5, 6 | C(1) || 22 | 3, 7, 2, 6    | C(0)
7  | 4, 9, 8, 2             | C(1) || 23 | 3, 1, 2       | C(0)
8  | 4, 9, 3                | A    || 24 | 7             | B
9  | 4, 9, 3, 7             | A    || 25 | 7, 2, 6       | C(0)
10 | 4, 9, 3, 7, 6          | C(1) || 26 | 8             | A
11 | 4, 9, 3, 2             | C(2) || 27 | 8, 6          | A
12 | 4, 7                   | A    || 28 | 8, 6, 9       | A
13 | 4, 7, 6                | C(1) || 29 | 8, 6, 9,      | C(3)
14 | 4, 1                   | C(1) || 30 | 6             | A
15 | 3                      | A    || 31 | 6, 9          | D(2)
   |                        |      || Stop

The symbols in the Tableau indicate the following sequences of steps:
A = 1b, 2b, 3b, 8;
B = 1b, 2b, 3c, 4a;
C(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8, the pair 5b, 6a (2k steps in all) being repeated k times;
D(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5a, with the same convention.
Example 4. (Problem 9 of Tableau I.)
Another ill-behaved problem (with only one feasible solution) is the following:
5x_1 + x_2 + 3x_3 + 2x_4 + 6x_5 + 4x_6 + 7x_7 + 2x_8 + 4x_9 + x_10 + x_11 + 5x_12 = min,
-x_1 + 3x_2 - 12x_3 - x_5 + 7x_6 - x_7 + 3x_10 - 5x_11 - x_12 ≤ -6,
3x_1 - 7x_2 + x_4 + 6x_5 ≤ 1,
11x_1 + x_3 - 7x_4 - x_6 + 2x_7 + x_8 - 5x_9 + 9x_11 ≤ -4,
5x_2 + 6x_3 - 12x_5 + 7x_6 + 3x_8 + x_9 - 8x_10 + 5x_12 ≤ 8,
-7x_1 - x_2 - 5x_3 + 3x_4 + x_5 - 8x_6 - 2x_8 + 7x_9 + x_10 ≤ -7,
-2x_1 - 4x_4 - 3x_7 - 5x_8 - x_9 + x_11 + x_12 ≤ -4.

Tableau V shows the sequence of solutions and of the steps used at each iteration.

TABLEAU V

s  | J_s                 | Sequence of steps || s  | J_s                 | Sequence of steps
0  | ∅                   | A    || 21 | 12, 4, 8            | B
1  | 3                   | A    || 22 | 12, 4, 8, 2, 11     | A
2  | 3, 4                | A    || 23 | 12, 4, 8, 2, 11, 10 | F(0)
3  | 3, 4, 8             | A    || 24 | 12, 4, 11           | C(1)
4  | 3, 4, 8, 10         | A    || 25 | 12, 8               | A
5  | 3, 4, 8, 10, 12*    | E(0) || 26 | 12, 8, 9            | C(
6  | 3, 4, 8, 10, 2      | F(0) || 27 | 12, 7               | F(1)
7  | 3, 4, 8, 10, 6      | F(1) || 28 | 8                   | A
8  | 3, 4, 12            | C(0) || 29 | 8, 4                | A
9  | 3, 4, 6             | A    || 30 | 8, 4, 7             | C(0)
10 | 3, 4, 6, 10         | A    || 31 | 8, 4, 6             | H(0)
11 | 3, 4, 6, 10, 11     | F(1) || 32 | 8, 4, 1, 2          | C(1)
12 | 3, 4, 2             | A    || 33 | 8, 7                | C(0)
13 | 3, 4, 2, 10         | A    || 34 | 8, 2                | C(0)
14 | 3, 4, 2, 10, 1      | F(0) || 35 | 8, 9                | A
15 | 3, 4, 2, 5          | F(1) || 36 | 8, 9, 6             | C(2)
16 | 3, 4, 1             | F(0) || 37 | 4                   | A
17 | 3, 8                | G(0) || 38 | 4, 6                | C(1)
18 | 3, 7                | C(1) || 39 | 7                   | D(0)
19 | 12                  | A    || Stop
20 | 12, 4               | A    ||

The symbols A, B, C(k), and D(k) are used as in Tableau IV, while the other sequences of steps are:
E(k) = 1a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8;
F(k) = 1b, 2a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8;
G(k) = 1b, 2b, 3c, 4b, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8;
H(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5b, 6c, 7a;
in each case the pair 5b, 6a (2k steps in all) is repeated k times.

The optimal (in this case the only feasible) solution is the starred one, i.e.,
x_j = 1, (j = 3, 4, 8, 10, 12)
x_j = 0. (j = 1, 2, 5, 6, 7, 9, 11)

ACKNOWLEDGMENTS
I AM indebted to Prof. William W. Cooper, as well as to Fred Glover and to Stanley Zionts, for comments and suggestions that helped to improve this article. I also wish to acknowledge the help of Elena Marinescu, who carried out the computations for the forest-management problem discussed in the sixth section.

REFERENCES
1. G. B. Dantzig, "Discrete Variable Extremum Problems," Opns. Res. 5, 266-277 (1957).
2. H. M. Markowitz and A. S. Manne, "On the Solution of Discrete Programming Problems," Econometrica 25, 84-110 (1957).
3. K. Eisemann, "The Trim Problem," Management Sci. 3, 279-284 (1957).
4. G. B. Dantzig, "On the Significance of Solving Linear Programming Problems with Some Integer Variables," Econometrica 28, 30-44 (1960).
5. A. Charnes and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, Wiley, New York, 1961.
6. A. Ben-Israel and A. Charnes, "On Some Problems of Diophantine Programming," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 4, 215-280 (1962).
7. M. Simonnard, Programmation linéaire, Dunod, Paris, 1962.
8. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, 1963.
9. E. Balas, "Linear Programming with Zero-One Variables" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
10. ——, "Mathematical Programming in Forest Management" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
11. Gh. Mihoc and E. Balas, "The Problem of Optimal Timetables," Revue de Mathématiques Pures et Appliquées 10 (1965).
12. R. E. Gomory, "Outline of an Algorithm for Integer Solutions to Linear Programs," Bull. Am. Math. Soc. 64, 3 (1958).
13. ——, "An All-Integer Programming Algorithm," in J. F. Muth and G. L. Thompson (eds.), Industrial Scheduling, Chap. 13, Prentice-Hall, 1963.
14. ——, "An Algorithm for Integer Solutions to Linear Programs," in R. L. Graves and P. Wolfe (eds.), Recent Advances in Mathematical Programming, pp. 269-302, McGraw-Hill, New York, 1963.
15. G. B. Dantzig, "Note on Solving Linear Programs in Integers," Naval Res. Log. Quart. 6, 75-76 (1959).
16. R. E. Gomory and A. J. Hoffman, "On the Convergence of an Integer-Programming Process," Naval Res. Log. Quart. 10, 121-124 (1963).
17. E. M. L. Beale, "A Method of Solving Linear Programming Problems When Some but Not All of the Variables Must Take Integral Values," Statistical Techniques Research Group, Technical Report No. 19, Princeton University, 1958.
18. A. H. Land and A. G. Doig, "An Automatic Method of Solving Discrete Programming Problems," Econometrica 28, 497-520 (1960).
19. J. F. Benders, A. R. Catchpole, and L. C. Kuiken, "Discrete-Variable Optimization Problems," Paper presented to the Rand Symposium on Mathematical Programming, Santa Monica, 1959.
20. P. M. J. Harris, "The Solution of Mixed Integer Linear Programs," Opnl. Res. Quart. 15, 117-133 (1964).
21. G. L. Thompson, "The Stopped Simplex Method, Part I," Revue Française de Recherche Opérationnelle 8, 159-182 (1964).
22. ——, "The Stopped Simplex Method, Part II," Revue Française de Recherche Opérationnelle 9 (1965).
23. P. L. Ivanescu, "Programmation polynomiale en nombres entiers," Comptes Rendus de l'Académie des Sciences (Paris) 257, 424-427 (1963).
24. F. Glover, "A Bound Escalation Method for the Solution of Integer Programs," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 6 (1964).
25. S. E. Elmaghraby, "An Algorithm for the Solution of the 'Zero-One' Problem of Integer Linear Programming," Department of Industrial Administration, Yale University, May 1963.
26. W. Szwarc, "The Mixed Integer Linear Programming Problem When the Variables are Zero or One," Carnegie Institute of Technology, Graduate School of Industrial Administration, May 1963.
27. F. Lambert, "Programmes linéaires mixtes," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 2, 47-126 (1960).
28. S. Vajda, Mathematical Programming, Addison-Wesley, 1961.
29. R. Fortet, "Applications de l'algèbre de Boole en recherche opérationnelle," Revue Française de Recherche Opérationnelle 4, 17-25 (1960).
30. P. Camion, "Une méthode de résolution par l'algèbre de Boole des problèmes combinatoires où interviennent des entiers," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 2, 234-289 (1960).
31. E. Balas, "Un algorithme additif pour la résolution des programmes linéaires en variables bivalentes," Comptes Rendus de l'Académie des Sciences (Paris) 258, 3817-3820 (1964).
32. ——, "Extension de l'algorithme additif à la programmation en nombres entiers et à la programmation non linéaire," Comptes Rendus de l'Académie des Sciences (Paris) 258, 5136-5139 (1964).
33. F. RADÓ, "Linear Programming with Logical Conditions" (in Rumanian), Comunicările Academiei RPR 13, 1039-1041 (1963).
34. J. D. C. LITTLE, K. G. MURTY, D. W. SWEENEY, AND C. KAREL, "An Algorithm for the Traveling Salesman Problem," Opns. Res. 11, 972-989 (1963).
35. P. BERTHIER AND PH. T. NGHIEM, Résolution de problèmes en variables bivalentes (Algorithme de Balas et procédure S.E.P.), Note de travail no. 33, Société d'Économie et de Mathématique Appliquées, Paris, 1965.

A NOTE ON THE ADDITIVE ALGORITHM OF BALAS†

Fred Glover
Carnegie Institute of Technology, Pittsburgh, Pa.
and
Stanley Zionts
Carnegie Institute of Technology and U. S. Steel Corp. Applied Research Laboratory, Monroeville, Pa.
(Received December 28, 1964)

IN THE preceding paper EGON BALAS presents an interesting combinatorial approach to solving linear programs with zero-one variables. The method is essentially a tree-search algorithm that uses information generated in the search to exclude portions of the tree from consideration. The purpose of this note is: (1) to propose additional tests‡ meant to increase the power of Balas' algorithm by reducing the number of possible solutions examined in the course of computation; and (2) to propose an application for which the algorithm appears particularly well-suited. An acquaintance with Balas' paper is assumed in the discussion that follows.

† The authors originally refereed Mr. Balas' paper for Operations Research. With their referee's report they submitted this manuscript extending some of his results.
‡ The additional tests proposed in this note reduce the number of solutions to be examined under the additive algorithm, at the expense of an increase in the amount of computation at each iteration. While we conjecture that on the balance it is worthwhile introducing these tests, this is a matter to be decided on the basis of computational experience.
First consider ways of reducing the number of solutions examined by Balas' method. In one approach to this objective, Balas defines the set $D_s$ that (using his notation and equation numbers) consists of those $j \in (N - C_s)$ such that, if $a_j$ were introduced into the basis, the value of the objective function would equal or exceed the best value ($z^{*(s)}$) already obtained. It then follows immediately that the variables associated with elements of $D_s$ may be ignored in seeking an improving solution along the current branch of the solution tree. However, $D_s$ can be fruitfully enlarged by using a slightly less immediate criterion for inclusion, specified as follows. Each $j$ in $N_s$ is first examined, in any desired sequence, to see if $J_s \cup \{j\}$ gives a feasible solution. (Such an examination will sometimes have to be made for a number of these $j$ in any case.) If feasibility is established for any $J_s \cup \{j\}$, a new improving solution is obtained that may be handled in the usual fashion by the algorithm. But in the more probable event that $J_s \cup \{j\}$ fails to satisfy one of the constraints, say constraint $i$ ($a_{ij} > y_i^s$), consider the objective function coefficient $c_h = \min[c_p \mid p \in (N_s - \{j\})$ and $a_{ip} < 0]$. By referring to Balas' definition of $N_s$ it can readily be seen that $D_s$ may be enlarged to include $j$ if the sum $c_h + c_j$, when added to the current objective function value $z_s$, will produce a value at least as great as $z^{*(s)}$.† Similarly, we may enlarge the set $D_k^s$ in a corresponding manner to reduce the number of variables that need to be considered after backtracking with the algorithm. A sketch of the enlarged test follows.
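To make the enlarged criterion concrete, the following sketch restates it in code. It is our illustration only: the array names, the function signature, and the slack convention (constraint $i$ is violated by $J_s \cup \{j\}$ exactly when $a_{ij} > y_i^s$) are assumptions for exposition, not constructs taken from Balas' paper.

```python
def enlarge_D_s(c, A, y, z_s, z_star, N_s):
    """Hedged sketch of the enlarged inclusion test for D_s.

    c[j]    -- objective coefficients (assumed nonnegative)
    A[i][j] -- constraint coefficients
    y[i]    -- current slack of constraint i at iteration s (y_i^s)
    z_s     -- current objective value; z_star -- best value z*(s) so far
    N_s     -- variables neither fixed nor excluded on this branch
    Returns indices that may be added to D_s, plus any j for which
    J_s U {j} turned out to be feasible (a new improving solution).
    """
    extra_D, improving = set(), []
    for j in N_s:
        violated = [i for i in range(len(y)) if A[i][j] > y[i]]
        if not violated:                      # J_s U {j} is feasible:
            improving.append(j)               # handle in the usual fashion
            continue
        i = violated[0]                       # an unsatisfied constraint i
        # c_h: cheapest other candidate with a negative coefficient in row i
        helpers = [c[p] for p in N_s if p != j and A[i][p] < 0]
        if not helpers or z_s + c[j] + min(helpers) >= z_star:
            extra_D.add(j)                    # j cannot yield an improvement
    return extra_D, improving
```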
Another means that can be employed to sharpen the efficiency of the algorithm is to include an additional relation at Steps 3 and 6 of the algorithm. To understand the function of these proposed relations, it is useful to review the relations (22) and (27) that Balas includes at these steps. For any unsatisfied constraint ($y_i^s$ or $y_i^k < 0$), the additive algorithm instructs by means of (22) and (27) to determine whether that constraint can be satisfied by setting all nonexcluded variables with negative coefficients equal to one. If not, since the constraint represents a 'less than or equal to' relation, obviously there is no improving solution that is feasible along the present branch of the tree, and the algorithm backtracks appropriately. But we note that it is entirely possible for the tests embodied in (22) and (27) to be passed only at the expense of forcing the objective function too high. Restricting attention for the moment to Step 3, we specify a sufficient (but not necessary) condition to detect whether such a situation obtains. If there exists an $i$ such that $y_i^s < 0$ and $y_i^s(c_j/a_{ij}) > z^{*(s)} - z_s$ for all $j \in N_s$ ($a_{ij} < 0$), then there are no feasible solutions left along the current branch of the tree that will improve on the best already found. To prove this we assume that an improving solution does exist. Then for any such solution we have $\sum a_{ij} \le y_i^s$, where the sum is taken over those negative $a_{ij}$ such that $j \in N_s$ and $a_j$ is in the basis for the specified solution. Then if $\sum c_j$ is restricted to these same $j$, the current objective function value $z_s$ plus $\sum c_j$ yields a lower bound on the objective function value of the improving solution. But by the foregoing we have $z_s + \sum c_j = z_s + \sum a_{ij}(c_j/a_{ij}) > z_s + \sum a_{ij}(z^{*(s)} - z_s)/y_i^s \ge z^{*(s)}$, and the assertion is proved by contradiction. Similar inferences may be drawn relative to the conditions encountered at Step 6. The wording of the expanded instruction at this step may then be: "For those $i$ such that $y_i^k < 0$, check the relation (27a) $y_i^k(c_j/a_{ij}) > z^{*(s)} - z_k$ for all $j \in N_k^s$, and for $k = k_1$. If (27a) holds for any $i$, carry out the procedure indicated when (27) does not hold for some $i$."‡
† If $c_h$ is undefined, it is the same as failing to satisfy relation (22) in Step 3 of the additive algorithm, which we discuss below. Broader criteria for inclusion in $D_s$ may clearly be established by considering whether $J_s \cup \{j\} \cup \{h\}$ is feasible, but at a computational cost that probably outweighs the gains to be derived. In a contrasting vein, a more expedient, but less restrictive, approach would be to use 1 in place of $c_h$, provided the $c_j$ consist of positive integers.
‡ For a more general test which becomes quite powerful in a somewhat different combinatorial approach than employed by Balas, see reference 5. For applications of nonbinding constraint elimination and extraneous variable elimination to mathematical programming in general, see reference 6.
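Test (27a), as reconstructed above, can be stated compactly in the same illustrative style as before; the data layout is again our assumption, with y playing the role of $y^s$ at Step 3 and z_star standing for the incumbent value $z^{*(s)}$.

```python
def relation_27a_backtrack(c, A, y, z_s, z_star, N_s):
    """Hedged sketch of the proposed test: returns True when some
    unsatisfied constraint i certifies that no improving feasible
    solution remains on the current branch, so the algorithm should
    backtrack exactly as if relation (22) had failed."""
    for i, y_i in enumerate(y):
        if y_i >= 0:
            continue                              # constraint i is satisfied
        neg = [j for j in N_s if A[i][j] < 0]     # variables that could help
        # (27a): y_i * (c_j / a_ij) > z*(s) - z_s for every helping j
        if neg and all(y_i * (c[j] / A[i][j]) > z_star - z_s for j in neg):
            return True
    return False
```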

We turn now to the issue of generating and searching the solution tree. First we note that for many problems a feasible solution is known in advance. This solution may be used to establish a starting value for $z^{*(s)}$ other than infinity, thereby expediting convergence to the optimum. But such a solution may also be used in another way. If it is suspected that an appreciable fraction of the variables for the feasible solution coincide in value with those for some optimal solution, it may be useful to employ a two-stage algorithm that treats the ('primary') variables set at unity in the feasible solution differently from the ('secondary') variables set equal to zero. In particular, it would seem desirable for such an algorithm to be designed to dispose rapidly of various 0-1 assignments to the primary variables in the first stage and then to apply the additive algorithm to the secondary variables in the second stage. Though it is beyond the scope of this note to describe such a procedure in detail, we remark that it is possible to use a simplified bookkeeping scheme for the first stage (consisting of a single vector each of whose components assumes only three values) so that with slight modifications the tests described above may still be applied to restrict the range of 0-1 assignments necessary for consideration (see reference 5); a sketch of one such scheme follows.
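Purely by way of illustration, and with every name below our own invention rather than anything specified in the note, the single three-valued vector might be realized as follows.

```python
# Hypothetical realization of the three-valued bookkeeping vector for the
# first stage: each component is fixed at 0, fixed at 1, or still free on
# the branch being explored.
FIXED0, FIXED1, FREE = 0, 1, 2

def initial_state(feasible, primary, n):
    """Primary variables start at their values in the known feasible
    solution; all other components are left FREE for the second stage,
    where the additive algorithm takes over."""
    return [(FIXED1 if feasible[j] == 1 else FIXED0) if j in primary
            else FREE
            for j in range(n)]
```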
An application of interest for Balas' algorithm lies in its potential integration with the GILMORE-GOMORY method for solving the cutting-stock problem.[3,4] While this problem may be given an integer programming formulation, the number of variables, even for a moderate-sized problem, is so large that, in a practical sense, the usual integer programming approach is to no avail. Even the ordinary linear programming methods are not practical. Gilmore and Gomory's method overcomes this difficulty by restricting attention to a very small subset of the variables in order to obtain a starting solution via linear programming. Once a solution is available, a solution to the knapsack problem is used to generate improving variables for the problem.† The solution process then alternates between generating new variables and solving the linear programming problem to see which of the newly generated variables should be included in the solution. When no improving variables can be found the algorithm comes to a halt.
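The alternation just described has the familiar column-generation shape. The following schematic is our paraphrase, not code from references 3-4; both solver callables, their signatures, and the roll-minimizing sign convention (a new pattern improves only if its priced-out value exceeds 1) are assumptions.

```python
def gilmore_gomory_loop(initial_patterns, solve_restricted_lp, solve_knapsack):
    """Schematic column-generation loop for the cutting-stock problem.

    solve_restricted_lp(patterns) -> (solution, dual_prices)
    solve_knapsack(dual_prices)   -> (new_pattern, priced_value)
    """
    patterns = list(initial_patterns)
    while True:
        solution, duals = solve_restricted_lp(patterns)  # restricted master
        pattern, value = solve_knapsack(duals)           # pricing subproblem
        if value <= 1:               # no improving variable can be found:
            return solution          # the algorithm comes to a halt
        patterns.append(pattern)     # add the newly generated variable
```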
Three features of Balas' method should prove useful in this application. First,
Gilmore and Gomory's method does not provide integer solutions; Balas' algorithm
appears very reasonable in this context. Second, except for the first solution of
the integer-program subproblem, a finite starting $z^{*(s)}$ (obtained from the previous
subproblem) is available. Such solutions can be exploited as suggested above.
Third, at any stage of the Gilmore-Gomory method it is necessary to consider
only those solutions involving at least one of the newly generated variables, a
situation with which Balas' method dovetails rather well.
A significant portion of the cutting-stock problems known to us can be represented using zero-one variables, higher-valued integers of course being represented by sums of zero-one integers. Balas' extension of the additive algorithm to the general integer linear programming problem[2] proposes an improvement in efficiency of such a representation that makes the approach even more attractive.
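The note does not spell out the improved representation, but a standard device of this kind (offered here only as a plausible reading, not as Balas' specific proposal) is the binary expansion of a bounded integer variable:

$$x = \sum_{k=0}^{m-1} 2^k x_k, \qquad x_k \in \{0, 1\}, \qquad 0 \le x \le 2^m - 1,$$

so that an integer variable bounded by $u$ requires only $\lceil \log_2(u+1) \rceil$ zero-one variables rather than the $u$ variables of the naive sum $x = x_1 + x_2 + \dots + x_u$.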
† In the first version of their cutting-stock method, Gilmore and Gomory solve the knapsack problem by dynamic programming; in the second, they propose a special algorithm for the knapsack problem which proves to be substantially superior to dynamic programming.

REFERENCES
1. EGON BALAS, "An Additive Algorithm for Solving Linear Programs with Zero-One Variables," Opns. Res. 13, 517-546 (1965).
2. ——, "Extension de l'algorithme additif à la programmation en nombres entiers et à la programmation non linéaire," Comptes Rendus de l'Académie des Sciences (Paris) 258, 5136-5139 (1964).
3. P. C. GILMORE AND R. E. GOMORY, "A Linear Programming Approach to the Cutting-Stock Problem," Opns. Res. 9, 849-859 (1961).
4. —— AND ——, "A Linear Programming Approach to the Cutting-Stock Problem-Part II," Opns. Res. 11, 863-888 (1963).
5. FRED GLOVER, "A Multiphase-Dual Algorithm for the Zero-One Integer Programming Problem" (forthcoming).
6. S. ZIONTS, G. L. THOMPSON, AND F. M. TONGE, "Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems," Carnegie Institute of Technology, Graduate School of Industrial Administration, Pittsburgh, Pennsylvania, November 1964.
