An Additive Algorithm for Solving Linear Programs with Zero-One Variables
Operations Research
July-August 1965
Egon Balas
Centre of Mathematical Statistics, Rumanian Academy, Bucharest
(Received March 2, 1964)
ing to a certain rule, a vector a_{j1} such that a_{i1 j1} < 0, to be introduced into the basis. But instead of introducing a_{j1} in place of a vector e_i in the basis, as would be the case in the dual simplex method, we add to P0 the constraint x_{j1} = 1, in the slightly modified form -x_{j1} + y_{m+1} = -1, where y_{m+1} is an artificial variable. Thus we obtain the problem P1 as defined above, with J1 = {j1}, i.e., the problem consisting of (1), (2), (3a), (4), and the additional constraint

x_{j1} = 1. (3b1)

It is easy to see that the set x_j = 0 (j∈N), y_i = b_i (i∈M), y_{m+1} = -1 is a dual-feasible solution to P1. In the extended basis I^(m+1) = (e_i) (i = 1, ..., m+1), the (m+1)st unit vector e_{m+1} corresponds to y_{m+1}. It is in the place of this unit vector e_{m+1} that we introduce a_{j1}, and thus x_{j1} takes the value 1 in the new solution to P1, which obviously remains dual-feasible. As the artificial variable y_{m+1}, which becomes 0, plays no role henceforth, it may be abandoned and the new solution may be written u^1 = (x^1, y^1) = (x_1^1, ..., x_n^1, y_1^1, ..., y_m^1).
[Figure 1 appeared here: the tree of solutions generated by the algorithm, with branches carrying the assignments x_j = 1 or x_j = 0 for the indicated index sets; the drawing itself is not recoverable from the scan.]

Figure 1
of the algorithm (represented by the circles at the end of the thick lines)
make sure that no feasible solution 'better' than the one obtained exists
beyond them.
Of course, under such circumstances, the efficiency of the algorithm
depends largely on the efficiency of these 'stop signals,' i.e., on the number
of branches that need not be followed. As will be shown later in greater
detail, in most cases the algorithm succeeds in reducing the subset of solu-
tions to be tested to a relatively small fraction of the complete set.
x_j^s = { 1, (j∈J_s); 0, [j∈(N-J_s)] } (5)

then y_i^s = b_i - Σ_{j∈J_s} a_ij. (i∈M) (6)
At each iteration s+1, the new vector to be introduced into the basis will be chosen from a subset of {a_j | j∈N}, called the set of improving vectors for the solution u^s. We shall denote by N_s the corresponding set of indices j (of course, N_s ⊆ N), and we shall define it more precisely below.†
† Throughout this paper the symbol ⊆ will be used for inclusion, while ⊂ will stand for strict inclusion.
We now define certain values which will serve as a criterion for the choice of the vector to be introduced into the basis. Thus, for each solution u^s and for each j∈N_s, we define the values

v_j^s = Σ_{i∈M_j^s} (y_i^s - a_ij), (j∈N_s; M_j^s ≠ ∅) (11)
v_j^s = 0, (j∈N_s; M_j^s = ∅)

where M_j^s = {i | y_i^s - a_ij < 0}. (12)

The meaning of these values is obvious: v_j^s is the sum of the negative components of the solution vector u^{s+1} which can be obtained from the solution vector u^s by setting J_{s+1} = J_s ∪ {j}.
As already said, the values v_j^s are to serve as a criterion for the choice
of the new vector to be introduced into the basis. This criterion has been
found efficient; however, it must be emphasized that it is empirically chosen,
being neither compelling, nor essential for the additive algorithm, which
may do as well with some other criterion. The choice of the criterion for
introducing a new vector into the basis may of course heavily affect the
efficiency of the algorithm, but has no influence on its finiteness.
Under the additive algorithm, the values v_j^k assigned to a certain solution u^k are successively cancelled in the subsequent iterations according to certain rules. Let C_k^s (k ≤ s) stand for the set of those j for which the values v_j^k assigned to the solution u^k have been cancelled before the solution u^s has been obtained. (C_k^k = ∅ by definition.)
The set of those j for which the values v_j^p assigned to any one of the solutions u^p such that p < s and J_p ⊆ J_s have been cancelled before obtaining the solution u^s will be denoted

C^s = ∪_{p | J_p ⊆ J_s} C_p^s. (13)
We shall now define for the solution u^s the set of those j∈(N-C^s) such that, if a_j were introduced into the basis so as to yield J_{s+1} = J_s ∪ {j}, the value of the objective function would hit the ceiling for u^s:

D_s = {j | j∈(N-C^s), c_j ≥ z*(s) - z_s}. (14)

Further, the set of those j∈[N-(C^s ∪ D_s)] will be defined such that, if a_j were introduced into the basis so as to yield J_{s+1} = J_s ∪ {j}, no negative y_i^s would be increased in value:

E_s = {j | j∈[N-(C^s ∪ D_s)], y_i^s < 0 ⟹ a_ij ≥ 0}. (15)

We are now in a position to give a proper definition of the set of improving vectors for a solution u^s. It is the set of those a_j for which j belongs to

N_s = N - (C^s ∪ D_s ∪ E_s). (16)

Obviously, N_s = ∅ for any feasible solution u^s.
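As a concrete illustration, the sets D_s and E_s of (14)-(15), the improving set N_s of (16), and the values v_j^s of (11)-(12) can be computed as in the following sketch (a 0-indexed Python rendering of our own; the function and variable names are not the paper's):

```python
# Illustrative sketch of (11)-(16), in 0-indexed Python (names are ours).
# Inputs: costs c, constraint matrix A, current slacks y, the set of free
# indices, current objective value z, the ceiling z_star, and the cancelled
# set C (playing the role of C^s).

def improving_set(c, A, y, free, z, z_star, C):
    m = len(A)
    # D_s: introducing j would make the objective hit the ceiling z*(s)   (14)
    D = {j for j in free - C if c[j] >= z_star - z}
    # E_s: introducing j would increase no negative slack y_i             (15)
    E = {j for j in free - C - D
         if all(A[i][j] >= 0 for i in range(m) if y[i] < 0)}
    # N_s: the indices of the improving vectors                           (16)
    N = free - C - D - E
    # v_j: sum of the negative components of the would-be solution   (11)-(12)
    v = {j: sum(y[i] - A[i][j] for i in range(m) if y[i] - A[i][j] < 0)
         for j in N}
    return N, v
```

With the data of Example 1 of the final section (y = b = (-2, 0, -1), nothing yet cancelled, z* = ∞), this reproduces N_0 = {1, 3, 4} and v_1^0 = -4, v_3^0 = -3, v_4^0 = -5 in the paper's 1-indexed notation.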
Finally, given a pair of solutions u^s and u^k such that u^s = u(j_1, ..., j_r), u^k = u(j_1, ..., j_{r-h}) (1 ≤ h ≤ r; J_k ⊂ J_s), we define the set of improving vectors for the solution u^k left after iteration s. It is the set of those a_j for which j belongs to

N_k^s = N_k - (C_k^s ∪ D_k^s). (18)

The sets defined above by (16) and (18) will play a central role in our algorithm. Whenever a solution u^s is reached, only the improving vectors for that solution are considered for introduction into the basis. Whenever the set of improving vectors for a solution u^s is found to be void, this is to be interpreted as a 'stop signal,' which means that there is no feasible solution u^t such that J_s ⊂ J_t and z_t < z*(s). In such cases we have to take up our procedure from a previous solution u^k, to be identified according to certain rules, and for any such solution only the set of improving vectors for that solution u^k left after iteration s is to be considered for introduction into the basis.
x_j^s = { 1, (j∈J_s); 0, [j∈(N-J_s)] } (20)

y_i^s = b_i - Σ_{j∈J_s} a_ij. (i∈M)
Step 2. Identify the improving vectors for the solution u^s by forming the set N_s as defined by (16).
2a. If N_s = ∅, i.e., there are no improving vectors for u^s, pass to step 5.
2b. If N_s ≠ ∅, pass to
Step 3. Check the relations

Σ_{j∈N_s} a_ij^- ≤ y_i^s, (i | y_i^s < 0) (22)

where the a_ij^- are the negative elements of A.
3a. If there exists i_1∈M for which (22) does not hold, pass to step 5.
3b. If all relations (22) hold as strict inequalities, compute the values v_j^s as defined by (11) and (12) for all j∈N_s, choose j_{s+1} so that

v_{j_{s+1}}^s = max_{j∈N_s} v_j^s, (23)

cancel v_{j_{s+1}}^s, and pass to step 8.
3c. If all relations (22) hold, and there exists a subset M_s of M such that the relations (22) hold as equalities for i∈M_s, pass to
Step 4. Check the relation

Σ_{j∈F_s} c_j < z*(s) - z_s, (24)

where F_s is the set of those j∈N_s for which a_ij < 0 for at least one i∈M_s.
4a. If (24) holds, cancel v_j^s for all j∈F_s (without computing their numerical values). Set J_{s+1} = J_s ∪ F_s, compute the value of the objective function

z_{s+1} = z_s + Σ_{j∈F_s} c_j (25)

and of the slack variables

y_i^{s+1} = y_i^s - Σ_{j∈F_s} a_ij (i∈M) (26)

for the new solution u^{s+1}, and pass to the next iteration (i.e., start again with step 1).
4b. If (24) does not hold, cancel v_j^s for all j∈N_s (without computing their numerical values) and pass to
Step 5. Identify the improving vectors for the solutions u^k (k | J_k ⊂ J_s) left after iteration s, i.e., check the sets N_k^s (k | J_k ⊂ J_s) as defined by (18), in the decreasing order of the numbers k, until either a number k_1 is found such that J_{k_1} ⊂ J_s and N_{k_1}^s ≠ ∅, or N_k^s is found to be void for every k such that J_k ⊂ J_s.
5a. If N_k^s = ∅ for all k such that J_k ⊂ J_s, i.e., there are no improving vectors for any u^k (k | J_k ⊂ J_s), the algorithm has come to an end. In this case, if Z_s = ∅, P has no feasible solution. If Z_s ≠ ∅, then the u^q for which z_q = z*(s) is an optimal solution, and z*(s) is the minimum attained by the objective function for this solution.
If (23) or (28) holds for more than one j = j_{s+1} (let J_max be the set of those j for which it holds), choose j_{s+1} so that c_{j_{s+1}} = min_{j∈J_max} c_j, and if this relation too holds for more than one j, then choose any one of them as j_{s+1}.
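The logic of the steps above can be condensed into a short recursive sketch (ours, in Python). It keeps only the two basic stop signals, the ceiling test and an infeasibility test in the spirit of (22), and replaces the bookkeeping of the sets C, D, E, and N_k^s by plain depth-first enumeration; it assumes the restated form in which all c_j ≥ 0:

```python
# Depth-first implicit enumeration with two stop signals (our condensation of
# the additive algorithm's idea, not its exact bookkeeping):
# (i)  ceiling test: with all c_j >= 0, a partial cost z >= z* cannot improve;
# (ii) infeasibility test (cf. (22)): even setting x_k = 1 for every remaining
#      k with a_ik < 0 cannot raise the slack y_i to a nonnegative value.

def solve(c, A, b):
    m, n = len(A), len(c)
    best = {"z": float("inf"), "x": None}

    def search(j, x, z, y):
        if z >= best["z"]:                                  # stop signal (i)
            return
        for i in range(m):                                  # stop signal (ii)
            if y[i] - sum(min(A[i][k], 0) for k in range(j, n)) < 0:
                return
        if j == n:                                          # all x_j fixed;
            best["z"], best["x"] = z, tuple(x)              # feasible by (ii)
            return
        for value in (1, 0):                                # branch on x_j
            y2 = [yi - A[i][j] * value for i, yi in enumerate(y)]
            search(j + 1, x + [value], z + c[j] * value, y2)

    search(0, [], 0, list(b))
    return best["z"], best["x"]
```

On the restated data of Example 1 in the final section this returns the optimum z = 17 with x_2 = x_3 = 1.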
[A flow chart of the algorithm appeared here, showing the sequence of steps 1-8 and the test 'does a k < k_1 (J_k ⊂ J_s) exist?' with the instruction 'repeat step 5, replacing k_1 by k'; the drawing is not recoverable in detail from the scan.]
5b. If N_k^s ≠ ∅ for k = k_1, pass to
Step 6. Check the relations (22a), where a_ij^- = max_{j∈N} a_ij^-; if all relations (22a) hold, compute the values v_j^s for all j∈N_s, choose j_{s+1} so that

v_{j_{s+1}}^s = max_{j∈N_s} v_j^s, (23)

cancel v_{j_{s+1}}^s and pass to step 8.
3c. If all relations (22) hold, and there exists a subset M' of M such that relations (22a) do not hold for i∈M', pass to step 4.
Completely analogous changes are to be made in steps 6b, 6c.
The application of the algorithm is facilitated through the use of a
tableau of the type presented with the numerical examples of the final
section.
FINITENESS PROOFS
THE ADDITIVE algorithm yields a sequence of solutions u^0, u^1, .... We shall say that a solution u^k is abandoned if we are instructed by the algorithm either to check N_p^s (0 ≤ p < k ≤ s), or to stop (termination).
A central feature of our algorithm, which is at the same time instru-
mental in proving its convergence, may be expressed by Theorem 1.
THEOREM 1. If a solution u^k is abandoned under the additive algorithm, then no feasible solution u^t exists such that J_k ⊂ J_t and z_t < z*(s).
A solution may be abandoned as a consequence of one of the following steps of the algorithm: 1a, 2a, 3a, 4b, 5, 6a, and 7b. In order to prove that the above theorem holds for all these situations, we shall need two lemmas.
First, let us consider the sets C_p^s. Under the rules of our algorithm, there are three circumstances in which a value v_{j*}^p assigned to a solution u^p may have been cancelled before iteration s, i.e., there are three reasons for j* to belong to a set C_p^s:
(a) Relation (27) does not hold for k = p (i.e., j* has been cancelled under step 6a).
(b) A solution u^q (p < q ≤ s) has been obtained from u^p by introducing a_{j*} into the basis (i.e., j* has been cancelled under one of the steps 3b, 4a, 6b, 7a).
(c) If a_{j*} were introduced into the basis, z would hit the ceiling for u^s (i.e., j* has been cancelled under one of the steps 1a, 4b, or 7b).
Let us denote the subsets of indices j corresponding to those v_j^p that
and

C_k^{n+1} = C_k^{n+1}(c) ∪ C_k^{n+1}(b). (38)

Further, by the definition of C^{n+1}(c),

(J_t - J_{n+1}) ∩ { ∪_{p | J_p ⊆ J_t} [C_p^{n+1}(c) - C_p^n(c)] } = ∅. (39)
Thus, from the above relations it follows that
Now, there are two possibilities: either J_{n+1} = J_k ∪ {j_{n+1}}, or J_{n+1} = J_k ∪ F_k. Let us suppose that the first of these two situations holds, and thus C_k^{n+1}(b) = C_k^n(b) ∪ {j_{n+1}}. (A perfectly analogous reasoning is valid for the second situation.)
We have

(J_t - J_{n+1}) ∩ C_k^{n+1}(b) = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ [C_k^n(b) ∪ {j_{n+1}}], (41)

or, as

(J_t - {j_{n+1}}) ∩ {j_{n+1}} = ∅, (42)

(J_t - J_{n+1}) ∩ C_k^{n+1}(b) = (J_t - J_k) ∩ (J_t - {j_{n+1}}) ∩ C_k^n(b). (43)
We shall now show that

(J_t - J_k) ∩ C_k^n(b) = ∅. (44)

Let us suppose the contrary, i.e., that there exists j_1 such that [text missing from the scan] must also exist (i = 2, 3, ...), J_{q_i} being defined analogously to J_{q_1}. As the number of sets C_t(b) ≠ ∅ is finite, this sequence of implications obviously ends in a contradiction that proves the validity of (44).
Relation (34) is thus shown to hold also in case (2), and this completes the proof of Lemma 1.
LEMMA 2. Given two solutions u^s = u(j_1, ..., j_r) and u^k = u(j_1, ..., j_{r-h}) (1 ≤ h ≤ r; J_k ⊂ J_s), such that N_p = C_p^{s+1} (k < p ≤ s), if there exists a feasible solution u^t such that J_k ⊂ J_t and z_t < z*(s), then

(J_t - J_k) ∩ C_k^s = ∅. (53)

Proof. J_k ⊂ J_s guarantees that C_k^s(a) = ∅, while
6a. If (27) does not hold for a certain k, then a solution u^t such as required in the theorem could only exist if

(J_t - J_k) ∩ [N - (N_k^s ∪ E_k)] ≠ ∅. (60)
UNLIKE MOST of the known algorithms for solving linear programs with
integer variables, the additive algorithm attacks directly the linear program
with zero-one variables, and does not require the solution of the correspond-
ing ordinary linear program (without the zero-one constraints).
As has been shown, the only operations required under the algorithm
described above are additions and subtractions. Thus any possibility of
round-off errors is excluded.
The additive algorithm does not impose a heavy burden on the storage
system of a computer. Most of the partial results may be dropped shortly
after they have been obtained.
At this moment, experience with the additive algorithm is at its very beginning.
TABLEAU I

Problem | Variables (n) | Constraints (m) | Iterations
1       | 5  | 2  | 6
2(a)    | 5  | 3  | 3
3       | 7  | 5  | 11
4(a)(b) | 9  | 4  | 31
5       | 10 | 4  | 12
6(a)    | 10 | 7  | 5
7       | 11 | 6  | 21
8       | 11 | 7  | 8
9(a)(c) | 12 | 6  | 39
10      | 14 | 9  | 23
11      | 15 | 12 | 22
The time needed for solving these problems† was on the average 7
minutes for an iteration, i.e., 1-3 hours for each of the problems 3, 4, 5, 7, 8,
10, and 11. Problems 1, 2, and 6 were solved in less than an hour, while
somewhat more than 4 hours were used to solve problem 9.
Any one who has tried to solve by hand a problem of the type and size
discussed here with the cutting-plane technique for integer programming
problems (which in this case has to be combined with bounded-variables
procedures) knows that it takes several times that amount of time, not to
speak about difficulties generated by round-off. Moreover, the solution of
the ordinary linear programs (without the zero-one constraints) correspond-
ing to the above problems would also require an amount of computations
considerably larger than that needed for solving the zero-one problems by
the additive algorithm.
† By hand calculations by a person having no special experience with the additive
algorithm.
The only large problem on which the additive algorithm has so far been
tried was a problem in forest management,[10] with a linear objective func-
tion in zero-one variables subject to a set of linear constraints, and to
another set of conditional ('logical') constraints.† The latter set has been
replaced by an equivalent set of linear constraints including additional
zero-one variables, so that finally a zero-one linear programming problem
emerged with 40 variables and 22 constraints. The number of nonzero
elements in the final coefficient matrix was 140, of which 115 were 1 or -1.
This problem was specially chosen for studying the applicability of the
algorithm to the given type of forest-management problems, and it was so
structured that the optimal solution could be known beforehand. After 35
iterations were made by hand-the average time of an iteration being about
20 minutes-the optimal feasible solution was approximated within 1.4
per cent. We note, however, that this approximation cannot be regarded
as a sign that termination of our algorithm was also near, and we further
note that our coefficient-matrix was relatively sparse-with many 0
elements.
This experiment showed, among other things, the great advantages of
the additive character of the algorithm. For instance, an error discovered
at the 19th iteration could be traced back to the 3rd iteration and corrected
throughout the subsequent solutions in about 2 hours' time.
The following considerations concerning the dependence of the amount
of computations on the size and other characteristics of the problem are
based partly on common-sense examination of the mechanism of the
algorithm, partly on the experience summarized above.
Let n be the number of variables and m the number of constraints.
(a) Amount of computationsneeded for one iteration. The number of
operations needed for checking relations (22) and (27) (steps 3 and 6
respectively), and for computing the values v_j^s (step 3b), depends linearly
on mXn. The number of operations needed to form or check the sets Ns
and Nk' (steps 2, 5), depends linearly on n, while the number of operations
needed to compute and check a new solution (steps 8, 4a, 7a, and 1a) de-
pends linearly on m. Thus the total amount of computations needed for
one iteration depends linearly on a quantity μ situated somewhere between
min[m, n] and m×n.
(b) Number of iterations needed to solve the problem. This obviously
depends first of all on n. As to m, its increase enhances the efficiency of
some of the stop signals (steps 3a and 6a) and thus tends to reduce the
number of iterations. (This is an important advantage of the algorithm.)
The crucial thing about the efficiency of the algorithm is to know how
the number of iterations depends on n. The combinatorial nature of the
algorithm does not necessarily imply an exponential relation for, while the
† Expressing relations of the type ∨, ⟹, and ∧.
set of all solutions from which an optimal feasible one has to be selected
grows exponentially with n, the efficiency of some of the stop signals may
perhaps grow even faster with n. So far the experience summarized above
does not seem to indicate an exponential relation. While in the three
smallest problems of Tableau I (1, 2, and 3) the number of iterations was
0.6-1.6 times n, in the two largest problems (10 and 11) it was 1.5-1.6 times
n. In the largest problem so far solved (11), 22 solutions out of a total of
2^15 = 32,768 had to be tested. But of course this experience is insufficient,
and computer experience with a considerable number of larger problems
will be needed to elucidate this question in the absence of an analytic
proof.
The number of iterations also depends on other characteristics of the problem. In cases where an optimal solution exists in which only a few variables take the value 1, the relatively large size of the sets D_s makes the stop signals especially efficient and thus assures a rapid convergence of the algorithm (see, for instance, problem 6 of Tableau I (example 2 in the final section), where only 5 out of 2^10 = 1,024 solutions had to be tested). On the other hand, if the problem has no feasible solution, all sets D_s are void and the efficiency of the stop signals is reduced. The same holds to a lesser extent for problems with very few feasible solutions. But it should be noted that even in the case of such 'ill-behaved' problems the number of iterations does not become unreasonably large. Thus, problem 4 of Tableau I (example 3 in the final section), with 9 variables and 4 constraints, having no feasible solution, was 'solved' (i.e., the absence of a feasible solution was established) in 31 iterations, while in the worst of the cases met until now, problem 9 of Tableau I (example 4 in the final section), with 12 variables and 6 constraints, having only one feasible solution, was solved by testing 39 out of 2^12 = 4,096 solutions.
NUMERICAL EXAMPLES
Example 1. (Problem 2 of Tableau I.)
Let us consider the following problem:
-5x1' + 7x2' + 10x3' - 3x4' + x5' = min,
-x1' - 3x2' + 5x3' - x4' - 4x5' ≥ 0,
-2x1' - 6x2' + 3x3' - 2x4' - 2x5' ≤ -4,
x2' - 2x3' - x4' + x5' ≤ -2,
x_j' = 0 or 1. (j = 1, ..., 5)
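The reader may wish to confirm the optimum by exhaustive enumeration. The following sketch (ours) uses the problem's restated data, with x_1 = 1 - x_1' and x_4 = 1 - x_4' so that all cost coefficients become nonnegative, giving c = (5, 7, 10, 3, 1), b = (-2, 0, -1), and the matrix A whose transpose is displayed with the computations below:

```python
# Brute-force check (our addition) of Example 1 in its restated form:
# minimize c.x subject to y_i = b_i - sum_j a_ij x_j >= 0, x_j in {0, 1}.
from itertools import product

c = [5, 7, 10, 3, 1]
A = [[-1, 3, -5, -1, 4],     # the three constraint rows of A (restated form)
     [2, -6, 3, 2, -2],
     [0, 1, -2, 1, 1]]
b = [-2, 0, -1]

best_z, best_x = float("inf"), None
for x in product((0, 1), repeat=5):
    y = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
    z = sum(cj * xj for cj, xj in zip(c, x))
    if all(yi >= 0 for yi in y) and z < best_z:
        best_z, best_x = z, x

print(best_z, best_x)   # prints: 17 (0, 1, 1, 0, 0)
```

The minimum 17 is attained at x_2 = x_3 = 1, in agreement with the solution obtained by the algorithm below.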
TABLEAU II

[The working tableau for Example 1 appeared here. Its rows, referenced by number in the text below, record for each iteration s the set J_s, the value z_s, the slacks y_i^s, the quantities y_i^s - a_ij used in computing the values v_j^s, the checks of relations (22) and (27), and the sets C, D, E. The scan is not recoverable in detail.]
In view of the form of Tableau II, the computations are easier to follow
if we work with A^T instead of A (T being the symbol of transposition):

        [-1  2  0]
        [ 3 -6  1]
A^T =   [-5  3 -2]
        [-1  2  1]
        [ 4 -2  1]
We start with the initial solution u^0 = (0, b). The data of this solution, J_0 = ∅, z_0 = 0, and y_1^0 = b_1 = -2, y_2^0 = b_2 = 0, y_3^0 = b_3 = -1, are shown on the left-hand side of row 1 of Tableau II.
To make this illustration easier to follow in Tableau II, cancellation of the values v_j^s is not marked through crossing out, but through numbering
v_1^0 = Σ_{i∈M_1^0} (y_i^0 - a_i1) = (-2+1) + (0-2) + (-1-0) = -4,
v_3^0 = Σ_{i∈M_3^0} (y_i^0 - a_i3) = (0-3) = -3,
v_4^0 = Σ_{i∈M_4^0} (y_i^0 - a_i4) = (-2+1) + (0-2) + (-1-1) = -5.

We have v_{j_1}^0 = max_{j∈N_0} {v_j^0} = v_3^0 = -3; we cancel v_3^0 and pass to
Step 8. J_1 = J_0 ∪ {3} = {3},
z_1 = z_0 + c_3 = 0 + 10 = 10,
y_1^1 = y_1^0 - a_13 = -2 + 5 = 3,
y_2^1 = y_2^0 - a_23 = 0 - 3 = -3,
y_3^1 = y_3^0 - a_33 = -1 + 2 = 1,
(see row 6 of Tableau II).
Iteration 2
Step 1. y_i^1 < 0 for i = 2, so we are in situation 1b.
Step 2. N_1 = N - (C^1 ∪ D_1 ∪ E_1). C^1 = {3}, D_1 = ∅, E_1 = {1, 4} (see row 6 of the Tableau).
N_1 = {1, 2, 3, 4, 5} - ({3} ∪ {1, 4}) = {2, 5}, so we are in case 2b.
Step 3. Checking of the relations (22) for i = 2 is shown in row 7 of the Tableau:

Σ_{j∈N_1} a_2j^- = -6 - 2 = -8 < y_2^1 = -3.

Thus we are in case 3b. Computation of v_j^1 for j∈N_1 is shown in rows 8 and 9 of Tableau II.
v_2^1 = 0 (M_2^1 = ∅),
v_5^1 = Σ_{i∈M_5^1} (y_i^1 - a_i5) = (3-4) + (-3+2) = -2.

We have max_{j∈N_1} v_j^1 = v_2^1 = 0; we cancel v_2^1 and pass to Step 8: J_2 = J_1 ∪ {2} = {3, 2}, z_2 = z_1 + c_2 = 17, y_1^2 = 0, y_2^2 = 3, y_3^2 = 0 (see row 10 of Tableau II). This solution being feasible, we set z*(2) = 17 and pass to
Step 5. We check the set N_k^2 as defined by (18) for k = 1:
N_1^2 = N_1 - (C_1^2 ∪ D_1^2) = {2, 5} - {2} = {5}, so we are in situation 5b.
Step 6. We check the relations (27) for i = 2 (row 11 of Tableau II). As (27) does not hold, we are in case 6a. We cancel v_j^1 for j∈N_1^2, i.e., v_5^1, and return to
Step 5. We check the set N_k^2 for k = 0:
N_0^2 = N_0 - (C_0^2 ∪ D_0^2) = {1, 3, 4} - {3} = {1, 4}, so we are again in case 5b.
Step 6. We check (27) for i = 1, 3 (row 12 of the Tableau):

Σ_{j∈N_0^2} a_1j = -1 - 1 = -2 = y_1^0 = -2,
Σ_{j∈N_0^2} a_3j^- = 0 > y_3^0 = -1.

As (27) does not hold for i = 3, we are again in case 6a, and we cancel v_j^0 for j∈N_0^2, i.e., v_1^0 and v_4^0.
As (27) does not hold for any k such that N_k^2 ≠ ∅, the algorithm has
terminated. The optimal solution obtained for the restated problem is
u^2 = u(3, 2), with

x_2^2 = x_3^2 = 1, x_1^2 = x_4^2 = x_5^2 = 0, y_1^2 = 0, y_2^2 = 3, y_3^2 = 0,

and z_2 = 17.
The corresponding optimal solution to the initial problem is

x_1' = x_2' = x_3' = x_4' = 1, x_5' = 0.
TABLEAU III

[A working tableau of the same form as Tableau II appeared here; the scan is not recoverable in detail.]
TABLEAU IV

s  | J_s                     | Sequence of steps
0  | ∅                       | A
1  | 4                       | A
2  | 4, 9                    | A
3  | 4, 9, 8                 | A
4  | 4, 9, 8, 7              | A
5  | 4, 9, 8, 7, 2           | B
6  | 4, 9, 8, 7, 2, 3, 5, 6  | C(1)
7  | 4, 9, 8, 2              | C(1)
8  | 4, 9, 3                 | A
9  | 4, 9, 3, 7              | A
10 | 4, 9, 3, 7, 6           | C(1)
11 | 4, 9, 3, 2              | C(2)
12 | 4, 7                    | A
13 | 4, 7, 6                 | C(1)
14 | 4, 1                    | C(1)
15 | 3                       | A
16 | 3, 9                    | A
17 | 3, 9, 6                 | A
18 | 3, 9, 6, 7              | B
19 | 3, 9, 6, 7, 2           | C(0)
20 | 3, 9, 6, …              | C(2)
21 | 3, 7                    | B
22 | 3, 7, 2, 6              | C(0)
23 | 3, 1, 2                 | C(0)
24 | 7                       | B
25 | 7, 2, 6                 | C(0)
26 | 8                       | A
27 | 8, 6                    | A
28 | 8, 6, 9                 | A
29 | 8, 6, 9, …              | C(3)
30 | 6                       | A
31 | 6, 9                    | D(2)
Stop
Tableau V shows the sequence of solutions and of the steps used at each
iteration.
TABLEAU V

s  | J_s                  | Sequence of steps
0  | ∅                    | A
1  | 3                    | A
2  | 3, 4                 | A
3  | 3, 4, 8              | A
4  | 3, 4, 8, 10          | A
5  | 3, 4, 8, 10, 12      | E(0)
…  | …                    | …
21 | 12, 4, 8             | B
22 | 12, 4, 8, 2, 11      | A
23 | 12, 4, 8, 2, 11, 10  | F(0)
24 | 12, 4, 11            | C(1)
25 | 12, 8                | A
26 | 12, 8, 9             | C(…)

[The intermediate rows of Tableau V are not recoverable from the scan.]
The symbols A, B, C(k), and D(k) are used as in Tableau IV, while the
other sequences of steps are:

E(k) = 1a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8,
F(k) = 1b, 2a, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8,
G(k) = 1b, 2b, 3c, 4b, 5b, 6a, ..., 5b, 6a, 5b, 6b, 8,
H(k) = 1b, 2b, 3a, 5b, 6a, ..., 5b, 6a, 5b, 6c, 7a,

where in each sequence the segment 5b, 6a, ..., 5b, 6a consists of 2k steps.
The optimal (in this case the only feasible) solution is the starred one,
i.e.,

x_j = { 1, (j = 3, 4, 8, 10, 12); 0. (j = 1, 2, 5, 6, 7, 9, 11) }
ACKNOWLEDGMENTS

I AM indebted to PROF. WILLIAM W. COOPER, as well as to FRED GLOVER and to STANLEY ZIONTS, for comments and suggestions which helped to improve this article. I also wish to acknowledge the help of ELENA MARINESCU, who carried out the computations for the forest-management problem discussed in the sixth section.
REFERENCES

1. G. B. DANTZIG, "Discrete Variable Extremum Problems," Opns. Res. 5, 266-277 (1957).
2. H. M. MARKOWITZ AND A. S. MANNE, "On the Solution of Discrete Programming Problems," Econometrica 25, 84-110 (1957).
3. K. EISEMANN, "The Trim Problem," Management Sci. 3, 279-284 (1957).
4. G. B. DANTZIG, "On the Significance of Solving Linear Programming Problems with Some Integer Variables," Econometrica 28, 30-44 (1960).
5. A. CHARNES AND W. W. COOPER, Management Models and Industrial Applications of Linear Programming, Wiley, New York, 1961.
6. A. BEN-ISRAEL AND A. CHARNES, "On Some Problems of Diophantine Programming," Cahiers du Centre d'Études de Recherche Opérationnelle (Bruxelles) 4, 215-280 (1962).
7. M. SIMONNARD, Programmation linéaire, Dunod, Paris, 1962.
8. G. B. DANTZIG, Linear Programming and Extensions, Princeton University Press, 1963.
9. E. BALAS, "Linear Programming with Zero-One Variables" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
10. ----, "Mathematical Programming in Forest Management" (in Rumanian), Proceedings of the Third Scientific Session on Statistics, Bucharest, December 5-7, 1963.
11. GH. MIHOC AND E. BALAS, "The Problem of Optimal Timetables," Revue de Mathématiques Pures et Appliquées 10 (1965).
12. R. E. GOMORY, "Outline of an Algorithm for Integer Solutions to Linear Programs," Bull. Am. Math. Soc. 64, 3 (1958).
13. ----, "An All-Integer Programming Algorithm," in J. F. MUTH AND G. L. THOMPSON (eds.), Industrial Scheduling, Chap. 13, Prentice-Hall, 1963.
14. ----, "An Algorithm for Integer Solutions to Linear Programs," in R. L. GRAVES AND PH. WOLFE (eds.), Recent Advances in Mathematical Programming, pp. 269-302, McGraw-Hill, New York, 1963.
Fred Glover
Carnegie Institute of Technology, Pittsburgh, Pa.
and
Stanley Zionts
Carnegie Institute of Technology and U. S. Steel Corp. Applied Research Laboratory, Monroeville, Pa.
(Received December 28, 1964)
We turn now to the issue of generating and searching the solution tree. First
we note that for many problems a feasible solution is known in advance. This
solution may be used to establish a starting value for z*(s) other than infinity,
thereby expediting convergence to the optimum. But such a solution may also
be used in another way. If it is suspected that an appreciable fraction of the
variables for the feasible solution coincide in value to those for some optimal solu-
tion, it may be useful to employ a two-stage algorithm that treats the ('primary')
variables set at unity in the feasible solution differently from the ('secondary')
variables set equal to zero. In particular, it would seem desirable for such an
algorithm to be designed to dispose rapidly of various 0-1 assignments to the pri-
mary variables in the first stage and then to apply the additive algorithm to the
secondary variables in the second stage. Though it is beyond the scope of this
note to describe such a procedure in detail, we remark that it is possible to use a
simplified bookkeeping scheme for the first stage (consisting of a single vector each
of whose components assumes only three values) so that with slight modifications
the tests described above may still be applied to restrict the range of 0-1 assign-
ments necessary for consideration (see reference 5).
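The first of these uses, seeding z*(s) with the cost of a known feasible solution, can be sketched as follows (a toy exhaustive version of our own, not the two-stage procedure of reference 5):

```python
# Sketch (ours): a known feasible solution supplies a finite starting ceiling
# z*, so that every 0-1 vector whose cost reaches z* is discarded untested.
from itertools import product

def feasible(A, b, x):
    # y_i = b_i - sum_j a_ij x_j must be nonnegative for every constraint i
    return all(bi - sum(aij * xj for aij, xj in zip(row, x)) >= 0
               for row, bi in zip(A, b))

def enumerate_with_ceiling(c, A, b, z_star):
    """Return (number of solutions tested, best cost found, best x)."""
    tested, best, best_x = 0, z_star, None
    for x in product((0, 1), repeat=len(c)):
        z = sum(cj * xj for cj, xj in zip(c, x))
        if z >= best:                 # ceiling test: cannot improve on z*
            continue
        tested += 1
        if feasible(A, b, x):
            best, best_x = z, x
    return tested, best, best_x
```

On a small instance (for example, the restated data of Balas' Example 1), starting the ceiling at the cost of a known feasible solution strictly reduces the number of solutions tested while preserving the optimum.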
An application of Balas' algorithm of interest lies in its potential integration
with the GILMORE-GOMORY method for solving the cutting-stock problem.[3, 4]
While this problem may be given an integer programming formulation, the number
of variables, even for a moderate-sized problem, is so large that, in a practical
sense, the usual integer programming approach is to no avail. Even the ordinary
linear programming methods are not practical. Gilmore and Gomory's method
overcomes this difficulty by restricting attention to a very small subset of the
variables in order to obtain a starting solution via linear programming. Once a
solution is available, a solution to the knapsack problem is used to generate im-
proving variables for the problem.† The solution process then alternates between
generating new variables and solving the linear programming problem to see which
of the newly generated variables should be included in the solution. When no
improving variables can be found the algorithm comes to a halt.
Three features of Balas' method should prove useful in this application. First,
Gilmore and Gomory's method does not provide integer solutions; Balas' algorithm
appears very reasonable in this context. Second, except for the first solution of
the integer-program subproblem, a finite starting z*(s) (obtained from the previous
subproblem) is available. Such solutions can be exploited as suggested above.
Third, at any stage of the Gilmore-Gomory method it is necessary to consider
only those solutions involving at least one of the newly generated variables, a
situation with which Balas' method dovetails rather well.
A significant portion of the cutting-stock problems known to us can be repre-
sented using zero-one variables, higher-valued integers of course being represented
by sums of zero-one integers. Balas' extension of the additive algorithm to the
general integer linear programming problem[2] proposes an improvement in effi-
ciency of such a representation that makes the approach even more attractive.
† In the first version of their cutting-stock method, Gilmore and Gomory solve
the knapsack problem by dynamic programming; in the second, they propose a
special algorithm for the knapsack problem which proves to be substantially su-
perior to dynamic programming.
The Algorithm of Balas 549
REFERENCES

1. EGON BALAS, "An Additive Algorithm for Solving Linear Programs with Zero-One Variables," Opns. Res. 13, 517-546 (1965).
2. ----, "Extension de l'algorithme additif à la programmation en nombres entiers et à la programmation non linéaire," Comptes Rendus de l'Académie des Sciences (Paris) 258, 5136-5139 (1964).
3. P. C. GILMORE AND R. E. GOMORY, "A Linear Programming Approach to the Cutting-Stock Problem," Opns. Res. 9, 849-859 (1961).
4. ---- AND ----, "A Linear Programming Approach to the Cutting-Stock Problem-Part II," Opns. Res. 11, 863-888 (1963).
5. FRED GLOVER, "A Multiphase-Dual Algorithm for the Zero-One Integer Programming Problem" (forthcoming).
6. S. ZIONTS, G. L. THOMPSON, AND F. M. TONGE, "Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems," Carnegie Institute of Technology, Graduate School of Industrial Administration, Pittsburgh, Pennsylvania, November, 1964.