
Chapter 3: Deriving Solutions from a Linear Optimization Model*

*Excerpt from the draft of the book Operations Research: Modeling, Computations and Applications by Dr. Ranjan Ghosh

Learning Objective
This chapter will equip you to understand the principles a computer uses to arrive at the optimal solution of a linear programming problem. It describes the simplex method, used by most software packages to find an optimal solution of a linear programming problem. Various approaches are described for obtaining a starting feasible solution to a linear programming problem and, equally, for finding out whether a linear programming problem has any feasible solution that satisfies all the constraints simultaneously.

3.1 Introduction to Algorithms


In the most general sense, an algorithm is any set of detailed instructions which leads to a predictable end-state from a known beginning; how well an algorithm performs depends on the instructions given. A pervasive example of an algorithm in today's world is a computer program, which consists of a series of instructions in a particular sequence, designed to perform a stipulated task. The term algorithm was coined to honor Al-Khwarizmi, who lived in Baghdad in the ninth century and wrote a book in Arabic describing procedures for adding, multiplying and dividing numbers, as also for extracting square roots and computing the digits of π. The procedures he laid down were unambiguous, precise and efficient, and were the precursors of the algorithms we come across today. The widely known Fibonacci numbers are generated by a similarly simple rule, which is akin to an algorithm: each number in the sequence is the sum of the two preceding ones, starting from 0 and 1.

Algorithms are synonymous with operations research, and quite a few of them are described in this book. Most of them are iterative procedures, which keep repeating a series of steps, called iterations, to arrive at optimal or near-optimal solutions. Perhaps the most widely applied algorithm is the simplex method, developed by George Dantzig in 1947 to find optimal solutions of linear programming (LP) problems. Subsequently, other algorithms have been developed, notably the ellipsoid algorithm, applied to linear programming by Leonid Khachiyan in 1979 (building on earlier work by Shor), and the interior-point algorithm by Narendra Karmarkar in 1984. However, the simplex method has held its own for more than six decades and is still the most commonly used method for solving LP problems.
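By way of illustration, the Fibonacci rule can be stated as a short computer program. The following minimal Python sketch (not part of the original text) generates the sequence:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, applying the rule
    F(k) = F(k-1) + F(k-2) with F(0) = 0 and F(1) = 1."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Like Al-Khwarizmi's procedures, the rule is unambiguous and terminates after a predictable number of steps.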

3.2 Motivation for Studying the Simplex Method


A large number of software packages are available for using the simplex method to determine the optimum solution of an LP problem. Hence it is not necessary to carry out the computations manually. More importantly, the volume of computation required to solve any practical LP problem is so enormous that performing it manually would simply not be feasible. The question one may legitimately pose is why, then, it is necessary to understand how the simplex method works. Consider an analogy: to drive a vehicle, one does not have to know how its various parts function, yet to drive more effectively it helps to have a rudimentary knowledge of how the engine, the battery, the transmission system and the brakes work. In a similar vein, an airplane pilot does not have to be an aeronautical engineer, but does need some knowledge of avionics and of aircraft systems such as propulsion, hydraulics and instrumentation. By the same argument, an executive who wants to apply operations research as an aid to decision-making is in a better position if he develops an insight into how its various tools and techniques work, their underlying assumptions, their capabilities and limitations, and the interpretation of their solutions. It is pertinent to mention that a manager need not master all the underlying theory or be an expert on the subject; an appropriate basic level of knowledge suffices. An executive involved in applying linear programming will find it beneficial to understand how the simplex method works: it will enable him to have computer-generated solutions implemented successfully, and to interpret the optimum solution for the economic and financial information it provides for decision-making.

3.3 Transition from Graphical to Algebraic Procedure


The feasible region of a linear programming problem is defined by the intersection of the half-spaces corresponding to the various constraints and is, if bounded, a convex polygon in two dimensions and a convex polyhedron in higher dimensions. As the variables are continuous and not discrete, the feasible region contains an infinite number of points. The graphical method serves to illustrate the nature of the optimal solution of a linear programming problem and how it may be reached. However, the graphical method has a severe limitation: it can be used only when there are two decision variables, as it is not possible to draw a graph with more than two. Most real-life LP problems, having many more variables, are therefore not amenable to the graphical procedure. Arising from the observations made in Section 2.11 on the graphical solution method, it follows that a possible procedure for arriving at an optimal solution would be to carry out a search over all feasible corner-point solutions. Such a procedure has to address a number of issues: how to identify these corner-point feasible (CPF) solutions, and how to find one to start with; how to search efficiently, so that as few CPF solutions as possible are considered and evaluated; and, lastly, what rule can be applied to recognize when a CPF solution is optimal, so that the search can be stopped.

In the simplex method, the feasible region or solution space is defined by the set of points which simultaneously satisfy the m constraints and the non-negativity restrictions on the n variables. Hence the representation is algebraic in nature, and the computations carried out at each iteration are algebraic transformations. The following results are available from linear algebra. For a system of independent and consistent linear equations, there is a unique solution if the number of equations, m, equals the number of variables, n. For instance, the two equations x + 2y = 9 and 2x + y = 6, for which m = n = 2, have the unique solution x = 1, y = 4. For a system of linear equations in which the number of equations, m, is less than the number of variables, n, there is an infinite number of solutions. For instance, the equation x + y = 10 has an infinite number of solutions, as in this case m = 1, n = 2 and m < n: any of the infinitely many points on the straight line x + y = 10 satisfies the equation. When m < n, the feasible region defined by the intersection of the m half-spaces has dimension (n - m). As described in Chapter 2, the standard form of a linear programming problem in compact notation is as follows:

Maximize Z = Σ (j = 1 to n) cj xj

subject to
Σ (j = 1 to n) aij xj ≤ bi, for i = 1, …, m
xj ≥ 0, for j = 1, 2, …, n

By adding m slack variables s1, s2, …, sm to the m constraints, we convert the inequality constraints into equalities, and obtain the following LP:

Maximize Z = Σ (j = 1 to n) cj xj

subject to the linear constraints
Σ (j = 1 to n) aij xj + si = bi, for i = 1, …, m
xj ≥ 0 for j = 1, 2, …, n, and si ≥ 0 for i = 1, …, m

In expanded form, we have:

Maximize Z = c1x1 + c2x2 + … + cjxj + … + cnxn

subject to linear constraints of the following form:

a11x1 + a12x2 + … + a1jxj + … + a1nxn + s1 = b1
…
ai1x1 + ai2x2 + … + aijxj + … + ainxn + si = bi
…
am1x1 + am2x2 + … + amjxj + … + amnxn + sm = bm

non-negative variables x1, x2, …, xj, …, xn ≥ 0 and s1, s2, …, sm ≥ 0, and
non-negative right-hand-side constants b1, b2, …, bi, …, bm ≥ 0.

Here n is the number of decision variables and m is the number of constraints; there is no necessary relationship between n and m. The above form of the linear programming problem is referred to as the Canonical Form or Augmented Form. It has the following characteristics:

- All constraints are expressed as equalities.
- All variables are restricted to be non-negative.
- The right-hand sides (RHSs) of all constraints are non-negative.
- Each constraint equation has a basic variable.

For a system of m linear equations having (m + n) variables, we can identify the extreme points by setting n variables equal to zero and solving the resulting system of equations. The solution corresponds to an extreme point or corner point and is referred to as a basic solution in the parlance of linear programming. These corner points may be feasible or infeasible. In a basic solution, the n variables set equal to zero are called non-basic variables, while the remaining m variables are called basic variables.

If all the basic variables of a particular basic solution have non-negative values, the basic solution is called a Basic Feasible Solution (BFS). The maximum number of corner points is fixed by the number of ways in which m variables can be selected out of (m + n), and is given by:

C(m+n, m) = (m + n)! / (m! n!)
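To make these ideas concrete, the sketch below (a Python illustration assuming numpy is available, not part of the original text) enumerates all C(m+n, m) candidate basic solutions of the augmented system used later in this chapter, and marks which of them are basic feasible solutions:

```python
import itertools
import numpy as np

# Constraint data in augmented form A x = b for the LP solved later in
# this chapter: x1 + 2x2 + s1 = 16, 3x1 + 2x2 + s2 = 24, x1 + s3 = 6.
# Columns are ordered (x1, x2, s1, s2, s3), so m = 3 and n = 2.
A = np.array([[1.0, 2, 1, 0, 0],
              [3.0, 2, 0, 1, 0],
              [1.0, 0, 0, 0, 1]])
b = np.array([16.0, 24.0, 6.0])
m, total = A.shape  # m equations, m + n variables in all

# Choose m of the m + n columns as basic variables; the rest are set to 0.
for basis in itertools.combinations(range(total), m):
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                     # singular basis: no basic solution
    x_basic = np.linalg.solve(B, b)  # values of the basic variables
    feasible = np.all(x_basic >= -1e-9)
    print(basis, np.round(x_basic, 3), "feasible" if feasible else "infeasible")
```

Running it lists at most C(5, 3) = 10 basic solutions, of which only the ones with all basic variables non-negative are BFSs, i.e. corner points of the feasible region.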

3.4 Overview of the Simplex Method


3.4.1 Step 1: Obtaining an Initial Basic Feasible Solution (BFS)

A basic feasible solution is required to start the algorithm. If the objective function is to be maximized and all the constraints are less-than-or-equal-to inequalities, obtaining an initial BFS is straightforward. If, however, the LPP contains equality or greater-than-or-equal-to constraints, we may have to solve a derived LPP to find an initial BFS. Two ways of doing this, the two-phase method and the big-M method, are described in Section 3.7. If no initial BFS can be found, the LP has no feasible solution, and the algorithm terminates at this stage.

3.4.2 Step 2: Test for Optimality

Determine whether the current BFS or, equivalently, the value of the objective function can be improved. If it cannot, the current BFS is optimal, and the algorithm terminates. The test is carried out by finding out whether the value of the objective function can be increased (if it is to be maximized) or decreased (if it is to be minimized) by increasing the value of one non-basic variable from zero to some positive amount. Note that the coefficient of each non-basic variable in Row (Equation) 0 represents the increase (for negative coefficients) or decrease (for positive coefficients) in the objective function Z for a one-unit increase in the associated variable.

3.4.3 Step 3: Performing an Iteration

Each iteration has several steps:

(a) Select the entering basic variable, using the optimality condition. In the graphical procedure, this is equivalent to determining the direction of movement along an edge of the feasible region from one extreme point to another. The purpose of this step is to select one non-basic variable to increase from zero, while the values of the basic variables are adjusted so that the system of equations continues to be satisfied. Increasing the non-basic variable from zero converts it into a basic variable of the next basic feasible solution (BFS). As it is entering the basis, the variable is called the entering basic variable for the current iteration. For a maximization problem, select the non-basic variable in Row (Equation) 0 having the largest negative coefficient; in a similar manner, for a minimization problem, select the non-basic variable in Row (Equation) 0 having the largest positive coefficient. The column associated with the entering basic variable is called the pivot column.

Tie for entering basic variable: It may happen in some LPPs that, during a particular iteration, there are two candidates for the entering basic variable, because the current coefficients of those two non-basic variables in Equation (0) are equal. In such a situation, the choice of the entering basic variable is usually made arbitrarily. However, the following rule has been suggested for breaking the tie so as to reduce the number of iterations required to reach the optimal solution: if the tie is between two decision variables, or between two slack/surplus variables, the choice can be arbitrary; if, however, the tie is between a decision variable and a slack/surplus variable, the decision variable should be selected.

(b) Select the leaving basic variable, using the feasibility condition. Graphically, this is equivalent to determining where to stop when moving along an edge from one extreme point to another, and not beyond, so that feasibility is maintained. As the value of the entering variable is increased from zero, the values of some of the basic variables change because of the requirement to satisfy the system of equations. The additional requirement for feasibility is that all the variables remain non-negative. It must therefore be determined how large the entering basic variable can become without violating feasibility: beyond a certain limit, one of the basic variables becomes negative. To find this limit, the current values of the right-hand side are divided by the corresponding coefficients in the pivot column, provided those coefficients are positive. This ratio is usually referred to as the exchange ratio, and the row having the minimum exchange ratio is selected as the pivot row. These calculations are referred to as the minimum ratio test; their objective is to determine which basic variable decreases to zero first as the entering basic variable is increased. The coefficient common to the pivot column and the pivot row is referred to as the pivot element.

Tie for leaving basic variable (degeneracy): If two or more rows tie for the minimum exchange ratio, the leaving basic variable may be chosen arbitrarily; the resulting BFS is then degenerate, with at least one basic variable equal to zero.

(c) Determine the new basic solution by using the appropriate Gauss-Jordan computations.

In each iteration of the simplex algorithm, after determining the entering and leaving basic variables, elementary algebraic operations are performed on the system of equations so that the entering non-basic variable becomes basic and the leaving basic variable becomes non-basic. The operations are (a) multiplying or dividing an equation by a non-zero constant, and (b) adding or subtracting a multiple of one equation to or from another. The following changes therefore take place in the rows (equations):

Pivot row: (a) New pivot row = Current pivot row ÷ Pivot element, and (b) the leaving basic variable is replaced with the entering basic variable.

All other rows: New row = Current row − (pivot column coefficient) × (New pivot row)
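The row operations of step (c) are mechanical enough to express directly in code. The following minimal Python sketch of a single Gauss-Jordan pivot on a numpy tableau is an illustration under our own conventions, not a definitive implementation:

```python
import numpy as np

def pivot(T, row, col):
    """Perform one Gauss-Jordan pivot on tableau T at (row, col):
    the new pivot row is the current pivot row divided by the pivot
    element, and every other row has (its pivot-column coefficient)
    times the new pivot row subtracted from it."""
    T = T.astype(float).copy()
    T[row] /= T[row, col]            # new pivot row
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]
    return T
```

Repeated application of this routine, with the entering column chosen by the optimality condition and the row by the minimum ratio test, drives the algebra illustrated in Section 3.5.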

3.5 Illustration of the Simplex Solution Procedure


The steps described above for applying the simplex method will now be illustrated by solving the following linear programming problem (LPP):

Maximize Z = 4x1 + 3x2

subject to
x1 + 2x2 ≤ 16
3x1 + 2x2 ≤ 24
x1 ≤ 6
and x1 ≥ 0, x2 ≥ 0

After adding slack variables to the constraints, we have the LPP in canonical form:

Maximize Z = 4x1 + 3x2

subject to
x1 + 2x2 + s1 = 16
3x1 + 2x2 + s2 = 24
x1 + s3 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

Representing the objective function as an equation with Z as a basic variable, we have the LPP in a form convenient for performing the computations of the simplex method:

Maximize Z, where Z - 4x1 - 3x2 - 0s1 - 0s2 - 0s3 = 0

subject to
x1 + 2x2 + s1 = 16
3x1 + 2x2 + s2 = 24
x1 + s3 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

Here, the basic variables s1, s2 and s3 have values of 16, 24 and 6 respectively, and the non-basic variables x1 and x2 have a value of zero. In other words, an initial BFS is (x1, x2, s1, s2, s3) = (0, 0, 16, 24, 6). If x1 and x2 are increased from zero, the rates of improvement in Z are 4 and 3 per unit increase in x1 and x2 respectively. As an increase in x1 yields the higher rate of improvement, we select x1 as the entering basic variable for the current iteration, keeping x2 = 0. As x1 is increased from zero, the requirement to satisfy the functional constraints changes the values of the basic variables. Keeping in mind the other requirement for feasibility, that all variables be non-negative, we have:

s1 = 16 - x1 ≥ 0, hence x1 ≤ 16
s2 = 24 - 3x1 ≥ 0, hence x1 ≤ 24/3 (= 8)
s3 = 6 - x1 ≥ 0, hence x1 ≤ 6

For all three conditions to hold, x1 ≤ min(16, 8, 6) = 6. Therefore, the leaving basic variable is s3, and the pivot row is the third functional constraint, the pivot element being the coefficient of x1 in that equation. This series of steps for determining the leaving basic variable is usually referred to as the minimum ratio rule. After performing the Gauss-Jordan computations by pivoting on x1 in the third row/equation, we obtain:

1st Iteration: x1 enters the Basis, s3 leaves the Basis

Z - 3x2 + 4s3 = 24
2x2 + s1 - s3 = 10
2x2 + s2 - 3s3 = 6
x1 + s3 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

It is observed from the above set of equations that the value of Z can still be improved by increasing the non-basic variable x2 from zero. Hence we have not yet arrived at the optimal solution, and a further iteration is carried out with x2 as the entering basic variable. Applying the same logic as in the first iteration, we identify s2 as the leaving basic variable, and the coefficient of x2 in the second functional constraint as the pivot element. After performing the Gauss-Jordan computations by pivoting on x2 in the second equation, we obtain:

2nd Iteration: x2 enters the Basis, s2 leaves the Basis

Z + (3/2)s2 - (1/2)s3 = 33
s1 - s2 + 2s3 = 4
x2 + (1/2)s2 - (3/2)s3 = 3
x1 + s3 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

It is observed from the above set of equations that the value of Z can still be improved by increasing the non-basic variable s3 from zero. Hence we have not yet arrived at the optimal solution, and a further iteration is carried out with s3 as the entering basic variable. Applying the same logic as before, we identify s1 as the leaving basic variable, and the coefficient of s3 in the first functional constraint as the pivot element. After performing the Gauss-Jordan computations by pivoting on s3 in the first row/equation, we obtain:

3rd Iteration: s3 enters the Basis, s1 leaves the Basis

Z + (1/4)s1 + (5/4)s2 = 34
(1/2)s1 - (1/2)s2 + s3 = 2
x2 + (3/4)s1 - (1/4)s2 = 6
x1 - (1/2)s1 + (1/2)s2 = 4
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0

On examining the above system of equations, we find that the value of Z cannot be improved any further by increasing the values of the non-basic variables s1 and s2. We conclude that we have arrived at the optimal solution, which is: Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 0, s3 = 2. Hence no further iterations are required, and the algorithm terminates at this point.
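As a check on the hand computation, the same LP can be handed to a packaged solver. Assuming SciPy is available, scipy.optimize.linprog (which minimizes by convention, so the objective is negated) reproduces the solution:

```python
from scipy.optimize import linprog

# linprog minimizes, so maximize Z = 4x1 + 3x2 by minimizing -Z.
result = linprog(c=[-4, -3],
                 A_ub=[[1, 2], [3, 2], [1, 0]],
                 b_ub=[16, 24, 6],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # expected: [4. 6.] and 34.0
```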


3.6 The Simplex Method in Tabular Form


The algebraic form of the simplex method facilitates an understanding of the logic of the algorithm. However, it is not convenient for carrying out the required computations: it is unnecessary and cumbersome to write the variables x1, x2, …, xn in every iteration. The tabular form of the simplex method records only the essential information of the current iteration, namely (a) the coefficients of the variables, (b) the constants on the right-hand side of the equations, and (c) the basic variable appearing in each equation. Setting up the initial simplex tableau involves no computation; the coefficients of the constraint equations are simply rearranged to form the tableau. The objective function is written as an equation and referred to as Row (Equation) 0, while the functional equations are numbered (1) to (m). The non-negativity restrictions on the variables are not shown, but are implicit. This saves writing the symbols for the variables in each equation, highlights the numbers involved in the computation, and records them in a compact form.

To compute the profit or cost for each solution, and to find out whether the solution can be improved upon, we include along with Row 0 of the simplex tableau an additional row, referred to as row zj. The zj row is not an absolute requirement; it only provides additional insight. The value of zj represents the amount by which the value of the objective function increases (for a maximization problem) or decreases (for a minimization problem) if one unit of the variable xj is brought into the solution. The current values of the negatives of the objective-function coefficients are given in Row 0 by (zj - cj), which may be interpreted as relative profit or relative cost, depending on whether it is a maximization or a minimization problem. Each value in the (zj - cj) row represents the net increase (or decrease) in the objective function if one unit of the variable heading that column is incorporated into the solution.
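The bookkeeping just described can be captured in a short program. The sketch below is one illustrative way (in Python, assuming numpy) to run the tabular simplex for a maximization problem; it is a teaching aid rather than production code, with no handling of degeneracy or unboundedness:

```python
import numpy as np

def simplex_tableau(T, basis):
    """Iterate the tableau simplex for a maximization problem.
    T has the (zj - cj) row first and the RHS in the last column;
    `basis` lists which variable is basic in each constraint row."""
    while True:
        col = int(np.argmin(T[0, :-1]))      # most negative zj - cj
        if T[0, col] >= -1e-9:
            return T, basis                  # optimality reached
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf
                  for i in range(1, T.shape[0])]
        row = 1 + int(np.argmin(ratios))     # minimum ratio test
        basis[row - 1] = col                 # entering replaces leaving
        T[row] /= T[row, col]                # new pivot row
        for r in range(T.shape[0]):
            if r != row:
                T[r] -= T[r, col] * T[row]   # clear the pivot column

# Initial tableau for the example LP (columns x1, x2, s1, s2, s3, RHS).
T = np.array([[-4.0, -3, 0, 0, 0,  0],
              [ 1.0,  2, 1, 0, 0, 16],
              [ 3.0,  2, 0, 1, 0, 24],
              [ 1.0,  0, 0, 0, 1,  6]])
T, basis = simplex_tableau(T, basis=[2, 3, 4])
print("Z =", T[0, -1], "basis columns:", basis)  # Z = 34.0
```

The tableaus that follow show exactly the numbers this procedure generates at each iteration.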


Starting Tableau: s1, s2 and s3 constitute the Basis (Iteration 0: x1 enters, s3 leaves)

Row           Basic Variable   Z    x1   x2   s1   s2   s3   Current Solution
0 (zj - cj)   Z                1    -4   -3    0    0    0          0
  (zj)                               0    0    0    0    0
1             s1               0     1    2    1    0    0         16
2             s2               0     3    2    0    1    0         24
3             s3               0     1    0    0    0    1          6

Tableau after one Iteration: x1 enters the Basis, s3 leaves the Basis (Iteration 1: x2 enters, s2 leaves)

Row           Basic Variable   Z    x1   x2   s1   s2   s3   Current Solution
0 (zj - cj)   Z                1     0   -3    0    0    4         24
  (zj)                               4    0    0    0    4
1             s1               0     0    2    1    0   -1         10
2             s2               0     0    2    0    1   -3          6
3             x1               0     1    0    0    0    1          6


Tableau after two Iterations: x2 enters the Basis, s2 leaves the Basis (Iteration 2: s3 enters, s1 leaves)

Row           Basic Variable   Z    x1   x2   s1    s2     s3    Current Solution
0 (zj - cj)   Z                1     0    0    0    3/2   -1/2         33
  (zj)                               4    3    0    3/2   -1/2
1             s1               0     0    0    1    -1      2           4
2             x2               0     0    1    0    1/2   -3/2          3
3             x1               0     1    0    0     0      1           6

Tableau after three Iterations: s3 enters the Basis, s1 leaves the Basis (Optimum Solution)

Row           Basic Variable   Z    x1   x2   s1     s2    s3   Current Solution
0 (zj - cj)   Z                1     0    0   1/4    5/4    0         34
  (zj)                               4    3   1/4    5/4    0
1             s3               0     0    0   1/2   -1/2    1          2
2             x2               0     0    1   3/4   -1/4    0          6
3             x1               0     1    0  -1/2    1/2    0          4

It can be observed from the above tableau that the optimal solution is: Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 0, s3 = 2.

3.7 Artificial Initial Solutions: Modifications for Various Types of Constraints


The simplex method as described above holds good for the standard form of LP: maximize Z subject to functional constraints of the less-than-or-equal-to (≤) type, non-negativity restrictions on all decision variables, and bi ≥ 0 for all i = 1, 2, …, m. In case of any deviations from these conditions, some modifications have to be made during initialization, after which the subsequent steps of the simplex method can be applied as described above. A problem arises in identifying an initial basic feasible solution (BFS) if there are functional constraints of the equality (=) or greater-than-or-equal-to (≥) form, or if there is a negative right-hand side. In the LPP solved earlier, the initial BFS was found quite easily by letting the slack variables be the initial basic variables, equal to the non-negative right-hand sides of their respective equations.

The approach adopted in these cases is based on the concept of a dummy variable or, to use the more common term, an artificial variable. The technique constructs an auxiliary or artificial problem by incorporating an artificial variable into each constraint that is not in standard form. The new variable is introduced only to serve as the initial basic variable for that equation. The usual non-negativity restrictions are placed on these variables, and the objective function is re-formulated so that there is a huge penalty on these variables taking values larger than zero. If the LP has a feasible solution, the iterations of the simplex method will ensure that, one by one, these artificial variables become zero and drop out of consideration. The real LP is then solved, using the initial BFS obtained by this procedure.

Based on this framework, two artificial-variable techniques are available for finding out whether an LPP has a feasible solution and for solving LPPs that are not in standard form: (a) the Two-Phase Method, and (b) the Big-M Method. Most software packages for solving LPPs use the two-phase method; however, the Big-M method is of considerable historical importance, and hence it is also described.

3.7.1 The Two-Phase Method

As the name indicates, this method solves the LP in two phases. The objective of Phase I is to find out whether there is a feasible solution to the system of functional constraints and, if so, to find an initial basic feasible solution. If there is no feasible solution, the algorithm stops after Phase I.

Phase I: The LPP is expressed in canonical (equation) form. In those equations where there is no slack variable, artificial variables are added so as to obtain a starting basic solution. Another LP is then constructed in which the objective is to minimize the sum of the artificial variables, subject to the constraints of the original LP; this is irrespective of whether the objective function of the original LP was to be maximized or minimized. If the minimum value of the sum of the artificial variables is zero, the LP has an initial basic feasible solution, and hence is feasible, and the algorithm proceeds to Phase II. If the minimum value of the sum of the artificial variables is positive, the LP has no feasible solution, and the algorithm terminates after Phase I.

Phase II: The starting basic feasible solution found in Phase I is used to solve the LP with the original objective.

Consider the following LPP:

Maximize Z = 4x1 + 3x2

subject to the constraints
x1 + 2x2 ≤ 16
3x1 + 2x2 = 24
x1 ≤ 6
and x1 ≥ 0, x2 ≥ 0

There is no need to add a slack variable to the second constraint, as it is an equality. However, an artificial variable is added to its left-hand side so as to satisfy the requirement of having a basic variable in every equation. After adding the slack variables s1 and s2, and the artificial variable A1, we get:

Maximize Z = 4x1 + 3x2

subject to
x1 + 2x2 + s1 = 16
3x1 + 2x2 + A1 = 24
x1 + s2 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, A1 ≥ 0

In the first phase of this method, the sum of the artificial variables, say W, is minimized. If the LPP has a feasible solution, that is, if all the constraints can be satisfied, the minimum value of W will be zero. We then proceed to the second phase, in which we start with the final tableau of the first phase, but replace W by Z, the original objective function. If the minimum value of W is not zero, the implication is that the constraints cannot all be satisfied, and hence the LPP does not have a feasible solution. The procedure is then terminated because of infeasibility.
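Phase I itself is an ordinary LP, so the feasibility test can also be checked computationally. The sketch below (assuming SciPy; the variable ordering is our own choice, not from the original text) minimizes W = A1 for this example and confirms that the minimum is zero:

```python
from scipy.optimize import linprog

# Phase I for the example: minimize W = A1 subject to
#   x1 + 2x2 + s1      = 16
#   3x1 + 2x2     + A1 = 24
#   x1        + s2     = 6
# Variable order: (x1, x2, s1, s2, A1); all non-negative.
c = [0, 0, 0, 0, 1]                  # objective: sum of artificials
A_eq = [[1, 2, 1, 0, 0],
        [3, 2, 0, 0, 1],
        [1, 0, 0, 1, 0]]
b_eq = [16, 24, 6]
phase1 = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(phase1.fun)   # minimum W; a value of (about) 0 signals feasibility
```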

In this problem, the objective will be to minimize W = A1 or, equivalently, maximize - A1. Hence the problem can be depicted in tabular form as follows:

Preliminary Tableau (Iteration 0)

Row   Basic Variable   W    x1   x2   s1   s2   A1   Right Hand Side
0     W                1     0    0    0    0   -1          0
1     s1               0     1    2    1    0    0         16
2     A1               0     3    2    0    0    1         24
3     s2               0     1    0    0    1    0          6

As A1 is a basic variable, it has to be eliminated from Row 0. To do this, Row 2 is added to Row 0. As we are minimizing (rather than maximizing) W, the non-basic variable in Row 0 with the largest positive coefficient, that is, x1, is chosen as the entering basic variable.

Starting Tableau (Iteration 0: x1 enters, s2 leaves)

Row   Basic Variable   W    x1   x2   s1   s2   A1   Current Solution
0     W                1     3    2    0    0    0         24
1     s1               0     1    2    1    0    0         16
2     A1               0     3    2    0    0    1         24
3     s2               0     1    0    0    1    0          6

Tableau after one Iteration (Iteration 1: x2 enters, A1 leaves)

Row   Basic Variable   W    x1   x2   s1   s2   A1   Right Hand Side
0     W                1     0    2    0   -3    0          6
1     s1               0     0    2    1   -1    0         10
2     A1               0     0    2    0   -3    1          6
3     x1               0     1    0    0    1    0          6

Tableau after two Iterations (Feasible Solution)

Row   Basic Variable   W    x1   x2   s1    s2     A1   Current Solution
0     W                1     0    0    0     0     -1          0
1     s1               0     0    0    1     2     -1          4
2     x2               0     0    1    0   -3/2   1/2          3
3     x1               0     1    0    0     1      0          6

The minimum value of W is zero, so a feasible solution exists, with s1 = 4, x2 = 3 and x1 = 6.

The above feasible solution can now be used as an initial basic feasible solution for Phase II by eliminating the column for A1 and replacing W by Z, the original objective function.

Initial Tableau for Phase II, after replacing W by Z and removing the column for A1

Row   Basic Variable   Z    x1   x2   s1    s2    Right Hand Side
0     Z                1    -4   -3    0     0          0
1     s1               0     0    0    1     2          4
2     x2               0     0    1    0   -3/2         3
3     x1               0     1    0    0     1          6

As x1 and x2 are basic variables, their coefficients in Row 0 must be made zero. To do this, 3 × Row 2 and 4 × Row 3 are added to Row 0.

Initial Tableau for Phase II after modifying Row 0 (Iteration 0: s2 enters, s1 leaves)

Row   Basic Variable   Z    x1   x2   s1    s2    Right Hand Side
0     Z                1     0    0    0   -1/2        33
1     s1               0     0    0    1     2          4
2     x2               0     0    1    0   -3/2         3
3     x1               0     1    0    0     1          6

Tableau after one Iteration: s2 enters the Basis, s1 leaves the Basis (Optimum Solution)

Row   Basic Variable   Z    x1   x2   s1    s2   Current Solution
0     Z                1     0    0   1/4    0         34
1     s2               0     0    0   1/2    1          2
2     x2               0     0    1   3/4    0          6
3     x1               0     1    0  -1/2    0          4

We observe from the above tableau that the optimal solution is: Z = 34, x1 = 4, x2 = 6, s1 = 0, s2 = 2.


3.7.2 The Big-M Method

The essence of the Big-M method is to construct an artificial linear programming problem that has the same optimal value as the original LP. The modifications made in the original LP are as follows:

(a) The LP is expressed in canonical form by incorporating slack variables in ≤ constraints and surplus variables in ≥ constraints. Non-negative artificial variables are introduced into those equations which do not have slack variables. The artificial variables serve only the purpose of obtaining an initial basic feasible solution (BFS).

(b) A very large penalty is assigned to the artificial variables by adding the term -M × (sum of artificial variables) to the objective function if it is a maximization LP, where M is a very large positive number, usually 20 times the largest coefficient in the LP. For a minimization LP, the term +M × (sum of artificial variables) is added to the objective function. Slack and surplus variables are assigned a zero coefficient in the objective function.

(c) The initial basic feasible solution is obtained by assigning a zero value to the original variables.

The simplex method is then applied to the modified LP problem. While carrying out the iterations, one of the following cases may arise:

(a) The optimality condition is satisfied with no artificial variable remaining in the basis, that is, all the artificial variables have a value of zero. The current solution is then an optimal basic feasible solution (BFS).

(b) One or more artificial variables are in the basis at zero value, and the optimality condition is satisfied. The current solution is then a degenerate optimal basic feasible solution.

(c) One or more artificial variables appear in the basis with positive values, and the optimality condition is satisfied. In this case, the original LP has no feasible solution. The solution so obtained is referred to as a pseudo-optimal solution: it satisfies the constraints of the modified problem, but does not optimize the original objective function, as it contains a very large penalty term.
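The construction in (a) and (b) can be imitated numerically. In the illustrative sketch below (assuming SciPy; M = 1000 is an arbitrary "very large" choice, not a prescribed value), the penalized objective is handed to linprog for the example that follows:

```python
from scipy.optimize import linprog

# A numerical stand-in for the Big-M construction: penalize the
# artificial variable A1 with a large M in the (negated) objective.
# Variable order: (x1, x2, s1, s2, s3, A1) for the example below,
# where the second constraint is 3x1 + 2x2 - s2 + A1 = 24.
M = 1000                                   # "very large" penalty
c = [-4, -3, 0, 0, 0, M]                   # minimize -(4x1 + 3x2) + M*A1
A_eq = [[1, 2, 1,  0, 0, 0],
        [3, 2, 0, -1, 0, 1],
        [1, 0, 0,  0, 1, 0]]
b_eq = [16, 24, 6]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x[:2], -res.fun)  # expected: x1 = 6, x2 = 5, Z = 39
```

Because A1 = 0 at the optimum, the penalty term vanishes and the optimal value of the modified problem coincides with that of the original LP.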


Example: Consider the following LPP:

Maximize Z = 4x1 + 3x2

subject to
x1 + 2x2 ≤ 16
3x1 + 2x2 ≥ 24
x1 ≤ 6
and x1 ≥ 0, x2 ≥ 0

After adding the appropriate slack variables s1 and s3, surplus variable s2, and artificial variable A1, we have the LPP in standard form:

Maximize Z = 4x1 + 3x2

subject to
x1 + 2x2 + s1 = 16
3x1 + 2x2 - s2 + A1 = 24
x1 + s3 = 6
and x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0, s3 ≥ 0, A1 ≥ 0

Preliminary Tableau (Iteration 0)

Row   Basic Variable   Z    x1   x2   s1   s2   s3   A1   Right Hand Side
0     Z                1    -4   -3    0    0    0    M          0
1     s1               0     1    2    1    0    0    0         16
2     A1               0     3    2    0   -1    0    1         24
3     s3               0     1    0    0    0    1    0          6

As A1 is a basic variable, its coefficient in the objective row must be zero. Hence M times Row 2 is subtracted from Row 0 to yield the following initial tableau.

Initial Tableau (Iteration 0: x1 enters, s3 leaves)

Row   Basic Variable   Z       x1        x2      s1   s2   s3   A1   Current Solution
0     Z                1    -(4+3M)   -(3+2M)    0    M    0    0        -24M
1     s1               0       1         2       1    0    0    0         16
2     A1               0       3         2       0   -1    0    1         24
3     s3               0       1         0       0    0    1    0          6

1st Iteration (x2 enters, A1 leaves)

Row   Basic Variable   Z    x1      x2      s1   s2     s3     A1   Current Solution
0     Z                1     0   -(3+2M)    0    M   (4+3M)    0        24-6M
1     s1               0     0      2       1    0     -1      0         10
2     A1               0     0      2       0   -1     -3      1          6
3     x1               0     1      0       0    0      1      0          6

2nd Iteration (s2 enters, s1 leaves)

Row   Basic Variable   Z    x1   x2   s1    s2     s3       A1       Current Solution
0     Z                1     0    0    0   -3/2   -1/2   (3+2M)/2          33
1     s1               0     0    0    1     1      2       -1              4
2     x2               0     0    1    0   -1/2   -3/2      1/2             3
3     x1               0     1    0    0     0      1        0              6

3rd Iteration (Optimal Solution)

Row   Basic Variable   Z    x1   x2   s1    s2    s3    A1   Current Solution
0     Z                1     0    0   3/2    0    5/2    M         39
1     s2               0     0    0    1     1     2    -1          4
2     x2               0     0    1   1/2    0   -1/2    0          5
3     x1               0     1    0    0     0     1     0          6

The optimality condition is now satisfied, with the artificial variable A1 non-basic at zero value. The optimal solution is: Z = 39, x1 = 6, x2 = 5, s1 = 0, s2 = 4, s3 = 0.

3.8 Special Cases


3.8.1 Unbounded Solutions

It may happen in some LP models that the values of one or more variables can be increased indefinitely, indicating that the feasible region or solution space is unbounded. If, while applying the simplex method, it is observed that all the coefficients in the column corresponding to the entering non-basic variable are non-positive, the entering variable can be increased without any upper limit, and the objective function is unbounded. More generally, if the column of some non-basic variable is entirely non-positive, the solution space itself is unbounded; in that situation, the optimal value of the objective function need not be unbounded. The following is an example of an LP with an unbounded solution:

Maximize Z = 3x1 + 4x2

subject to
x1 - x2 ≤ 1
x1 ≤ 2
and x1 ≥ 0, x2 ≥ 0

After introducing slack variables, we have the following tableau for applying the simplex method:

Starting Tableau (Iteration 0: x2 enters; no limit on x2)

Row   Basic Variable   Z    x1   x2   s1   s2   Current Solution
0     Z                1    -3   -4    0    0          0
1     s1               0     1   -1    1    0          1
2     s2               0     1    0    0    1          2
The coefficients in the column corresponding to the entering basic variable x2 are -1 and 0, that is, they are non-positive. This implies that x2 can be increased indefinitely without driving any basic variable negative; the solution space is unbounded and, since increasing x2 improves Z, the objective function is unbounded as well. The following is an example of an LP in which the solution space is unbounded, but the optimum (minimum) value of the objective function is finite:

Minimize Z = x1 + x2

subject to the constraints
5x1 + 3x2 ≥ 8
3x1 + 4x2 ≥ 7
and x1 ≥ 0, x2 ≥ 0

It can be shown by applying the graphical or simplex method that, although this LP has an unbounded solution space, the optimum or minimum value of Z is 2, occurring at x1 = x2 = 1. Unbounded solutions may occur when there is hardly any limit, or no limit at all, on the availability of resources. They may also arise because of some lacuna in the formulation of the LP.
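Packaged solvers report unboundedness through a status code. Assuming SciPy, the first example above can be checked as follows (in scipy.optimize.linprog, status 3 denotes an unbounded problem):

```python
from scipy.optimize import linprog

# Maximize Z = 3x1 + 4x2 subject to x1 - x2 <= 1 and x1 <= 2.
res = linprog(c=[-3, -4],
              A_ub=[[1, -1], [1, 0]],
              b_ub=[1, 2],
              bounds=[(0, None), (0, None)])
print(res.status, res.message)  # status 3: the problem is unbounded
```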


3.8.2 Infeasible Solution

This situation arises when the constraints are inconsistent, in the sense that they cannot all be satisfied simultaneously, and hence there is no feasible solution. If all the constraints are of the less-than-or-equal-to (≤) type, with all the right-hand-side (RHS) constants non-negative and all the variables restricted to be non-negative, so that there are no artificial variables in the standard/canonical form of the LP, a feasible solution always exists, because the slack variables themselves provide one. However, for other types of constraints, in which artificial variables are introduced, there is a possibility of infeasibility. A feasible solution exists if all the artificial variables are forced to a value of zero during the application of the simplex method. If, on the other hand, one or more artificial variables remain positive, there is no feasible solution. Infeasibility may emanate from an inability to meet demand or other requirements because of inadequate capacity or resources. The absence of a feasible solution may also be due to the model not being formulated correctly. The following is an example of infeasibility in an LP:

Maximize Z = 3x1 + 4x2

subject to
x1 + x2 ≤ 1
2x1 + x2 ≥ 4
and x1 ≥ 0, x2 ≥ 0

After introducing slack variable s1, surplus variable s2 and artificial variable A, we set up the following tableau for performing Phase I of the simplex method, in which W, the sum of the artificial variables, is to be minimized:

Starting Tableau (Iteration 0)

Row   Basic Variable   W    x1   x2   s1   s2   A   Right Hand Side
0     W                1     0    0    0    0  -1          0
1     s1               0     1    1    1    0   0          1
2     A                0     2    1    0   -1   1          4

On expressing the objective W in terms of the non-basic variables x1 and x2, we get:

Initial Tableau (Iteration 0: x1 enters, s1 leaves)

Row   Basic Variable   W    x1   x2   s1   s2   A   Current Solution
0     W                1     2    1    0   -1   0          4
1     s1               0     1    1    1    0   0          1
2     A                0     2    1    0   -1   1          4
After performing one iteration, we have:

Iteration 1 (Pseudo-Optimum Solution)

Row   Basic Variable   W    x1   x2   s1   s2   A   Current Solution
0     W                1     0   -1   -2   -1   0          2
1     x1               0     1    1    1    0   0          1
2     A                0     0   -1   -2   -1   1          2
The optimum or minimum value of W is 2. The artificial variable A (=2) is positive, implying that the above LP has no feasible solution.
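The same infeasibility is reported by a packaged solver. Assuming SciPy, the ≥ constraint is first negated into ≤ form (in scipy.optimize.linprog, status 2 denotes infeasibility):

```python
from scipy.optimize import linprog

# Maximize Z = 3x1 + 4x2 subject to x1 + x2 <= 1 and 2x1 + x2 >= 4.
res = linprog(c=[-3, -4],
              A_ub=[[1, 1], [-2, -1]],   # 2x1 + x2 >= 4 becomes -2x1 - x2 <= -4
              b_ub=[1, -4],
              bounds=[(0, None), (0, None)])
print(res.status, res.message)  # status 2: no feasible solution exists
```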


3.8.3 Multiple Optimal Solutions (Alternative Optima)

When the objective function is parallel to a non-redundant constraint that is binding at the optimum, the optimum solution occurs at more than one extreme point or basic feasible solution; that is, the objective function has the same optimal value at more than one point. This phenomenon is referred to as alternative optima. All convex combinations of these extreme points, that is, all the points on the segment of the associated hyperplane joining them, are also optimal solutions. Hence there is an infinite number of optimum solutions. Consider the following LPP:

Maximize Z = 6x1 + 3x2

subject to the constraints
2x1 + x2 ≤ 6
x1 + 3x2 ≤ 8
and x1 ≥ 0, x2 ≥ 0

After introducing slack variables, we have the following tableaus for applying the simplex method:

Starting Tableau (Iteration 0: x1 enters, s1 leaves)

Row   Basic Variable   Z    x1   x2   s1   s2   Current Solution
0     Z                1    -6   -3    0    0          0
1     s1               0     2    1    1    0          6
2     s2               0     1    3    0    1          8

Second Tableau (Iteration 1: x2 enters, s2 leaves)

Row   Basic Variable   Z    x1    x2    s1    s2   Current Solution
0     Z                1     0     0     3     0         18
1     x1               0     1    1/2   1/2    0          3
2     s2               0     0    5/2  -1/2    1          5

The optimality condition is already satisfied at this point; however, the zero coefficient of the non-basic variable x2 in Row 0 signals an alternative optimum, which is obtained by bringing x2 into the basis.

Third Tableau (Iteration 2: Alternative Optima)

Row   Basic Variable   Z    x1   x2    s1    s2   Current Solution
0     Z                1     0    0     3     0         18
1     x1               0     1    0    3/5  -1/5          2
2     x2               0     0    1   -1/5   2/5          2
Corresponding to the two basic feasible solutions, (x1 = 3, x2 = 0) and (x1 = 2, x2 = 2), we get Z = 18 as the optimal or maximum value. Any convex combination of these two alternative optima, such as the point mid-way between them, x1 = 5/2, x2 = 1 (which is not a basic feasible solution), is also optimal, as the associated value of Z is 18. As the variables in an LP are continuous and not discrete, there are an infinite number of points which are convex combinations of the alternative optima, and hence an infinite number of optimal solutions. This provides greater flexibility to the management of a firm, which can adopt the solution that suits it best.
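The claim that every convex combination of the two optima is itself optimal is easy to verify numerically. A small sketch (assuming numpy):

```python
import numpy as np

# The two optimal corner points found above, and the objective Z = 6x1 + 3x2.
c = np.array([6.0, 3.0])
v1, v2 = np.array([3.0, 0.0]), np.array([2.0, 2.0])
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = lam * v1 + (1 - lam) * v2        # a convex combination of the optima
    print(np.round(x, 2), "Z =", c @ x)  # Z = 18.0 at every such point
```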

3.9 Concluding Remarks



Self-Test
True or False
1. The non-negativity conditions imply that all decision variables must be positive.
2. The most frequent objective of business firms is to minimize operational expenses.
3. In the context of modeling, restrictions on the decisions that can be taken are called constraints.
4. A linear programming model's constraints are almost always nonlinear relationships that describe the restrictions placed on the model's decision variables.
5. The optimal solution of a linear programming model will occur at at least one extreme point.
6. The simplex method for solving linear programming problems is partially based on the solution of simultaneous equations and matrix algebra.
7. All the constraints in a linear programming problem are inequalities.
8. The feasible solution space contains the values for the decision variables that satisfy the majority of the linear programming model's constraints.
9. The objective function of a cost minimization model need only consider variable, as opposed to sunk, costs.
10. Since fractional values for decision variables may not be physically meaningful, in practice (for the purpose of implementation), we sometimes round the optimal linear programming solution to integer values.

Multiple Choice Questions


1. The simplex method is
a. a mathematical procedure for solving a linear programming problem according to a set of steps
b. a closed-form solution to a linear programming problem
c. a graphical solution technique for solving linear programming problems
d. an analytical technique for solving linear programming problems

2. Which of the following would cause a change in the feasible region?
a. increasing the value of a coefficient in the objective function of a minimization problem
b. decreasing the value of a coefficient in the objective function of a maximization problem
c. changing the right-hand side of a non-redundant constraint
d. adding a redundant constraint

3. In linear programming, extreme points are
a. variables representing unused resources
b. variables representing an excess above a resource requirement
c. all the points that simultaneously satisfy all the constraints of the model
d. corner points on the boundary of the feasible solution space

4. Every extreme point of the feasible region is defined by
a. some subset of constraints and non-negativity conditions
b. the intersection of two constraints
c. neither of the above
d. both a and b

5. In solving a linear programming problem, the condition of infeasibility occurred. This problem may be resolved by
a. trying a different software package
b. removing or relaxing a constraint
c. adding another constraint
d. adding another variable

6. A linear programming problem in standard form has m constraints and n variables. The number of basic feasible solutions will be
a. C(n, m)
b. m
c. C(m, n)
d. none of the above

7. Which of the following statements is true of an optimal solution to a linear programming problem?
a. The optimal solution always occurs at an extreme point.
b. If an optimal solution exists, there will always be one at an extreme point.
c. Every linear programming problem has an optimal solution.
d. The optimal solution uses up all the resources.

8. If the feasible region gets larger due to a change in one of the constraints, the optimal value of the objective function
a. must increase or remain the same for a maximization problem
b. must decrease or remain the same for a maximization problem
c. must increase or remain the same for a minimization problem
d. cannot change

9. If, in any simplex iteration, the leaving rule is violated, then the next tableau will
a. give a non-basic solution
b. not give a basic solution
c. give a basic or a non-basic solution
d. give a basic solution which is not feasible

10. The graphical approach to solving linear programming problems in two dimensions is useful because
a. it solves the problem quickly
b. it provides a general method of solving a linear programming problem
c. it gives geometric insight into the model and the meaning of optimality
d. all of the above

11. If, in any simplex iteration, the minimum ratio rule fails, then the linear programming problem has
a. an infeasible solution
b. a degenerate basic feasible solution
c. a non-degenerate basic feasible solution
d. an unbounded solution

12. If, in Phase I of the two-phase simplex method, an artificial variable turns out to be positive in the optimal tableau, then the linear programming problem has
a. an unbounded solution
b. no feasible solution
c. an optimal solution
d. none of the above

13. If, in a simplex tableau, there is a tie for the leaving variable, then the next basic feasible solution
a. will be degenerate
b. will be non-degenerate
c. may be degenerate or non-degenerate
d. does not exist

14. In a maximization LP, choosing as the entering variable a non-basic variable with the most negative value of (zj - cj) ensures
a. that the next solution will be a basic feasible solution
b. the largest decrease in the objective function
c. the largest increase in the objective function
d. none of the above

15. When alternative optimal solutions exist in an LP problem, then
a. one of the constraints will be redundant
b. the objective function will be parallel to one of the constraints
c. the problem will be unbounded
d. two constraints will be parallel

Discussion Questions

1. Define the following in the context of linear programming:
a. slack variable
b. surplus variable
c. artificial variable

2. Develop your own set of constraint equations and inequalities and use them to illustrate graphically each of the following conditions:
a. an infeasible problem
b. a problem containing redundant constraints
c. an unbounded problem

3. What is meant by an algorithm? Describe briefly the various steps involved in the simplex method. Why is it referred to as the simplex algorithm?

4. It has been said that each linear programming problem that has a feasible region has an infinite number of solutions. Explain.

5. Describe the two-phase method of solving linear programming problems.

6. What is the significance of the (zj - cj) numbers in the simplex tableau?


Problem Set
1. Work through the simplex method (in algebraic form) step by step to solve the following linear programming problems:

(a) Maximize Z = x1 + 2x2 + 2x3
subject to
5x1 + 2x2 + 3x3 ≤ 15
x1 + 4x2 + 2x3 ≤ 12
2x1 + x3 ≤ 8
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

(b) Maximize Z = 2x1 - 3x2
subject to
-x1 + x2 ≤ 2
2x1 - x2 ≤ 2
-x1 - x2 ≤ 2
and x1 ≥ 0, x2 unrestricted.

2. Use the two-phase method to solve the following linear programming problems:

(a) Maximize Z = -3x1 + x2 + x3
subject to
x1 - 2x2 + x3 ≤ 11
-4x1 + x2 + 2x3 ≥ 3
-2x1 + x3 = 1
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

(b) Minimize Z = 3x1 + x2
subject to
5x1 + 10x2 - x3 = 8
x1 + x2 + x4 = 1
and x1, x2, x3, x4 ≥ 0.

3. Use the Big-M (penalty) method to solve the following linear programming problem:

Maximize Z = 5x1 + 2x2 + 10x3
subject to
x1 - x3 ≤ 10
x2 + x3 ≥ 10
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

4. Solve the following linear programming problem, using the two-phase method and the Big-M method separately:

Maximize Z = 3x1 - 3x2 + x3
subject to
x1 + 2x2 - x3 ≤ 5
-3x1 - x2 + x3 ≥ 4
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

5. Consider the system of inequalities:
x1 + x2 ≥ 1
-2x1 + x2 ≤ 2
2x1 + 3x2 ≤ 7
and x1 ≥ 0, x2 ≥ 0.

Use the simplex algorithm to find (a) a basic feasible solution, and (b) a basic feasible solution in which both x1 and x2 are basic variables.

6. Solve the following linear programming problem to show that it has no feasible solution:

Maximize Z = 4x1 + x2 + 4x3 + 5x4
subject to
4x1 + 6x2 - 5x3 + 4x4 ≤ -20
3x1 - 2x2 + 4x3 + x4 ≤ 10
8x1 - 3x2 - 3x3 + 2x4 ≤ 20
and x1, x2, x3, x4 ≥ 0.

7. Solve the following linear programming problem to find out whether it has an unbounded solution:

Maximize Z = 10x1 - x2 + 2x3
subject to
14x1 + x2 - 6x3 + 3x4 = 7
16x1 + 0.5x2 - 6x3 ≤ 5
3x1 - x2 - x3 ≤ 0
and x1, x2, x3, x4 ≥ 0.

8. Does there exist an alternative optimal solution to the following linear programming problem? If yes, find the solution.

Maximize Z = 6x1 - 3x2
subject to
x1 + x2 ≤ 6
2x1 + x2 ≤ 8
x2 ≤ 3
and x1, x2 ≥ 0.

Selected References
1. Anderson, D. R., D. J. Sweeney, and T. A. Williams, An Introduction to Management Science, 10th Edition, Thomson Asia Pvt. Ltd., Singapore, 2003.
2. Bradley, S. P., A. C. Hax, and T. L. Magnanti, Applied Mathematical Programming, Addison-Wesley Publishing Co., 1977.
3. Dantzig, George B., Linear Programming and Extensions, Princeton University Press, 1963.
4. Dantzig, George B., and M. Thapa, Linear Programming 1: Introduction, Springer, New York, 1997.
5. Hillier, F. S., and G. J. Lieberman, Introduction to Operations Research, 9th Edition, McGraw-Hill Publishing Company Ltd., New York, 2010.
6. Simonnard, M., Linear Programming, Prentice-Hall International, Inc., 1966.
7. Taha, H. A., Operations Research: An Introduction, 8th Edition, Pearson Prentice Hall, Delhi, 2009.
8. Vajda, S., Mathematical Programming, Addison-Wesley Publishing Co., 1971.
9. Wagner, Harvey M., Principles of Operations Research, 2nd Edition, Prentice-Hall of India.
