
Solving Linear Programming Problems
The Primal-Dual Relationship
Concept of Duality

One part of a Linear Programming Problem (LPP) is called the Primal and the other part is called the Dual. In other words, each maximization problem in LP has a corresponding problem, called the dual, which is a minimization problem. Similarly, each minimization problem has a corresponding dual, which is a maximization problem. For example, if the primal is concerned with maximizing the contribution from three products A, B, and C made in three departments X, Y, and Z, then the dual is concerned with minimizing the cost associated with the time used in the three departments to produce those three products. The optimal objective values of the primal and the dual problem are the same, because both originate from the same set of data.
Rules for Constructing the Dual from the Primal

1. A dual variable is defined for each constraint in the primal problem, i.e., the number of variables in the dual problem equals the number of constraints in the primal problem and vice versa. If there are m constraints and n variables in the primal problem, then there are m variables and n constraints in the dual problem.
2. The RHS values of the primal, i.e., b1, b2, ..., bm, become the coefficients of the dual variables (Y1, Y2, ..., Ym) in the dual objective function (ZY). Likewise, the coefficients of the primal variables (X1, X2, ..., Xn), i.e., c1, c2, ..., cn, become the RHS values of the dual constraints.
3. For a maximization primal problem (with all ≤ constraints), there exists a minimization dual problem (with all ≥ constraints) and vice versa.
4. The matrix of coefficients of the variables in the dual problem is the transpose of the matrix of coefficients in the primal problem and vice versa.
5. If any primal constraint (say the ith) is an equality, then the corresponding dual variable is unrestricted in sign, and vice versa.
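As a small illustration of rules 2-4, here is a minimal Python sketch (using the example that follows this section) that builds the dual data by transposing the coefficient matrix and swapping the roles of the objective coefficients and the RHS values.

import numpy as np

A = np.array([[4, 1], [9, 1], [7, 3], [10, 40]])   # primal coefficients (m x n)
b = np.array([12, 20, 18, 40])                     # primal RHS values
c = np.array([6000, 4000])                         # primal objective coefficients

A_dual = A.T          # n dual constraints on m dual variables (rule 4)
c_dual = b            # dual objective coefficients = primal RHS (rule 2)
b_dual = c            # dual RHS = primal objective coefficients (rule 2)

print(A_dual)         # [[4 9 7 10], [1 1 3 40]]
print(c_dual, b_dual)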

The Primal-Dual Relationship

                     Primal Variables
                     X1    X2    ...   Xn
Dual          Y1     a11   a12   ...   a1n
Variables     Y2     a21   a22   ...   a2n
              ...    ...   ...   ...   ...
              Ym     am1   am2   ...   amn
Example:
Maximize ZX = 6000X1 + 4000X2
s.t. 4X1 + X2 ≤ 12
9X1 + X2 ≤ 20
7X1 + 3X2 ≤ 18
10X1 + 40X2 ≤ 40
X1, X2 ≥ 0

Primal-Dual Relationship for the Example

                 Primal Variables
                  X1     X2
Dual        Y1     4      1
Variables   Y2     9      1
            Y3     7      3
            Y4    10     40
Hence the dual problem looks like:

Minimize ZY = 12Y1 + 20Y2 + 18Y3 + 40Y4
subject to the constraints
4Y1 + 9Y2 + 7Y3 + 10Y4 ≥ 6000
Y1 + Y2 + 3Y3 + 40Y4 ≥ 4000
Y1, Y2, Y3, Y4 ≥ 0
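A minimal sketch checking this primal-dual pair numerically with scipy.optimize.linprog (which minimizes, so the primal maximization is solved by negating the objective); by strong duality the two optimal values should coincide.

import numpy as np
from scipy.optimize import linprog

A = np.array([[4, 1], [9, 1], [7, 3], [10, 40]])
b = np.array([12, 20, 18, 40])
c = np.array([6000, 4000])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 4, method="highs")

print(-primal.fun, dual.fun)   # the two optimal values should coincide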
Linear Programming
Sensitivity Analysis
What if there is uncertainty about one or more values in the LP model?

Sensitivity analysis allows us to determine


how “sensitive” the optimal solution is to
changes in data values.

This includes analyzing changes in:


1. An Objective Function Coefficient (OFC)
2. A Right Hand Side (RHS) value of a
constraint
Graphical Sensitivity Analysis

We can use the graph of an LP to see what


happens when:

1. An OFC changes, or
2. A RHS changes

Recall the Flair Furniture problem:

Flair Furniture Problem
Max 7T + 5C (profit)
Subject to the constraints:
3T + 4C ≤ 2400 (carpentry hrs)
2T + 1C ≤ 1000 (painting hrs)
C ≤ 450 (max # chairs)
T ≥ 100 (min # tables)
T, C ≥ 0 (nonnegativity)
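A quick sketch solving this LP with scipy.optimize.linprog; the optimal corner point quoted on the following slides is T = 320, C = 360 with profit $4,040.

from scipy.optimize import linprog

c = [-7, -5]                              # maximize 7T + 5C as a minimization
A_ub = [[3, 4], [2, 1], [0, 1], [-1, 0]]  # carpentry, painting, max chairs, min tables
b_ub = [2400, 1000, 450, -100]            # T >= 100 written as -T <= -100

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)                    # expected: [320, 360] and 4040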
Objective Function
Coefficient (OFC) Changes
What if the profit contribution for tables
changed from $7 to $8 per table?

Max 8T + 5C (profit)   (the $7 coefficient replaced by $8)
Clearly profit goes up, but would we want to make more tables and fewer chairs?
(i.e., does the optimal solution change?)
Characteristics of OFC Changes
• There is no effect on the feasible region

• The slope of the level profit line changes

• If the slope changes enough, a different


corner point will become optimal
[Figure: original objective function 7T + 5C = $4,040 and revised objective function 8T + 5C = $4,360; the corner point (T = 320, C = 360) of the feasible region is still optimal.]
[Figure: larger OFC changes move the optimal corner point. With 11T + 5C the optimal solution becomes (T = 500, C = 0) with profit $5,500; with 3T + 5C it becomes (T = 200, C = 450) with profit $2,850. Both have new optimal corner points.]
• There is a range for each OFC where the
current optimal corner point remains
optimal.

• If the OFC changes beyond that range a


new corner point becomes optimal.

• Excel’s Solver will calculate the OFC


range.
Right Hand Side (RHS) Changes
What if painting hours available changed from 1000 to 1300?

2T + 1C ≤ 1000 becomes 2T + 1C ≤ 1300 (painting hrs)

This increase in resources could allow us to


increase production and profit.
Characteristics of RHS Changes
• The constraint line shifts, which could
change the feasible region

• Slope of constraint line does not change

• Corner point locations can change

• The optimal solution can change


[Figure: shifting the painting constraint from 2T + 1C = 1000 to 2T + 1C = 1300 enlarges the feasible region. The old optimal corner point (T = 320, C = 360) with profit $4,040 is replaced by a new optimal corner point (T = 560, C = 180) with profit $4,820.]


Effect on Objective Function Value
New profit = $4,820
Old profit = $4,040
Profit increase = $780 from 300 additional
painting hours

$2.60 in profit per hour of painting

• Each additional hour will increase profit by $2.60


• Each hour lost will decrease profit by $2.60
Shadow Price
The change in the objective function value per one-unit increase in the RHS of the constraint.

Will painting hours be worth $2.60 per hour regardless of how many hours are available?
Range of Shadow Price Validity
Beyond some RHS range the value of each
painting hour will change.

While the RHS stays within this range, the


shadow price does not change.

Excel will calculate this range as well as the


shadow price.
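A small sketch estimating the painting-hours shadow price numerically: re-solve the Flair Furniture LP with one extra painting hour and take the difference in optimal profit. This illustrates the definition above; it is not Excel's sensitivity report.

from scipy.optimize import linprog

def max_profit(painting_hours):
    c = [-7, -5]
    A_ub = [[3, 4], [2, 1], [0, 1], [-1, 0]]
    b_ub = [2400, painting_hours, 450, -100]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    return -res.fun

print(max_profit(1001) - max_profit(1000))   # expected: 2.60 per painting hour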
Constraint RHS Changes
If the change in the RHS value is within the
allowable range, then:
• The shadow price does not change
• The change in objective function value =
(shadow price) x (RHS change)

If the RHS change goes beyond the


allowable range, then the shadow price
will change.
Objective Function
Coefficient (OFC) Changes
If the change in OFC is within the allowable
range, then:
• The optimal solution does not change
• The new objective function value can be
calculated
RHS Change Questions
• What if the supply of nonelectrical
components changes?

• What happens if the supply of electrical


components
– increased by 400 (to 5100)?
– increased by 4000 (to 8700)?
• What if we could buy an additional 400
elec. components for $1 more than usual?
Would we want to buy them?

• What if we could get an additional 250 hours of assembly time by paying $5 per hour more than usual? Would this be profitable?
Decision Variables That Equal 0
We are not currently making any VCR’s
(V=0) because they are not profitable
enough.

How much would profit need to increase


before we would want to begin making
VCR’s?
Integer Programming,
Goal Programming,
and
Nonlinear Programming
Introduction
 Integer programming is the extension of LP that
solves problems requiring integer solutions.
 Goal programming is the extension of LP that
permits more than one objective to be stated.
 Nonlinear programming is the case in which
objectives or constraints are nonlinear.
 All three above mathematical programming models
are used when some of the basic assumptions of
LP are made more or less restrictive.
Integer Programming
 Solution values must be whole numbers in integer programming.
 There are three types of integer programs:
 pure integer programming;
 mixed-integer programming; and
 0–1 integer programming.
Integer Programming
(continued)
1. The Pure Integer Programming problems are cases in
which all variables are required to have integer values.
2. The Mixed-Integer Programming problems are cases in
which some, but not all, of the decision variables are
required to have integer values.
3. The Zero–One Integer Programming problems are special
cases in which all the decision variables must have
integer solution values of 0 or 1.
Integer Programming Example:
Harrison Electric Company
 The Company produces two products popular with home
renovators: old-fashioned chandeliers and ceiling fans.
 Both the chandeliers and fans require a two-step production
process involving wiring and assembly.
 It takes about 2 hours to wire each chandelier and 3 hours to wire a
ceiling fan. Final assembly of the chandeliers and fans requires 6
and 5 hours, respectively.
 The production capability is such that only 12 hours of wiring time
and 30 hours of assembly time are available.
Integer Programming:
Example (continued)
If each chandelier produced nets the firm $7 and each fan
$6, Harrison’s production mix decision can be formulated
using LP as follows:
maximize profit = $7X1 + $6X2
subject to: 2X1 + 3X2 ≤ 12 (wiring hours)
6X1 + 5X2 ≤ 30 (assembly hours)
X1, X2 ≥ 0 (nonnegative)
X1 = number of chandeliers produced
X2 = number of ceiling fans produced
Integer Programming:
Example (continued)
With only two variables and two constraints, the graphical LP
approach to generate the optimal solution is given below:

[Figure: the feasible region bounded by 2X1 + 3X2 ≤ 12 and 6X1 + 5X2 ≤ 30, with the possible integer solutions marked. The optimal LP solution is X1 = 3.75, X2 = 1.5, profit = $35.25.]
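A short sketch confirming the LP relaxation quoted above with scipy.optimize.linprog.

from scipy.optimize import linprog

res = linprog([-7, -6],                # maximize 7X1 + 6X2 as a minimization
              A_ub=[[2, 3], [6, 5]],   # wiring, assembly
              b_ub=[12, 30],
              bounds=[(0, None)] * 2,
              method="highs")
print(res.x, -res.fun)                 # expected: [3.75, 1.5] and 35.25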
Integer Solution to Harrison
Electric Co.

[Figure: the feasible integer points, the optimal integer solution, and the solution obtained by rounding off the LP solution.]
Integer Solution to Harrison
Electric Co. (continued)
 Rounding off is one way to reach integer
solution values, but it often does not yield the
best solution.
 An important concept to understand is that an
integer programming solution can never be
better than the solution to the same LP problem.
 The integer problem is usually worse in terms
of higher cost or lower profit.
Branch and Bound Method
 Branch and Bound breaks the feasible
solution region into sub-problems until an
optimal solution is found.
 There are Six Steps in Solving Integer
Programming Maximization Problems by
Branch and Bound.
 The steps are given over the next several
slides.
Branch and Bound Method: The
Six Steps
1. Solve the original problem using LP.
 If the answer satisfies the integer constraints,
it is done.
 If not, this value provides an initial upper
bound.
2. Find any feasible solution that meets the
integer constraints for use as a lower bound.
 Usually, rounding down each variable will
accomplish this.
Branch and Bound Method
Steps: (continued)
3. Branch on one variable from Step 1 that does not
have an integer value.
 Split the problem into two sub-problems based on
integer values that are immediately above and below
the non-integer value.
 For example, if X2 = 3.75 was in the final LP solution,
introduce the constraint X2 ≥ 4 in the first sub-
problem and X2 ≤ 3 in the second sub-problem.
4. Create nodes at the top of these new branches by
solving the new problems.
Branch and Bound Method
Steps: (continued)
5.
a) If a branch yields a solution to the LP
problem that is not feasible, terminate the
branch.
b) If a branch yields a solution to the LP
problem that is feasible, but not an integer
solution, go to step 6.
Branch and Bound Method
Steps: (continued)
5. (continued)
c) If the branch yields a feasible integer solution,
examine the value of the objective function.
If this value equals the upper bound, an
optimal solution has been reached.
If it is not equal to the upper bound, but
exceeds the lower bound, set it as the new
lower bound and go to step 6.
Finally, if it is less than the lower bound,
terminate this branch.
Branch and Bound Method
Steps: (continued)
6. Examine both branches again and set the upper bound equal to
the maximum value of the objective function at all final nodes.
– If the upper bound equals the lower bound, stop.
– If not, go back to step 3.

Minimization problems involve reversing the roles of the


upper and lower bounds.
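A minimal branch-and-bound sketch for the Harrison Electric problem, using scipy.optimize.linprog for each LP relaxation; the variable bounds play the role of the added branching constraints (e.g. X1 ≤ 3 or X1 ≥ 4). It illustrates the six steps rather than a production solver.

import math
from scipy.optimize import linprog

c = [-7, -6]                      # maximize 7X1 + 6X2 as a minimization
A = [[2, 3], [6, 5]]              # wiring and assembly constraints
b = [12, 30]

def solve_lp(bounds):
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res if res.success else None

best_val, best_x = -math.inf, None            # incumbent (lower bound)
stack = [[(0, None), (0, None)]]              # bounds for X1, X2

while stack:
    bounds = stack.pop()
    res = solve_lp(bounds)
    if res is None:                            # infeasible branch: terminate it
        continue
    val = -res.fun
    if val <= best_val:                        # cannot beat incumbent: prune
        continue
    frac = [i for i, x in enumerate(res.x) if abs(x - round(x)) > 1e-6]
    if not frac:                               # all-integer: update incumbent
        best_val, best_x = val, res.x
        continue
    i = frac[0]                                # branch on a fractional variable
    lo, hi = bounds[i]
    down, up = list(bounds), list(bounds)
    down[i] = (lo, math.floor(res.x[i]))       # e.g. X1 <= 3
    up[i] = (math.ceil(res.x[i]), hi)          # e.g. X1 >= 4
    stack.extend([down, up])

print(best_x, best_val)                        # expected: X1 = 5, X2 = 0, profit 35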
Harrison Electric Co: Revisited
Figure 11.1 shows graphically that the optimal, non-
integer solution is
X1 = 3.75 chandeliers
X2 = 1.5 ceiling fans
profit = $35.25
 Since X1 and X2 are not integers, this solution is not
valid.
 The profit value of $35.25 will serve as an initial
upper bound.
 Note that rounding down gives X1 = 3, X2 = 1, profit =
$27, which is feasible and can be used as a lower
bound.
Integer Solution: Creating Sub-
problems
 The problem is now divided into two sub-problems: A and B.
 Consider branching on either variable that does not have an
integer solution; pick X1 this time.

Subproblem A
maximize profit = $7X1 + $6X2
subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≥ 4

Subproblem B
maximize profit = $7X1 + $6X2
subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≤ 3
Optimal Solutions for the Sub-problems
Optimal solutions are:
Sub-problem A: X1 = 4, X2 = 1.2, profit = $35.20
Sub-problem B: X1 = 3, X2 = 2, profit = $33.00
(see figure on next slide)
 Stop searching on the Subproblem B branch because it has
an all-integer feasible solution.
 The $33 profit becomes the lower bound.
 Subproblem A’s branch is searched further since it has a
non-integer solution.
 The second upper bound becomes $35.20, replacing $35.25
from the first node.
Optimal Solutions for the Sub-problems
[Figure: graphical solutions of Subproblems A and B]
Sub-problems C and D
Subproblem A’s branching yields Subproblems C and
D.
Subproblem C
maximize profit = $7X1 + $6X2
subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≥ 4
X2 ≥ 2

Subproblem D
maximize profit = $7X1 + $6X2
subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≥ 4
X2 ≤ 1
Sub-problems C and D
(continued)
 Subproblem C has no feasible solution at all because
the first two constraints are violated if the X1 ≥ 4 and
X2 ≥ 2 constraints are observed.
 Terminate this branch and do not consider its
solution.
 Subproblem D’s optimal solution is
X1 = 4.17, X2 = 1, profit = $35.16.
 This non-integer solution yields a new upper
bound of $35.16, replacing the original $35.20.
 Subproblems C and D, as well as the final branches
for the problem, are shown in the figure on the next
slide.
Branch and Bound Solution
[Figure: the branch and bound tree for the Harrison Electric problem]
Subproblems E and F
Finally, create subproblems E and F and
solve for X1 and X2 with the added
constraints X1 ≤ 4 and X1 ≥ 5. The
subproblems and their solutions are:

Subproblem E
maximize profit = $7X1 + $6X2
Subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≥4
X1 ≤4
X2 ≤ 1
Optimal solution for E: X1 = 4, X2 = 1, profit = $34
Subproblems E and F
(continued)
Subproblem F
maximize profit = $7X1 + $6X2
Subject to: 2X1 + 3X2 ≤ 12
6X1 + 5X2 ≤ 30
X1 ≥4
X1 ≥5
X2 ≤ 1
Optimal solution for F:
X1 = 5, X2 = 0, profit = $35
Goal Programming
 Firms usually have more than one goal. For example,
 maximizing total profit,
 maximizing market share,
 maintaining full employment,
 providing quality ecological management,
 minimizing noise level in the neighborhood, and
 meeting numerous other non-economic goals.
 It is not possible for LP to have multiple goals unless they are
all measured in the same units (such as dollars),
 a highly unusual situation.
 An important technique that has been developed to supplement
LP is called goal programming.
Goal Programming (continued)
 Goal programming “satisfices,”
as opposed to LP, which tries to “optimize.”
Satisfice means coming as close as possible to
reaching goals.
 The objective function is the main difference
between goal programming and LP.
 In goal programming, the purpose is to minimize
deviational variables,
which are the only terms in the objective function.
Example of Goal Programming
Harrison Electric Revisited
Goals Harrison’s management wants to
achieve, each equal in priority:
 Goal 1: to produce as much profit above $30 as possible
during the production period.
 Goal 2: to fully utilize the available wiring department hours.
 Goal 3: to avoid overtime in the assembly department.
 Goal 4: to meet a contract requirement to produce at least
seven ceiling fans.
Example of Goal Programming
Harrison Electric Revisited
Need a clear definition of deviational variables, such as:

d1– = underachievement of the profit target
d1+ = overachievement of the profit target
d2– = idle time in the wiring dept. (underused)
d2+ = overtime in the wiring dept. (overused)
d3– = idle time in the assembly dept. (underused)
d3+ = overtime in the assembly dept. (overused)
d4– = underachievement of the ceiling fan goal
d4+ = overachievement of the ceiling fan goal
Ranking Goals with Priority
Levels
A key idea in goal programming is that one
goal is more important than another.
Priorities are assigned to each deviational
variable.

Priority 1 is infinitely more important than Priority 2, which is


infinitely more important than the next goal, and so on.
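Putting the deviational variables and priorities together, the complete goal programming model sketched below is reconstructed from the four goals and the initial tableau that follows (the tableau's rows confirm the four goal equations), so treat it as a reconstruction rather than a quoted formulation:

Minimize Z = P1 d1– + P2 d2– + P3 d3+ + P4 d4–
subject to: 7X1 + 6X2 + d1– – d1+ = 30 (profit target)
2X1 + 3X2 + d2– – d2+ = 12 (wiring hours)
6X1 + 5X2 + d3– – d3+ = 30 (assembly hours)
X2 + d4– – d4+ = 7 (ceiling fans)
all X and deviational variables ≥ 0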
Analysis of First Goal
Analysis of First and
Second Goals
Analysis of All Four
Priority Goals
Goal Programming Versus Linear
Programming
 Multiple goals (instead of one goal)
 Deviational variables minimized (instead of
maximizing profit or minimizing cost of LP)
 “Satisficing” (instead of optimizing)
 Deviational variables are real (and replace
slack variables)
Initial Goal Programming Tableau

Cj                 0    0    P1   P2   0    P4   0    0    P3   0
Solution Mix       x1   x2   d1-  d2-  d3-  d4-  d1+  d2+  d3+  d4+   Quantity
P1   d1-           7    6    1    0    0    0    -1   0    0    0     30
P2   d2-           2    3    0    1    0    0    0    -1   0    0     12
0    d3-           6    5    0    0    1    0    0    0    -1   0     30
P4   d4-           0    1    0    0    0    1    0    0    0    -1    7

P4   Zj            0    1    0    0    0    1    0    0    0    -1    7
     Cj - Zj       0    -1   0    0    0    0    0    0    0    +1
P3   Zj            0    0    0    0    0    0    0    0    0    0     0
     Cj - Zj       0    0    0    0    0    0    0    0    1    0
P2   Zj            2    3    0    1    0    0    0    -1   0    0     12
     Cj - Zj       -2   -3   0    0    0    0    0    1    0    0
P1   Zj            7    6    1    0    0    0    -1   0    0    0     30
     Cj - Zj       -7   -6   0    0    0    0    1    0    0    0

(Pivot column: x1)
Second Goal Programming Tableau

Cj                 0    0     P1    P2   0    P4   0     0    P3   0
Solution Mix       x1   x2    d1-   d2-  d3-  d4-  d1+   d2+  d3+  d4+   Quantity
P1   x1            1    6/7   1/7   0    0    0    -1/7  0    0    0     30/7
P2   d2-           0    9/7   -2/7  1    0    0    +2/7  -1   0    0     24/7
0    d3-           0    -1/7  -6/7  0    1    0    6/7   0    -1   0     30/7
P4   d4-           0    1     0     0    0    1    0     0    0    -1    7

P4   Zj            0    1     0     0    0    1    0     0    0    -1    7
     Cj - Zj       0    -1    0     0    0    0    0     0    0    +1
P3   Zj            0    0     0     0    0    0    0     0    0    0     0
     Cj - Zj       0    0     0     0    0    0    0     0    1    0
P2   Zj            0    9/7   -2/7  1    0    0    2/7   -1   0    0     24/7
     Cj - Zj       0    -9/7  +2/7  0    0    0    -2/7  +1   0    0
P1   Zj            0    0     0     0    0    0    0     0    0    0     0
     Cj - Zj       0    0     1     0    0    0    1     0    0    0

(Pivot column: x2)
Final Solution to Harrison Electric’s Goal Programming

Cj                 0     0    P1   P2   0     P4   0    0    P3    0
Solution Mix       x1    x2   d1-  d2-  d3-   d4-  d1+  d2+  d3+   d4+   Quantity
P1   d2+           8/5   0    0    -1   3/5   0    0    1    -3/5  0     6
P2   x2            6/5   1    0    0    1/5   0    0    0    -1/5  0     6
0    d1+           1/5   0    -1   0    6/5   0    1    0    -6/5  0     6
P4   d4-           -6/5  0    0    0    -1/5  1    0    0    1/5   -1    1

P4   Zj            -6/5  0    0    0    -1/5  1    0    0    1/5   -1    1
     Cj - Zj       6/5   0    0    0    1/5   0    0    0    -1/5  +1
P3   Zj            0     0    0    0    0     0    0    0    0     0     0
     Cj - Zj       0     0    0    0    0     0    0    0    1     0
P2   Zj            0     0    0    0    0     0    0    0    0     0     0
     Cj - Zj       0     0    0    1    0     0    0    0    0     0
P1   Zj            0     0    0    0    0     0    0    0    0     0     0
     Cj - Zj       0     0    1    0    0     0    0    0    0     0
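As a cross-check on the final tableau, here is a sketch of the preemptive (lexicographic) approach solved as a sequence of LPs with scipy.optimize.linprog, rather than the modified simplex tableau shown above; the goal equations are the ones reconstructed earlier.

import numpy as np
from scipy.optimize import linprog

# Variable order: [X1, X2, d1-, d1+, d2-, d2+, d3-, d3+, d4-, d4+]
A_eq = np.array([
    [7, 6, 1, -1, 0,  0, 0,  0, 0,  0],   # profit:   7X1 + 6X2 + d1- - d1+ = 30
    [2, 3, 0,  0, 1, -1, 0,  0, 0,  0],   # wiring:   2X1 + 3X2 + d2- - d2+ = 12
    [6, 5, 0,  0, 0,  0, 1, -1, 0,  0],   # assembly: 6X1 + 5X2 + d3- - d3+ = 30
    [0, 1, 0,  0, 0,  0, 0,  0, 1, -1],   # fans:           X2 + d4- - d4+ = 7
])
b_eq = [30, 12, 30, 7]
bounds = [(0, None)] * 10

priorities = [2, 4, 7, 8]          # minimize d1-, then d2-, then d3+, then d4-
for idx in priorities:
    c = np.zeros(10)
    c[idx] = 1                     # minimize only this priority's deviation
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    bounds[idx] = (0, res.x[idx] + 1e-9)   # freeze the achieved value

print(res.x[:2])                   # expected: X1 = 0, X2 = 6

The last solve should report X1 = 0 and X2 = 6, with d1– = d2– = d3+ = 0 and d4– = 1 (one ceiling fan short), which matches the final tableau.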
Transportation Problem (TP)
and Assignment Problem
(AP)
(special cases of Linear Programming)
1. Transportation Problem (TP)
Distributing any commodity from any group
of supply centers, called sources, to any group
of receiving centers, called destinations, in
such a way as to minimize the total
distribution cost (shipping cost).
1. Transportation Problem (TP)
Total supply must equal total demand.
If total supply exceeds total demand, a dummy destination is created whose demand equals the difference between total supply and total demand. Similarly, if total supply is less than total demand, a dummy source is created whose supply equals the difference.
All unit shipping costs into a dummy destination or
out of a dummy source are 0.
Example 1:
Example 2:
                      Destination
              D1    D2    D3    D4    Supply
        S1    50    75    35    75      12
Source  S2    65    80    60    65      17
        S3    40    70    45    55      11
        (D)    0     0     0     0      10
Demand        15    10    15    10
Transportation Tableau:
Initial Solution Procedure:
1. Northwest Corner Starting Procedure
1. Select the remaining variable in the upper left
(northwest) corner and note the supply remaining in
the row, s, and the demand remaining in the column,
d.
2. Allocate the minimum of s or d to this variable. If
this minimum is s, eliminate all variables in its row
from future consideration and reduce the demand in
its column by s; if the minimum is d, eliminate all
variables in the column from future consideration and
reduce the supply in its row by d.
REPEAT THESE STEPS UNTIL ALL SUPPLIES HAVE BEEN
ALLOCATED.
Total shipping cost = 2250
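A minimal sketch of the northwest corner procedure applied to the Example 2 data above (dummy source included). It illustrates the mechanics; the starting allocation and its total cost depend on the tableau and ordering used.

costs = [
    [50, 75, 35, 75],   # S1
    [65, 80, 60, 65],   # S2
    [40, 70, 45, 55],   # S3
    [ 0,  0,  0,  0],   # dummy source (D)
]
supply = [12, 17, 11, 10]
demand = [15, 10, 15, 10]

alloc = [[0] * len(demand) for _ in supply]
i = j = 0
while i < len(supply) and j < len(demand):
    x = min(supply[i], demand[j])        # allocate as much as possible here
    alloc[i][j] = x
    supply[i] -= x
    demand[j] -= x
    if supply[i] == 0:                   # row exhausted: move down
        i += 1
    else:                                # column exhausted: move right
        j += 1

total = sum(costs[r][c] * alloc[r][c] for r in range(4) for c in range(4))
print(alloc, total)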
2. Least Cost Starting Procedure
1. For the remaining variable with the lowest unit
cost, determine the remaining supply left in its
row, s, and the remaining demand left in its
column, d (break ties arbitrarily).
2. Allocate the minimum of s or d to this variable.
If this minimum is s, eliminate all variables in its
row from future consideration and reduce the
demand in its column by s; if the minimum is d,
eliminate all variables in the column from future
consideration and reduce the supply in its row by
d.
Total shipping cost = 2065
3. Vogel’s Approximation Method Starting
Procedure
1. For each remaining row and column, determine
the difference between the lowest two remaining
costs; these are called the row and column
penalties.
2. Select the row or column with the largest penalty
found in step 1 and note the supply remaining for
its row, s, and the demand remaining in its column,
d.
3. Allocate the minimum of s or d to the variable in
the selected row or column with the lowest
remaining unit cost. If this minimum is s, eliminate
all variables in its row from future consideration and
reduce the demand in its column by s; if the
minimum is d, eliminate all variables in the column
from future consideration and reduce the supply in its row by d.
Total shipping cost = 2030
2. The Assignment Problem (AP) —
a special case of TP with m = n and si = dj = 1 for all
i and j.

The Hungarian Algorithm solves the assignment
problem of finding a least-cost assignment of m
workers to m jobs.
Assumptions:
1. There is a cost assignment matrix for the m
“people” to be assigned to m “tasks.” (If necessary
dummy rows or columns consisting of all 0’s are
added so that the numbers of people and tasks are
the same.)
2. All costs are nonnegative.
3. The problem is a minimization problem.
The Hungarian Algorithm
Initialization
1. For each row, subtract the minimum number from
all numbers in that row.
2. In the resulting matrix, subtract the minimum
number in each column from all numbers in the
column.
Iterative Steps
1. Make as many 0 cost assignments as possible. If
all workers are assigned, STOP; this is the minimum
cost assignment. Otherwise draw the minimum
number of horizontal and vertical lines necessary to
cover all 0’s in the matrix. (A method for making the
maximum number of 0 cost assignments and
drawing the minimum number of lines to cover all
0’s follows.)
2. Find the smallest value not covered by the lines;
this number is the reduction value.
3. Subtract the reduction value from all numbers not
covered by any lines. Add the reduction value to any
number covered by both a horizontal and a vertical
line. Then return to step 1.
For small problems, one can usually determine the maximum
number of zero cost assignments by observation. For larger
problems, the following procedure can be used:
Determining the Maximum Number of Zero-Cost Assignments
1. For each row, if only one 0 remains in the row, make that
assignment and eliminate the row and column from consideration in
the steps below.
2. For each column, if only one 0 remains, make that assignment
and eliminate that row and column from consideration.
3. Repeat steps 1 and 2 until no more assignments can be made. (If
0’s remain, this means that there are at least two 0’s in each
remaining row and column. Make an arbitrary assignment to one of
these 0’s and repeat steps 1 and 2.)
Again, for small problems, the minimum number of lines required
to cover all the 0’s can usually be determined by observation. The
following procedure, based on network flow arguments, can be
used for larger problems:
Drawing the Minimum Number of Lines to Cover All 0’s
1. Mark all rows with no assignments (with a “‧”).
2. For each row just marked, mark each column that has a 0 in
that row (with a “‧”).
3. For each column just marked, mark each row that has an
assignment in that column (with a “‧”).
4. Repeat steps 2 and 3 until no more marks can be made.
5. Draw lines through unmarked rows and marked columns.
Example:
[Figure: a worked example showing the reduced cost matrix, the covering lines, and the minimum uncovered number at each iteration.]
CONVERSION OF A MAXIMIZATION
PROBLEM TO A MINIMIZATION
PROBLEM

The Hungarian algorithm works only if the matrix is a


cost matrix. A maximization assignment problem can be
converted to a minimization problem by creating a lost
opportunity matrix. The problem then is to minimize the
total lost opportunity.
Profit Matrix:
J1 J2 J3 J4
W1 67 58 90 55
W2 58 88 89 56
W3 74 99 80 22
(D) 0 0 0 0
The lost opportunity matrix given below is derived by
subtracting each number in the J1 column from 74, each
number in the J2 column from 99, each number in the J3
column from 90, and each number in the J4 column from 56.

J1 J2 J3 J4
W1 7 41 0 1
W2 16 11 1 0
W3 0 0 10 34
(D) 74 99 90 56

The Hungarian algorithm can now be applied to this lost


opportunity matrix to determine the maximum profit set of
assignments.
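As a cross-check, SciPy's linear_sum_assignment (a Hungarian-style solver) can be applied to the same data; the dummy worker row keeps the matrix square, and converting profits to lost opportunity (column maximum minus profit) turns the maximization into a minimization, reproducing the matrix above.

import numpy as np
from scipy.optimize import linear_sum_assignment

profit = np.array([
    [67, 58, 90, 55],   # W1
    [58, 88, 89, 56],   # W2
    [74, 99, 80, 22],   # W3
    [ 0,  0,  0,  0],   # dummy worker (D)
])
lost_opportunity = profit.max(axis=0) - profit        # column max minus each entry
rows, cols = linear_sum_assignment(lost_opportunity)  # minimize total lost opportunity
print(list(zip(rows, cols)), profit[rows, cols].sum())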
The Travelling Salesperson:

Typically a travelling salesperson will want to cover all the


towns in their area using the minimum driving distance.

Travelling salesperson problems tackle this by first turning


the towns and roads into a network.

We have seen that finding a tour that visits


every arc (route inspection) depended on earlier
work by Euler. Similarly the problem of finding a
tour that visits every node (travelling
salesperson) was investigated in the 19th Century
by the Irish mathematician Sir William Hamilton.

Definition:
A Hamiltonian cycle is a tour which contains every node once
Consider the Hamiltonian cycles for this graph.

[Figure: a weighted graph on the four nodes A, B, C, D.]

There are just three essentially different Hamiltonian cycles:
ACBDA with weight 16
ABCDA with weight 17
ABDCA with weight 17

[Figures: the three Hamiltonian cycles drawn on the graph.]
However, not all graphs have Hamiltonian cycles.

Consider this graph: [Figure: nodes A, B, C, D with arcs AD = 12, DC = 21, DB = 23 and BC = 38; there is no direct arc from A to B or from A to C.]

A salesperson living in town B, say, may still want to find the shortest round trip visiting every town.

To enable you to use the next three algorithms you can replace any network like this one by the complete network of shortest distances.

The shortest distance between A and C is 33 (via D: 12 + 21)
The shortest distance between A and B is 35 (via D: 12 + 23)
Adding direct arcs AC and AB does not change the problem but
does produce a graph with Hamiltonian cycles.
The rest of this module assumes the problem is always the classical one of
finding a Hamiltonian cycle of minimum weight with no repetition of nodes.
Our next algorithm is called the “Nearest Neighbour” – it will find a solution
to a travelling salesperson problem but not necessarily the best (optimum).
So we will then look at finding limits between which the optimum solution
must lie and how to improve any solution we find.
The Nearest Neighbour
Algorithm
To find a (reasonably good) Hamiltonian cycle i.e. a closed
trail containing every node of a graph

Step 1 Choose any starting node

Step 2 Consider the arcs which join the node just


chosen to nodes as yet unchosen. Pick the
one with minimum weight and add it to the
cycle
Step 3 Repeat step 2 until all nodes have been
chosen
Step 4 Then add the arc that joins the last-chosen node
to the first-chosen node
To find a (reasonably good) Hamiltonian cycle, i.e. a closed trail containing every node of this graph, using the Nearest Neighbour Algorithm:

[Figure: a complete graph on nodes A, B, C, D, E with arc weights AB = 6, AC = 8, AD = 4, AE = 5, BC = 5, BD = 7, BE = 6, CD = 7, CE = 8, DE = 5.]

Choose any starting node – let’s say A.
Consider the arcs which join the node just chosen to nodes as yet unchosen. Pick the one with minimum weight and add it to the cycle.
The first arc is AD, as 4 is the least of 6, 8, 4 and 5.
From D, DE is chosen, as 5 is the least of 7, 7, and 5.
Then EB, BC, and finally CA.
The cycle is ADEBCA, which has weight 28 (5 + 5 + 6 + 4 + 8).

The Nearest Neighbour Algorithm is ‘greedy’ because at each stage the immediately best route is chosen, without a look ahead, or back, to possible future problems or better solutions.
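A minimal sketch of the Nearest Neighbour algorithm, assuming the arc weights read off the worked example above.

weights = {
    ("A", "B"): 6, ("A", "C"): 8, ("A", "D"): 4, ("A", "E"): 5,
    ("B", "C"): 5, ("B", "D"): 7, ("B", "E"): 6,
    ("C", "D"): 7, ("C", "E"): 8, ("D", "E"): 5,
}
def w(u, v):
    return weights.get((u, v)) or weights[(v, u)]

def nearest_neighbour(start, nodes):
    tour, unvisited = [start], set(nodes) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda v: w(tour[-1], v))  # cheapest arc out
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                      # close the cycle
    return tour, sum(w(a, b) for a, b in zip(tour, tour[1:]))

print(nearest_neighbour("A", "ABCDE"))   # expected: A D E B C A with weight 28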
We do know that the optimum solution will have an equal or lower weight
than it (by definition) and so the algorithm has produced an upper bound.
This then begs the question: Can we find a lower bound to see
how much potential there is for
improvement?
The Lower Bound Algorithm
To find a lower bound for a travelling salesperson
problem
Step 1 Pick any node and remove the two connecting
arcs with least weight
Step 2 Find the minimum spanning tree for the other
nodes using Prim’s algorithm
Step 3 Add back in the two arcs removed previously

Step 4 The weight of the resulting graph (which may not


be a cycle) is a lower bound i.e. any optimum
solution must have at least this weight
To find a lower bound for a Hamiltonian cycle using the Lower Bound Algorithm:

[Figure: the example graph, with the two least-weight arcs at A and the minimum spanning tree of B, C, D, E highlighted.]

Pick any node and remove the two connecting arcs with least weight.
Let’s pick A and remove arcs AB and AE.
Find the minimum spanning tree for BCDE using Prim’s algorithm.
Pick any starting node, say, B. The shortest arc is BC, then BE and finally ED.
Now we add in the two arcs we removed earlier.
This tree has weight 25 (4 + 6 + 5 + 5 + 5)
The optimum solution must have at least this weight
How do we know this?
Well, any Hamiltonian cycle must consist of 5 arcs (because there are 5 nodes): 2
arcs into and out of A and the other three arcs connecting B, C, D and E.
The two arcs from A must have weight of at least 4 + 5, since these are the
smallest available.
By applying Prim’s algorithm to B, C, D and E we can see that the three arcs
connecting B, C, D and E must have at least weight 5 + 5 + 6 (the minimum
connector or spanning tree)
Hence, it is impossible for any Hamiltonian cycle to have weight less than 4 + 5 + 5 + 5 + 6 = 25.
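A matching sketch of the Lower Bound algorithm on the same assumed arc weights: remove the two cheapest arcs at the chosen node and add the weight of a minimum spanning tree (Prim's algorithm) on the remaining nodes.

weights = {
    ("A", "B"): 6, ("A", "C"): 8, ("A", "D"): 4, ("A", "E"): 5,
    ("B", "C"): 5, ("B", "D"): 7, ("B", "E"): 6,
    ("C", "D"): 7, ("C", "E"): 8, ("D", "E"): 5,
}
def w(u, v):
    return weights.get((u, v)) or weights[(v, u)]

def prim_weight(nodes):
    # Weight of a minimum spanning tree on `nodes` (Prim's algorithm).
    nodes = list(nodes)
    in_tree, total = {nodes[0]}, 0
    while len(in_tree) < len(nodes):
        u, v = min(((a, b) for a in in_tree for b in nodes if b not in in_tree),
                   key=lambda e: w(*e))
        in_tree.add(v)
        total += w(u, v)
    return total

def lower_bound(node, nodes):
    rest = [n for n in nodes if n != node]
    cheapest_two = sorted(w(node, v) for v in rest)[:2]   # two cheapest arcs at node
    return sum(cheapest_two) + prim_weight(rest)

print(lower_bound("A", "ABCDE"))   # expected: 25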
The Tour Improvement Algorithm

To look for possible improvement in a tour found by the nearest neighbour algorithm:

Step 1 Number the nodes in the order of the tour; start at node 1
Step 2 Consider just the part of the tour 1 - 2 - 3 - 4
Step 3 Swap the middle nodes to change the order to 1 - 3 - 2 - 4
Step 4 Compare the two and keep the order with the lower weight
Step 5 Move on to node 2 and repeat until each node has been the start node once

[Figure: a five-node tour shown before and after swapping nodes 2 and 3.]
To look for possible improvement in a tour using the tour improvement algorithm:

Here is a network: [Figure: the same five-node network A, B, C, D, E used above.]

And here is a tour: ACEDBA. It has weight 34. Let’s see if we can improve on that:

Pick A as node 1 and consider the tour from A-C-E-D. This has weight 8 + 8 + 5 = 21.
Now swap the middle two nodes and you get A-E-C-D. This has weight 5 + 8 + 7 = 20.
So swap E and C and the tour becomes AECDBA.

Now look at ECDB: ECDB = 8 + 7 + 7 = 22, EDCB = 5 + 7 + 5 = 17.
So swap C and D and the tour becomes AEDCBA.

Now look at DCBA: DCBA = 7 + 5 + 6 = 18, DBCA = 7 + 5 + 8 = 20.
So leave the tour as AEDCBA.

Now look at CBAE: CBAE = 5 + 6 + 5 = 16, CABE = 8 + 6 + 6 = 20.
So leave the tour as AEDCBA.

Now look at BAED: BAED = 6 + 5 + 5 = 16, BEAD = 6 + 5 + 4 = 15.
So swap A and E and the tour becomes ADCBEA.

We have now had each node at the front and so we stop. The final tour is ADCBEA.

Network Models
Overview:
• Networks and graphs are powerful
modeling tools.
• Most OR models have networks or
graphs as a major aspect
• Each representation has its advantages
– Major purpose of a representation
• efficiency in algorithms
• ease of use
Description
Many important optimization problems can be
analyzed by means of graphical or network
representation. In this chapter the following network
models will be discussed:

1. Shortest path problems


2. Maximum flow problems
3. CPM-PERT project scheduling models
4. Minimum Cost Network Flow Problems
5. Minimum spanning tree problems
8.1 Basic Definitions
A graph or network is defined by two sets of
symbols:
• Nodes: A set of points or vertices (call it V) are called the nodes of a graph or network.
[Figure: two nodes labelled 1 and 2]

• Arcs: An arc consists of an ordered pair of vertices and represents a possible direction of motion that may occur between vertices.
[Figure: an arc directed from node 1 to node 2]

• Chain: A sequence of arcs such that every arc has exactly one vertex in common with the previous arc is called a chain.
[Figure: two arcs between nodes 1 and 2 sharing a common vertex]
• Path: A path is a chain in which the terminal node
of each arc is identical to the initial node of next arc.
For example in the figure below (1,2)-(2,3)-(4,3) is a
chain but not a path; (1,2)-(2,3)-(3,4) is a chain and
a path, which represents a way to travel from node 1
to node 4.
[Figure: four nodes 1, 2, 3, 4 joined by the arcs used in the chains above.]
Essence of Dijkstra’s Shortest- Path
Algorithm
• Key Points regarding the nature of the
algorithm
– In each iteration, the shortest path from the
origin to one of the rest of the nodes is found.
That is, we obtain one new “solved” node in
each iteration. (More than one such path and
node may be found in one iteration when
there is a tie. There may also exist multiple
shortest paths from the origin to some nodes.)
– The algorithm stops when the shortest path to
the destination is found
Essence of Dijkstra’s Shortest- Path
Algorithm
• General thought process involved in each
iteration
– Let S be the current set of “solved nodes” (the
set of nodes whose shortest paths from the
origin have been found), N be the set of all nodes,
and N – S be the set of “unsolved nodes”
• 1. The next “solved” node should be reachable
directly from one of the solved nodes via one direct
link or arc (these nodes can be called neighboring
nodes of the current solved nodes). Therefore, we
consider only such nodes and all the links
providing the access from the current solved nodes
to these neighboring nodes (but no other links).
Essence of Dijkstra’s Shortest- Path
Algorithm
– 2. For each of these neighboring nodes, find
the shortest path from the origin via only
current solved nodes and the corresponding
distance from the origin
– 3. In general, there exist multiple such
neighboring nodes. The shortest path to one of
these nodes is claimed to have been found.
This node is the one that has the shortest
distance from the origin among these
neighboring nodes being considered. Call this
new node “solved node.”
Algorithm for the Shortest Path
Problem
• Objective of the nth iteration: Find the nth nearest node
to the origin (to be repeated for n = 1, 2, … until the nth
nearest node is the destination)
• Input for the nth Iteration: (n – 1) nearest nodes to the
origin (solved for at the previous iterations), including their
shortest path and distance from the origin. (These nodes
plus the origin will be called solved nodes; the others are
unsolved nodes)
• Candidates for the nth nearest node: Each solved node
that is directly connected by a link to one or more unsolved
nodes provides one candidate: the unsolved node with
the shortest connecting link (ties provide additional
candidates)
• Calculation of nth nearest node: For each solved node
and its candidate, add the distance between them and the
distance of the shortest path from the origin to this solved
node. The candidate with the smallest such total distance
is the nth nearest node (ties provide additional solved
nodes), and its shortest path is the one generating this
distance.
The Road System for Seervada Park
• Cars are not allowed into the park
• There is a narrow winding road system for trams
and for jeeps driven by the park rangers
– The road system is shown without curves in the next
slide
– Location O is the entrance into the park
– Other letters designate the locations of the ranger
stations
– The scenic wonder is at location T
– The numbers give the distance of these winding roads
in miles
• The park management wishes to determine
which route from the park entrance to station T
has the smallest total distance for the operation
of the trams
The Road System for Seervada Park

[Figure: the park road network. Distances in miles: O–A = 2, O–B = 5, O–C = 4, A–B = 2, A–D = 7, B–C = 1, B–D = 4, B–E = 3, C–E = 4, D–E = 1, D–T = 5, E–T = 7.]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• 0th iteration: The shortest distance from node O
to node O is 0. S = {O}.
• 1st iteration:
– Step 1: Neighboring Nodes = {A, B, C}
– Step 2: Shortest path from O to neighboring
nodes that traverse through the current set of
solved nodes S. Min {2, 5, 4} = 2 (corresponding
to node A).
– Step 3: The shortest path from O to A has been
found with a distance of 2. S = {O, A}
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths

[Figure: the solved node O and the arcs to its neighboring nodes A, B and C.]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• 2nd Iteration:
– Step 1: Neighboring nodes = {B, C, D}
– Step 2: Min (Min (2 + 2, 5), 4, (2 + 7)) = 4.
– Step 3: The shortest paths to B and C have been
found. S = {O, A, B, C}
[Figure: current solved nodes O (0) and A (2), with the candidate arcs from O and A to the unsolved nodes B, C and D.]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• 3rd Iteration:
– Step 1: Neighboring nodes = {D, E}. Only AD, BD,
BE, and CE
– Step 2: Min(Min(2 + 7, 4+4), Min(4 + 3, 4+4)) = 7
– Step 3: The shortest path to E has been found. S = {O, A, B, C, E}

[Figure: current solved nodes O (0), A (2), B (4), C (4), with the candidate arcs AD, BD, BE and CE to the unsolved nodes.]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• Iteration 4
– Step 1: Include (only) Nodes D and T. Include
only arcs AD, BD, ED, & ET
– Step 2: Min((min(2+7, 4+4, 7+1), (7+7))) = 8
– Step 3: Shortest path from node O to Node D
has been found. S = {O, A, B, C, D, E}
[Figure: current solved nodes O (0), A (2), B (4), C (4), E (7); the candidate arcs are AD, BD, ED and ET.]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• Iteration 5
– Step 1: Include only node T and include arcs DT and
ET
– Step 2: Min(8+5, 7+7) = 13 (no other competing
nodes)
– Step 3: The shortest path from the origin to T, the
destination, has been found, with a distance of 13
[Figure: current solved nodes O (0), A (2), B (4), C (4), E (7), D (8); the remaining arcs to T are DT (5) and ET (7).]
Dijkstra’s Algorithm for Shortest Path
on a Network with Positive Arc Lengths
• Final Solution
• Incidentally, we have also found the nth
nearest node from the origin sequentially
[Figure: the final solution. Shortest distances from the origin O: A = 2, B = 4, C = 4, E = 7, D = 8, T = 13.]
Shortest-Path Algorithm Applied to Seervada Park Problem

 n     Solved Nodes Directly     Closest Connected    Total Distance   nth Nearest   Minimum    Last
       Connected to Unsolved     Unsolved Node        Involved         Node          Distance   Connection
       Nodes
 1     O                         A                    2                A             2          OA
 2,3   O                         C                    4                C             4          OC
       A                         B                    2 + 2 = 4        B             4          AB
 4     A                         D                    2 + 7 = 9
       B                         E                    4 + 3 = 7        E             7          BE
       C                         E                    4 + 4 = 8
 5     A                         D                    2 + 7 = 9
       B                         D                    4 + 4 = 8        D             8          BD
       E                         D                    7 + 1 = 8        D             8          ED
 6     D                         T                    8 + 5 = 13       T             13         DT
       E                         T                    7 + 7 = 14
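A compact sketch of Dijkstra's algorithm on the Seervada Park network, with the arc lengths taken from the figure and table above.

import heapq

graph = {
    "O": {"A": 2, "B": 5, "C": 4},
    "A": {"O": 2, "B": 2, "D": 7},
    "B": {"O": 5, "A": 2, "C": 1, "D": 4, "E": 3},
    "C": {"O": 4, "B": 1, "E": 4},
    "D": {"A": 7, "B": 4, "E": 1, "T": 5},
    "E": {"B": 3, "C": 4, "D": 1, "T": 7},
    "T": {"D": 5, "E": 7},
}

def dijkstra(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, length in graph[u].items():
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra("O"))   # expected distance to T: 13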
LP Formulation of the Shortest
Path Problem
• Consider the following shortest path
problem from node 1 to node 6
[Figure: a directed network with nodes 1–6; Δ denotes a link. Reading the costs and flow-conservation constraints below, link 1 = (1,2) with cost 4, link 2 = (1,3) cost 3, link 3 = (2,4) cost 3, link 4 = (2,5) cost 2, link 5 = (3,5) cost 3, link 6 = (4,6) cost 2, and link 7 = (5,6) cost 2.]

• Send one unit of flow from node 1 to node 6


LP Formulation of the Shortest
Path Problem
• Use flow conservation constraints
– (Outflow from any node – inflow to that node)
=0
– For origin = 1
– For destination = -1
– For all other nodes = 0
– Let xj denote the flow along link j, j = 1, 2, …, 7,
with xj = 0 or 1
– It turns out that these 0–1 constraints can be
replaced by 0 ≤ xj ≤ 1, which can in turn be
replaced by xj ≥ 0
LP Formulation of the Shortest Path
Problem
• Min 4x1 + 3x2 + 3x3 + 2x4 + 3x5 + 2x6 +
2x7
• S.t. x1 + x2 =1
• -x1 + x3 + x4 =0
• - x2 + x5 =0
• - x3 + x6 =0
• - x4 – x5 + x7 = 0
• - x6 – x7 = -1
• xj ≥ 0, j = 1, 2, …, 7 (xj integer)
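A sketch solving this LP directly with scipy.optimize.linprog; the rows of A_eq are the flow-conservation constraints for nodes 1 to 6.

from scipy.optimize import linprog

c = [4, 3, 3, 2, 3, 2, 2]                   # link costs for x1..x7
A_eq = [
    [ 1,  1,  0,  0,  0,  0,  0],           # node 1 (origin):      +1
    [-1,  0,  1,  1,  0,  0,  0],           # node 2
    [ 0, -1,  0,  0,  1,  0,  0],           # node 3
    [ 0,  0, -1,  0,  0,  1,  0],           # node 4
    [ 0,  0,  0, -1, -1,  0,  1],           # node 5
    [ 0,  0,  0,  0,  0, -1, -1],           # node 6 (destination): -1
]
b_eq = [1, 0, 0, 0, 0, -1]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7, method="highs")
print(res.x, res.fun)                       # an optimal path of total cost 8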
Maximum Flow Problem
• Maximum flow problem description
– All flow through a directed and connected network
originates at one node (the source) and terminates at
another node (the sink)
– All the remaining nodes are transshipment nodes
– Flow through an arc is allowed only in the direction
indicated by the arrowhead, where the maximum amount
of flow is given by the capacity of that arc. At the source,
all arcs point away from the node. At the sink, all arcs
point into the node
– The objective is to maximize the total amount of flow from
the source to the sink (measured as the amount leaving
the source or the amount entering the sink)
Maximum Flow Problem
• Typical applications
– Maximize the flow through a company’s distribution
network from its factories to its customers
– Maximize the flow through a company’s supply network
from its vendors to its factories
– Maximize the flow of oil through a system of pipelines
– Maximize the flow of water through a system of
aqueducts
– Maximize the flow of vehicles through a transportation
network
Maximum Flow Algorithm
• Some Terminology
– The residual network shows the remaining
arc capacities for assigning additional flows
after some flows have been assigned to the
arcs

[Figure: an arc between nodes O and B carrying an assigned flow, labelled with its two residual capacities: one for additional flow from node O to node B (2) and one for assigning some flow from node B back to node O (5).]

Maximum Flow Algorithm
– An augmenting path is a directed path from the
source to the sink in the residual network such that
every arc on this path has strictly positive residual
capacity
– The residual capacity of the augmenting path is the
minimum of these residual capacities (the amount of
flow that can feasibly be added to the entire path)
• Basic idea
– Repeatedly select some augmenting path and add a
flow equal to its residual capacity to that path in the
original network. This process continues until there
are no more augmenting paths, so that the flow from
the source to the sink cannot be increased further
Maximum Flow Algorithm
• The Augmenting Path Algorithm
– Assume that the arc capacities are either integers or
rational numbers
• 1. identify an augmenting path by finding some
directed path from the source to the sink in the
residual network such that every arc on this path has
strictly positive residual capacity. If no such path
exists, the net flows already assigned constitute an
optimal flow pattern
• 2. Identify the residual capacity c* of this augmenting
path by finding the minimum of the residual
capacities of the arcs on this path. Increase the flow
in this path by c*
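A minimal sketch of the augmenting path idea, using breadth-first search to find each augmenting path (the Edmonds-Karp variant) and applied to the Seervada Park capacities used in the example that follows.

from collections import deque

cap = {  # arc capacities; reverse residual capacities start at 0
    ("O", "A"): 5, ("O", "B"): 7, ("O", "C"): 4,
    ("A", "B"): 1, ("A", "D"): 3,
    ("B", "C"): 2, ("B", "D"): 4, ("B", "E"): 5,
    ("C", "E"): 4,
    ("D", "T"): 9,
    ("E", "D"): 1, ("E", "T"): 6,
}
nodes = {u for arc in cap for u in arc}
res = {(u, v): cap.get((u, v), 0) for u in nodes for v in nodes}

def augmenting_path(source, sink):
    parent, queue = {source: None}, deque([source])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in parent and res[(u, v)] > 0:   # strictly positive residual
                parent[v] = u
                queue.append(v)
    if sink not in parent:
        return None
    path, v = [], sink
    while v != source:
        path.append((parent[v], v))
        v = parent[v]
    return path[::-1]

max_flow = 0
while (path := augmenting_path("O", "T")) is not None:
    c_star = min(res[arc] for arc in path)            # residual capacity of the path
    for u, v in path:
        res[(u, v)] -= c_star
        res[(v, u)] += c_star                         # allow flow to be rerouted later
    max_flow += c_star

print(max_flow)   # maximum flow from O to T for these capacities (14)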
Maximum Flow Example
• During the peak season the park management
of the Seervada park would like to determine
how to route the various tram trips from the park
entrance (Station O) to the scenic wonder (Station T) to
maximize the number of trips per day. Each tram
will return by the same route it took on the
outgoing trip so the analysis focuses on outgoing
trips only. To avoid unduly disturbing the ecology
and wildlife of the region, strict upper limits have
been imposed on the number of outgoing trips
allowed per day in the outbound direction on
each individual road. For each road the direction
of travel for outgoing trips is indicated by an
arrow in the next slide. The number at the base
of the arrow gives the upper limit on the number
of outgoing trips allowed per day.
Maximum Flow Example
• Consider the problem of sending as many units from
node O to node T for the following network (current flow,
capacity):

[Figure: the Seervada Park network with (flow, capacity) labels: O→A (0,5), O→B (0,7), O→C (0,4), A→B (0,1), A→D (0,3), B→C (0,2), B→D (0,4), B→E (0,5), C→E (0,4), E→D (0,1), D→T (0,9), E→T (0,6).]
Maximum Flow Example
• Iteration 1: one of the several augmenting paths is O→B→E→T, which has a
residual capacity of min{7, 5, 6} = 5. By assigning a flow of 5 to this path,
the resulting network is:

[Figure: O→B = (5,7), B→E = (5,5), E→T = (5,6); all other arcs remain at zero flow.]
Maximum Flow Example
• Iteration 2: Assign a flow of 3 to the augmenting
path O→A→D→T. The resulting residual network
is:

[Figure: after iteration 2 the additional flows are O→A = (3,5), A→D = (3,3), D→T = (3,9).]
Maximum Flow Example
• Iteration 3: Assign a flow of 1 to the augmenting
path O→A→B→D→T. The resulting residual
network is:

[Figure: after iteration 3 the flows include O→A = (4,5), A→B = (1,1), B→D = (1,4), D→T = (4,9).]
Maximum Flow Example
• Iteration 4: Assign a flow of 2 to the augmenting path O→B→D→T.
The resulting residual network is

[Figure: after iteration 4 the flows include O→B = (7,7), B→D = (3,4), D→T = (6,9).]
Maximum Flow Example
Iteration 5: Assign a flow of 1 to the augmenting
path O→C→E→D→T. The resulting residual
network is:

[Figure: after iteration 5 the flows include O→C = (1,4), C→E = (1,4), E→D = (1,1), D→T = (7,9).]
Maximum Flow Example
Iteration 6: Assign a flow of 1 to the augmenting
path O→C→E→T. The resulting residual network
is:

[Figure: after iteration 6 the flows include O→C = (2,4), C→E = (2,4), E→T = (6,6).]
Maximum Flow Example
• Iteration 7: Assign a flow of 1 to the augmenting path O→C→E→B→D→T; the step from E to B uses the backward residual capacity of arc B→E
• There are now no more flow augmenting paths, so the current flow pattern is optimal

[Figure: the optimal flow pattern, with a total flow of 14 from O to T.]
Maximum Flow Example
• Recognizing optimality
• Max-flow min-cut theorem can be useful
• A cut is defined as any set of directed arcs
containing at least one arc from every directed
path from the source to the sink
• For any particular cut, the cut value is the sum of
the arc capacities of the arcs of the cut
• The theorem states that, for any network with a
single source and sink, the maximum feasible
flow from the source to the sink equals the
minimum cut value for all cuts of the network
8.6 Minimum Spanning Tree
Problems
Suppose that each arc (i,j) in a network has a
length associated with it and that arc (i,j)
represents a way of connecting node i to node j.
For example, if each node in a network represents
a computer in a computer network, arc(i,j) might
represent an underground cable that connects
computer i to computer j. In many applications, we
want to determine the set of arcs in a network that
connect all nodes such that the sum of the length
of the arcs is minimized. Clearly, such a group of
arcs contains no loops.
Minimum Spanning Tree Problem
• An undirected and connected network is being
considered, where the given information
includes some measure of the positive length
(distance, cost, time, etc.) associated with each
link
• Both the shortest path and minimum spanning
tree problems involve choosing a set of links that
have the shortest total length among all sets of
links that satisfy a certain property
– For the shortest-path problem this property is that the
chosen links must provide a path between the origin
and the destination
– For the minimum spanning tree problem, the required
property is that the chosen links must provide a path
between each pair of nodes
Some Applications
• Design of telecommunication networks (fiber-
optic networks, computer networks, leased-line
telephone networks, cable television networks,
etc.)
• Design of lightly used transportation network to
minimize the total cost of providing the links (rail
lines, roads, etc.)
• Design of a network of high-voltage electrical
transmission lines
• Design of a network of wiring on electrical
equipment (e.g., a digital computer system) to
minimize the total length of the wire
• Design of a network of pipelines to connect a
number of locations
For a network with n nodes, a spanning tree is a
group of n-1 arcs that connects all nodes of the
network and contains no loops.

12
1 2

(1,2)-(2,3)-(3,1) is a loop
4
7

(1,3)-(2,3) is the minimum spanning tree


3
Minimum Spanning Tree Problem
Description
• You are given the nodes of the network but not
the links. Instead you are given the potential
links and the positive length for each if it is
inserted into the network (alternative measures
for length of a link include distance, cost, and
time)
• You wish to design the network by inserting
enough links to satisfy the requirement that there
be a path between every pair of nodes
• The objective is to satisfy this requirement in a
way that minimizes the total length of links
inserted into the network
Minimum Spanning Tree Algorithm
• Greedy Algorithm
– 1. Select any node arbitrarily, and then connect (i.e.,
add a link) to the nearest distinct node
– 2. Identify the unconnected node that is closest to a
connected node, and then connect these two nodes
(i.e., add a link between them). Repeat the step until
all nodes have been connected
– 3. Tie breaking: Ties for the nearest distinct node
(step 1) or the closest unconnected node (step 2) may
be broken arbitrarily, and the algorithm will still yield
an optimal solution.
• Fastest way of executing algorithm manually is
the graphical approach illustrated next
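Before the worked example, here is a minimal sketch of this greedy procedure (Prim's algorithm) for the campus computer example that follows; the arc lengths are reconstructed from the worked iterations, so treat them as an assumption.

edges = {
    (1, 2): 1, (1, 5): 2, (2, 5): 2, (2, 3): 6,
    (3, 5): 2, (3, 4): 5, (4, 5): 4,
}
def length(u, v):
    return edges.get((u, v), edges.get((v, u), float("inf")))

connected, tree = {1}, []                       # step 1: start from any node
nodes = {1, 2, 3, 4, 5}
while connected != nodes:
    # step 2: closest unconnected node to any connected node
    u, v = min(((u, v) for u in connected for v in nodes - connected),
               key=lambda e: length(*e))
    tree.append((u, v))
    connected.add(v)

print(tree, sum(length(u, v) for u, v in tree))  # expected total length: 9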
Example: The State University campus has five
computers. The distances between computers are
given in the figure below. What is the minimum
length of cable required to interconnect the
computers? Note that if two computers are not
connected, this is because of underground rock
formations.

[Figure: the five computers and the arc lengths between the pairs that can be connected.]
Solution: We want to find the minimum spanning
tree.
• Iteration 1: Following the MST algorithm
discussed before, we arbitrarily choose node 1 to
begin. The closest node is node 2. Now C={1,2},
Ć={3,4,5}, and arc(1,2) will be in the minimum
spanning tree.
• Iteration 2: Node 5 is closest to C. Since node 5
is two blocks from node 1 and node 2, we may
include either arc(2,5) or arc(1,5) in the minimum
spanning tree. We arbitrarily choose to include
arc(2,5). Then C={1,2,5} and Ć={3,4}.

• Iteration 3: Since node 3 is two blocks from node
5, we may include arc(5,3) in the minimum
spanning tree. Now C={1,2,5,3} and Ć={4}.

• Iteration 4: Node 5 is the closest node to node 4.
Thus, we add arc(5,4) to the minimum spanning
tree.
We now have a minimum spanning tree consisting
of arcs(1,2), (2,5), (5,3), and (5,4). The length of
the minimum spanning tree is 1+2+2+4=9 blocks.
