
ASSIGNMENT ON OPERATIONS RESEARCH

BY RAHUL GUPTA
Q.1: Describe in detail the OR approach to problem solving. What
are the limitations of Operations Research?
Answer:

OR approach to problem solving:

Optimization is the act of obtaining the best result under given circumstances. In
various practical problems we may have to take many technical or managerial decisions
at several stages. The ultimate goal of all such decisions is either to maximize the
desired benefit or to minimize the effort required. We make decisions in our everyday life
without even noticing them. Decision-making is one of the main activities of a manager
or executive.

In simple situations decisions are taken simply by common sense, sound judgment
and expertise, without using any mathematics. But here the decisions we are concerned
with are rather complex and heavily loaded with responsibility. Examples of such
decisions are finding the appropriate product mix when there is a large number of
products with different profit contributions and production requirements, or planning a
public transportation network in a town with its own layout of factories, apartment
blocks, etc. Certainly, in such situations a decision may also be arrived at intuitively from
experience and common sense, yet it is more judicious if backed up by
mathematical reasoning.

The search for a decision may also be done by trial and error, but such a search may
be cumbersome and costly. Preparatory calculations may avoid a long and costly search,
and carrying out such calculations is the purpose of Operations Research. Operations
Research does the mathematical scoring of the consequences of a decision with the aim of
optimizing the use of time, effort and resources and avoiding blunders.

The application of Operations Research methods helps in making decisions in such
complicated situations. Evidently the main objective of Operations Research is to
provide a scientific basis to the decision-makers for solving problems involving the
interaction of various components of the organization, by employing a team of scientists
from different disciplines, all working together to find the solution which is best
in the interest of the organization as a whole. The solution thus obtained is known as
an optimal decision.

Features of Operations Research:


• It is system oriented:

OR studies the problem from the overall point of view of the organization or
situation, since the optimum result for one part of the system may not be optimum for some
other part.

• It imbibes an inter-disciplinary team approach:

Since no single individual can have a thorough knowledge of all fast-developing
scientific know-how, personnel from different scientific and managerial cadres form a
team to solve the problem.

• It makes use of scientific methods to solve problems.
• It increases the effectiveness of management's decision-making ability.
• It makes use of computers to solve large and complex problems.
• It gives quantitative solutions.
• It also considers the human factors.

The first and most important requirement is that the root problem should be identified
and understood. Identifying the problem properly involves three major aspects:
• A description of the goal or the objective of the study.
• An identification of the decision alternatives of the system.
• Recognition of the limitations, restrictions and requirements of the system.

Limitations of OR:

The limitations are mostly related to the problems of model building and to time and money factors.
• Magnitude of computation: modern problems involve a large number of variables, and finding the
interrelationships among them is difficult.
• Non-quantitative factors and human emotional factors cannot be taken into account.
• There is a wide gap between managers and operations researchers.
• Time and money factors: when the basic data is subject to frequent changes, incorporating
them into OR models is a costly affair.
• Implementation of decisions involves human relations and behavior.

Q 2: What are the characteristics of the standard form of
L.P.P.? What is the standard form of L.P.P.? State the
fundamental theorem of L.P.P.
Answer:

Introduction:

In mathematics, linear programming (LP) is a technique for the optimization of a linear
objective function, subject to linear equality and linear inequality constraints. Informally, linear
programming determines the way to achieve the best outcome (such as maximum profit or
lowest cost) in a given mathematical model and given some list of requirements represented as
linear relationships.

More formally, given a polyhedron (for example, a polygon) and a real-valued affine
function defined on this polyhedron, a linear programming method will find a point on
the polyhedron where this function has the smallest (or largest) value, if such a point
exists, by searching through the polyhedron's vertices.

Linear programs are problems that can be expressed in canonical form:

Maximize c^T x
Subject to Ax ≤ b, x ≥ 0

Here x represents the vector of variables (to be determined), while c and b are vectors of (known)
coefficients and A is a (known) matrix of coefficients. The expression to be maximized or
minimized (c^T x in this case) is called the objective function. The inequalities Ax ≤ b are the
constraints which specify a convex polytope over which the objective function is to be
optimized.

Linear programming can be applied to various fields of study. Most extensively it is
used in business and economic situations, but can also be utilized for some engineering
problems. Some industries that use linear programming models include transportation, energy,
telecommunications, and manufacturing. It has proved useful in modeling diverse types of
problems in planning, routing, scheduling, assignment, and design.

Uses:
Linear programming is a considerable field of optimization for several reasons. Many
practical problems in operations research can be expressed as linear programming problems.
Certain special cases of linear programming, such as network flow problems and
multicommodity flow problems, are considered important enough to have generated much research on
specialized algorithms for their solution. A number of algorithms for other types of
optimization problems work by solving LP problems as sub-problems. Historically, ideas from
linear programming have inspired many of the central concepts of optimization theory, such as
duality, decomposition, and the importance of convexity and its generalizations. Likewise,
linear programming is heavily used in microeconomics and company management, such as
planning, production, transportation, technology and other issues. Although the modern
management issues are ever-changing, most companies would like to maximize profits or
minimize costs with limited resources. Therefore, many issues can boil down to linear
programming problems.

Standard form:
Standard form is the usual and most intuitive form of describing a linear programming
problem. It consists of the following three parts:

• A linear function to be maximized, e.g.
  maximize c1 x1 + c2 x2

• Problem constraints of the following form, e.g.
  a11 x1 + a12 x2 ≤ b1
  a21 x1 + a22 x2 ≤ b2
  a31 x1 + a32 x2 ≤ b3

• Non-negative variables, e.g.
  x1 ≥ 0, x2 ≥ 0

The problem is usually expressed in matrix form, and then becomes:

Maximize c^T x
Subject to Ax ≤ b, x ≥ 0

Other forms, such as minimization problems, problems with constraints on alternative
forms, as well as problems involving negative variables, can always be rewritten into an
equivalent problem in standard form.

Example-

Suppose that a farmer has a piece of farm land, say A square kilometers large, to be planted
with either wheat or barley or some combination of the two. The farmer has a limited
permissible amount F of fertilizer and P of insecticide which can be used, each of which is
required in different amounts per unit area for wheat (F1, P1) and barley (F2, P2). Let S1 be the
selling price of wheat, and S2 the price of barley. If we denote the area planted with wheat and
barley by x1 and x2 respectively, then the optimal number of square kilometers to plant with
wheat vs. barley can be expressed as a linear programming problem:

maximize S1·x1 + S2·x2 (maximize the revenue; revenue is the "objective function")

subject to x1 + x2 ≤ A (limit on total area)
F1·x1 + F2·x2 ≤ F (limit on fertilizer)
P1·x1 + P2·x2 ≤ P (limit on insecticide)
x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)

which in matrix form becomes:

Maximize  [S1  S2] [x1  x2]^T
Subject to
  [ 1   1  ]              [ A ]
  [ F1  F2 ] [x1  x2]^T ≤ [ F ]
  [ P1  P2 ]              [ P ]
  [x1  x2]^T ≥ 0
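As an illustration, this primal can be handed to an off-the-shelf LP solver. The sketch below uses Python's scipy.optimize.linprog (which minimizes, so the revenue is negated); the numeric values chosen for A, F, P, F1, F2, P1, P2, S1 and S2 are assumptions made purely for the example, not data from the text.

```python
from scipy.optimize import linprog

# Hypothetical data, assumed purely for illustration (not from the text)
A, F, P = 10.0, 40.0, 30.0    # available land, fertilizer, insecticide
F1, F2 = 5.0, 3.0             # fertilizer needed per km^2 of wheat / barley
P1, P2 = 2.0, 4.0             # insecticide needed per km^2 of wheat / barley
S1, S2 = 80.0, 60.0           # selling price per km^2 of wheat / barley

# linprog minimizes, so maximize S1*x1 + S2*x2 by minimizing its negative
c = [-S1, -S2]
A_ub = [[1.0, 1.0],           # x1 + x2        <= A  (total area)
        [F1, F2],             # F1*x1 + F2*x2  <= F  (fertilizer)
        [P1, P2]]             # P1*x1 + P2*x2  <= P  (insecticide)
b_ub = [A, F, P]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("km^2 of wheat and barley (x1, x2):", res.x)
print("maximum revenue:", -res.fun)
```

The same pattern works for any problem already written in the standard form above: supply the objective coefficients, the constraint matrix and the right-hand sides.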

Augmented form (slack form):

Linear programming problems must be converted into augmented form before being solved
by the simplex algorithm. This form introduces non-negative slack variables to replace
inequalities with equalities in the constraints. The problem can then be written in the following
block matrix form.

Maximize Z in:

  [ 1  -c^T  0 ] [ Z   ]   [ 0 ]
  [ 0   A    I ] [ x   ] = [ b ]
                 [ x_s ]
  x ≥ 0, x_s ≥ 0

where x_s are the newly introduced slack variables and Z is the variable to be maximized.

Example-

The example above is converted into the following augmented form:

maximize S1·x1 + S2·x2 (objective function)

subject to x1 + x2 + x3 = A (augmented constraint)
F1·x1 + F2·x2 + x4 = F (augmented constraint)
P1·x1 + P2·x2 + x5 = P (augmented constraint)
x1, x2, x3, x4, x5 ≥ 0

where x3, x4 and x5 are the (non-negative) slack variables, representing in this example the unused
area, the amount of unused fertilizer, and the amount of unused insecticide.

In matrix form this becomes:

Maximize Z in:

  [ 1  -S1  -S2  0  0  0 ]   [ Z  ]   [ 0 ]
  [ 0   1    1   1  0  0 ]   [ x1 ]   [ A ]
  [ 0   F1   F2  0  1  0 ] × [ x2 ] = [ F ]
  [ 0   P1   P2  0  0  1 ]   [ x3 ]   [ P ]
                             [ x4 ]
                             [ x5 ]
  x1, x2, x3, x4, x5 ≥ 0
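To make the slack variables concrete, the short sketch below (reusing the hypothetical numbers from the earlier farmer sketch) computes x3, x4, x5 = b - Ax for a candidate plan; a plan is feasible exactly when all slacks are non-negative. The numbers are again illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical coefficients, consistent with the earlier farmer sketch
A_mat = np.array([[1.0, 1.0],    # area used per km^2 of wheat / barley
                  [5.0, 3.0],    # fertilizer used (F1, F2)
                  [2.0, 4.0]])   # insecticide used (P1, P2)
b = np.array([10.0, 40.0, 30.0]) # available A, F, P

x = np.array([4.0, 2.0])         # a candidate (not necessarily optimal) plan

# Slack variables turn A_mat @ x <= b into A_mat @ x + x_s = b
x_s = b - A_mat @ x
print("slacks x3, x4, x5 (unused area, fertilizer, insecticide):", x_s)

# The plan is feasible exactly when every slack is non-negative
print("feasible:", bool(np.all(x_s >= 0)))
```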

Duality:

Every linear programming problem, referred to as a primal problem, can be converted into a
dual problem, which provides an upper bound to the optimal value of the primal problem. In
matrix form, we can express the primal problem as:

Maximize c^T x
Subject to Ax ≤ b, x ≥ 0

The corresponding dual problem is:

Minimize b^T y
Subject to A^T y ≥ c, y ≥ 0

where y is used instead of x as the variable vector.

There are two ideas fundamental to duality theory. One is the fact that the dual of a dual
linear program is the original primal linear program. Additionally, every feasible solution for a
linear program gives a bound on the optimal value of the objective function of its dual. The
weak duality theorem states that the objective function value of the dual at any feasible solution
is always greater than or equal to the objective function value of the primal at any feasible
solution. The strong duality theorem states that if the primal has an optimal solution, x*, then
the dual also has an optimal solution, y*, such that cTx*=bTy*.

A linear program can also be unbounded or infeasible. Duality theory tells us that if the
primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the
dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual
and the primal to be infeasible (See also Farkas' lemma).
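Weak and strong duality can be checked numerically: solve the primal max c^T x subject to Ax ≤ b, x ≥ 0 and its dual min b^T y subject to A^T y ≥ c, y ≥ 0, and compare the optimal values. The sketch below does this with small assumed data (the numbers are not taken from the text).

```python
import numpy as np
from scipy.optimize import linprog

# Small assumed primal data (illustration only)
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Primal: maximize c^T x  s.t.  A x <= b, x >= 0  (negate c, since linprog minimizes)
primal = linprog(-c, A_ub=A, b_ub=b)

# Dual: minimize b^T y  s.t.  A^T y >= c, y >= 0  (rewritten as -A^T y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print("primal optimum c^T x* =", -primal.fun)
print("dual optimum   b^T y* =", dual.fun)
# By strong duality the two printed values coincide when both problems are feasible.
```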

Example-

Revisit the above example of the farmer who may grow wheat and barley with the set
provision of some A land, F fertilizer and P insecticide. Assume now that unit prices for each
of these means of production (inputs) are set by a planning board. The planning board's job is to
minimize the total cost of procuring the set amounts of inputs while providing the farmer with a
floor on the unit price of each of his crops (outputs), S1 for wheat and S2 for barley. This
corresponds to the following linear programming problem:

minimize A·y1 + F·y2 + P·y3 (minimize the total cost of the means of production as the "objective function")

subject to y1 + F1·y2 + P1·y3 ≥ S1 (the farmer must receive no less than S1 for his wheat)
y1 + F2·y2 + P2·y3 ≥ S2 (the farmer must receive no less than S2 for his barley)
y1, y2, y3 ≥ 0 (prices cannot be negative)

which in matrix form becomes:

Minimize  [A  F  P] [y1  y2  y3]^T
Subject to
  [ 1  F1  P1 ]                  [ S1 ]
  [ 1  F2  P2 ] [y1  y2  y3]^T ≥ [ S2 ]
  [y1  y2  y3]^T ≥ 0

The primal problem deals with physical quantities. With all inputs available in limited
quantities, and assuming the unit prices of all outputs is known, what quantities of outputs to
produce so as to maximize total revenue? The dual problem deals with economic values. With
floor guarantees on all output unit prices, and assuming the available quantity of all inputs is
known, what input unit pricing scheme to set so as to minimize total expenditure? To each
variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed
by output type? To each inequality to satisfy in the primal space corresponds a variable in the
dual space, both indexed by input type? The coefficients that bound the inequalities in the
primal space are used to compute the objective in the dual space, input quantities in this
example. The coefficients used to compute the objective in the primal space bound the
inequalities in the dual space, output unit prices in this example. Both the primal and the dual
problems make use of the same matrix. In the primal space, this matrix expresses the
consumption of physical quantities of inputs necessary to produce set quantities of outputs. In
the dual space, it expresses the creation of the economic values associated with the outputs
from set input unit prices. Since each inequality can be replaced by equality and a slack
variable, this means each primal variable corresponds to a dual slack variable, and each dual
variable corresponds to a primal slack variable. This relation allows us to complementary
slackness.

Covering-Packing Dualities:

  Covering problems        Packing problems
  Minimum Set Cover        Maximum Set Packing
  Minimum Vertex Cover     Maximum Matching
  Minimum Edge Cover       Maximum Independent Set

A covering LP is a linear program of the form

Minimize b^T y
Subject to A^T y ≥ c, y ≥ 0

such that the matrix A and the vectors b and c are non-negative.

The dual of a covering LP is a packing LP, a linear program of the form

Maximize c^T x
Subject to Ax ≤ b, x ≥ 0

such that the matrix A and the vectors b and c are non-negative.

Examples-

Covering and packing LPs commonly arise as a linear programming relaxation of a
combinatorial problem and are important in the study of approximation algorithms.[1] For
example, the LP relaxation of the set packing problem, the independent set problem, or matching is a
packing LP. The LP relaxation of the set cover problem, the vertex cover problem, or the dominating set
problem is a covering LP.

Finding a fractional coloring of a graph is another example of a covering LP. In this
case, there is one constraint for each vertex of the graph and one variable for each independent
set of the graph.

Complementary slackness:
It is possible to obtain an optimal solution to the dual when only an optimal solution to the
primal is known using the complementary slackness theorem. The theorem states:

Suppose that x = (x1, x2, . . . , xn) is primal feasible and that y = (y1, y2, . . . , ym) is dual feasible.
Let (w1, w2, . . . , wm) denote the corresponding primal slack variables, and let (z1, z2, . . . , zn)
denote the corresponding dual slack variables. Then x and y are optimal for their respective
problems if and only if xj·zj = 0 for j = 1, 2, . . . , n, and wi·yi = 0 for i = 1, 2, . . . , m.

So if the ith slack variable of the primal is not zero, then the ith variable of the dual is equal
to zero. Likewise, if the jth slack variable of the dual is not zero, then the jth variable of the primal
is equal to zero.

This necessary condition for optimality conveys a fairly simple economic principle. In
standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there
are "leftovers"), then additional quantities of that resource must have no value. Likewise, if
there is slack in the dual (shadow) price non-negativity constraint, i.e., the price is
not zero, then supplies must be scarce (no "leftovers").
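Complementary slackness can be verified directly from the two optimal solutions: every product of a primal variable with its dual slack, and of a dual variable with its primal slack, should be (numerically) zero. A minimal sketch, reusing the assumed primal/dual data from the duality sketch above:

```python
import numpy as np
from scipy.optimize import linprog

# Same assumed data as in the duality sketch (illustration only)
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

x = linprog(-c, A_ub=A, b_ub=b).x        # primal optimum x*
y = linprog(b, A_ub=-A.T, b_ub=-c).x     # dual optimum y*

w = b - A @ x        # primal slack variables w_i
z = A.T @ y - c      # dual slack variables z_j

print("x_j * z_j =", x * z)   # all (numerically) zero at optimality
print("w_i * y_i =", w * y)   # all (numerically) zero at optimality
```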

Theory:
Geometrically, the linear constraints define a convex polytope, which is called the feasible
region. It is not hard to see that every local optimum (a point x such that for every unit direction
vector d with positive objective value and every ε > 0 it holds that x + εd is infeasible) is also a
global optimum. This holds more generally for convex programs: see the KKT theorem.

There are two situations in which no optimal solution can be found. First, if the constraints
contradict each other (for instance, x ≥ 2 and x ≤ 1) then the feasible region is empty and there
can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be
infeasible. Alternatively, the polyhedron can be unbounded in the direction of the objective
function (for example: maximize x1 + 3 x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case
there is no optimal solution since solutions with arbitrarily high values of the objective function
can be constructed. Barring these two conditions (which can often be ruled out when dealing
with specific LPs), the optimum is always attained at a vertex of the polyhedron, unless the
polyhedron has no vertices (polyhedra with at least one vertex are called pointed). However, the
optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an
edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if
the objective function were constant on the polyhedron).

The vertices of the polyhedron are also called basic feasible solutions. The reason for this
choice of name is as follows. Let d denote the dimension, i.e. the number of variables. Then the
following theorem holds: for every vertex x* of the LP feasible region, there exists a set of d
inequality constraints from the LP such that, when we treat those d constraints as equalities, the
unique solution is x*. Thereby we can study these vertices by means of looking at certain
subsets of the set of all constraints (a discrete universe), rather than the continuous universe of
LP solutions. This principle underlies the simplex algorithm for solving linear programs.
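The statement that every vertex is determined by d constraints treated as equalities can be illustrated by brute force: take every subset of d constraints (with x ≥ 0 written as extra inequalities), solve the resulting linear system, and keep the solutions that satisfy all constraints. A minimal sketch with assumed data for d = 2 variables; this is an illustration, not an efficient algorithm.

```python
import itertools
import numpy as np

# Assumed constraints A x <= b (including x >= 0 rewritten as -x <= 0), d = 2 variables
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0],
              [-1.0, 0.0],    # -x1 <= 0 encodes x1 >= 0
              [0.0, -1.0]])   # -x2 <= 0 encodes x2 >= 0
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])
d = 2

vertices = []
for rows in itertools.combinations(range(len(b)), d):
    M, rhs = A[list(rows)], b[list(rows)]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                       # the chosen constraints are not independent
    x = np.linalg.solve(M, rhs)        # treat the d chosen constraints as equalities
    if np.all(A @ x <= b + 1e-9):      # keep only the feasible intersection points
        vertices.append(x)

print("basic feasible solutions (vertices):")
for v in vertices:
    print(v)
```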

Q 3: Describe the Two-Phase method of solving a linear
programming problem with an example.
Answer:

Two-Phase Method:

The drawback of the penalty cost method is the possible computational error that could
result from assigning a very large value to the constant M. To overcome this difficulty, a
new method is considered, where the use of M is eliminated by solving the problem in two
phases:

Phase I:

Formulate a new problem by replacing the original objective function with the sum of
the artificial variables for a minimization problem, or the negative of the sum of the
artificial variables for a maximization problem. The resulting objective function is
optimized by the simplex method subject to the constraints of the original problem. If the
problem has a feasible solution, the optimal value of the new objective function is zero
(which indicates that all artificial variables are zero), and we proceed to Phase II.
Otherwise, if the optimal value of the new objective function is non-zero, the problem has
no solution and the method terminates.
Phase II:

Use the optimum solution of Phase I as the starting solution of the original problem.
The objective function is then taken without the artificial variables and the problem is
solved by the simplex method.
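Phase I can be mimicked with a generic LP solver: add one artificial variable per constraint and minimize their sum; a zero optimum certifies that the original constraints are feasible and yields a starting point for Phase II. The sketch below is a small illustration with an assumed constraint set (it is not the worked example that follows).

```python
import numpy as np
from scipy.optimize import linprog

# Assumed equality constraints A x = b, x >= 0, with no obvious starting basis
A = np.array([[2.0, 1.0, -1.0, 0.0],
              [1.0, 3.0,  0.0, 1.0]])
b = np.array([2.0, 2.0])
m, n = A.shape

# Phase I: minimize the sum of m artificial variables a_i, one added to each row:
#   A x + I a = b,  x >= 0, a >= 0
c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])  # objective = sum of artificials
A_eq = np.hstack([A, np.eye(m)])

res = linprog(c_phase1, A_eq=A_eq, b_eq=b)
print("Phase I optimum (zero means the original problem is feasible):", res.fun)
print("feasible starting point for Phase II:", res.x[:n])
```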
Example:
Use the two-phase method to maximize z = 3x1 – x2

Phase I is complete, since there are no negative elements in the last row. The optimal value
of the new objective is Z* = 0.

Phase II:

Consider the original objective function: Maximize z = 3x1 – x2 + 0S1 + 0S2 + 0S3

Subject to
x1 + x2/2 – S1/2 = 1
(5/2)x2 + S1/2 + S2 = 1
x2 + S3 = 4
x1, x2, S1, S2, S3 ≥ 0

With the initial solution x1 = 1, S2 = 1, S3 = 4, the corresponding simplex table is:
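The simplex tables themselves are not reproduced in this extract, but the Phase II problem above is fully specified, so its optimum can be cross-checked with a solver. A minimal sketch, with the variable order (x1, x2, S1, S2, S3):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize z = 3*x1 - x2 over (x1, x2, S1, S2, S3) >= 0
c = np.array([3.0, -1.0, 0.0, 0.0, 0.0])

# Equality constraints of the Phase II problem stated above
A_eq = np.array([[1.0, 0.5, -0.5, 0.0, 0.0],   # x1 + x2/2 - S1/2    = 1
                 [0.0, 2.5,  0.5, 1.0, 0.0],   # (5/2)x2 + S1/2 + S2 = 1
                 [0.0, 1.0,  0.0, 0.0, 1.0]])  # x2 + S3             = 4
b_eq = np.array([1.0, 1.0, 4.0])

res = linprog(-c, A_eq=A_eq, b_eq=b_eq)        # negate c, since linprog minimizes
print("optimal (x1, x2, S1, S2, S3):", res.x)
print("maximum z =", -res.fun)
```

With these constraints the solver reports the optimum at x1 = 2, x2 = 0 (with S1 = 2, S3 = 4), giving a maximum of z = 6.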

Q.4: What do you understand by the transportation problem?
What is the basic assumption behind the transportation
problem? Describe the MODI method of solving a
transportation problem.
Answer:

Transportation Problem and its basic assumption:

This model studies the minimization of the cost of transporting a commodity from a
number of sources to several destinations. The supply at each source and the demand at each
destination are known. The transportation problem involves m sources, each of which has
available ai (i = 1, 2, …, m) units of a homogeneous product, and n destinations, each of which
requires bj (j = 1, 2, …, n) units of the product. Here ai and bj are positive integers. The cost
cij of transporting one unit of the product from the ith source to the jth destination is given for
each i and j. The objective is to develop an integral transportation schedule that meets all
demands from the available inventory at a minimum total transportation cost. It is assumed that
the total supply and the total demand are equal, i.e.

  a1 + a2 + … + am = b1 + b2 + … + bn   … (1)

Condition (1) is guaranteed by creating either a fictitious (dummy) destination with a
demand equal to the surplus, if the total demand is less than the total supply, or a fictitious
(dummy) source with a supply equal to the shortage, if the total demand exceeds the total supply.
The cost of transportation from all sources to the fictitious destination, and from the fictitious
source to all destinations, is taken to be zero, so that the total cost of transportation remains the
same.
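Balancing an unbalanced problem with a dummy source or destination can be done mechanically on the cost matrix, as in the sketch below; the costs, supplies and demands are assumed values used only for illustration.

```python
import numpy as np

# Assumed cost matrix (sources x destinations), supplies a_i and demands b_j
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
supply = np.array([30.0, 50.0])        # total supply = 80
demand = np.array([20.0, 25.0, 15.0])  # total demand = 60

gap = supply.sum() - demand.sum()
if gap > 0:
    # Excess supply: add a dummy destination with zero transportation cost
    cost = np.hstack([cost, np.zeros((cost.shape[0], 1))])
    demand = np.append(demand, gap)
elif gap < 0:
    # Excess demand: add a dummy source with zero transportation cost
    cost = np.vstack([cost, np.zeros((1, cost.shape[1]))])
    supply = np.append(supply, -gap)

print("balanced cost matrix:\n", cost)
print("supplies:", supply, "demands:", demand)
```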

Formulation of the Transportation Problem:

The standard mathematical model for the transportation problem is as follows. Let xij
be the number of units of the homogeneous product to be transported from source i to
destination j. Then the objective is to

  Minimize Z = Σi Σj cij·xij
  subject to  xi1 + xi2 + … + xin = ai   (i = 1, 2, …, m)
              x1j + x2j + … + xmj = bj   (j = 1, 2, …, n)
              xij ≥ 0 for all i and j.   … (2)

Theorem:

A necessary and sufficient condition for the existence of a feasible solution to the
transportation problem (2) is that the total supply equals the total demand, i.e.
a1 + a2 + … + am = b1 + b2 + … + bn.
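Since model (2) is itself a linear program, a small balanced instance can be solved directly by flattening the xij into a single vector, as in the sketch below. The cost, supply and demand figures are assumptions for illustration; in practice a specialized transportation algorithm (such as the MODI method described next) would normally be used.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed balanced instance: 2 sources, 3 destinations (illustration only)
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
supply = np.array([30.0, 30.0])
demand = np.array([20.0, 25.0, 15.0])
m, n = cost.shape

# Variables x_ij flattened row by row; minimize sum_ij c_ij * x_ij
c = cost.ravel()

# Row sums must equal the supplies, column sums must equal the demands
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j x_ij = a_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # sum_i x_ij = b_j
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
print("optimal shipping plan x_ij:\n", res.x.reshape(m, n))
print("minimum total cost:", res.fun)
```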

The Transportation Algorithm (MODI Method):

The first approximation to (2) is always integral and therefore always a feasible
solution. Rather than determining a first approximation by a direct application of the
simplex method, it is more efficient to work with the transportation table. The
transportation algorithm is the simplex method specialized to the format of this table. It involves:
i. finding an integral basic feasible solution;
ii. testing the solution for optimality;
iii. improving the solution when it is not optimal;
iv. repeating steps (ii) and (iii) until the optimal solution is obtained.

The solution to a T.P. is obtained in two stages. In the first stage we find a basic feasible
solution by any one of the following methods: a) North-West Corner Rule, b) Matrix Minima
(least cost) Method, or c) Vogel's Approximation Method. In the second stage we test the
basic feasible solution for its optimality, either by the MODI method or by the stepping stone method.
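The optimality test at the heart of the MODI method can be sketched as follows: given the basic cells of a basic feasible solution, solve ui + vj = cij over those cells (fixing u1 = 0), then compute the reduced costs cij − ui − vj for the non-basic cells; the solution is optimal when none of these is negative. The cost matrix and the choice of basic cells below are assumptions for illustration.

```python
import numpy as np

# Assumed cost matrix and the basic cells of a basic feasible solution (illustration)
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
m, n = cost.shape
basic = [(0, 0), (0, 2), (1, 1), (1, 2)]   # m + n - 1 = 4 basic cells

# Solve u_i + v_j = c_ij over the basic cells, fixing u_0 = 0
u = np.full(m, np.nan)
v = np.full(n, np.nan)
u[0] = 0.0
for _ in range(m + n):                     # simple propagation until all are known
    for i, j in basic:
        if not np.isnan(u[i]) and np.isnan(v[j]):
            v[j] = cost[i, j] - u[i]
        elif np.isnan(u[i]) and not np.isnan(v[j]):
            u[i] = cost[i, j] - v[j]

# Reduced costs of the non-basic cells; a negative value means "not optimal yet"
for i in range(m):
    for j in range(n):
        if (i, j) not in basic:
            print(f"cell ({i},{j}): reduced cost = {cost[i, j] - u[i] - v[j]}")
```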

Q.5: Describe the North-West Corner rule for finding the
initial basic feasible solution in the transportation problem.
Answer:

The Initial Basic Feasible Solution using the North-West Corner Rule:

Let us consider a T.P. involving m origins and n destinations. Since the sum of the
origin capacities equals the sum of the destination requirements, a feasible solution always
exists. Any feasible solution satisfying m + n – 1 of the m + n constraints automatically
satisfies the remaining one, so one constraint is redundant and can be deleted. This also
means that a feasible solution to a T.P. can have at most m + n – 1 strictly positive
components; otherwise the solution will degenerate.

It is always possible to assign an initial feasible solution to a T.P. in such a manner
that the rim requirements are satisfied. This can be achieved either by inspection or by
following some simple rules. We begin by imagining that the transportation table is
blank, i.e. initially all xij = 0. The simplest procedure for an initial allocation, the
North-West Corner Rule, is discussed in the following section.

North-West Corner Rule:

• Step 1:
a. The first assignment is made in the cell occupying the upper left-hand (North-West)
corner of the transportation table.

b. The maximum feasible amount is allocated there, that is x11 = min (a1, b1), so
that either the capacity of origin O1 is used up or the requirement at destination
D1 is satisfied, or both.

c. This value of x11 is entered in the upper left-hand corner (small square) of cell
(1, 1) in the transportation table.

• Step 2:
a. If b1 > a1, the capacity of origin O1 is exhausted but the requirement at
destination D1 is still not satisfied, so that at least one more variable in the
first column will have to take on a positive value.

b. Move down vertically to the second row and make the second allocation of
magnitude x21 = min (a2, b1 – x11) in the cell (2, 1). This either exhausts the
capacity of origin O2 or satisfies the remaining demand at destination D1.

c. If a1 > b1, the requirement at destination D1 is satisfied but the capacity of origin
O1 is not completely exhausted. Move to the right horizontally to the second
column and make the second allocation of magnitude x12 = min (a1 – x11, b2)
in the cell (1, 2).

d. This either exhausts the remaining capacity of origin O1 or satisfies the demand
at destination D2. If b1 = a1, the origin capacity of O1 is completely exhausted
and the requirement at destination D1 is completely satisfied.

e. In that case there is a tie for the second allocation; an arbitrary tie-breaking choice is made.
Make the second allocation of magnitude x12 = min (a1 – a1, b2) = 0 in the cell
(1, 2) or x21 = min (a2, b1 – b1) = 0 in the cell (2, 1).

• Step 3:

a. Start from the new North-West corner of the remaining transportation table, satisfying
destination requirements and exhausting origin capacities one at a time.

b. Move down towards the lower right corner of the transportation table until all
the rim requirements are satisfied (a short code sketch of this procedure is given below).
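A minimal sketch of the rule described above: allocate min(ai, bj) at the current cell, then move down when the origin is exhausted and right when the destination is satisfied. The supplies and demands used here are assumed values for illustration.

```python
import numpy as np

def north_west_corner(supply, demand):
    """Initial basic feasible solution of a balanced T.P. by the North-West Corner Rule."""
    a, b = list(supply), list(demand)      # working copies of a_i and b_j
    m, n = len(a), len(b)
    x = np.zeros((m, n))
    i = j = 0
    while i < m and j < n:
        q = min(a[i], b[j])                # allocate x_ij = min(a_i, b_j)
        x[i, j] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0 and i < m - 1:        # origin exhausted: move down one row
            i += 1
        elif b[j] == 0:                    # demand satisfied: move right one column
            j += 1
        else:
            i += 1
    return x

# Assumed balanced supplies and demands (illustration only)
print(north_west_corner([30, 30], [20, 25, 15]))
```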

Q.6: Describe the Branch and Bound Technique to solve an I.P.P.
problem.

Answer:
The Branch and Bound Technique:

Sometimes a few or all of the variables of an I.P.P. are constrained by their upper or lower
bounds, or by both. The most general technique for the solution of such constrained optimization
problems is the branch and bound technique. The technique is applicable to all-integer as well as
mixed I.P.P. The technique for a maximization problem is discussed below. Let the I.P.P. be

or the linear constraint xj ≤ I. …(7)

To explain how this partitioning helps, let us assume that there were no integer restrictions (3),
and suppose that this then yields an optimal solution to the L.P.P. given by (1), (2), (4) and (5),
indicating x1 = 1.66 (for example). Then we formulate and solve two L.P.P.'s, each containing
(1), (2) and (4), but with (5) for j = 1 modified to be

2 ≤ x1 ≤ U1
in one problem, and
L1 ≤ x1 ≤ 1
in the other. Further, if each of these problems possesses an optimal solution satisfying the
integer constraints (3), then the solution having the larger value of z is clearly the optimum
for the given I.P.P. However, it usually happens that one (or both) of these problems has
no optimal solution satisfying (3), and thus some more computations are necessary. We
now discuss step-wise the algorithm that specifies how to apply the partitioning (6) and
(7) in a systematic manner to finally arrive at an optimum solution.

We start with an initial lower bound for z, say z(0), at the first iteration, which is less than
or equal to the optimal value z*; this lower bound may be taken as the starting Lj for
some xj. In addition to the lower bound z(0), we also have a list of L.P.P.'s (to be called the
master list) differing only in the bounds (5). To start with (the 0th iteration) the master
list contains a single L.P.P. consisting of (1), (2), (4) and (5). We now discuss below
the step-by-step procedure that specifies how the partitioning (6) and (7) can be applied
systematically to eventually get an optimum integer-valued solution.
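As a rough illustration of the branching and bounding idea described above (not the step-by-step algorithm referred to below, which is not reproduced in this extract), the recursive sketch below solves the LP relaxation with scipy, branches on a fractional variable xj with I < xj < I + 1 into the subproblems xj ≤ I and xj ≥ I + 1, and keeps the best integer solution found so far as the lower bound used for pruning. The small numeric problem at the end is an assumption for illustration.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Maximize c @ x subject to A_ub @ x <= b_ub, x integer, by LP relaxation + branching."""
    best = {"z": -math.inf, "x": None}     # incumbent integer solution = lower bound for z

    def solve(bnds):
        res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub, bounds=bnds)
        if not res.success:
            return                          # infeasible subproblem: discard it
        z = -res.fun
        if z <= best["z"]:
            return                          # bounded by the incumbent: discard it
        frac = [j for j, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:
            best["z"], best["x"] = z, np.round(res.x)   # integer solution: new lower bound
            return
        j, v = frac[0], res.x[frac[0]]
        lo, hi = bnds[j]
        # Branch on x_j: either x_j <= I or x_j >= I + 1, where I = floor(v)
        solve(bnds[:j] + [(lo, math.floor(v))] + bnds[j + 1:])
        solve(bnds[:j] + [(math.floor(v) + 1, hi)] + bnds[j + 1:])

    solve(list(bounds))
    return best

# Assumed small I.P.P.: maximize 5x1 + 4x2 s.t. 6x1 + 4x2 <= 24, x1 + 2x2 <= 6, x >= 0 integer
print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6], [(0, None), (0, None)]))
```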
Branch and Bound Algorithm
At the tth iteration (t = 0, 1, 2 …)
