
PROCESS OPTIMIZATION THROUGH ARTIFICIAL NEURAL NETWORK TECHNIQUES

Dr. A. K. Santra
Senior Professor
School of Information Technology
VIT University, Vellore

1
INTRODUCTION

SAVING MONEY: A FOCAL POINT OF DISCUSSION DUE TO
INCREASED GLOBAL COMPETITION

RECESSION HAS PUT INDUSTRY IN SURVIVAL MODE

EXTENSIVE COST-CUTTING MEASURES ARE NEEDED

2
INTRODUCTION (Contd.)

ARTIFICIAL NEURAL NETWORKS (ANN) ARE A NEW TECHNOLOGY FOR:

PROCESS MODELING, PREDICTION, OPTIMIZATION AND CONTROL

PROCESS IMPROVEMENT THROUGH NEW TECHNOLOGY

PRODUCT QUALITY CONTROL

3
INTRODUCTION (Contd.)

A NEURON IS A SIMPLE PROCESSING ELEMENT: IT RECEIVES INPUT
FROM OTHER NEURONS VIA WEIGHTED CONNECTIONS
ITS OUTPUT IS BASED ON A NON-LINEAR SQUASHING FUNCTION, SIGMOIDAL IN FORM
NEURAL NETS HAVE SHOWN PROMISING RESULTS
SOME SAY :
- MODELLING OF THE HUMAN BRAIN
- THINKING BRAIN MODELS
- MIMICKING THE HUMAN THOUGHT PROCESS
IN REALITY :
- NONE OF THESE; ONLY A SIMILARITY,
i.e., A NEURON-LIKE STRUCTURE

4
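To make this concrete, here is a minimal Python sketch of such a neuron: a weighted sum of inputs passed through a sigmoidal squashing function. The input values, weights and bias are illustrative only, not taken from the slides.

```python
import numpy as np

def sigmoid(a):
    """Non-linear squashing function, sigmoidal in form."""
    return 1.0 / (1.0 + np.exp(-a))

def neuron_output(inputs, weights, bias):
    """A neuron sums its weighted inputs and squashes the result."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Illustrative values: three inputs arriving over weighted connections.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w, bias=0.2))   # a value in (0, 1)
```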
NEURON

IT IS THE BASIC CELL OF THE HUMAN BRAIN

NEURAL NETS ATTEMPT TO SIMULATE THE THINKING AND
PROCESSING PROCEDURES OF THE HUMAN BRAIN BY
MODELLING THE NEURON

SOMA : THE BASIC COMPONENT OF THE NEURON, WHICH IS
ATTACHED TO THE AXON

AXON : THE AXON IS ELECTRICALLY ACTIVE AND PRODUCES
THE PULSE EMITTED BY THE NEURON

SYNAPSES : THE ELECTRICALLY PASSIVE DENDRITES RECEIVE
INPUTS FROM OTHER NEURONS BY MEANS OF SPECIALIZED
CONTACTS CALLED SYNAPSES, WHICH ACT AS WEIGHTS TO
THE INPUT INFORMATION

5
SINGLE BIOLOGICAL NEURON AND ITS MODEL

6
MULTIPLE NEURONS AND THEIR MODEL REPRESENTATION

7
NEURAL NETWORKS
NEURONS ARE CONNECTED IN LAYERS
EACH LAYER RECEIVES INPUT FROM THE PRECEDING LAYER (FEED-FORWARD)
- INPUT LAYER
- MIDDLE / HIDDEN LAYER(S)
- OUTPUT LAYER

[Figure: feed-forward network showing input layer, middle layers and output layer]
8
UNSUPERVISED LEARNING METHODS

CLASSICAL STATISTICS
CLUSTERING
k - MEANS
FEATURE MAPS
INFOMAX PRINCIPLE
MAXIMUM ENTROPY
COMPETITIVE LEARNING PARADIGMS (NEURAL NETS)

9
SUPERVISED LEARNING METHODS

BACK PROPAGATION
PARZEN WINDOWS
NEAREST NEIGHBOUR NORMS
PROJECTION PURSUIT
GAUSSIAN MIXTURE
GROUP METHOD OF DATA HANDLING ( GMDH )
MULTIVARIATE ADAPTIVE REGRESSION SPLINE
HYPER SPHERE CLASSIFIERS
CLASSIFICATION AND REGRESSION TREES

10
BENEFITS

BETTER PROCESS RESOURCE USAGE AND CONTROL


ALTERNATIVES FOR BETTER MANUFACTURING AND PRODUCTION
- BETTER PROCESS EQUIPMENT
- BETTER OPERATIONAL PRACTICES
- NEW PROCESSES
- PROCESS IMPROVEMENT THROUGH NEW TECHNOLOGY
LOWER SCRAP GENERATION
LOWER DEFECT RATE IN PRODUCT
HIGHER PRODUCT QUALITY

11
APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS

IRON - MAKING
STEEL - MAKING
DATA MINING AND CLUSTERING
IMAGE PROCESSING (2D / 3D)
- FACE RECOGNITION
- VEHICLE CLASSIFICATION
- SATELLITE OBJECT RECOGNITION
- TARGET (ENEMY) DETECTION AND RECOGNITION
WEATHER FORECASTING
CROP YIELD PREDICTION
...
FACILITY LOCATION PROBLEMS
12
Application of ANN in Steel Industry
Optimum Stove Heating and Changing, Optimum Driving
Rate of Blast Furnace
Adjusting End Point Composition Based on In-Blow
Sampling in Oxygen Furnace
Optimum Scheduling and Sequencing in Continuous
Casting
Optimization of Annealing Parameters based on Material
and Equipment Characteristics
Desulphurization of Hot Metal in Torpedo
Optimization of Refractory Lining Life in Basic Oxygen
Furnace

13
Case Study
Optimization of Refractory Lining Life in Basic Oxygen
Furnace (BOF)
Various Factors that affect the Campaign Life of
Refractory Lining in BOF :
Vessel Design, Refractory Quality, Operating
Parameters, etc.
Solution by ANN, Regression Techniques & GMDH
A Multilayer Feedforward Neural Network Architecture was used
8 Parameters and Data from 10 Campaigns were used
For Training: 8 Data Sets; For Prediction: 2 Data Sets

14
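The slides give only the set-up (8 input parameters, 10 campaigns, 8 for training and 2 for prediction), not the data or network details, so the following scikit-learn sketch uses random placeholder values purely to illustrate the train/predict split and the error computation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data: 10 campaigns x 8 operating/design parameters,
# with lining life in hours as the target (the real campaign values
# are not given in the slides).
X = rng.uniform(0.0, 1.0, size=(10, 8))
y = rng.uniform(500.0, 700.0, size=10)

X_train, y_train = X[:8], y[:8]      # 8 campaigns for training
X_test, y_test = X[8:], y[8:]        # 2 campaigns for prediction

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

predicted = net.predict(X_test)
# Error percentage as used in the slides: ((Actual - Predicted) / Actual) * 100
error_pct = (y_test - predicted) / y_test * 100.0
print(predicted, error_pct)
```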
Model Outputs:

Method                     Actual Life (Hrs.)   Predicted Life (Hrs.)   % Error
Regression (Linear)              595                   558                6.1
                                 539                   623               15.5
Regression (Exponential)         595                   677               13.4
                                 539                   732               35.5
GMDH                             595                   660               10.92
                                 539                   558                3.5
ANN                              595                   583                2.02
                                 539                   540                0.18

15
Discussion
The prediction error is very low in the case of ANN.

The iron- and steel-making process is very complicated; it is
practically impossible to describe the process mathematically.

Where the classical mathematical model has inherent limitations,
the ANN technique provides a better solution to the prediction /
process optimization problem.

ANN techniques have become very popular in recent years for their
process-optimizing capability.

16
FACILITY LOCATION PROBLEMS
What is a Facility Location Problem?

The work of Cabot, Francis and Stary [1], Pritsker and Ghare [2], Francis
[3], White [4], Eyster and White [5], Santra and Nasira [6,7], Hartstein
[8], Martin Moller [9], Trentin and Gori [10], Sarkar and Tassiulas [11],
Katz [12], and Kuhn [13] has been reviewed. The Multifacility Location
Problem (MLP) has been solved without considering an area constraint,
except by Santra and Nasira [6,7], by assuming that the entire space is
available as the solution space.

A detailed literature survey has been made in the area of application of
Artificial Neural Networks (ANN).

No attempt has been made by any author / researcher, except Santra and
Nasira [6,7], to find the solution of the MLP using the ANN technique.
technique.

17
OBJECTIVE

To formulate and solve analytically the Multifacility Location
Problem having interactions between sources and destinations, with a
circular area constraint.

To develop an alternative solution method for the same problem using
the Artificial Neural Network (ANN) technique.

18
MATHEMATICAL FORMULATION AND ANALYTICAL
SOLUTION OF THE PROBLEM:

Minimize

$$ f\big((x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n)\big) \;=\; \sum_{i=1}^{m}\sum_{j=1}^{n} w_{ji}\,\big[(x_j-\hat{x}_i)^2+(y_j-\hat{y}_i)^2\big]^{1/2} \tag{2.1} $$

Subject to:

$$ (x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 \;\le\; 0 \tag{2.2} $$

$$ x_j \ge 0, \quad y_j \ge 0 \qquad (j = 1, 2, \ldots, n) $$

where n = number of new facilities,
m = number of existing facilities,
(x_j, y_j) = co-ordinates of the jth new facility,
(x̂_i, ŷ_i) = co-ordinates of the ith existing facility,
(x̄, ȳ) = co-ordinates of the centre of gravity of the existing facilities,
w_ji = cost per unit distance between new facility j and existing facility i, and
R = radius of the circular area.
19
MATHEMATICAL FORMULATION AND ANALYTICAL
SOLUTION OF THE PROBLEM: (Contd.)

Without any loss of generality we can always suitably choose our origin
such that the circular area given by $(x-\bar{x})^2+(y-\bar{y})^2 \le R^2$
lies entirely in the first quadrant of the xy-plane. The circle then
divides the plane into three sets of points, viz.,

(i) those on the boundary, i.e. on the circle

$$ (x_j-\bar{x})^2+(y_j-\bar{y})^2 = R^2 $$

(ii) the interior points

$$ (x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 < 0 $$

(iii) the exterior points

$$ (x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 > 0 $$

Feasible solutions correspond either to interior points or to points on
the boundary only.
20
SOLUTION PROCEDURE :

We use the Kuhn-Tucker conditions to get the solution of the problem, for
which we construct the auxiliary function as:

$$ h(x,y) = f(x,y) + \lambda\big[(x-\bar{x})^2+(y-\bar{y})^2-R^2+s^2\big] \tag{2.3} $$

i.e.

$$ h\big((x_1,y_1),\ldots,(x_n,y_n)\big) = f\big((x_1,y_1),\ldots,(x_n,y_n)\big) + \sum_{j=1}^{n}\lambda_j\big[(x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2+s_j^2\big] \tag{2.4} $$

where the $s_j^2$ are the artificial (slack) variables and are given by

$$ s_j^2 = -\big[(x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2\big]. $$
21
SOLUTION PROCEDURE : (contd.)

Now, by using the Kuhn-Tucker conditions we get:

$$ \frac{\partial h}{\partial x_j} = \frac{\partial f}{\partial x_j} + 2\lambda_j\,(x_j-\bar{x}) = 0 \tag{2.5} $$

$$ \frac{\partial h}{\partial y_j} = \frac{\partial f}{\partial y_j} + 2\lambda_j\,(y_j-\bar{y}) = 0 \tag{2.6} $$

$$ \lambda_j\big[(x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2\big] = 0 \tag{2.7} $$

$$ (x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 \le 0 \tag{2.8} $$

$$ \lambda_j \ge 0 \qquad (j = 1, 2, \ldots, n). $$
22
SOLUTION PROCEDURE : (contd.)

In view of (2.1), we have

$$ \frac{\partial f}{\partial x_j} = \sum_{i=1}^{m} \frac{w_{ji}\,(x_j-\hat{x}_i)}{E_{ji}} \tag{2.9} $$

$$ \frac{\partial f}{\partial y_j} = \sum_{i=1}^{m} \frac{w_{ji}\,(y_j-\hat{y}_i)}{E_{ji}} \tag{2.10} $$

$$ E_{ji} = \big[(x_j-\hat{x}_i)^2+(y_j-\hat{y}_i)^2\big]^{1/2} \tag{2.11} $$

There are only two possible cases to be examined, viz., (i) when λ_j = 0 and
(ii) when λ_j ≠ 0. First of all we consider the case when λ_j = 0.
23
SOLUTION PROCEDURE : (contd.)

Case I : λ_j = 0 (j = 1, 2, ..., n)

As λ_j = 0, equations (2.5) and (2.6) reduce to ∂f/∂x_j = 0 and ∂f/∂y_j = 0
respectively, which in view of (2.9) and (2.10) lead to

$$ \sum_{i=1}^{m} \frac{w_{ji}\,(x_j-\hat{x}_i)}{E_{ji}} = 0 \tag{2.12} $$

$$ \sum_{i=1}^{m} \frac{w_{ji}\,(y_j-\hat{y}_i)}{E_{ji}} = 0 \tag{2.13} $$
24
SOLUTION PROCEDURE : (contd.)

After straightforward calculation, (2.12) and (2.13) lead respectively to

$$ x_j = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{x}_i / E_{ji}}{\sum_{i=1}^{m} w_{ji} / E_{ji}} \tag{2.14} $$

$$ y_j = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{y}_i / E_{ji}}{\sum_{i=1}^{m} w_{ji} / E_{ji}} \tag{2.15} $$

To solve the set of non-linear equations represented by (2.14) and (2.15) we
use the following iterative procedure:
25
SOLUTION PROCEDURE : (contd.)

$$ x_j^{(N+1)} = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{x}_i / E_{ji}^{(N)}}{\sum_{i=1}^{m} w_{ji} / E_{ji}^{(N)}} \tag{2.16} $$

$$ y_j^{(N+1)} = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{y}_i / E_{ji}^{(N)}}{\sum_{i=1}^{m} w_{ji} / E_{ji}^{(N)}} \tag{2.17} $$

$$ E_{ji}^{(N)} = \big[(x_j^{(N)}-\hat{x}_i)^2+(y_j^{(N)}-\hat{y}_i)^2\big]^{1/2} \tag{2.18} $$

where the superscripts (N) denote the iteration number.
26
SOLUTION PROCEDURE : (contd.)
We take the initial solution as :

$$ x_j^{(0)} = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{x}_i}{\sum_{i=1}^{m} w_{ji}} \tag{2.19} $$

$$ y_j^{(0)} = \frac{\sum_{i=1}^{m} w_{ji}\,\hat{y}_i}{\sum_{i=1}^{m} w_{ji}} \tag{2.20} $$

chosen for rapid convergence of the iterative procedure.

From the structure of the equations (2.16) and (2.17) it is clear that they represent
n independent iterative schemes from each of which we get the optimum
location of a single new facility.

27
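For readers who want to experiment, here is a minimal NumPy sketch of the Case I scheme: the weighted-centroid start (2.19)-(2.20) followed by the fixed-point iteration (2.16)-(2.18). The function name, tolerances and the two-facility example data are illustrative, not taken from the slides.

```python
import numpy as np

def locate_new_facility(w, ex, ey, tol=1e-8, max_iter=1000):
    """Case I (Weiszfeld-type) iteration (2.16)-(2.18) for one new facility.

    w      : weights w_ji to the m existing facilities
    ex, ey : co-ordinates of the m existing facilities (the hatted x_i, y_i)
    """
    # Initial solution (2.19)-(2.20): the weighted centroid.
    x = np.sum(w * ex) / np.sum(w)
    y = np.sum(w * ey) / np.sum(w)
    for _ in range(max_iter):
        E = np.hypot(x - ex, y - ey)         # distances E_ji, eq. (2.18)
        E = np.maximum(E, 1e-12)             # guard against division by zero
        x_new = np.sum(w * ex / E) / np.sum(w / E)   # eq. (2.16)
        y_new = np.sum(w * ey / E) / np.sum(w / E)   # eq. (2.17)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Illustrative example (not from the slides): one new facility,
# two existing facilities at (2, 3) and (4, 1) with unit weights.
print(locate_new_facility(np.array([1.0, 1.0]),
                          np.array([2.0, 4.0]),
                          np.array([3.0, 1.0])))
```

As the slides note, the scheme decouples into n independent iterations, so this single-facility function can simply be called once per new facility j.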
SOLUTION PROCEDURE : (contd.)
For example, we may have the iterative scheme for any one, say (x_1, y_1),
of the n new facilities as follows:

$$ x_1^{(N+1)} = \frac{\sum_{i=1}^{m} w_{1i}\,\hat{x}_i / E_{1i}^{(N)}}{\sum_{i=1}^{m} w_{1i} / E_{1i}^{(N)}} \tag{2.21} $$

$$ y_1^{(N+1)} = \frac{\sum_{i=1}^{m} w_{1i}\,\hat{y}_i / E_{1i}^{(N)}}{\sum_{i=1}^{m} w_{1i} / E_{1i}^{(N)}} \tag{2.22} $$

with the starting initial solution $(x_1^{(0)}, y_1^{(0)})$ given by (2.19) and (2.20) for j = 1,
where $E_{1i}^{(N)}$ is given by

$$ E_{1i}^{(N)} = \big[(x_1^{(N)}-\hat{x}_i)^2+(y_1^{(N)}-\hat{y}_i)^2\big]^{1/2} \tag{2.23} $$
28
SOLUTION PROCEDURE : (contd.)

The solution (x_j, y_j) (j = 1, 2, ..., n) obtained by the above iterative
procedure has to be tested to check whether it satisfies the area
constraint (2.2).

If it satisfies (2.2), the problem is solved and (x_j, y_j)
(j = 1, 2, ..., n) give the optimum locations for the new facilities sought.

If (x_j, y_j) (j = 1, 2, ..., n) do not satisfy (2.2), the only alternative
is to consider the case λ_j ≠ 0 (j = 1, 2, ..., n), which makes all the new
facilities lie on the boundary.

29
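The feasibility test against (2.2) is simple to state in code; a minimal sketch, with an illustrative circle and points of our own choosing:

```python
import numpy as np

def satisfies_area_constraint(new_xy, centre, R):
    """Check (2.2): every new facility must lie inside or on the circle
    of radius R centred at the centre of gravity (x-bar, y-bar)."""
    d2 = np.sum((np.asarray(new_xy) - np.asarray(centre)) ** 2, axis=1)
    return bool(np.all(d2 <= R ** 2))

# If this returns False for the unconstrained (Case I) solution,
# Case II applies and the facilities are placed on the boundary.
print(satisfies_area_constraint([(1.0, 1.5), (2.0, 0.5)],
                                centre=(1.0, 1.0), R=1.0))
```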
SOLUTION PROCEDURE : (contd.)

Case II : λ_j ≠ 0 (j = 1, 2, ..., n)

Since λ_j ≠ 0, equation (2.7) takes the form
$(x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 = 0$, which gives

$$ y_j = \bar{y} + \sqrt{R^2-(x_j-\bar{x})^2} \tag{2.24} $$

Equations (2.5) and (2.6) lead to

$$ \frac{\partial f}{\partial x_j}\,(y_j-\bar{y}) = \frac{\partial f}{\partial y_j}\,(x_j-\bar{x}) \tag{2.25} $$

which in view of (2.9) and (2.10) takes the form

$$ (y_j-\bar{y}) \sum_{i=1}^{m} \frac{w_{ji}\,(x_j-\hat{x}_i)}{E_{ji}} = (x_j-\bar{x}) \sum_{i=1}^{m} \frac{w_{ji}\,(y_j-\hat{y}_i)}{E_{ji}} \tag{2.26} $$
30
SOLUTION PROCEDURE : (contd.)

Further simplification of (2.26), by using (2.11) and the value of y_j from
(2.24), finally leads to:

$$ x_j = \bar{x} + \frac{\sqrt{R^2-(x_j-\bar{x})^2}\;\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,(x_j-\hat{x}_i)}{T_{ji}}}{\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,\big[\bar{y}-\hat{y}_i+\sqrt{R^2-(x_j-\bar{x})^2}\,\big]}{T_{ji}}} \tag{2.27} $$

where

$$ T_{ji} = \Big[(x_j-\hat{x}_i)^2+\big\{\bar{y}-\hat{y}_i+\sqrt{R^2-(x_j-\bar{x})^2}\,\big\}^2\Big]^{1/2} $$
31
SOLUTION PROCEDURE : (contd.)

In a similar way we obtain the relation for y_j as:

$$ y_j = \bar{y} + \frac{\sqrt{R^2-(y_j-\bar{y})^2}\;\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,(y_j-\hat{y}_i)}{K_{ji}}}{\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,\big[\bar{x}-\hat{x}_i+\sqrt{R^2-(y_j-\bar{y})^2}\,\big]}{K_{ji}}} \tag{2.27A} $$

where

$$ K_{ji} = \Big[(y_j-\hat{y}_i)^2+\big\{\bar{x}-\hat{x}_i+\sqrt{R^2-(y_j-\bar{y})^2}\,\big\}^2\Big]^{1/2}. $$
32
SOLUTION PROCEDURE : (contd.)
To solve the set of non-linear equations represented by (2.27) or (2.27A)
we use the following iterative scheme:

$$ x_j^{(N+1)} = \bar{x} + \frac{\sqrt{R^2-(x_j^{(N)}-\bar{x})^2}\;\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,(x_j^{(N)}-\hat{x}_i)}{T_{ji}^{(N)}}}{\displaystyle\sum_{i=1}^{m} \frac{w_{ji}\,\big[\bar{y}-\hat{y}_i+\sqrt{R^2-(x_j^{(N)}-\bar{x})^2}\,\big]}{T_{ji}^{(N)}}} \tag{2.28} $$

$$ T_{ji}^{(N)} = \Big[(x_j^{(N)}-\hat{x}_i)^2+\big\{\bar{y}-\hat{y}_i+\sqrt{R^2-(x_j^{(N)}-\bar{x})^2}\,\big\}^2\Big]^{1/2} $$

where the superscripts denote the iteration number. We take a feasible
initial solution for rapid convergence of the iterative scheme.

From the structure of the equation it is clear that it represents n
independent iterative schemes, from each of which we get the x co-ordinate
of the optimum location of a single new facility.
33
SOLUTION PROCEDURE : (contd.)

For example, we may have the iterative scheme for the x co-ordinate of any
one of the n new facilities, say x_1 of the new facility (x_1, y_1), as
follows:

$$ x_1^{(N+1)} = \bar{x} + \frac{\sqrt{R^2-(x_1^{(N)}-\bar{x})^2}\;\displaystyle\sum_{i=1}^{m} \frac{w_{1i}\,(x_1^{(N)}-\hat{x}_i)}{T_{1i}^{(N)}}}{\displaystyle\sum_{i=1}^{m} \frac{w_{1i}\,\big[\bar{y}-\hat{y}_i+\sqrt{R^2-(x_1^{(N)}-\bar{x})^2}\,\big]}{T_{1i}^{(N)}}} \tag{2.29} $$

$$ T_{1i}^{(N)} = \Big[(x_1^{(N)}-\hat{x}_i)^2+\big\{\bar{y}-\hat{y}_i+\sqrt{R^2-(x_1^{(N)}-\bar{x})^2}\,\big\}^2\Big]^{1/2} $$

It may be pointed out that after finding the values of all x_j
(j = 1, 2, ..., n), one need not calculate the values of y_j
(j = 1, 2, ..., n) by iterative schemes, as these can be found directly
from equation (2.24) with the help of the determined values of x_j.

We may mention that the problem where the new facilities have to be located
on the circumference of the circle can be solved by utilizing the above
algorithm.
34
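A corresponding NumPy sketch of the Case II boundary iteration (2.28), with y_j recovered from (2.24) once x_j has converged. The data, starting point and names are again illustrative; a feasible start with |x0 - x̄| <= R is assumed, as the slides require.

```python
import numpy as np

def locate_on_boundary(w, ex, ey, xbar, ybar, R, x0, tol=1e-8, max_iter=1000):
    """Case II iteration (2.28): a new facility constrained to the circle
    (x - xbar)^2 + (y - ybar)^2 = R^2; y then follows from (2.24)."""
    x = x0                                  # feasible start: |x0 - xbar| <= R
    for _ in range(max_iter):
        s = np.sqrt(max(R**2 - (x - xbar)**2, 0.0))
        T = np.sqrt((x - ex)**2 + (ybar - ey + s)**2)   # T_ji of (2.28)
        T = np.maximum(T, 1e-12)                        # avoid division by zero
        num = s * np.sum(w * (x - ex) / T)
        den = np.sum(w * (ybar - ey + s) / T)
        x_new = xbar + num / den                        # eq. (2.28)
        if abs(x_new - x) < tol:
            break
        x = x_new
    y = ybar + np.sqrt(max(R**2 - (x - xbar)**2, 0.0))  # eq. (2.24)
    return x, y

# Illustrative data: two existing facilities at (2, 3) and (4, 1),
# unit weights, circle of radius 1 centred at their centroid (3, 2).
print(locate_on_boundary(np.array([1.0, 1.0]),
                         np.array([2.0, 4.0]), np.array([3.0, 1.0]),
                         xbar=3.0, ybar=2.0, R=1.0, x0=3.0))
```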
SOLUTION OF MULTIFACILITY LOCATION PROBLEM UNDER
CIRCULAR AREA CONSTRAINT USING ANN TECHNIQUES:

The analytical solution method is tedious and time-consuming.

An attempt has therefore been made to obtain the solution of the
Multifacility Location Problem (MLP) using Artificial Neural Network
(ANN) techniques.

We have used the resilient back-propagation / Fletcher-Reeves training
algorithms for training the net.

35
NEURAL NETWORK STRUCTURE

[Figure: feed-forward network with input nodes x1 ... x9, first hidden
layer y1 ... y50, second hidden layer z1 ... z25, and output nodes
o1 ... o4]

Network Structure (9, 50, 25, 4):
- Input Layer (9 neurons)
- First Hidden Layer (50 neurons)
- Second Hidden Layer (25 neurons)
- Output Layer (4 neurons)
36
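A hedged reconstruction of this set-up in Python using scikit-learn: a (9, 50, 25, 4) multilayer feed-forward net. The slides trained with resilient back-propagation / Fletcher-Reeves conjugate-gradient algorithms; scikit-learn does not offer those solvers, so 'lbfgs' is used as a stand-in, and the training patterns below are random placeholders merely shaped like Tables 1 and 2 (70 patterns, 9 inputs, 4 outputs).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder patterns: 70 rows of 9 inputs
# (x-hat1, x-hat2, y-hat1, y-hat2, w11, w12, w21, w22, R)
# and 4 outputs (x1, x2, y1, y2), standing in for Tables 1 and 2.
X = rng.uniform(0.0, 10.0, size=(70, 9))
Y = rng.uniform(0.0, 8.0, size=(70, 4))

# Structure (9, 50, 25, 4): two hidden layers of 50 and 25 neurons.
net = MLPRegressor(hidden_layer_sizes=(50, 25),
                   activation='logistic',   # sigmoidal squashing function
                   solver='lbfgs',          # stand-in for Rprop / Fletcher-Reeves
                   max_iter=10000,
                   random_state=0)
net.fit(X, Y)
print(net.predict(X[:2]))   # predicted (x1, x2, y1, y2) for two patterns
```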
NUMERICAL EXAMPLE

We have considered numerical examples involving 2 new and 2 existing
facilities, where the new facilities are to be located in a circular area
given by $(x_j-\bar{x})^2+(y_j-\bar{y})^2-R^2 \le 0$, where R takes
different values for different problems.

We have solved a number of problems analytically, and these have been used
for training the nets. Each of these problems has the following 9 basic
input parameters, with their usual meanings as mentioned in (2.1) and (2.2):

(i) x̂1 (ii) x̂2 (iii) ŷ1 (iv) ŷ2 (v) w11
(vi) w12 (vii) w21 (viii) w22 (ix) R

37
NUMERICAL EXAMPLE (contd.)

By using these input parameters we have found the locations of the 2 new
facilities, viz. (i) (x1, y1) and (ii) (x2, y2), as the solution.

The same set of examples has been used for training the net. Once the
training is complete, we have tested the prediction capability of the
trained net by using a new set of data patterns.

The predicted output values obtained from the net have been compared with
those obtained through the analytical method, and they agree well within
the acceptable limit.

38
Table 1 : Training Data Set
S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R
1 2 1 1 2 5 3 7 8 0.5
2 1 1 1 1 2 4 3 1 0.5
3 2 1 1 2 5 3 7 10 0.5
4 2 1 1 1 8 2 4 5 0.5
5 2 1 1 2 7 3 9 1 0.5
6 1 1 2 1 3 3 9 4 0.5
7 2 1 1 2 10 4 8 3 0.5
8 2 1 1 2 4 6 2 9 1
9 1 1 1 1 1 1 1 1 1
10 3 1 1 2 7 3 7 4 1
11 3 3 1 2 5 3 7 8 1
12 2 1 1 2 7 2 7 9 1
13 2 1 1 2 7 7 6 7 1
14 2 1 1 2 3 4 1 2 1
15 2 1 1 2 5 3 7 8 1.5
39
Table 1 : Training Data Set (Contd.)
S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R
16 1 1 2 2 5 2 8 6 1.5
17 2 3 1 1 6 5 8 9 1.5
18 2 1 1 2 5 3 8 8 1.5
19 2 1 1 1 7 5 6 9 1.5
20 1 1 1 1 5 3 8 3 1.5
21 3 1 2 2 5 6 7 8 1.5
22 1 2 3 4 5 6 7 6 2
23 1 1 1 1 4 7 2 9 2
24 1 1 2 1 8 3 2 1 2
25 2 1 1 2 7 2 7 9 2
26 2 1 1 2 5 3 7 10 2
27 1 1 2 1 3 3 9 4 2
28 1 1 1 1 5 3 8 8 2
29 2 1 1 2 5 3 7 8 2.5
30 1 1 1 1 2 4 3 1 2.5
40
Table 1 : Training Data Set (Contd.)
S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R
31 2 1 1 1 3 4 1 2 2.5
32 1 1 2 1 3 3 9 4 2.5
33 1 1 1 2 5 6 2 10 2.5
34 2 1 1 2 7 7 6 7 2.5
35 3 3 1 2 5 3 7 8 2.5
36 2 1 1 2 2 5 6 8 3
37 1 1 1 1 2 4 3 1 3
38 2 1 1 1 3 4 1 2 3
39 1 1 2 1 3 3 9 4 3
40 1 1 1 2 5 6 2 10 3
41 2 1 1 2 7 7 6 7 3
42 2 1 1 2 4 6 2 9 3
43 2 1 1 2 2 5 6 8 3.5
44 1 1 1 1 2 4 3 1 3.5
45 1 1 1 1 3 4 1 2 3.5
41
Table 1 : Training Data Set (Contd.)
S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R
46 1 1 2 1 3 3 9 4 3.5
47 2 2 1 2 6 6 6 6 3.5
48 3 3 1 2 5 3 7 8 3.5
49 2 1 1 2 4 6 2 9 3.5
50 2 1 1 2 7 7 3 7 4
51 2 2 1 2 5 3 8 8 4
52 1 1 1 1 2 4 3 1 4
53 1 1 1 1 3 4 1 2 4
54 3 1 1 2 5 6 7 8 4
55 2 1 1 2 5 6 2 10 4
56 3 3 1 2 5 3 7 8 4
57 2 1 1 2 7 7 3 7 4.5
58 1 1 1 1 2 4 3 1 4.5
59 2 3 1 1 6 5 8 9 4.5
60 1 1 1 1 3 4 1 2 4.5
42
Table 1 : Training Data Set (Contd.)

S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R
61 2 1 1 2 4 6 2 9 4.5
62 2 2 1 2 6 6 6 6 4.5
63 3 3 1 2 5 3 7 8 4.5
64 2 1 1 2 7 7 3 7 5
65 3 1 1 2 5 6 7 8 5
66 1 1 1 1 2 4 3 1 5
67 2 1 1 3 4 3 7 10 5
68 1 1 1 1 3 4 1 2 5
69 2 1 1 2 5 6 2 10 5
70 3 3 1 2 5 3 7 8 5

43
Table 2 : Target Data Set (Analytical output)

S.No. x1 x2 y1 y2
1 1.57 1.71 1.99 1.95
2 1.08 1.25 1.49 1.43
3 1.57 1.93 1.99 1.74
4 1.55 1.97 1.49 1.16
5 1.56 1.14 1.99 1.85
6 1.58 1.82 1.49 1.37
7 1.54 1.68 1.99 1.96
8 1.59 1.81 2.49 2.44
9 1.47 1.25 1.88 1.96
10 2.09 2.77 2.49 2.13
11 2.14 2.13 3.48 2.72
12 1.61 1.98 2.49 2.37
13 1.56 2.40 2.49 1.92
14 1.63 2.29 2.49 2.11
15 1.68 2.79 2.98 2.25
44
Table 2 : Target Data Set (Analytical output) (Contd.)

S.No. x1 x2 y1 y2
16 1.71 0.17 2.98 2.20
17 1.64 1.62 3.49 2.14
18 1.70 2.79 2.98 2.26
19 1.62 2.95 2.49 1.38
20 1.18 1.25 2.48 2.47
21 2.63 1.07 2.99 1.97
22 2.19 0.47 4.99 4.29
23 1.18 1.25 2.99 2.98
24 1.67 3.41 2.99 1.57
25 1.72 3.19 3.48 2.56
26 1.74 3.18 3.48 2.57
27 1.82 3.36 2.97 1.72
28 1.24 1.25 2.98 2.98
29 1.80 3.91 3.98 2.14
30 1.40 1.25 3.46 3.48
45
Table 2 : Target Data Set (Analytical output) (Contd.)

S. No. x1 x2 y1 y2
31 1.84 3.91 3.47 1.64
32 1.90 3.94 3.46 1.53
33 1.22 3.14 3.98 2.78
34 1.67 3.81 3.99 2.44
35 2.32 2.13 4.97 2.72
36 1.89 4.48 4.47 1.81
37 1.47 1.25 3.96 3.98
38 1.91 4.41 3.97 1.70
39 1.98 4.32 3.96 2.00
40 1.26 3.45 4.48 3.21
41 1.71 4.36 4.49 2.38
42 1.79 3.79 4.48 3.42
43 1.96 4.79 4.96 2.69
44 1.55 1.25 4.45 4.49
45 1.48 1.25 4.46 4.49
46
Table 2 : Target Data Set (Analytical output) (Contd.)

S. No. x1 x2 y1 y2
46 2.06 4.90 4.45 1.81
47 1.78 1.25 5.48 5.49
48 2.45 2.13 5.97 2.72
49 1.83 4.28 4.98 3.61
50 1.78 5.35 5.49 2.57
51 1.99 1.25 5.96 5.99
52 1.63 1.25 4.95 4.99
53 1.55 1.25 4.96 4.99
54 2.35 2.32 5.48 1.84
55 1.85 5.00 5.48 3.43
56 2.51 2.13 6.46 2.72
57 1.81 5.49 5.98 3.57
58 1.70 1.25 5.44 5.49
59 1.91 1.62 6.48 2.14
60 1.61 1.25 5.45 5.49
47
Table 2 : Target Data Set (Analytical output) (Contd.)

S. No. x1 x2 y1 y2
61 1.93 5.27 5.97 3.95
62 1.87 1.25 6.48 6.49
63 2.57 2.13 6.96 2.72
64 1.85 6.27 6.48 2.96
65 2.43 2.32 6.48 1.84
66 1.77 1.25 5.93 5.99
67 2.18 1.46 6.95 2.24
68 1.68 1.25 5.95 5.99
69 1.94 6.08 6.48 3.50
70 2.63 2.13 7.45 2.72

48
Table 3 : Testing Data Set (Resilient Back Propagation)

S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R

1 2 3 1 1 9 6 7 8 1.5

2 2 1 1 2 5 3 7 8 2

3 2 1 1 2 5 3 7 10 3

4 2 1 1 2 5 3 7 10 3.5

5 2 1 1 2 7 7 3 7 3.5

49
Table 4 : Analytical and ANN (Resilient) Model Output

S.No.  x1 (Anal)  x1 (ANN)  x2 (Anal)  x2 (ANN)  y1 (Anal)  y1 (ANN)  y2 (Anal)  y2 (ANN)

1 1.60 1.85 1.62 1.93 3.49 3.04 1.84 2.16

2 1.74 1.73 3.31 3.39 3.48 3.47 2.72 2.21

3 1.87 1.84 4.29 3.82 4.47 4.33 2.48 2.69

4 1.93 1.91 4.78 4.04 4.97 4.79 2.57 2.49

5 1.74 1.77 4.55 5.02 4.99 4.99 2.14 1.68

50
Table 5 : Error Percentage (Resilient)

S.No.  %Error (x1)  %Error (x2)  %Error (y1)  %Error (y2)

1 15.63 21.17 12.64 17.35

2 0.21 2.58 0.08 18.63

3 1.50 10.89 2.99 8.42

4 0.72 15.42 3.43 3.19

5 2.22 10.49 0.41 21.72

Error percentage = ((Anal value - ANN value) / Anal value) * 100

51
Graph 1 : Analytical and ANN (Resilient) Model Output (X1)

52
Graph 2 : Analytical and ANN (Resilient) Model Output (X2)

53
Graph 3 : Analytical and ANN (Resilient) Model Output (Y1)

54
Graph 4 : Analytical and ANN (Resilient) Model Output (Y2)

55
Table 6 : Output Locations (Resilient)

Sl.No.   (x1, y1) Anal   (x1, y1) ANN    (x2, y2) Anal   (x2, y2) ANN
1        (1.60, 3.49)    (1.85, 3.04)    (1.62, 1.84)    (1.93, 2.16)
2        (1.74, 3.48)    (1.73, 3.47)    (3.31, 2.72)    (3.39, 2.21)
3        (1.87, 4.47)    (1.84, 4.33)    (4.29, 2.48)    (3.82, 2.69)
4        (1.93, 4.97)    (1.91, 4.79)    (4.78, 2.57)    (4.04, 2.49)
5        (1.74, 4.99)    (1.77, 4.99)    (4.55, 2.14)    (5.02, 1.68)

56
Table 7: Testing Data Set (Fletcher - Reeves Algorithm)

S.No.  x̂1  x̂2  ŷ1  ŷ2  w11  w12  w21  w22  R

1 1 1 1 1 6 3 2 6 0.5

2 2 1 1 2 5 3 7 8 2

3 2 1 1 2 3 4 1 2 2.5

4 2 1 1 2 5 3 7 10 3

5 2 1 1 2 7 2 7 9 3

57
Table 8 : Analytical and ANN (Fletcher-Reeves) Model Output

S.No.  x1 (Anal)  x1 (ANN)  x2 (Anal)  x2 (ANN)  y1 (Anal)  y1 (ANN)  y2 (Anal)  y2 (ANN)

1 1.05 0.93 1.25 1.40 1.49 1.33 1.43 1.41

2 1.74 1.73 3.31 3.32 3.48 3.47 2.72 2.36

3 1.84 1.89 3.75 3.76 3.97 4.24 2.56 2.77

4 1.87 1.85 4.29 4.01 4.47 4.35 2.48 2.29

5 1.83 1.96 4.48 4.26 4.48 4.34 2.35 2.28

58
Table 9 : Error Percentage (Fletcher-Reeves)

S.No.  %Error (x1)  %Error (x2)  %Error (y1)  %Error (y2)

1 11.17 12.73 10.63 1.50

2 0.41 0.47 0.25 13.27

3 3.23 0.34 7.00 8.26

4 0.63 6.47 2.60 7.70

5 7.31 4.81 3.10 2.79

Error percentage = ((Anal value - ANN value) / Anal value) * 100

59
Graph 5 : Analytical and ANN (Fletcher-Reeves) Model Output (X1)

60
Graph 6 : Analytical and ANN (Fletcher-Reeves) Model Output (X2)

61
Graph 7 : Analytical and ANN (Fletcher-Reeves) Model Output (Y1)

62
Graph 8 : Analytical and ANN (Fletcher-Reeves) Model Output (Y2)

63
Table 10 : Output Locations (Fletcher-Reeves)

Sl.No.   (x1, y1) Anal   (x1, y1) ANN    (x2, y2) Anal   (x2, y2) ANN

1 (1.05, 1.49) (0.93, 1.33) (1.25, 1.43) (1.40, 1.41)

2 (1.74, 3.48) (1.73, 3.47) (3.31, 2.72) (3.32, 2.36)

3 (1.84, 3.97) (1.89, 4.24) (3.75, 2.56) (3.76, 2.77)

4 (1.87, 4.47) (1.85, 4.35) (4.29, 2.48) (4.01, 2.29)

5 (1.83, 4.48) (1.96, 4.34) (4.48, 2.35) (4.26, 2.28)

64
CONCLUSION

The present work deals with finding solutions, analytically and through
the ANN technique, for a given multifacility location problem having
interactions between sources and destinations, with a circular area
constraint.

Thus an attempt has been made to solve a multifacility location problem
with a circular area constraint, for the first time in the literature, by
using the ANN technique.

The capability of the ANN technique to obtain acceptable solutions for
this problem has been established.

65
REFERENCES

[1] Cabot A. V., R. L. Francis and M. A. Stary, "A Network Flow Solution to a
Rectilinear Distance Facility Location Problem", AIIE Transactions, Vol. 2,
No. 2, 1970, pp. 132-141.

[2] Pritsker A. A. B. and P. M. Ghare, "Locating New Facilities with Respect
to Existing Facilities", AIIE Transactions, Vol. 2, No. 4, 1970, pp. 290-297.

[3] Francis R. L., "A Geometric Solution Procedure for a Rectilinear Distance
Minimax Problem", AIIE Transactions, Vol. 4, No. 4, 1972, pp. 328-332.

[4] White J. A., "A Quadratic Facility Location Problem", AIIE Transactions,
Vol. 3, No. 2, 1971, pp. 156-157.

[5] Eyster J. W. and J. A. White, "Some Properties of the Squared Euclidean
Distance Location Problem", AIIE Transactions, Vol. 5, No. 3, 1973,
pp. 275-280.
66
REFERENCES ( Contd.)

[6] Santra A. K. and G. M. Nasira, "Application of Artificial Neural Networks
to Multifacility Location Problem with Area Constraint", International
Journal of Systemics, Cybernetics and Informatics (IJSCI), October 2006,
pp. 18-24.

[7] Santra A. K. and G. M. Nasira, "A Neural Network Approach to
Multifacility Location Problem under Semi-open Rectangular Area Constraint",
International Conference on Systemics, Cybernetics and Informatics
(ICSCI-2007), 3-7 January 2007, pp. 210-215.

[8] Hartstein A., "A Backpropagation Algorithm for a Network of Neurons with
Threshold Controlled Synapses", IJNS, Vol. 2, No. 1, 1991, pp. 135-141.

[9] Moller M., "Supervised Learning on Large Redundant Training Sets", IJNS,
Vol. 4, No. 1, 1993, pp. 15-25.
67
REFERENCES ( Contd.)

[10] Trentin E. and M. Gori, "Robust Combination of Neural Networks and
Hidden Markov Models for Speech Recognition", IEEE Transactions on Neural
Networks, Vol. 14, No. 6, 2003, pp. 1519-1531.

[11] Sarkar S. and L. Tassiulas, "Back Pressure based Multicast Scheduling
for Fair Bandwidth Allocation", IEEE Transactions on Neural Networks,
Vol. 16, No. 5, 2005, pp. 1279-1290.

[12] Katz I. N., "On the Convergence of a Numerical Scheme for Solving Some
Locational Equilibrium Problems", Journal of the Society for Industrial and
Applied Mathematics, 17, 1969, pp. 1224-1231.

[13] Kuhn H. W., "A Note on Fermat's Problem", Mathematical Programming, 4,
1973, pp. 98-107.
68