A SEMINAR
BY
SATYAJEET S. BHONSALE
ROLL NUMBER: 20090405
MAY 2012
DR. BABASAHEB AMBEDKAR TECHNOLOGICAL UNIVERSITY,
Lonere, Tal. Mangaon, Dist. Raigad (MS). 402103
Department of Chemical Engineering
CERTIFICATE
DR. V. S. SATHE
GUIDE
Date:
ACKNOWLEDGEMENT
This seminar report would not have been possible without the kind support and help of
many individuals and the University. I would like to extend my sincere thanks to all of
them.
I am highly indebted to my guide, Dr. V. S. Sathe, for providing me with his MATLAB
GA code which is used in this study. I thank him for his guidance and constant
supervision as well as for providing necessary information regarding the seminar and also
for his support in completing the seminar.
I would like to express my gratitude towards Dr. P. V. Vijay Babu, the Head of the
Department of Chemical Engineering, Dr Babasaheb Ambedkar Technological
University, Lonere for his kind cooperation and encouragement which helped me in the
completion of the seminar.
My thanks and appreciations also go to my parents and my friends who have provided me
with the much needed moral support during the seminar work.
Satyajeet S. Bhonsale
Third Year B.Tech,
Chemical Engineering Dept.,
Dr. B. A. T. University, Lonere.
ABSTRACT
Genetic algorithms are computerized search and optimization algorithms based on the
mechanics of natural genetics and natural selection. Even though they belong to the class
of randomized search techniques, genetic algorithms dwell upon the past experience and
utilize it to converge quickly on the optima. In this report, genetic algorithms are
introduced as an optimization technique. The Simple Genetic Algorithm (SGA) has been
studied and the various operators and techniques used in the SGA have been discussed. A
few types of reproduction, crossover and mutation schemes are discussed in brief even
though they are mostly not used in the SGA. The difference between the genetic
algorithms and the traditional optimization algorithms, and the advantages and
disadvantages of genetic algorithms are discussed. The report also presents two case
studies which emphasize on the use of genetic algorithms in the optimization of chemical
processes.
CONTENTS
CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
CHAPTER 2 OPTIMIZATION
2.1 DESIGN VARIABLES
2.2 CONSTRAINTS
2.3 OBJECTIVE FUNCTION
2.4 OPTIMIZATION TECHNIQUES
2.5 GENETIC ALGORITHMS
2.5.1 Working Principles
CHAPTER 3 OPERATORS & TECHNIQUES IN GENETIC ALGORITHMS
3.1 ENCODING OF PARAMETERS
3.1.1 Binary Encodings
3.1.2 Value Encodings
3.1.3 Permutation Encodings
3.2 FITNESS EVALUATION
CHAPTER 4 GENETIC OPERATORS
4.1 REPRODUCTION OPERATORS
4.1.1 Roulette Wheel Selection
4.1.2 Stochastic Remainder Sampling
4.1.3 Ranking Selection
4.1.4 Tournament Selection
4.2 CROSSOVER OPERATOR
4.2.1 Single Point Crossover
4.2.2 Two Point Crossover
4.3 MUTATION OPERATOR
CHAPTER 5 A SIMPLE GENETIC ALGORITHM BY EXAMPLE
CHAPTER 6 CASE STUDIES
6.1 OPTIMIZATION OF CSTRS IN SERIES
6.2 TUNING OF PID CONTROLLER PARAMETERS
CHAPTER 7 CONCLUSIONS
REFERENCES
LIST OF FIGURES
Figure 2.1 A flowchart of the Optimal Design Formulation
Figure 2.2 Feasible region for an optimization problem involving two variables
Figure 2.3 Genetic Algorithm - Program Flow Chart
Figure 4.1 A roulette wheel marked for eight individuals
Figure 4.2 Single Point Crossover
Figure 5.1 Plot of example function
Figure 5.2 Plot of Best Fitness Value in each Generation against the no. of Generations
Figure 5.3 Plot of Best Value of Variable x in each Generation against no. of Generations
Figure 6.1 Composition Control System
Figure 6.2 Block Diagram of Process in Figure 6.1
LIST OF TABLES
Table 5.1 Initial Population Generated Randomly
Table 5.2 Population after Generation 5
Table 5.3 Population after Generation 10
Table 6.1 Parameter values and nomenclature for case study 6.1
Table 6.2 Results of the case study 6.1
Table 6.3 Results for ITAE as an objective function
Table 6.4 Results for IAE as an objective function
Table 6.5 Results for ISE as an objective function
Table 6.6 Results for MSE as an objective function
CHAPTER 1
INTRODUCTION
CHAPTER 2
OPTIMIZATION
“Man's longing for perfection finds expression in the theory of optimization. It studies
how to describe and attain the Best, once one knows how to measure and alter what is
Good or Bad... Optimization Theory encompasses the quantitative study of optima and
methods for finding them.”
A typical engineering problem can be posed as follows: You have a process that can be
represented by some equations or perhaps solely by experimental data. You also have a
single performance criterion in mind such as minimum cost. The goal of optimization is to
find the values of the variables in the process that yield the best value of the performance
criterion. The ingredients described above -- the process or model and the performance
criterion -- comprise the optimization 'problem'. Typical problems in chemical
engineering process design or plant operation have many, possibly an infinite number of,
solutions. Optimization is concerned with selecting the best among the entire set by
efficient quantitative methods.
Since the objective in a design problem and the associated design parameters vary from
product to product, different techniques need to be used for different problems. The purpose
of the formulation procedure is to create a mathematical model of the optimal design
problem, which then can be solved using an optimization algorithm.
Every optimization problem involves three essential components:
1. The Design Variables
2. The Constraints
3. The Objective Function
Figure 2.1 shows an outline of the steps usually involved in an optimal design formulation
process. The first step is always to realize the need for using optimization in a specific design
problem.
Figure 2.1: A flowchart of the Optimal Design Formulation
2.1 Design Variables
The choice of the design variables is left largely to the user. The first rule of thumb in
formulating an optimization problem is to choose as few design variables as possible.
2.2 Constraints
The constraints represent functional relationships among the design variables and other
design parameters that satisfy certain physical phenomena and certain resource limitations.
Some of these considerations require that the design remain in static or dynamic equilibrium.
There are usually two types of constraints that emerge from most considerations:
Inequality Constraints
Equality Constraints
Inequality constraints state that the functional relationships among the design variables are
either greater than, smaller than, or equal to a resource value. For example, a certain
process may require that the temperature of the system at any time t be always greater than a
minimum temperature Tmin. Mathematically, T(t) >= Tmin for all t.
Most of the constraints encountered in engineering design problems are of this type. One
type of inequality constraint can be converted into the other by multiplying both sides by -1
or by interchanging the left and right sides.
Equality constraints state that the functional relationship should exactly match a resource
value. For example, a constraint may require the pressure at any time or place to be equal
to 5 atm, or mathematically, P = 5 atm.
Equality constraints are usually more difficult to handle and therefore need to be avoided
whenever possible.
2.3 Objective Function
The objective function can be of two types: either it is to be maximized, or it is to be
minimized. The optimization algorithms are usually written either for minimization or for
maximization; fortunately, the duality principle allows one type of problem to be converted
into the other, since maximizing f(x) is equivalent to minimizing -f(x).
Figure 2.2: Feasible region for an optimization problem involving two variables
2.4 Optimization Techniques
The formulation of engineering design problems differs from problem to problem. Certain
problems involve linear terms for the constraints and the objective function, while others
involve non-linear terms. In some problems, the terms are not explicit functions of the
design variables. Unfortunately, there does not exist a single optimization algorithm which
works on all optimization problems equally efficiently. That is why the optimization
literature contains a large number of algorithms, each suitable for a particular type of
problem.
The currently available literature identifies three main types of search methods:
calculus-based
enumerative
random
Calculus-based methods have been studied extensively. These subdivide into two main
classes: indirect and direct. Indirect methods seek local extrema by solving the usually
nonlinear set of equations resulting from setting the gradient of the objective function to
zero. This is the multidimensional generalization of the elementary calculus notion of
extremal points. Given a smooth, unconstrained function, finding all possible peaks starts by
restricting the search to those points with a slope of zero in all directions. Direct search
methods, on the other hand, seek local optima by hopping onto the function and moving in a
direction related to the local gradient. This is simply the notion of hill-climbing: to find
the local best, climb the function in the steepest permissible direction.
Even though these calculus based methods have been improved, extended and
modified, they lack robustness. First, both methods are local in scope; the optima they seek
are the best in the neighborhood of current point. Second, calculus-based methods depend
upon the existence of derivatives. Many practical parameter spaces have little respect for the
notion of a derivative and the smoothness it implies. The real world of search is fraught with
discontinuities and vast multimodal, noisy search spaces. Thus, methods depending upon
the restrictive requirements of continuity and derivative existence are unsuitable for all but
a very limited problem domain.
Enumerative schemes have been considered in many shapes and sizes. The idea is
fairly straightforward; within a finite search space, or discretized infinite search space, the
search algorithm starts looking at the objective function values at every point in the space,
one at a time. Although the simplicity of this type of algorithm is attractive, many practical
search spaces are simply too large to search one at a time.
For the sake of clarity, Deb [1] has further classified the optimization algorithms into five groups:
1. Single-Variable Optimization Algorithms. The algorithms are further classified into
two categories -- direct methods and gradient-based (calculus) methods. Direct
methods do not use any derivative information of the objective function; only the
objective function values are used to guide the search process. However, the gradient
based algorithms use the derivative information (first and/or second order) to guide
the search process. Although engineering optimization problems usually contain more
than one design variable, single-variable optimization algorithms are mainly used as
unidirectional search methods in multivariable optimization algorithms.
2. Multi-variable Optimization Algorithms. These demonstrate how the search for the
optimum point progresses in multiple directions. Depending on whether the gradient
information is used or not, these are also classified as direct and gradient based
techniques.
3. Constrained Optimization Algorithms. These algorithms use single variable and
multivariable optimization algorithms repeatedly and simultaneously maintain the
search effort inside the feasible region.
4. Specialized Optimization Algorithms. There exist a number of structured algorithms
which are ideal only for a certain class of optimization problems. Two of these
algorithms -- integer programming and geometric programming -- are often used in
engineering design problems.
5. Nontraditional Optimization Algorithms. These algorithms are comparatively new
and are becoming popular in engineering design. Examples of non-traditional
algorithms are genetic algorithms, simulated annealing, tabu search, etc.
2.5 Genetic Algorithms
In nature, competition among individuals for scanty resources results in the fittest individual
dominating over the weaker ones. This formed the basis of Darwin's theory of evolution --
'Survival of the Fittest'. Genetic algorithms are adaptive heuristic search algorithms
inspired by the Darwinian ideas of natural selection and genetics. GAs represent an intelligent
exploitation of random search used to solve optimization problems. Even though random in
nature, GAs exploit historical information to direct search into the region of better
performance within the search space. The concept of genetic algorithms was first introduced
by John Holland of the University of Michigan, Ann Arbor in 1975. Thereafter, he and his
students have contributed greatly to the development of the field.
Genetic algorithms are different from the traditional optimization algorithms and search
procedures in four ways:
1. GAs work with a coding of the parameter set, not the parameters themselves.
2. GAs search from a population of points, not a single point.
3. GAs use payoff (objective function) information, not gradient or other auxiliary
knowledge.
4. GAs use probabilistic transition rules, not deterministic ones.
The advantage of working with a coding of variables is that coding discretizes the search
space, even though the function may be continuous. On the other hand, since GAs require
only function values at discrete points, a discrete or discontinuous function can be handled at
no extra cost. This allows the GAs to be applied to a wide variety of problems.
The most striking difference between GAs and many traditional optimization methods
is that GAs work with a population of points instead of a single point. Because more than
one string is processed simultaneously, it is very likely that the GA solution will be a
global solution. Even though some traditional methods, like Box's evolutionary optimization
and complex search methods, are population based, those methods do not use previously
obtained information efficiently. In GAs, previously found good information is emphasized
using the reproduction operator and propagated adaptively through the crossover and mutation
operators.
The other major difference in operation of GAs is the use of probabilities in their
operators. None of the genetic operators work deterministically. In the reproduction operator,
a simulation of the roulette-wheel selection scheme is used to assign the number of
copies. In the crossover operator, even though good strings are crossed, the strings to be
crossed and the cross sites are chosen at random. In the mutation operator, a random bit is
suddenly altered.
Taken together, these four differences -- direct use of a coding, search from a
population, blindness to auxiliary information, and randomized operators -- contribute to a
genetic algorithm's robustness and resulting advantage over other more commonly used
techniques.
Figure 2.3: Genetic Algorithm - Program Flow Chart
CHAPTER 3
OPERATORS AND TECHNIQUES IN GENETIC ALGORITHMS
Most traditional algorithms cannot solve problems that involve discrete search spaces. GAs
can be used effectively in such cases because GAs work with a coding of the parameters. Many
search techniques require much auxiliary information in order to work properly; GAs, in
contrast, are blind. To perform an effective search for better and better structures, they
only require the payoff values associated with the strings. These payoff values are
calculated on the basis of the fitness function and are often called the strings' fitness.
Thus, the encoding of parameters and the fitness evaluation together form a strong technique
that makes genetic algorithms robust and lets them prevail over traditional algorithms.
3.1.1 Binary Encoding
In binary encoding, every chromosome is a string of bits: 1s and 0s. The length of the string
is usually determined according to the desired solution accuracy. An n-bit string can
represent the integers between 0 and 2^n - 1, i.e., 2^n integers. However, the practical
search space we encounter may be larger or smaller than this bracket, or may lie outside it.
So a fixed mapping rule is used to represent a point in the search space. The mapping rule
divides the search space into 2^n parts and assigns each smaller bracket a binary string
value. For example, if a 3-bit string is used to encode the search space between 0 and 10,
the search space is divided into 2^3 = 8 parts. The first part, i.e. 0-1.25, is represented by
the string 000 and the last part, 8.75-10, is represented by the string 111. The most
commonly used rule is the linear mapping rule, where the search space is mapped according to
the following relationship:

x_i = x_i(min) + [x_i(max) - x_i(min)] * DV(s_i) / (2^n - 1)

In the above equation, the variable x_i is coded in a substring s_i of length n. The decoded
value of the substring s_i is calculated as DV(s_i) = s_0*2^0 + s_1*2^1 + ... + s_(n-1)*2^(n-1),
where each s_j is 0 or 1 and, in the convention used here, s_0 is the leftmost bit. For
example, a 4-bit string (1101) has a decoded value equal to 1*2^0 + 1*2^1 + 0*2^2 + 1*2^3,
or 11. The accuracy that can be obtained with a 4-bit string is only approximately 1/16th of
the search space. But as the string length is increased by one, the accuracy improves to
1/32nd of the search space. Generalizing this concept, we can say that with an n-bit string
the approximate accuracy in the variable is [x_i(max) - x_i(min)] / 2^n.
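As an illustrative sketch (not the MATLAB code used in this seminar), the linear mapping rule can be written in a few lines of Python. The report's convention that the leftmost bit is the least significant is kept, so the string 1101 decodes to 11:

```python
def decode(bits, x_min, x_max):
    """Decode a binary string into a real value via the linear mapping rule.

    Follows the report's convention that the leftmost bit is the least
    significant: "1101" gives DV = 1 + 2 + 0 + 8 = 11.
    """
    n = len(bits)
    dv = sum(int(b) * 2**j for j, b in enumerate(bits))  # decoded value DV(s)
    return x_min + (x_max - x_min) * dv / (2**n - 1)

# A 4-bit string on the interval [0, 5]:
x = decode("1101", 0.0, 5.0)   # 0 + 5 * 11/15, roughly 3.6667
# Approximate accuracy of an n-bit string: (x_max - x_min) / 2^n = 5/16 here
```

Longer strings refine the same interval: each extra bit doubles the number of brackets the search space is divided into.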
3.1.3 Permutation Encoding
In permutation encoding, every chromosome is a string of numbers that represent a position in
a sequence. Permutation encodings are useful in ordering problems like the travelling
salesman problem. For such problems, corrections need to be made after crossover and
mutation to leave the chromosome consistent, i.e., a valid permutation.
For example,
CHROMOSOME 1: 1 2 6 9 3 7 5 4 8
CHROMOSOME 2: 8 1 3 9 2 7 6 4 5
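One standard way to recombine permutations without breaking them is the order crossover (OX); a plain single-point swap of the two chromosomes above would duplicate some numbers and drop others. The sketch below applies OX to those chromosomes with hypothetical cut sites 3 and 6:

```python
def order_crossover(parent1, parent2, cut1, cut2):
    """Order crossover (OX) for permutation encodings: the child keeps a
    slice of parent1 and fills the remaining positions with parent2's
    genes in their original order, so it stays a valid permutation."""
    child = [None] * len(parent1)
    child[cut1:cut2] = parent1[cut1:cut2]            # inherited slice
    remaining = [g for g in parent2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = remaining.pop(0)              # fill in parent2's order
    return child

p1 = [1, 2, 6, 9, 3, 7, 5, 4, 8]
p2 = [8, 1, 3, 9, 2, 7, 6, 4, 5]
child = order_crossover(p1, p2, 3, 6)
print(child)  # [8, 1, 2, 9, 3, 7, 6, 4, 5] -- a valid permutation of 1..9
```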
3.2 Fitness Evaluation
Genetic algorithms naturally maximize fitness, so a minimization problem with objective f(x)
is commonly handled through a transformation such as F(x) = 1/(1 + f(x)). This transformation
does not alter the location of the minimum, but converts the minimization problem into an
equivalent maximization problem. The fitness function value of a string is known as the
string's fitness.
CHAPTER 4
GENETIC OPERATORS
The mechanics of a simple genetic algorithm are surprisingly simple, involving nothing more
complex than copying strings and swapping partial strings. The operation of a GA begins with
a population of random strings representing the design or decision variables. Thereafter,
each string is evaluated to find its fitness value. The population is then operated on by
three main operators -- reproduction, crossover, and mutation -- to create a new population
of points. The new population is further evaluated and tested for termination. If the
termination criterion is not met, the population is iteratively operated on by the above
three operators and evaluated. One cycle of these operations and the subsequent evaluation
procedure is known as a generation in GA terminology.
4.1 Reproduction Operators
4.1.1 Roulette Wheel Selection
In the reproduction (or selection) step, the i-th string is selected with a probability
proportional to its fitness,

p_i = f_i / (f_1 + f_2 + ... + f_n)

where n is the population size. The easiest way to implement this selection is to create a
biased roulette wheel where each current string in the population has a roulette wheel slot
sized in proportion to its fitness. To reproduce, we simply spin the weighted roulette wheel
thus defined n times, each time selecting an instance of the string chosen by the roulette
wheel pointer.
In simulation, the cumulative probability of each string is computed and a random number is
generated between 0 and 1. The string in whose cumulative probability range the chosen
random number falls is copied to the mating pool.
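The spin-the-wheel procedure can be sketched in Python. This is an illustrative sketch, not the code used in this seminar, and the fitness values below are made up for the example:

```python
import random

def roulette_wheel_select(fitnesses):
    """Return the index of one string, chosen with probability
    proportional to its fitness (cumulative-probability scheme)."""
    r = random.random() * sum(fitnesses)   # random point on the wheel
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if r <= cumulative:
            return i
    return len(fitnesses) - 1              # guard against round-off

# Spin the wheel n times to fill a mating pool of the same size
fitnesses = [1.0, 2.0, 4.0, 1.0]
mating_pool = [roulette_wheel_select(fitnesses) for _ in range(len(fitnesses))]
```

The string with fitness 4.0 occupies half the wheel, so on average it receives half of the copies in the mating pool.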
4.1.2 Stochastic Remainder Sampling
The number of copies of the i-th string in the mating pool is first given by the integer part
of f_i / f_bar, the string's fitness divided by the average fitness of the population.
Suppose the fitness of the i-th string is 2.6 and the average fitness is 1.2; since
2.6 / 1.2 = 2.17, the stochastic remainder sampling operator makes 2 copies of string i.
After the chromosomes have been assigned according to the integer parts, the rest of the
selection is done stochastically, according to roulette wheel selection on the fractional
remainders.
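A sketch of this two-stage scheme, using the 2.6 versus 1.2 example above (the other fitness values are made up so the population mean comes out to 1.2):

```python
import random

def stochastic_remainder_sampling(fitnesses):
    """Assign copies by the integer part of f_i / f_mean, then fill the
    remaining slots stochastically, weighted by the fractional parts."""
    n = len(fitnesses)
    mean = sum(fitnesses) / n
    expected = [f / mean for f in fitnesses]
    pool = []
    for i, e in enumerate(expected):
        pool.extend([i] * int(e))                 # deterministic integer part
    fractions = [e - int(e) for e in expected]
    while len(pool) < n:                          # stochastic remainder
        r = random.random() * sum(fractions)
        cumulative = 0.0
        for i, frac in enumerate(fractions):
            cumulative += frac
            if r <= cumulative:
                pool.append(i)
                break
    return pool

# String 0 has fitness 2.6, mean fitness is 1.2: 2.6/1.2 = 2.17,
# so string 0 is guaranteed 2 copies before the stochastic fill-in.
pool = stochastic_remainder_sampling([2.6, 1.2, 0.4, 0.6])
```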
4.2 Crossover Operator
Crossover is a genetic operator that combines two chromosomes to produce new
chromosomes. The idea behind crossover is that the new chromosome may be better than
both of its parents if it takes the best characteristics from each of them. Many crossover
operators are available in the GA literature. Most crossover operators proceed in two steps.
First, members of the newly reproduced strings in the mating pool are coupled at random.
Second, each pair of strings undergoes crossing over. Two crossover operators are described
in this report.
4.2.2 Two Point Crossover
The two point crossover operator randomly selects two crossover points within a
chromosome and then interchanges the two parent chromosomes between these points to
produce new offspring.
Other crossover operators include arithmetic crossover, heuristic crossover, and uniform
crossover. The operator to be used is usually selected based on the way the chromosomes are
encoded. A crossover operator is mainly responsible for the search for new strings, even
though a mutation operator is also used for this purpose, sparingly.
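The single point and two point operators described above can be sketched as follows; the parent strings are arbitrary examples, and the cut sites are drawn at random as in the text:

```python
import random

def single_point_crossover(a, b):
    """Swap the tails of two parent strings after one random cut site."""
    k = random.randint(1, len(a) - 1)
    return a[:k] + b[k:], b[:k] + a[k:]

def two_point_crossover(a, b):
    """Exchange the segment lying between two random cut sites."""
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

c1, c2 = single_point_crossover("110010", "001101")
d1, d2 = two_point_crossover("110010", "001101")
```

In both cases each bit position of a child comes from one of the two parents, so the pair of children together carries exactly the parents' genetic material.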
4.3 Mutation Operator
Mutation alters one or more gene values in a chromosome from its initial state. This
can result in entirely new gene values being added to the gene pool. The need for mutation is
to create a local search around the current solution. The mutation is also used to maintain
diversity in the population. Mutation is an important part of the genetic search; it helps
prevent the population from stagnating at any local optima.
The simplest mutation operator is the flip-bit mutation. This operator changes 1s
to 0s and vice versa with a small mutation probability, pm. The bit-wise mutation is
performed bit by bit by flipping a coin with a probability pm. If at any bit the outcome is
true, then the bit is altered; otherwise it is kept unchanged. The coin-flip mechanism can be
simulated easily: a random number is generated between 0 and 1; if the number is less than
pm, the outcome of the coin flip is true, otherwise it is false.
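A sketch of the bit-wise (flip-bit) mutation just described, with a made-up chromosome and the mutation probability used later in the Chapter 5 example:

```python
import random

def bitwise_mutation(chromosome, pm):
    """Flip each bit independently with small probability pm
    (the coin-flip scheme described above)."""
    return "".join(
        ("0" if bit == "1" else "1") if random.random() < pm else bit
        for bit in chromosome
    )

mutated = bitwise_mutation("1101001010", pm=0.15)
```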
These three operators are simple and straightforward. The reproduction operator
selects good strings and crossover operator recombines good strings to hopefully create better
strings. The mutation operator alters a string locally to hopefully create a better string. Even
though none of these claims are guaranteed and/or tested while creating a string, it is
expected that if a bad string is created it will be eliminated by the reproduction operator in
the subsequent generations and if good strings are created, they will be increasingly
emphasized.
CHAPTER 5
A SIMPLE GENETIC ALGORITHM BY EXAMPLE
The example problem is the minimization of a single-variable function f(x)
in the interval (0,5). A plot of the function is shown in Figure 5.1. The plot shows that the
minimum lies at x* = 3. The corresponding function value is f(x*) = 3.
Figure 5.1: Single-variable optimization problem used as the example. The minimum point
is x* = 3.
In order to solve this problem using genetic algorithms, we choose binary coding to represent
the variable x. In the calculations, a 4-bit string is chosen. Thus we get an accuracy of 0.3333.
We choose roulette wheel selection, single point crossover, and bit-wise mutation operator.
The crossover and mutation probabilities are assigned to be 0.5 and 0.15 respectively. The
population size is fixed at 10. The MATLAB code was run with the above parameters and the
results obtained are tabulated below.
Table 5.1: Initial Population Generated Randomly
Table 5.3: Population after Generation 10
From Tables 5.1 - 5.3 we can observe that the average fitness has decreased through the
generations. Even though the average fitness in the 10th generation is larger, this is due to
the presence of a few odd strings in the population. Most of the strings have a fitness of
around 27, which corresponds to the minimum. The above simulation of the simple genetic
algorithm yielded the minimum at 3.3333.
The same problem when simulated with a 32-bit string and a population of 20 gave
the minimum at 2.9991 at end of 100 generations. Thus we can say that the accuracy of a GA
depends upon the length of the bit string used. Figure 5.2 shows the plot of the minimum cost
in each generation against the number of generations, and Figure 5.3 shows the plot of the
optimal solution obtained in each generation against the number of generations. The figures
show that the algorithm converged upon the optimal solution in around 30 generations.
The above problem was taken from the textbook Optimization for Engineering
Design: Algorithms and Examples by Kalyanmoy Deb [1] and was solved out of curiosity using
the Simple Genetic Algorithm code provided by Dr. V. S. Sathe. The optimum value of the
function is known from the textbook to be 3.
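The whole procedure can also be sketched as a small self-contained Python program. The textbook function itself is not reproduced in this report, so a stand-in with the same optimum (x* = 3, f(x*) = 3) is used below; the operator choices follow this chapter (roulette wheel selection, single point crossover, bit-wise mutation), and minimization is handled with the transformation F(x) = 1/(1 + f(x)):

```python
import random

def run_sga(f, x_min=0.0, x_max=5.0, n_bits=16, pop_size=10,
            pc=0.5, pm=0.15, generations=50, seed=0):
    """A minimal Simple Genetic Algorithm sketch for minimizing f on
    [x_min, x_max], returning the best decoded x found."""
    rng = random.Random(seed)

    def decode(bits):
        dv = sum(b << j for j, b in enumerate(bits))   # decoded value
        return x_min + (x_max - x_min) * dv / (2**n_bits - 1)

    def fitness(bits):
        return 1.0 / (1.0 + f(decode(bits)))           # maximize F = 1/(1+f)

    def select(pop, fits):                             # roulette wheel spin
        r = rng.random() * sum(fits)
        cumulative = 0.0
        for ind, fit in zip(pop, fits):
            cumulative += fit
            if r <= cumulative:
                return ind
        return pop[-1]

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(pop, fits), select(pop, fits)
            if rng.random() < pc:                      # single point crossover
                k = rng.randint(1, n_bits - 1)
                p1, p2 = p1[:k] + p2[k:], p2[:k] + p1[k:]
            new_pop += [[1 - b if rng.random() < pm else b for b in ind]
                        for ind in (p1, p2)]           # bit-wise mutation
        pop = new_pop[:pop_size]
        best = max(pop + [best], key=fitness)          # keep best-ever string
    return decode(best)

# Stand-in for the textbook function: same optimum, x* = 3 with f(x*) = 3
x_star = run_sga(lambda x: (x - 3.0)**2 + 3.0)
```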
Figure 5.2: Plot of Best Fitness Value in each Generation against the no. of Generation
(Plotted Using MATLAB)
Figure 5.3: Plot of Best Value of Variable x in each Generation against no. of
Generations. (Plotted Using MATLAB)
CHAPTER 6
SIMPLE CASE STUDIES
The case studies mentioned below have been taken as is from the available literature
[6,10]. These case studies use GAs to solve representative problems in the design and
control of chemical processes. Similarly, GAs can be used to solve complex problems in
transport phenomena, thermodynamics and reaction engineering.
6.1 Optimization of CSTRs in Series
The reaction takes place under isothermal conditions in a series of four CSTRs whose
dynamics are given by
The LHS of the equation is set to zero and the exit concentrations ci, i = 1,..., 4, of the
four CSTRs are then solely determined by the inlet flow rate F, the reaction constant k and
the feed concentration c0. Parameter values and variable nomenclature are given in Table 6.1.
Here a GA is used as a function optimizer to solve the following:
subject to,
In each fitness evaluation, the routine FZERO is used to solve the steady-state algebraic
equation, yielding c4, and the fitness is set equal to -c4. When the constraint is violated,
the fitness is set equal to the minimum fitness encountered in that generation. Table 6.2
shows a comparison between this study (Hanagandi and Nikolaou [6]) and that of Edgar and
Himmelblau [2].
Table 6.1: Parameter Values and nomenclature
6.2 TUNING OF PID CONTROLLER PARAMETERS
In the composition control system represented by Figure 6.1, a concentrated stream of control
reagent containing water and solute is used to control the concentration of the stream
leaving a three-tank system. The stream to be processed passes through a preconditioning
stirred tank where composition fluctuations are smoothed out before the outlet stream is
mixed with the control reagent.
The measurement of composition in the third tank is sent to the controller, which generates a
signal that opens or closes the control valve, which in turn supplies concentrated reagent to
the first tank. By choosing the numerical value of the time constant of the control reagent
tank as 5 and the steady-state gain of the control reagent tank as unity, the system is
represented by the block diagram shown in Figure 6.2.
The transfer function of the process is given by
The Ziegler-Nichols method is widely used for controller tuning. One of the disadvantages
of this method is that it requires prior knowledge of the plant model. Once the controller is
tuned by the Ziegler-Nichols method, a good but not optimal system response is reached. The
transient response can be even worse if the plant dynamics change. To assure good
performance independent of the operating environment, the controller must be able to adapt
to changes in the plant's dynamic characteristics. For these reasons, it is highly desirable
to increase the capabilities of PID controllers by adding new features. Many random search
methods, such as genetic algorithms (GAs), have received much interest for achieving high
efficiency and for finding globally optimal solutions in the problem space.
The integral controller output is proportional to the amount of time an error has been
present in the system. The integral action removes the offset left by proportional control
but introduces a phase lag into the system.
The derivative controller output is proportional to the rate of change of the error.
Derivative control is used to reduce overshoot and introduces a phase lead action that
counteracts the phase lag introduced by the integral action.
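The proportional, integral, and derivative actions described above can be sketched as a discrete-time PID loop. The gains, sampling period, and first-order plant below are hypothetical stand-ins, not the parameters of the composition control system in this case study:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID: output = kp*e + ki*integral(e) + kd*de/dt.
    The integral term accumulates the error (removing offset); the
    derivative term reacts to the error's rate of change."""
    state = {"integral": 0.0, "prev": 0.0}

    def controller(error):
        state["integral"] += error * dt
        derivative = (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return controller

# Hypothetical gains; drive a first-order lag (time constant 5, unity gain)
# toward a setpoint of 1.0 using simple Euler integration.
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
y = 0.0
for _ in range(400):
    u = pid(1.0 - y)              # error = setpoint - measurement
    y += 0.05 * (u - y) / 5.0     # plant: dy/dt = (u - y) / 5
```

With the integral term present the loop settles at the setpoint with no offset; a purely proportional controller would leave a steady-state error.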
6.2.2 The Objective Functions
A GA is only as good as its objective function. The objective functions used in this study
are the Mean of the Squared Error (MSE), the Integral of Time-multiplied Absolute Error
(ITAE), the Integral of the Absolute magnitude of the Error (IAE) and the Integral of the
Squared Error (ISE).
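For a sampled error signal, the four criteria can be approximated by discrete sums, as in the sketch below; the uniform sampling grid and the decaying error signal are hypothetical, chosen only to illustrate the definitions:

```python
def error_criteria(t, e):
    """Discrete approximations of the four objective functions, given
    uniformly sampled times t and errors e = setpoint - output."""
    dt = t[1] - t[0]
    n = len(e)
    return {
        "ISE":  sum(err**2 for err in e) * dt,                     # squared error
        "IAE":  sum(abs(err) for err in e) * dt,                   # absolute error
        "ITAE": sum(ti * abs(err) for ti, err in zip(t, e)) * dt,  # time-weighted
        "MSE":  sum(err**2 for err in e) / n,                      # mean squared
    }

t = [0.01 * k for k in range(500)]
e = [2.718**(-ti) for ti in t]      # a sample decaying error signal
J = error_criteria(t, e)
```

ITAE weights late errors more heavily than early ones, which is why it tends to favor tunings with short settling times.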
6.2.4 Results
The selection scheme used in the case study was roulette wheel selection, the population size
was fixed at 80, and the mutation probability was 0.001. The range of the PID parameter
values was given as 0 - 40. The results of the case study are tabulated for the various
objective functions in Tables 6.3 - 6.6.
Table 6.3 Results for ITAE as objective function (Bhawana Tandon et al)
ITEM GA ZN
%age Overshoot 40.4 29.6
Rise time(sec) 0.1 1.5
Peak Time(sec) 0.15 2.5
Settling Time(sec) 3 10
KP 54.96323 3.7
Ki 8.51 1.8
Kd 99.1578 1.8
Table 6.4 Results for IAE as objective function (Bhawana Tandon et al)
ITEM GA ZN
%age Overshoot 38.2 29.6
Rise time(sec) 0.1 1.5
Peak Time(sec) 0.35 2.5
Settling Time(sec) 2.9 10
KP 31.16428 3.7
Ki 9.0578 1.8
Kd 92.04361 1.8
Table 6.5 Results for ISE as objective function (Bhawana Tandon et al)
ITEM GA ZN
%age Overshoot 37.5 29.6
Rise time(sec) 0.1 1.5
Peak Time(sec) 0.2 2.5
Settling Time(sec) 2.5 9
KP 3.53227 3.7
Ki 24.3176 1.8
Kd 99.67948 1.5
Table 6.6 Results for MSE as objective function (Bhawana Tandon et al)
ITEM GA ZN
%age Overshoot 37.5 29.6
Rise time(sec) 0.1 1.5
Peak Time(sec) 0.12 2.5
Settling Time(sec) 2.5 8
KP 54.96323 3.7
Ki 8.5105 1.8
Kd 99.15787 1.8
It can be seen that although the overshoots are larger in the GA-tuned system than in the
Ziegler-Nichols-tuned system, the settling time and rise time are much smaller for the
GA-tuned system. The reason for the larger overshoots may be the assumptions made during the
mathematical modeling of the system.
CHAPTER 7
CONCLUSION
The concept behind genetic algorithms is simple and the technique itself is robust. Genetic
algorithms can be used to solve any type of optimization problem provided that the objective
function is proper. A GA is only as good as its objective function. Genetic algorithms are
best used when the function has an undefined or ill-behaved derivative, or is nonlinear or
noisy. They work well in discontinuous as well as stochastic environments. They can,
however, be outperformed by more field-specific algorithms. In such cases, the solution of a
GA can be used as an initial guess for the other algorithm. As the solutions of most
traditional methods depend upon the initial guess, the result of this approach can be
expected to be a good one.
In this report, only the Simple Genetic Algorithm was studied, and only a few
operators and techniques of other, more advanced GAs were introduced. More advanced
algorithms like the NSGA, NSGA-II and BASIC have been developed, and together these can be
applied to almost all optimization problems. The two case studies mentioned in this seminar
show the application of genetic algorithms in chemical engineering. Recently, genetic
algorithms have been used in many more chemical engineering problems. Reaction networks have
been modeled and optimized using GAs (Majumdar et al., 2004) [7], large-scale reactors have
been optimized using multi-objective GAs (Rangaiah et al., 2003) [8], and even catalysts have
been optimized using GAs (Holena, 2008) [9]. Most practical optimization problems in
chemical engineering are highly complex and non-linear, and GAs can be an effective
technique for solving them.
As a part of my B.Tech project I would be working upon the application of genetic
algorithms on simple chemical engineering problems. The effectiveness of the solution
would then be compared with the solution obtained by traditional methods and will be
validated experimentally.
REFERENCES
[1] Kalyanmoy Deb. Optimization for Engineering Design: Algorithms and Examples.
Prentice-Hall India Pvt. Ltd., 1996.
[2] T. F. Edgar and D. M. Himmelblau. Optimization of Chemical Processes. Tata
McGraw-Hill Book Co., Singapore, 1989.
[3] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine
Learning. Addison Wesley Longman (Singapore) Pte. Ltd., 2000.
[4] Kalyanmoy Deb. An Introduction to Genetic Algorithms. Sadhana Academy
Proceedings in Engineering Sciences 24(4-5): 293 - 315, 1999.
[5] R. Sivaraj and T. Ravichandran. A Review of Selection Methods in Genetic
Algorithm. International Journal of Engineering Science and Technology 3(5): 3792
- 3797, 2011.
[6] Vijaykumar Hanagandi and Michael Nikolaou. Applications of Genetic Algorithms in
Chemical Engineering. The Practical Handbook of Genetic Algorithms, Volume 3:
Applications, CRC Press, 1991.
[7] Saptarshi Majumdar and Kishalay Mitra. Modeling of a Reaction Network and its
Optimization by Genetic Algorithm. Chemical Engineering Journal 100: 109 - 118,
2004.
[8] Yue Li, Gade P. Rangaiah and Ajay Kumar Ray. Optimization of Styrene Reactor
Design for Two Objectives using a Genetic Algorithm. International Journal of
Chemical Reactor Engineering 1: Article A13, 2003.
[9] Martin Holena. Genetic Algorithms for the Optimization of Catalysts in Chemical
Engineering. In Proceedings of the World Congress on Engineering and Computer
Science, October 22 - 24, 2008, San Francisco, USA.
[10] Bhawana Tandon and Randeep Kaur. Genetic Algorithm based Parameter Tuning of
PID Controller for Composition Control System. International Journal of
Engineering Science and Technology 3(8): 6705 - 6711, 2011.