On
“Understanding of Genetic Algorithm
& Welded Beam Design”
By
Jhaveri Ronak Kirtikumar
(ID NO-2018H1430036H)
Study Oriented Project
Fig 1
Calculus-based techniques find the best value of a function by moving along an increasing gradient for a maximum and a decreasing gradient for a minimum. The basic procedure is to take the first-order derivative, equate it to zero to obtain the stationary points, and then evaluate each stationary point using the second-order derivative: if it is less than zero, the function has a maximum at that point; if greater than zero, a minimum; and if it is zero, the test is inconclusive (the point may, for instance, be a saddle point), and higher-order derivatives must be examined to identify the best solution among the candidates. In non-linear programming there are various calculus-based methods, such as Newton-Raphson and the Steepest Descent method, for finding the best value of the objective function.
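The procedure just described can be sketched in a few lines. The function f(x) = x³ − 3x is an illustrative assumption (not one from the report): Newton-Raphson drives f′(x) to zero, and the second derivative classifies the stationary point.

```python
# Newton-Raphson search for a stationary point of the assumed f(x) = x^3 - 3x:
# solve f'(x) = 0, then classify the point using the second derivative.

def fp(x):   # first-order derivative: f'(x) = 3x^2 - 3
    return 3 * x**2 - 3

def fpp(x):  # second-order derivative: f''(x) = 6x
    return 6 * x

def newton_stationary(x, tol=1e-10, max_iter=100):
    """Iterate x <- x - f'(x)/f''(x) until the step is negligible."""
    for _ in range(max_iter):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_star = newton_stationary(2.0)
# f''(x*) > 0 => minimum, < 0 => maximum, = 0 => test inconclusive
kind = "minimum" if fpp(x_star) > 0 else "maximum" if fpp(x_star) < 0 else "inconclusive"
print(round(x_star, 6), kind)  # -> 1.0 minimum
```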
Enumerative techniques may be depth-first or breadth-first, and they also include dynamic programming, another class of search techniques that is enumerative in nature. In all these methods, computation is carried out in stages by breaking the problem into interdependent sub-problems whose solutions together generate a feasible solution for the entire problem. They have many applications, such as shortest-route problems, linear and non-linear programming problems, integer linear and non-linear programming problems, cargo loading, reliability problems, capital budgeting, etc.
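The staged, sub-problem structure of dynamic programming can be illustrated with a tiny shortest-route example; the stage graph and its edge costs below are hypothetical.

```python
# Shortest-route dynamic programming on a small hypothetical stage graph:
# the best cost from each node is the minimum over its successors of
# (edge cost + best cost from that successor), computed once and memoized.

edges = {                      # node -> {successor: edge cost}
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 3},
    "C": {"E": 2},
    "D": {"T": 1},
    "E": {"T": 5},
    "T": {},
}

def shortest_cost(node, memo=None):
    if memo is None:
        memo = {}
    if node == "T":            # terminal stage: zero remaining cost
        return 0
    if node not in memo:       # solve each sub-problem only once
        memo[node] = min(cost + shortest_cost(nxt, memo)
                         for nxt, cost in edges[node].items())
    return memo[node]

print(shortest_cost("A"))  # -> 10 (e.g. A->B->E->T = 2+3+5)
```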
For example, suppose we have a function with multiple peaks and troughs, so that the solution space contains local as well as global maxima. A heuristic method typically climbs to the first peak it finds and gets trapped there, so the value it returns may only be a local maximum. This limitation is overcome by metaheuristic methods, which search for the best solution using randomized search: they start from multiple points, explore and exploit the search space, and work towards the best optimal solution. There are many such methods, and we will discuss them one after another.
CHAPTER 2 – METAHEURISTIC METHODS
Fig 2
The Hill Climbing method always moves towards a point with a better functional value (going uphill) when we are searching for the maximum. Starting from a random point in the search space, we identify all of its neighbouring points and move to the neighbour with the best functional value. This is repeated until every neighbouring point has a lower functional value, at which stage we have reached a peak. As shown in the figure, if we start climbing from random point A, this technique takes us to a peak which is the global maximum. But if we start from random point B, the same technique takes us to another peak which is only a local maximum, and we are unable to reach the global maximum; so we do not get the best optimal solution. This limitation is overcome by the next technique.
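A minimal sketch of the hill-climbing loop just described, using an assumed two-peak function: starting points on different sides of the valley reach different peaks, exactly as in the discussion of points A and B.

```python
# Hill climbing on a discretized search space: move to the best neighbour
# while it improves, stop at a peak. The two-peak function below is an
# illustrative assumption (local peak near x = -1.5, global peak near x = 2).
import math

def f(x):
    return math.exp(-(x + 1.5)**2) + 2 * math.exp(-(x - 2.0)**2)

def hill_climb(x, step=0.05):
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=f)
        if f(best) <= f(x):     # no neighbour improves: we are at a peak
            return x
        x = best

peak_from_b = hill_climb(-3.0)  # climbs the local peak only
peak_from_a = hill_climb(1.0)   # climbs the global peak
print(round(peak_from_b, 2), round(peak_from_a, 2))
```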
Now, coming back to Figure 2: if we start from random point B, then by accepting some downhill moves probabilistically we can descend from the local peak and eventually reach the global maximum, which finally leads to the optimal solution. That is the main advantage of this technique over Hill Climbing.
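The probabilistic downhill moves described here are the idea behind simulated annealing. A sketch, with the same assumed two-peak function as before: a worse neighbour is accepted with probability exp(−loss/T), and the temperature T is gradually cooled so downhill moves become rarer.

```python
# Probabilistic acceptance of downhill moves: the search can leave a local
# peak early on, then behaves like hill climbing once cooled. The objective
# and the search interval [-4, 4] are illustrative assumptions.
import math
import random

def f(x):
    return math.exp(-(x + 1.5)**2) + 2 * math.exp(-(x - 2.0)**2)

def anneal(x, t0=1.0, cooling=0.999, steps=4000, seed=1):
    rng = random.Random(seed)
    t, best = t0, x
    for _ in range(steps):
        cand = min(4.0, max(-4.0, x + rng.uniform(-1.0, 1.0)))  # bounded move
        loss = f(x) - f(cand)                 # positive when cand is worse
        if loss <= 0 or rng.random() < math.exp(-loss / t):
            x = cand                          # uphill always, downhill sometimes
        if f(x) > f(best):
            best = x                          # remember the best point visited
        t *= cooling                          # cool down the temperature
    return best

result = anneal(-3.0)
print(round(result, 2))
```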
2.3 - TABU SEARCH TECHNIQUE
Tabu Search is a guided local search method which explores the solution space beyond local optimality using memory-based strategies. In this method an initial solution is obtained, memory structures are initialized, and the neighbourhood is explored subject to an aspiration criterion that keeps the search moving in the right direction.
Based on previous results stored in memory, moves that would take the search back in an unproductive direction are blocked by tabu restrictions. So, with this technique, the best neighbour is selected not only by an appropriate choice of move but also by taking into account the tabu restrictions as well as the aspiration criterion.
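A minimal tabu search sketch; the integer search space and its fitness function are assumptions for illustration. The short-term tabu list blocks recently visited points (so the search cannot cycle back), and the aspiration criterion lets a tabu move through only when it beats the best solution found so far.

```python
# Tabu search on the integers 0..31: always take the best admissible
# neighbour, even when it is worse than the current point.
from collections import deque

def f(x):
    # illustrative multimodal fitness (an assumption, not from the report)
    return -(x - 21)**2 + 15 * (x % 7 == 0)

def tabu_search(start, iters=50, tenure=5):
    x, best = start, start
    tabu = deque(maxlen=tenure)            # memory structure: recent points
    for _ in range(iters):
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n <= 31]
        allowed = [n for n in neighbours
                   if n not in tabu or f(n) > f(best)]  # aspiration criterion
        if not allowed:
            break
        tabu.append(x)                     # current point becomes tabu
        x = max(allowed, key=f)            # best admissible neighbour
        if f(x) > f(best):
            best = x
    return best

print(tabu_search(0))  # -> 21 (the global maximum of the assumed f)
```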
Fig 3
As shown in Figure 3, a cell has a nucleus at its centre, and the nucleus contains chromosomes; in humans there are 23 pairs, i.e. 46 chromosomes, in each nucleus. Chromosomes consist of DNA, which is made up of genes, and genes encode proteins. Each gene carries information about a characteristic of the individual, such as eye colour, hair colour, body structure, skin colour, etc. All of this information can be represented as strings; in our case we will take all strings in the form of numbers.
Genetic Algorithms are the most important family of Evolutionary Computing techniques. They are intelligent search techniques that maintain a population of candidate solutions for a given problem and search the solution space by applying various variation operators.
The essential idea is that there should be a balance between selection pressure and population diversity. We sample the solution space widely, from multiple points. Selection pressure favours those members of the population with higher fitness values. On the other hand, because we keep points spread all around the solution space, population diversity ensures that the whole space is explored rather than only a small number of points; this also means the search does not get stuck in a local optimum. Genetic Algorithms use probabilistic rules for selecting, evaluating and discarding members of the population in the given solution space.
CHAPTER 3 – BASICS OF GENETIC ALGORITHM
Genetic Algorithms are nature-inspired algorithms. They mimic Darwin's theory of evolution by natural selection for problem solving. Darwin's theory is essentially the survival of the fittest: the basic law of nature that the one who survives will grow further and the others will die.
These are natural situations that we observe in our daily life. Take the example of reproduction. The essential process of nature is that parents are selected by survival of the fittest; the fittest individuals are allowed to participate in reproduction, and their chromosomes are combined through processes like crossover and mutation to produce offspring. This is the essential biological process, and the Genetic Algorithm mimics it, applying the same ideas to find an optimal solution.
Fig 4
Let us consider an example to illustrate the cycle mentioned above. Take a population drawn from the numbers 1 to 100, with fitness given by the value of some function we wish to maximize. We randomly generate four candidates, say 29, 49, 87 and 92. Evaluating each of them, suppose we get f(29) = 0.10, f(49) = 0.87, f(87) = 0.98 and f(92) = 0.36. Based on these fitness values we conclude that 49 and 87 are the best possible parents. By the process of crossover and mutation we get four different numbers, say 66, 13, 56 and 98. All of these are evaluated again, and the fitter ones participate in the next generation while the others are discarded.
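The generation cycle in this example can be sketched as follows. The fitness function is a hypothetical stand-in that peaks at x = 87 (so the sample fitness values above are not reproduced exactly), and the blend-and-perturb child operator is an illustrative simplification of crossover and mutation.

```python
# One generation: evaluate candidates, pick the two fittest as parents,
# create children by a crude crossover plus mutation, keep the fittest.
import random

def fitness(x):
    return 1.0 / (1.0 + abs(x - 87))        # illustrative: best at x = 87

def one_generation(pop, rng):
    parents = sorted(pop, key=fitness, reverse=True)[:2]
    children = []
    for _ in range(len(pop)):
        a, b = parents
        child = (a + b) // 2                # crude "crossover": blend parents
        child += rng.randint(-10, 10)       # mutation: small random change
        children.append(max(1, min(100, child)))
    # survivors: the fittest among parents, old population and children
    return sorted(pop + children, key=fitness, reverse=True)[:len(pop)]

rng = random.Random(0)
pop = [29, 49, 87, 92]
for _ in range(5):
    pop = one_generation(pop, rng)
print(pop[0])  # -> 87 (the fittest individual survives every generation)
```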
Fig 5
The first step of a Genetic Algorithm is encoding. There are different methods of encoding, such as binary, real, permutation, etc. Every chromosome should represent a feasible solution; however, depending on the constraints and bounds given to the variables, a chromosome may sometimes represent an infeasible solution, which is assigned a low fitness value.
1) Binary Encoding –
1 0 1 0 1 1 0
Suppose we have the number 4, which is 2². In binary coding we can write it as 1×2² + 0×2¹ + 0×2⁰, so the number 4 is represented as 1-0-0. Likewise, as shown in the table above, we can represent any number through a chromosome of zeroes and ones; this is binary encoding.
Binary encoding is difficult to apply directly and is not a natural encoding for many problems, but it plays a vital role in knapsack problems.
2) Real Number Encoding –
Real number encoding plays a vital role in continuous optimization problems. Each gene is represented by a real number, and a chromosome is made up of such genes; the individual value of a gene (its allele) is likewise a real number.
3) Permutation Encoding –
1 2 3 4 5 6 7 8
Permutation encoding plays a crucial role in the Travelling Salesman problem, aircrew scheduling, assignment problems, quadratic assignment problems, etc. Each number represents a gene. Consider a Travelling Salesman problem with 8 cities, in which a person has to pass through every city exactly once before coming back to the city of origin; no city may be visited more than once. One such permutation encoding is 1-2-3-4-5-6-7-8: the salesman starts his journey from city 1, travels to all the cities exactly once, and returns to his origin from the last city. So here permutation encoding plays a vital role.
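A sketch of permutation encoding for the 8-city tour: a chromosome is valid only if it visits each city exactly once, and its quality is the total tour length including the return leg. The distance table below is hypothetical.

```python
# Validity check and tour length for a permutation-encoded chromosome.

def is_valid_tour(chrom, n=8):
    # each city must appear exactly once
    return sorted(chrom) == list(range(1, n + 1))

def tour_length(chrom, dist):
    # total length including the return leg to the city of origin
    legs = zip(chrom, chrom[1:] + chrom[:1])
    return sum(dist[a][b] for a, b in legs)

# hypothetical symmetric distances between cities 1..8
dist = {a: {b: abs(a - b) + 1 for b in range(1, 9)} for a in range(1, 9)}

chrom = [1, 2, 3, 4, 5, 6, 7, 8]
print(is_valid_tour(chrom), tour_length(chrom, dist))  # -> True 22
print(is_valid_tour([1, 2, 2, 3, 5, 6, 7, 8]))         # repeated city -> False
```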
4) Tree Encoding –
Whenever we want to discover a new pattern or a new formula, tree encoding plays a vital role.
So, these are the different types of encoding; each is applicable to a different kind of problem, and the chosen encoding must suit the given problem.
Fig 6
Consider the same Travelling Salesman problem with 4 cities. As shown in Fig 6, the solution space consists of the tours the salesman can carry out while moving from one city to another, while the coding space consists of the chromosomes we define for the given problem statement.
A point of the coding space may map to a feasible solution, an infeasible one, or even an illegal one. For the 4-city problem, if the coding is 1-2-3-4 we get a feasible solution; but if a number is repeated, say the coding is 1-2-2-3, then we get an infeasible or illegal solution.
At the time of encoding one must ensure that all chromosomes come from the feasible region. If, under some circumstances, we need to accommodate some infeasible ones, they are taken care of by assigning them low fitness values.
1) There should be a one-to-one mapping between the coding space and the solution space. One chromosome should not represent more than one solution (a one-to-n mapping), and conversely several chromosomes should not represent the same solution (an n-to-one mapping).
2) While doing the encoding, one must ensure that each chromosome represents a solution to the problem.
So, for instance, suppose we have a 12-bit string in which each variable has 4 bits: the first 4 bits are for x1, the next 4 for x2 and the last 4 for x3, so each variable takes a value in the range 0 to 15. Each binary digit is decided by tossing a coin: a head is coded as 1 and a tail as 0. So, for one chromosome, let us say, we get this result –
1 0 0 1 0 1 1 0 1 0 0 0
Let us assume that, likewise, we have a total of 10 chromosomes, so we have to toss the coin 12 × 10 = 120 times. The chromosomes along with their fitness values are as listed in the table below –
Chromosomes Fitness value
1 0 0 0 1 0 1 1 1 0 0 1 0.11
0 1 0 0 1 1 0 0 0 0 1 0 0.26
0 1 1 0 1 0 1 0 1 1 1 0 0.09
0 1 0 0 1 0 0 0 1 0 1 0 0.87
0 1 1 0 1 0 1 0 1 1 1 0 0.32
1 1 0 0 1 0 0 0 1 0 1 0 0.23
1 1 1 0 0 1 1 0 1 1 1 1 0.98
1 1 0 0 0 1 0 0 1 0 1 1 0.08
1 1 1 0 0 1 1 0 1 1 1 1 0.36
1 1 0 0 0 1 0 0 1 0 1 1 0.14
Chromosome and their Fitness Value Table
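The coin-tossing initialization above can be sketched as follows; decoding the example chromosome 1001 0110 1000 gives x1 = 9, x2 = 6, x3 = 8.

```python
# Random initialization by "coin tossing": each of the 12 bits is set by a
# fair coin (head = 1, tail = 0), and the three 4-bit genes decode to
# integers x1, x2, x3 in the range 0..15.
import random

def random_chromosome(rng, n_bits=12):
    return [1 if rng.random() < 0.5 else 0 for _ in range(n_bits)]  # one toss per bit

def decode(chrom):
    # split into 4-bit genes and convert each from binary
    genes = [chrom[i:i + 4] for i in range(0, len(chrom), 4)]
    return [int("".join(map(str, g)), 2) for g in genes]

rng = random.Random(42)
pop = [random_chromosome(rng) for _ in range(10)]   # 10 chromosomes = 120 tosses

x1, x2, x3 = decode([1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0])
print(x1, x2, x3)  # -> 9 6 8
```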
There are various methods available for selecting individuals from the population based on the fitness values assigned earlier by the evaluation process.
Fig 7 – Roulette wheel with regions of 10%, 15%, 20% and 55%
The Roulette Wheel selection method depends upon the fitness values assigned to the chromosomes: each region of the wheel is directly proportional to a fitness value. It works by rotating a wheel; wherever it stops, a pointer indicates the region that gets selected. So the probability of a chromosome with a higher fitness value being selected is greater.
The pie chart shown in Fig 7 has four regions, each with a probability of selection depending on its fitness value. We have four chromosomes, and their chances of getting selected are 20%, 10%, 15% and 55% respectively. So whenever the roulette wheel rotates, the 55% chromosome is the most likely to be selected, and the chromosomes that are not selected are not entertained in the further evaluation process.
A modified version of Roulette Wheel selection uses scaling, which often works better than selection without scaling: we assign a particular scale to the chromosomes, compute their fitness values, and then rotate the wheel; when it stops, the pointer selects a particular value.
Chromosomes can also be selected by stochastic sampling, in which the survival probability is proportional to the fitness value. This comprises deterministic as well as probabilistic sampling; sometimes a combination of the two is also considered at the time of selection, with various sampling sizes.
Fig 8
Here the chance of a particular individual being selected is the ratio of its fitness value to the sum of the fitness values of all the individuals taking part in the evaluation. So the selection probability of the k-th individual is pk = fk / Σj fj.
Example –
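A sketch of roulette wheel selection under pk = fk / Σ f, using the 20%, 10%, 15% and 55% shares of Fig 7 as the fitness proportions; counting many spins shows the selections landing roughly in those proportions.

```python
# Fitness-proportionate (roulette wheel) selection: individual k is chosen
# with probability p_k = f_k / sum(f).
import random

def roulette_select(fitnesses, rng):
    total = sum(fitnesses)
    spin = rng.random() * total            # where the wheel pointer stops
    running = 0.0
    for k, f in enumerate(fitnesses):
        running += f
        if spin <= running:
            return k
    return len(fitnesses) - 1              # guard against float round-off

rng = random.Random(0)
fitnesses = [0.20, 0.10, 0.15, 0.55]       # the four regions of Fig 7
counts = [0, 0, 0, 0]
for _ in range(10000):
    counts[roulette_select(fitnesses, rng)] += 1
print(counts)  # roughly [2000, 1000, 1500, 5500]
```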
We have now seen the above methods for the selection process. In all of them, scaling is a step where we modify the fitness values based on certain criteria; and in tournament selection we pick two (or more) individuals and hold a tournament between them, keeping the better one.
4) Elitism
This method works on the principle of sending some individuals directly to the next generation: the parents with the highest fitness values are copied straight into the next generation, without undergoing any of the procedures seen above. There is no crossover and no mutation in this selection procedure.
In all these selection procedures, one has to maintain a balance between population diversity and selection pressure in order to drive the solution towards the optimal value.
Fig 10
There are various types of crossover operation:
1) Single-Point Cross-Over –
Chromosome 1 - 11011|10110001
Chromosome 2 - 10110|01111110
Offspring 1 - 10110|10110001
Offspring 2 - 11011|01111110
Both parents are cut at the same point and the tails are exchanged.
2) Two-Point Cross-Over –
Chromosome 1 - 1 1 0 1 1 | 1 0 1 1 | 0 0 0 1
Here two cut points are chosen and the middle segment is exchanged between the parents.
3) Arithmetic Cross-Over –
Chromosome 1 - 11011|10110001
Here the offspring is produced by an arithmetic or bitwise combination of the two parents rather than by exchanging segments.
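Single-point crossover on the bit strings above can be sketched as follows; the cut is after the fifth bit, and the second offspring is the complementary recombination.

```python
# Single-point crossover: both parents are cut at the same point and
# their tails are exchanged, producing two offspring.

def single_point_crossover(p1, p2, cut):
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

parent1 = "1101110110001"   # 11011|10110001 from the example above
parent2 = "1011001111110"   # 10110|01111110

off1, off2 = single_point_crossover(parent1, parent2, 5)
print(off1)  # -> 1101101111110 (parent 1's head with parent 2's tail)
print(off2)  # -> 1011010110001 (parent 2's head with parent 1's tail)
```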
Fig 11
Fig 12
Fig 13
As shown in Figure 11, in the first generation there is a carrier mother and a carrier father, each with some symptoms associated with them. By the process of crossover, the results can be: a child fully affected with the symptom, inheriting it partially from both parents; a child with no symptoms, i.e. a healthy child; and children partially affected, with symptoms inherited from both parents. Figure 12 shows an example of crossover of parental characteristics, especially body colour, from one generation to another, and the same kind of thing is seen in Figure 13 – crossover in terms of body colour, skills, intelligence, behaviour, etc.
B) Mutation Operation – Mutation involves a drastic change, applied to a particular chromosome, to produce an offspring. A tremendous amount of change can thus appear in the solution space, and as a result the population diversity suddenly increases to a large extent.
The main difference between the crossover and mutation operators is this: suppose two parents have fitness values 10 and 20 respectively; with crossover, the offspring they produce will typically have fitness values ranging between 10 and 20. With the mutation operator, consider the following example –
Parent 1 – binary encoding 1-0-1-0 = 1×2³ + 0×2² + 1×2¹ + 0×2⁰ = 10.
1 0 1 0
0 0 1 0
Here the first bit of Parent 1 has changed from 1 to 0 in Offspring 1, giving 0-0-1-0 = 2; so we see a drastic change from 10 to 2 by the process of mutation.
Parent 1-
1 0 0 1 0 1 0 1 1 0 1 0
Probability of Mutation-
0.58 0.67 0.58 0.59 0.01 0.58 0.98 0.76 0.02 0.76 0.58 0.39
Offspring 1-
1 0 0 1 1 1 0 1 0 0 1 0
As seen in the table above, there are 12 bits in the string, so the probability of mutation per bit is taken as 1/12 = 0.0833. The bits whose random values are less than or equal to 0.0833 participate in the mutation process. Looking at the table, the fifth and ninth genes have probabilities 0.01 and 0.02, which are less than 0.0833; hence these two bits take part in mutation, and the remaining bits are kept as they are. There are various methods of mutation; the one applied here is flipping a bit from 0 to 1 or from 1 to 0.
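The per-bit mutation rule in the table above can be sketched as follows, reproducing the flip of the fifth and ninth bits.

```python
# Bitwise mutation: each bit gets a random number, and bits whose number
# falls at or below the mutation probability 1/12 are flipped.

def mutate(bits, probs, p_m):
    return [1 - b if p <= p_m else b       # flip only where prob <= p_m
            for b, p in zip(bits, probs)]

parent = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0]
probs  = [0.58, 0.67, 0.58, 0.59, 0.01, 0.58,
          0.98, 0.76, 0.02, 0.76, 0.58, 0.39]   # per-bit random numbers from the table

offspring = mutate(parent, probs, 1 / 12)
print(offspring)  # -> [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0]
```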
Fig 14 – Positive Mutation Fig 15 – Negative Mutation
Figures 14 and 15 illustrate positive mutation and negative mutation. Einstein's parents were not as intelligent as he was, so there is a drastic change in IQ level between the parents and the offspring. Fig 15 shows negative symptoms transferred from parents to offspring by the process of mutation.
Offspring Chromosome
Fig 16 – Generating Offspring
The figure above shows how offspring are generated by applying crossover and mutation. The same steps are followed in order to create offspring: initialize the population, select some individuals using the techniques mentioned above, generate new chromosomes by crossover and mutation, and finally produce the offspring.
CHAPTER 5 – SUMMARY OF GENETIC ALGORITHM
Begin
{
    Initialize Population;
    Evaluate Population;
    While (Termination Criteria Not Satisfied)
    {
        Select Parents for Reproduction;
        Perform Cross-Over and Mutation;
        Evaluate Population;
    }
}
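The pseudocode above can be turned into a runnable sketch. The "one-max" fitness (count of ones in a 12-bit string) and the particular operator choices are illustrative assumptions, not the report's problem.

```python
# A tiny GA following the pseudocode: initialize, evaluate, then repeatedly
# select parents, apply crossover and mutation, and re-evaluate.
import random

rng = random.Random(0)
N_BITS, POP_SIZE, GENS, P_MUT = 12, 20, 60, 1 / 12

def fitness(chrom):
    return sum(chrom)                       # one-max: count the ones

def select(pop):
    # tournament selection: the fitter of two random individuals
    a, b = rng.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = rng.randint(1, N_BITS - 1)        # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom):
    return [1 - b if rng.random() < P_MUT else b for b in chrom]

# Initialize and evaluate the population
pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENS):                       # while termination criteria not met
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))
```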
This flowchart summarizes the Genetic Algorithm. First initialize the population and evaluate it; then, while the termination criteria are not satisfied, select parents for reproduction, perform crossover and mutation, and evaluate the population again, so that fitter and fitter generations are produced and the solution is driven towards the optimal value.
There are some important points to be kept in mind while performing a Genetic Algorithm. They are as follows:
The welding process is broadly classified into the following two well-known groups:
1) Welding processes that use heat alone to join the two parts, for example a beam attached to a rigid member by welding.
2) Welding processes that use a combination of heat and pressure to join the two parts, which could again be a cantilever beam, loaded at its free end, attached to a rigid wall by welding.
The welding process that uses heat alone is technically termed a fusion welding process. In this method the parts to be joined are held in position and molten metal is supplied to the joint. The molten metal can come either from the parts themselves (the parent metal) or, if required, from an external filler metal supplied to the joint. Under the action of heat the joining surfaces of the two parts become plastic or even molten, and when the joint solidifies the two parts fuse into a single unit.
6.2 - PROBLEM STATEMENT
Fig 17
Coming now to the optimization problem: we are to minimize the overall cost, including set-up, material, fabrication, labour and operational costs, by varying the size of the weld and the member dimensions, which are our decision variables, as explained below.
The design constraints include limits on the shear stress, bending stress, buckling load and end deflection, which are explained in the upcoming pages of this project. All these limits are as per IS 800:2007.
6.3 - PARAMETERS INVOLVED IN PROBLEM
1) Young's modulus (E) of the beam material = 30 × 10⁶ psi
2) Shearing modulus (G) of the beam material = 12 × 10⁶ psi
3) Overhang length of the member (L) = 14 inch
4) Design shear stress of the weld (τmax) = 13600 psi
5) Design normal stress of the beam material (σmax) = 30000 psi
6) Maximum deflection (δmax) the beam may undergo = 0.25 inch
7) Load (P) applied at the end of the member = 6000 lb
Here, A1 = set-up labour cost, including fixtures for the set-up and holding of the bar during welding;
A2 = welding labour cost, including operating and maintenance expense;
A3 = material cost for both the beam and the weld.
2) Welding Labour Cost – It is assumed that the welding will be done by machine at a total cost of $10 per hour (including operating and maintenance expense), and that the machine can lay down a cubic inch of weld in 6 minutes. With Vw = weld volume, the labour cost is then
A2 = (10 $/hr) × (6 min/in³) × (1 hr / 60 min) × Vw = 1 ($/in³) × Vw
3) Material Cost – The material cost for both the beam and the weld is expressed as
A3 = C3 Vw + C4 VB
Substituting the unit costs and volumes into the material cost equation above, we get
A3 = (0.10471)(x1² x2) + (0.04811)(x3 x4 (L + x2))
4) Total Cost – The total cost is obtained by adding up the set-up labour cost, the welding labour cost and the material cost. Taking the set-up cost as constant, the objective to be minimized is
f(x) = A2 + A3 = 1.10471 x1² x2 + 0.04811 x3 x4 (14 + x2)
To complete the model, it is necessary to define the weld stress, bar bending stress, bar deflection and the buckling load.
1) The primary shear stress acting over the weld throat area is
T1 = 1 / (√2 · x1 · x2)
and the secondary (torsional) stress is
T2 = (L + x2/2) · R / [√2 · x1 · x3 · (x2²/3 + (x1 + x3)²)], where R = √(x2² + (x1 + x3)²)
The combined weld shear stress is then
τ(x) = P · √(T1² + T2² + 2·T1·T2·x2 / R)
Thus, with the help of all these equations, we can calculate the quantities needed to find the optimal cost of the weld.
The other two important properties, which will be useful in calculating the various constraints, are the buckling load
Pc(x) = [4.013 · E · √(x3² · x4⁶ / 36) / L²] · (1 − (x3 / (2L)) · √(E / (4G)))
and the end deflection of the beam
δ(x) = 4 · P · L³ / (E · x3³ · x4)
Substituting the values of E, L and G into the buckling load equation, we finally get Pc(x) = 64746.022 · (1 − 0.028234·x3) · x3 · x4³.
1) The shear stress at the beam support location cannot exceed the maximum allowable for the material:
τ(x) ≤ 13600
2) The maximum bending stress at the beam support location cannot exceed the maximum yield strength of the material:
504000 / (x4 · x3²) ≤ 30000
3) The load applied at the end of the beam must be less than the buckling load:
6000 ≤ 64746.022 · (1 − 0.028234·x3) · x3 · x4³
4) The end deflection of the beam cannot exceed the maximum allowable deflection:
δ(x) ≤ 0.25
The bounds on the design variables are:
0.125 ≤ x1 ≤ 5
0.1 ≤ x2 ≤ 10
0.1 ≤ x3 ≤ 10
0.125 ≤ x4 ≤ 5
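The objective and the constraints above can be collected into a short Python sketch (the report's own listings are in MATLAB). The sample point is an arbitrary feasible-looking guess, not the report's optimum.

```python
# Welded beam objective and inequality constraints (each must be <= 0).
# x = [x1, x2, x3, x4] are the weld size, weld length, bar height, bar width.
import math

P, L = 6000.0, 14.0

def cost(x):
    return 1.10471 * x[0]**2 * x[1] + 0.04811 * x[2] * x[3] * (L + x[1])

def constraints(x):
    """Return the three stress/buckling constraint values; each must be <= 0."""
    t1 = 1.0 / (math.sqrt(2) * x[0] * x[1])                 # primary shear term
    R = math.sqrt(x[1]**2 + (x[0] + x[2])**2)
    t2 = (L + x[1] / 2) * R / (math.sqrt(2) * x[0] * x[2]
                               * (x[1]**2 / 3 + (x[0] + x[2])**2))
    tau = P * math.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * x[1] / R)  # weld shear stress
    sigma = 504000.0 / (x[3] * x[2]**2)                     # bending stress
    p_c = 64746.022 * (1 - 0.028234 * x[2]) * x[2] * x[3]**3  # buckling load
    return [tau - 13600.0, sigma - 30000.0, P - p_c]

x0 = [0.25, 6.0, 8.0, 0.3]   # an assumed sample design, not the optimum
print(round(cost(x0), 3), [round(g, 1) for g in constraints(x0)])
```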
The Basic Steps involved while Coding the Algorithm are as follows:
A) Fitness Function :
function y = ronak2(x)
y = (1.10471)*(x(1)^2)*x(2) + (0.04811)*(x(3)*x(4)*(14+x(2)));
end
B) Constraints Equation:
function [c,c_eq] = pooja2(x)
% Inequality constraints, each expressed in the form c <= 0
t1 = 1/(sqrt(2)*x(1)*x(2));                              % primary shear term
R  = sqrt(x(2)^2 + (x(1)+x(3))^2);
t2 = ((14 + x(2)/2)*R)/(sqrt(2)*x(1)*x(3)*(x(2)^2/3 + (x(1)+x(3))^2));
tau = 6000*sqrt(t1^2 + t2^2 + 2*t1*t2*x(2)/R);           % weld shear stress
c = [tau - 13600;                                        % shear stress limit
     504000/(x(4)*x(3)^2) - 30000;                       % bending stress limit
     6000 - 64746.022*(1 - 0.028234*x(3))*x(3)*x(4)^3];  % buckling load limit
c_eq = [];
end
function F = objval(x)
% Objective (total cost) evaluated row-wise over a population matrix x
F = 1.10471*x(:,1).^2.*x(:,2) + 0.04811*x(:,3).*x(:,4).*(14.0 + x(:,2));
end
function z = pickindex(x,k)
z = objval(x); % evaluate objectives
z = z(k); % return objective k
end
B) MATLAB OUTPUTS
C) CONCLUSIONS
X1 = 0.2067
X2 = 5.0491
X3 = 8.0306
X4 = 0.2092
Cost = 0.00073721
(1) A1–A4, (2) A5–A12, (3) A13–A16, (4) A17–A18, (5) A19–A22, (6) A23–A30,
(7) A31–A34, (8) A35–A36, (9) A37–A40, (10) A41–A48, (11) A49–A52,
(12) A53–A54, (13) A55–A58, (14) A59–A66, (15) A67–A70, (16) A71–A72.
The discrete variables are selected from the following set:
D = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5,
1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1,
3.2} (in²)
Number of Nodes – 20
Number of Elements – 72
The following table shows the node numbers and the coordinates corresponding to them.
Node No. X Y Z
1 0 0 275.59
2 273.26 0 196.8
3 236.65 136.63 196.8
4 136.65 236.65 196.8
5 0 273.26 196.8
6 492.12 0 118.11
7 475.35 127.37 118.11
8 426.188 246.06 118.11
9 347.198 347.981 118.11
10 246.06 426.188 118.11
11 127.37 475.35 118.11
12 -475.35 127.37 118.11
13 426.188 -246.06 118.11
14 475.35 -127.37 118.11
15 625.59 0 0
16 541.777 312.798 0
17 312.795 541.777 0
18 0 625.59 0
19 -312.8 541.777 0
20 312.795 -541.78 0
E = Modulus of Elasticity; A = Area of Cross-Section
ar(1,1:4)=Q(1);
ar(1,5:12)=Q(2);
ar(1,13:16)=Q(3);
ar(1,17:18)=Q(4);
ar(1,19:22)=Q(5);
ar(1,23:30)=Q(6);
ar(1,31:34)=Q(7);
ar(1,35:36)=Q(8);
ar(1,37:40)=Q(9);
ar(1,41:48)=Q(10);
ar(1,49:52)=Q(11);
ar(1,53:54)=Q(12);
ar(1,55:58)=Q(13);
ar(1,59:66)=Q(14);
ar(1,67:70)=Q(15);
ar(1,71:72)=Q(16);
fixed_dof=[49,50,51,52,53,54,55,56,57,58,59,60]; % constrained DOF's
L=[1 2 3 4 5 1 2 3 3 4 1 4 8 6 5 5 5 8 5 6 7 8 9 5 6 7 7 8 5 8 12 10 9 9 9 ...
12 9 10 11 12 13 9 10 11 11 12 9 12 16 14 13 13 13 16 13 14 15 16 17 13 14 ...
15 15 16 13 16 20 18 17 17 17 20; ...
5 6 7 8 2 6 7 6 8 7 8 5 7 7 6 8 7 6 9 10 ...
11 12 6 10 11 10 12 11 12 9 11 11 10 12 11 10 13 14 15 16 10 14 15 14 16 15 ...
16 13 15 15 14 16 15 14 17 18 19 20 14 18 19 18 20 19 20 17 19 19 18 20 19 ...
18]; % element connecting matrix (2 x 72: start and end node of each member)
coord=[0 120 120 0 0 120 120 0 0 120 120 0 0 120 120 0 0 120 120 0; ...
0 0 0 0 60 60 60 60 120 120 120 120 180 180 180 180 240 240 240 240; ...
0 0 120 120 0 0 120 120 0 0 120 120 0 0 120 120 0 0 120 120]; % coordinate matrix for the 20 nodes, in m
load=[0;0;0;5;5;-5;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0]; % global load vector at the free DOFs, in N
E=1e4; % Young's modulus in Pa
den=0.1; % density of material in kg/m^3
%% Calculation
%getting the ID matrix from the information given
ID=zeros(ndof,n);
r=size(ID);
c=r(1,2);
r=r(1,1);
number=1;
for c1=1:c
for r1=1:r
ID(r1,c1)=number;
number=number+1;
end
end
fixed_dof=sort(fixed_dof);
fixed1=fixed_dof;
node=1:1:n*ndof;
free_dof=node;
for i=1:length(fixed_dof)
free_dof(fixed1(i))=[];
fixed1=fixed1-1;
end
clear fixed1;
free_dof=sort(free_dof);
free_dof=free_dof';
for e=1:ne
for i=1:nen
for a=1:ndof
p=ndof*(i-1)+a;
LM(p,e)=ID(a,L(i,e));
end
end
end
K=zeros(n*ndof); % global stiffness matrix
lambda_vec=[];
h_vec=[];
% assembly
for e=1:ne
localcoord=[coord(:,L(1,e)) coord(:,L(2,e))]; % coordinate matrix of each element
h=0;
for count=1:ndof %length of member
temp=(localcoord(count,2)-localcoord(count,1))^2;
h=h+temp;
end
h=sqrt(h);
h_vec=[h_vec;h];
lambda=[];
for count=1:ndof % direction cosines in the lambda matrix, like cos x
lambda=[lambda;(localcoord(count,2)-localcoord(count,1))/h];
% first entry is lambda x, second is lambda y, and so on
end
lambda_vec=[lambda_vec lambda];
A=lambda*lambda';
k=(E*ar(e))/h*[A -A;-A A];
for p=1:nee
P=LM(p,e);
for q=1:nee
Q=LM(q,e);
K(P,Q)=K(P,Q)+k(p,q); % global stiffness matrix
end
end
end
lambda_vec=lambda_vec';
K1=K;
%applying boundary conditions
for counter=1:length(fixed_dof)
K1(fixed_dof(counter),:)=[];
K1(:,fixed_dof(counter))=[];
fixed_dof=fixed_dof-1;
end
d=K1\load;
disp_vec=zeros(n*ndof,1);
for count=1:length(free_dof)
disp_vec(free_dof(count))=d(count);
end
%% Weight
TW=0;
for i=1:ne
W=den*ar(i)*h_vec(i);
TW=TW+W; % weight in Kg
end
%% post processing
D_big=[];
count=1;
for i=1:n
D=[];
for j=1:ndof
t=disp_vec(count);
count=count+1;
D=[D;t];
end
D_big=[D_big D];
end
axial_vec=[];
force_vec=[];
for i=1:ne
d1=lambda_vec(i,:)*D_big(:,L(1,i));
d2=lambda_vec(i,:)*D_big(:,L(2,i));
axial=(E/h_vec(i))*(d2-d1);
axial_vec=[axial_vec;axial];
force=ar(i)*axial_vec(i);
force_vec=[force_vec;force];
end
axial_vec;
% R=sqrt((x(2)^2)/4+((x(1)+x(3))/2)^2);
% P=6000;
% L=14;
% E=30e6;
% G=12e6;
% Px=((4.013*E*sqrt((x(3)^2*x(4)^6)/36))/(L^2))*(1-
(x(3)/(2*L))*sqrt(E/(4*G)));
% tap=P/(sqrt(2)*x(1)*x(2));
% Q=P*(L+(x(2)/2));
% J=2*(sqrt(2)*x(1)*x(2)*((x(2)^2/12)+((x(1)+x(3))/2)^2));
% tapp=(Q*R)/J;
% ta=sqrt(tap^2+((2*tap*tapp*x(2))/(2*R))+tapp^2);
% sigma=(6*P*L)/(x(4)*(x(3)^2));
% delta=(4*P*(L^3))/(E*(x(3)^3)*x(4));
% c=[ta-13600;
% sigma-30000;
% x(1)-x(4);
% 0.10471*x(1)^2+0.04811*x(3)*x(4)*(14+x(2))-5;
% 0.125-x(1);
% delta-0.25;
% Get Costs
Costs = [pop.Cost];
end
if ~exist('replacement','var')
replacement = false;
end
L = zeros(q,1);
for i=1:q
L(i) = randsample(numel(P), 1, true, P);
if ~replacement
P(L(i)) = 0;
end
end
end
end
%% Problem Definition
% Objective Function
CostFunction = @(Q)Sphere(Q);
DS=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 ...
1.9 2 2.1 2.2 2.3 2.4 2.5 2.6 2.8 3 3.2 3.4 3.5 3.6 3.7 3.8 3.9 4 4.1 4.2 ...
4.3 4.4 4.5 4.6 4.7 4.8 4.9 5 ...
5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6 6.1 6.2 6.3 6.4]; % in sq.in
L=[1 2 3 4 5 1 2 3 3 4 1 4 8 6 5 5 5 8 5 6 7 8 9 5 6 7 7 8 5 8 12 10 9 9 9 ...
12 9 10 11 12 13 9 10 11 11 12 9 12 16 14 13 13 13 16 13 14 15 16 17 13 14 ...
15 15 16 13 16 20 18 17 17 17 20; ...
5 6 7 8 2 6 7 6 8 7 8 5 7 7 6 8 7 6 9 10 ...
11 12 6 10 11 10 12 11 12 9 11 11 10 12 11 10 13 14 15 16 10 14 15 14 16 15 ...
16 13 15 15 14 16 15 14 17 18 19 20 14 18 19 18 20 19 20 17 19 19 18 20 19 ...
18]; % element connecting matrix
%% GA Parameters
% GA Parameters
ga_params.q = max(round(0.3*nPopMemeplex),2); % Number of Parents
ga_params.alpha = 3; % Number of Offsprings
ga_params.beta = 5; % Maximum Number of Iterations
ga_params.sigma = 2; % Step Size
ga_params.CostFunction = CostFunction;
ga_params.VarMin = VarMin;
ga_params.VarMax = VarMax;
%% Initialization
% Sort Population
pop = SortPopulation(pop);
%% GA Main Loop
for it = 1:MaxIt
ga_params.BestSol = BestSol;
% Run GA
Memeplex{j} = RunGA(Memeplex{j}, ga_params);
% Sort Population
pop = SortPopulation(pop);
end
%% Results
figure;
plot(BestCosts, 'LineWidth', 2);
% semilogy(BestCosts, 'LineWidth', 2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
pop(1)
%% GA Main Loop
for it = 1:beta
% Select Parents
L = RandSample(P,q);
B = pop(L);
% Generate Offsprings
for k=1:alpha
ImprovementStep2 = false; % reset the improvement/censorship flags for this offspring
Censorship = false;
% Sort Population
[B, SortOrder] = SortPopulation(B);
L = L(SortOrder);
% Improvement Step 1
NewSol1 = B(end);
Step = sigma*rand(VarSize).*(B(1).Position-B(end).Position);
NewSol1.Position = B(end).Position + Step;
if IsInRange(NewSol1.Position, VarMin, VarMax)
NewSol1.Cost = CostFunction(NewSol1.Position);
if NewSol1.Cost<B(end).Cost
B(end) = NewSol1;
else
ImprovementStep2 = true;
end
else
ImprovementStep2 = true;
end
% Improvement Step 2
if ImprovementStep2
NewSol2 = B(end);
Step = sigma*rand(VarSize).*(BestSol.Position-B(end).Position);
NewSol2.Position = B(end).Position + Step;
if IsInRange(NewSol2.Position, VarMin, VarMax)
NewSol2.Cost = CostFunction(NewSol2.Position);
if NewSol2.Cost<B(end).Cost
B(end) = NewSol2;
else
Censorship = true;
end
else
Censorship = true;
end
end
% Censorship
if Censorship
B(end).Position = unifrnd(LowerBound, UpperBound);
B(end).Cost = CostFunction(B(end).Position);
end
end
end
end
7.5 – RESULTS: