
PRACTICAL FILE

ON

SOFT COMPUTING

B.E. (Computer Science & Engineering)

Submitted to: GGCT, Jabalpur

Submitted by: Brijesh Kumar Singh, VIII Sem

Index

(Columns: S.No. | List of Experiments | Date | Sign | Remarks)

1. Study of Biological Neuron & Artificial Neural Networks.
2. Study of various activation functions & their MATLAB implementations.
3. WAP in C++ to implement Perceptron Training algorithm.
4. WAP in C++ to implement Delta learning rule.
5. Write an algorithm for Adaline N/W with flowchart.
6. Write an algorithm for Madaline N/W with flowchart.
7. WAP in C++ to implement Error Back Propagation Algorithm.
8. Study of Genetic Algorithm.
9. Write a MATLAB program to find union, intersection and complement of fuzzy sets.
10. Write a MATLAB program for maximizing f(x) = x^2 using GA.

Object: 1

Study of Biological Neuron & write about Artificial Neural Networks.

Biological Neuron

Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were represented as models of biological networks, mapped onto conceptual components of circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron. By definition, "neurons are basic signaling units of the nervous system of a living being in which each neuron is a discrete cell whose several processes arise from its cell body".

The biological neuron has four main regions to its structure. The cell body, or soma, has two offshoots from it: the dendrites, and the axon, which ends in pre-synaptic terminals. The cell body is the heart of the cell. It contains the nucleolus and maintains protein synthesis. A neuron has many dendrites, which look like a tree structure and receive signals from other neurons. A single neuron usually has one axon, which expands off from a part of the cell body. This is called the axon hillock. The axon's main purpose is to conduct electrical signals generated at the axon hillock down its length. These signals are called action potentials. The other end of the axon may split into several branches, which end in pre-synaptic terminals. The electrical signals (action potentials) that the neurons use to convey the information of the brain are all identical. The brain can determine which type of information is being received based on the path of the signal. The brain analyzes all patterns of signals sent, and from that information it interprets the type of information received. The myelin is a fatty tissue that insulates the axon. The non-insulated parts of the axon area are called Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated. This ensures that the signal traveling down the axon is fast and constant. The synapse is the area of contact between two neurons. The neurons do not physically touch, because they are separated by a cleft. The electric signals are sent through chemical interaction. The neuron sending the signal is called the pre-synaptic cell, and the neuron receiving the signal is called the post-synaptic cell. The electrical signals are generated by the membrane potential, which is based on the differences in concentration of sodium and potassium ions inside and outside the cell membrane. Biological neurons can be classified by their function or by the quantity of processes they carry out. When they are classified by processes, they fall into three categories: unipolar neurons, bipolar neurons and multipolar neurons.

Unipolar neurons have a single process. Their dendrites and axon are located on the same stem. These neurons are found in invertebrates.

Bipolar neurons have two processes. Their dendrites and axon have two separated processes too.

Multipolar neurons: These are commonly found in mammals. Some examples of these neurons are spinal motor neurons, pyramidal cells and Purkinje cells.

When biological neurons are classified by function, they fall into three categories. The first group is sensory neurons. These neurons provide all information for perception and motor coordination. The second group provides information to muscles and glands; these are called motor neurons. The last group, the interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, is usually found in the brain and connects different parts of it. The other group, called local interneurons, is only used in local circuits.

Artificial Neural Network

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform; even so, a software implementation of a neural network can be made, with its advantages and disadvantages.

Advantages:

- A neural network can perform tasks that a linear program cannot.
- When an element of the neural network fails, it can continue without any problem, owing to its parallel nature.
- A neural network learns and does not need to be reprogrammed.
- It can be implemented in any application without any problem.

Disadvantages:

- The neural network needs training to operate.
- The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.
- Large neural networks require high processing time.

Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple. Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two

important functions: as pattern classifiers and as nonlinear adaptive filters. We will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.

Definitions and Style of Computation

An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network, and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is composed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable. It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if they are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice. This operating procedure should be contrasted with the traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So, it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the

solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker, 10 years ago, said that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and by their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.

Neural Network Topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:

Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.

Recurrent neural networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the change of the activation values of the output neurons is significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Classical examples of feed-forward neural networks are the Perceptron and Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).

Training of Artificial Neural Networks

A neural network has to be configured such that the application of a set of inputs produces (either 'directly' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. We can categorize the learning situations into two distinct sorts. These are:

Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).

Unsupervised learning or Self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike in the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement Learning: This type of learning may be considered an intermediate form of the above two types of learning. Here the learning machine does some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response, and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.

Object: 2

Study of various activation functions & their MATLAB implementations.

Activation Functions

The activation function acts as a squashing function, such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by φ(.). First, there is the Threshold Function, which takes on a value of 0 if the summed input is less than a certain threshold value (v), and the value 1 if the summed input is greater than or equal to the threshold value.

Secondly, there is the Piecewise-Linear function. This function again can take on the values of 0 or 1, but can also take on values in between, depending on the amplification factor in a certain region of linear operation.

Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of a sigmoid-type function is the hyperbolic tangent function.
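The MATLAB implementations themselves are not reproduced in this file. Since the other programs here are written in C++, the following is a minimal C++ sketch of the same three functions; the threshold value v, the amplification factor a, and the sample inputs are illustrative choices, not values prescribed by the experiment.

#include <cmath>
#include <iostream>

// Threshold (Heaviside) function: 0 below the threshold v, 1 otherwise.
double threshold(double x, double v) { return (x >= v) ? 1.0 : 0.0; }

// Piecewise-linear function: linear with slope a around 0, clipped to [0, 1].
double piecewiseLinear(double x, double a) {
    double y = 0.5 + a * x;            // amplification factor a in the linear region
    if (y < 0.0) return 0.0;
    if (y > 1.0) return 1.0;
    return y;
}

// Logistic sigmoid: smooth squashing onto (0, 1).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Hyperbolic tangent: sigmoid-type function onto (-1, 1).
double tanhAct(double x) { return std::tanh(x); }

int main() {
    for (double x = -2.0; x <= 2.0; x += 1.0) {
        std::cout << x << '\t' << threshold(x, 0.0) << '\t'
                  << piecewiseLinear(x, 0.5) << '\t'
                  << sigmoid(x) << '\t' << tanhAct(x) << '\n';
    }
    return 0;
}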

The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units and different neural network topologies, and then consider learning strategies as a basis for an adaptive system.

Object: 3

Explain Perceptron Training Algorithm.

Perceptron

The perceptron is an algorithm for supervised classification of an input into one of two possible outputs. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input. The learning algorithm for perceptrons is an online algorithm, in that it processes elements in the training set one at a time. The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt.[1] In the context of artificial neural networks, the perceptron algorithm is also termed the single-layer perceptron, to distinguish it from the case of a multilayer perceptron, which is a more complicated neural network. As a linear classifier, the (single-layer) perceptron is the simplest kind of feedforward neural network. The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

f(x) = 1 if w . x + b > 0, and 0 otherwise,

where w is a vector of real-valued weights, w . x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value. The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.

Perceptron Training Algorithm

Below is an example of a learning algorithm for a (single-layer) perceptron. For multilayer perceptrons, where a hidden layer exists, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is non-linear and differentiable, although the one below will work as well. When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation.

We first define some variables:

- y = f(z) denotes the output from the perceptron for an input vector z.
- b is the bias term, which in the example below we take to be 0.
- D = {(x_1, d_1), ..., (x_s, d_s)} is the training set of s samples, where:
  - x_j is the n-dimensional input vector, and
  - d_j is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

- x_j,i is the value of the i-th node of the j-th training input vector.
- x_j,0 = 1.

To represent the weights:

- w_i is the i-th value in the weight vector, to be multiplied by the value of the i-th input node.

An extra dimension, with index 0, can be added to all input vectors, with x_j,0 = 1, in which case w_0 replaces the bias term. To show the time-dependence of w, we use:

- w_i(t), the weight i at time t.
- α is the learning rate, where 0 < α <= 1.

Too high a learning rate makes the perceptron periodically oscillate around the solution unless additional steps are taken.

The appropriate weights are applied to the inputs, and the resulting weighted sum is passed to a function that produces the output y.

Learning algorithm steps:

1. Initialise the weights and the threshold. Note that weights may be initialised by setting each weight w_i(0) to 0 or to a small random value. In the example below, we choose the former.

2. For each sample j in our training set D, perform the following steps over the input x_j and desired output d_j:

2a. Calculate the actual output: y_j(t) = f[w(t) . x_j].

2b. Adapt the weights: w_i(t+1) = w_i(t) + α (d_j - y_j(t)) x_j,i, for all nodes 0 <= i <= n.

Step 2 is repeated until the iteration error is less than a user-specified error threshold, or a predetermined number of iterations have been completed. Note that the algorithm adapts the weights immediately after steps 2a and 2b are applied to a pair in the training set, rather than waiting until all pairs in the training set have undergone these steps.
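The experiment asks for a C++ program, but none appears in this write-up. Below is a minimal sketch of the algorithm above, assuming the bias is folded into w_0 (so x_j,0 = 1), a learning rate of 1, and the logical AND function as a hypothetical training set.

#include <iostream>

int main() {
    // Training set: logical AND, with x[0] = 1 folded in so w[0] acts as the bias.
    const int s = 4, n = 3;
    double x[s][n] = { {1,0,0}, {1,0,1}, {1,1,0}, {1,1,1} };
    double d[s]    = { 0, 0, 0, 1 };
    double w[n]    = { 0, 0, 0 };          // weights initialised to zero
    const double alpha = 1.0;              // learning rate

    for (int epoch = 0; epoch < 100; ++epoch) {
        int errors = 0;
        for (int j = 0; j < s; ++j) {
            double net = 0;
            for (int i = 0; i < n; ++i) net += w[i] * x[j][i];
            double y = (net > 0) ? 1 : 0;  // threshold activation
            double delta = d[j] - y;
            if (delta != 0) {
                ++errors;
                for (int i = 0; i < n; ++i)
                    w[i] += alpha * delta * x[j][i];   // w(t+1) = w(t) + a(d-y)x
            }
        }
        if (errors == 0) break;            // converged: training set is separable
    }
    std::cout << "w = [" << w[0] << ", " << w[1] << ", " << w[2] << "]\n";
    return 0;
}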

Object: 4

Write about Delta learning rule.

Delta Rule

The delta rule is a generalization of the perceptron training algorithm: it extends the technique to continuous inputs and outputs. In the perceptron training algorithm, a term delta is introduced, which is the difference between the desired (target) output T and the actual output a:

delta = (T - a)

Here, if delta = 0, the output is correct and nothing is done. If delta > 0, the output is incorrect and is 0, so add each input to its corresponding weight. If delta < 0, the output is incorrect and is 1, so subtract each input from its corresponding weight. A learning-rate coefficient η multiplies the delta × input product, to allow control of the average size of the weight changes:

Δ_i = η × delta × x_i
w_i(n+1) = w_i(n) + Δ_i

where Δ_i is the correction associated with the i-th input x_i, w_i(n+1) is the value of weight i after adjustment, and w_i(n) is the value of weight i before adjustment.

Implementation of Delta Rule

#include <iostream>
using namespace std;

int main() {
    float input[3], weight[3], d, a, del;
    const float eta = 0.1f;                // learning-rate coefficient
    const float tol = 0.001f;              // error tolerance

    for (int i = 0; i < 3; i++) {
        cout << "\ninitialize weight vector " << i << "\t";
        cin >> weight[i];
    }
    for (int i = 0; i < 3; i++) {
        cout << "\nenter input " << i << "\t";
        cin >> input[i];
    }
    cout << "\nenter the desired output\t";
    cin >> d;

    int iter = 0;
    do {
        a = 0;                              // actual output: weighted sum of inputs
        for (int i = 0; i < 3; i++) a += weight[i] * input[i];
        del = d - a;                        // delta = (T - a)
        for (int i = 0; i < 3; i++)
            weight[i] += eta * del * input[i];   // w(n+1) = w(n) + eta*delta*x
        cout << "\nvalue of delta is " << del;
        cout << "\nweights have been adjusted";
    } while ((del > tol || del < -tol) && ++iter < 1000);

    if (del <= tol && del >= -tol) cout << "\noutput is correct";
    return 0;
}
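Note that the loop above terminates only when the error falls inside the tolerance band. The single-sample iteration is stable only when the learning-rate coefficient is small relative to the squared magnitude of the input vector (roughly η < 2/|x|²), so the iteration cap guards against the cases where it diverges.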

Object: 5

Write an algorithm for Adaline N/W with flowchart.

Adaline Network

The Adaline network training algorithm is as follows:

Step 0: Weights and bias are set to some random values other than zero. Set the learning rate parameter α.
Step 1: Perform steps 2-6 while the stopping condition is false.
Step 2: Perform steps 3-5 for each bipolar training pair s:t.
Step 3: Set activations for the input units: x_i = s_i, for i = 1 to n.
Step 4: Calculate the net input to the output unit: y_in = b + Σ x_i w_i.
Step 5: Update the weights and bias, for i = 1 to n:
    w_i(new) = w_i(old) + α (t - y_in) x_i
    b(new) = b(old) + α (t - y_in)
Step 6: If the highest weight change that occurred during training is smaller than a specified tolerance, then stop the training process; else continue. This is the test for the stopping condition of the network.

Testing algorithm:

Step 0: Initialize the weights (obtained from the training algorithm).
Step 1: Perform steps 2-4 for each bipolar input vector x.
Step 2: Set the activations of the input units to x.
Step 3: Calculate the net input to the output unit: y_in = b + Σ x_i w_i.
Step 4: Apply the activation function over the net input calculated:
    y = 1 if y_in >= 0
    y = -1 if y_in < 0
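A minimal C++ sketch of the training algorithm above. The bipolar AND data, learning rate, and tolerance are illustrative assumptions; the epoch cap is added because per-sample LMS updates may never shrink below a small tolerance.

#include <cmath>
#include <iostream>

int main() {
    // Bipolar AND training pairs s:t (hypothetical example data).
    const int ns = 4, n = 2;
    double s[ns][n] = { {-1,-1}, {-1,1}, {1,-1}, {1,1} };
    double t[ns]    = { -1, -1, -1, 1 };

    double w[n] = { 0.1, 0.1 }, b = 0.1;    // small nonzero initial values (Step 0)
    const double alpha = 0.1, tol = 0.01;   // learning rate and tolerance

    int epoch = 0;
    double maxChange;
    do {                                     // Steps 1-6
        maxChange = 0;
        for (int p = 0; p < ns; ++p) {       // Steps 2-3: each training pair
            double yin = b;                  // Step 4: net input
            for (int i = 0; i < n; ++i) yin += s[p][i] * w[i];
            double e = t[p] - yin;           // Step 5: LMS update
            for (int i = 0; i < n; ++i) {
                double dw = alpha * e * s[p][i];
                w[i] += dw;
                if (std::fabs(dw) > maxChange) maxChange = std::fabs(dw);
            }
            b += alpha * e;
        }
    } while (maxChange >= tol && ++epoch < 100);   // Step 6: stopping condition

    std::cout << "w1 = " << w[0] << "  w2 = " << w[1] << "  b = " << b << "\n";
    return 0;
}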

Object: 6

Write an algorithm for Madaline N/W with flowchart.

Madaline Network

The Madaline network training algorithm is as follows:

Step 0: Initialize the weights. Also set the initial learning rate α.
Step 1: While the stopping condition is false, perform steps 2-3.
Step 2: For each bipolar training pair s:t, perform steps 3-7.
Step 3: Activate the input layer units: x_i = s_i, for i = 1 to n.
Step 4: Calculate the net input to each hidden Adaline unit:
    z_inj = b_j + Σ (i=1 to n) x_i w_ij,  for j = 1 to m
Step 5: Calculate the output of each hidden unit: z_j = f(z_inj).
Step 6: Find the output of the net:
    y_in = b_0 + Σ (j=1 to m) z_j v_j
    y = f(y_in)
Step 7: Calculate the error and update the weights:
1. If t = y, no weight updation is required.
2. If t ≠ y and t = +1, update the weights on the unit z_j whose net input is closest to 0 (zero):
    b_j(new) = b_j(old) + α (1 - z_inj)
    w_ij(new) = w_ij(old) + α (1 - z_inj) x_i
3. If t ≠ y and t = -1, update the weights on all units z_k whose net input is positive:
    w_ik(new) = w_ik(old) + α (-1 - z_ink) x_i
    b_k(new) = b_k(old) + α (-1 - z_ink)
Step 8: Test for the stopping condition.
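A compact C++ sketch of the MRI procedure above, assuming two hidden Adalines, the common fixed output unit v_j = 0.5 with b_0 = 0.5 (so the output computes a logical OR of the hidden units), and bipolar XOR pairs as hypothetical training data. MRI is a heuristic, so a fixed epoch cap is used rather than guaranteed convergence.

#include <cmath>
#include <cstdlib>
#include <iostream>

double f(double x) { return x >= 0 ? 1.0 : -1.0; }    // bipolar step

int main() {
    const int ns = 4, n = 2, m = 2;                   // samples, inputs, hidden Adalines
    double s[ns][n] = { {-1,-1}, {-1,1}, {1,-1}, {1,1} };
    double t[ns]    = { -1, 1, 1, -1 };               // bipolar XOR targets

    double w[m][n], b[m];
    for (int j = 0; j < m; ++j) {                     // Step 0: small random weights
        b[j] = 0.1 * (std::rand() % 10 - 5) / 5.0;
        for (int i = 0; i < n; ++i) w[j][i] = 0.1 * (std::rand() % 10 - 5) / 5.0;
    }
    const double alpha = 0.5;
    const double v = 0.5, b0 = 0.5;                   // fixed OR-like output unit

    for (int epoch = 0; epoch < 100; ++epoch) {
        int errors = 0;
        for (int p = 0; p < ns; ++p) {
            double zin[m], z[m];
            for (int j = 0; j < m; ++j) {             // Steps 4-5: hidden layer
                zin[j] = b[j];
                for (int i = 0; i < n; ++i) zin[j] += s[p][i] * w[j][i];
                z[j] = f(zin[j]);
            }
            double y = f(b0 + v * (z[0] + z[1]));     // Step 6: output
            if (y == t[p]) continue;                  // Step 7, case 1
            ++errors;
            if (t[p] == 1) {                          // case 2: push nearest unit up
                int j = (std::fabs(zin[0]) < std::fabs(zin[1])) ? 0 : 1;
                b[j] += alpha * (1 - zin[j]);
                for (int i = 0; i < n; ++i) w[j][i] += alpha * (1 - zin[j]) * s[p][i];
            } else {                                  // case 3: push positive units down
                for (int j = 0; j < m; ++j)
                    if (zin[j] > 0) {
                        b[j] += alpha * (-1 - zin[j]);
                        for (int i = 0; i < n; ++i) w[j][i] += alpha * (-1 - zin[j]) * s[p][i];
                    }
            }
        }
        if (errors == 0) break;                       // Step 8: stopping condition
    }
    for (int j = 0; j < m; ++j)
        std::cout << "w" << j << " = (" << w[j][0] << ", " << w[j][1]
                  << "), b = " << b[j] << "\n";
    return 0;
}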

Object: 7

Write a program to implement Error Back Propagation Algorithm.

Algorithm for Error Back Propagation

Start with randomly chosen weights;
while the MSE is unsatisfactory and computational bounds are not exceeded, do
    for each input pattern x_p and desired output vector d:
        compute the hidden node outputs x_j(1);
        compute the network output vector o;
        compute the error between o and the desired output vector d;
        modify the weights between the hidden & output nodes:
            Δw_kj(2,1) = η (d_k - o_k) o_k (1 - o_k) x_j(1)
        modify the weights between the input & hidden nodes:
            Δw_ji(1,0) = η [ Σ_k (d_k - o_k) o_k (1 - o_k) w_kj(2,1) ] x_j(1) (1 - x_j(1)) x_i(0)
    end for
end while

Program for Back Propagation Algorithm

Program Code:

#include <cmath>
#include <cstdlib>
#include <iostream>
using namespace std;

// Sigmoid activation for a 2-2-1 network trained by error back propagation.
double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main() {
    // Training patterns (XOR example) and desired outputs.
    double p[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double d[4]    = { 0, 1, 1, 0 };

    double wh[2][3], wo[3];                        // hidden and output weights (last = bias)
    for (int j = 0; j < 2; ++j)
        for (int i = 0; i < 3; ++i) wh[j][i] = (rand() % 100) / 100.0 - 0.5;
    for (int j = 0; j < 3; ++j) wo[j] = (rand() % 100) / 100.0 - 0.5;

    double eta = 0.5, mse = 1.0;
    for (int it = 0; it < 20000 && mse > 0.001; ++it) {
        mse = 0;
        for (int n = 0; n < 4; ++n) {
            // Forward pass: hidden outputs x_j(1), then network output o.
            double z[2];
            for (int j = 0; j < 2; ++j)
                z[j] = sigmoid(wh[j][0]*p[n][0] + wh[j][1]*p[n][1] + wh[j][2]);
            double o = sigmoid(wo[0]*z[0] + wo[1]*z[1] + wo[2]);
            double e = d[n] - o;
            mse += e * e;
            // Backward pass: delta terms, then weight updates.
            double delo = e * o * (1 - o);         // (d - o) o (1 - o)
            for (int j = 0; j < 2; ++j) {
                double delh = delo * wo[j] * z[j] * (1 - z[j]);
                wh[j][0] += eta * delh * p[n][0];  // input-to-hidden updates
                wh[j][1] += eta * delh * p[n][1];
                wh[j][2] += eta * delh;
            }
            wo[0] += eta * delo * z[0];            // hidden-to-output updates
            wo[1] += eta * delo * z[1];
            wo[2] += eta * delo;
        }
        mse /= 4;
    }

    cout << "final mse = " << mse << "\n";
    for (int n = 0; n < 4; ++n) {
        double z0 = sigmoid(wh[0][0]*p[n][0] + wh[0][1]*p[n][1] + wh[0][2]);
        double z1 = sigmoid(wh[1][0]*p[n][0] + wh[1][1]*p[n][1] + wh[1][2]);
        cout << p[n][0] << " " << p[n][1] << " -> "
             << sigmoid(wo[0]*z0 + wo[1]*z1 + wo[2]) << "\n";
    }
    return 0;
}

Object: 8

Study of Genetic Algorithm

Genetic Algorithm

Professor John Holland in 1975 proposed an attractive class of computational models, called Genetic Algorithms (GA), that mimic the biological evolution process for solving problems in a wide domain. The mechanisms under GA were later analyzed and explained by Goldberg, De Jong, Davis, Muehlenbein, Chakraborti, Fogel, Vose and many others. Genetic Algorithms have three major applications, namely intelligent search, optimization and machine learning. Currently, Genetic Algorithms are used along with neural networks and fuzzy logic for solving more complex problems. Because of their joint usage in many problems, these together are often referred to by a generic name: "soft computing". A Genetic Algorithm operates through a simple cycle of stages:

i) creation of a "population" of strings,
ii) evaluation of each string,
iii) selection of the best strings, and
iv) genetic manipulation to create a new population of strings.

The cycle of a Genetic Algorithm is presented below. Each cycle in a Genetic Algorithm produces a new generation of possible solutions for a given problem. In the first phase, an initial population, describing representatives of the potential solution, is created to initiate the search process. The elements of the population are encoded into bit-strings, called chromosomes. The performance of the strings, often called fitness, is then evaluated with the help of some functions, representing the constraints of the problem. Depending on the fitness of the chromosomes, they are selected for a subsequent genetic manipulation process. It should be noted that the selection process is mainly responsible for assuring survival of the best-fit individuals. After selection of the population strings is over, the genetic manipulation process, consisting of two steps, is carried out. In the first step, the crossover operation, which recombines the bits (genes) of each two selected strings (chromosomes), is executed. Various types of crossover operators are found in the literature. The single-point and two-point crossover operations are illustrated below. The crossover points of any two chromosomes are selected randomly. The second step in the genetic manipulation process is termed mutation, where the bits at one or more randomly selected positions of the chromosomes are altered. The mutation process helps to overcome trapping at local maxima. The offspring produced by the genetic manipulation process form the next population to be evaluated.

Fig.: Mutation of a chromosome at the 5th bit position.

Example: The Genetic Algorithm cycle is illustrated in this example for maximizing the function f(x) = x^2 in the interval 0 <= x <= 31. In this example the fitness function is f(x) itself. The larger the functional value, the better the fitness of the string. We start with 4 initial strings. The fitness values of the strings and the percentage fitness of the total are estimated in Table A. Since the fitness of the second string is largest, we select 2 copies of the second string and one each of the first and fourth strings in the mating pool. The selection of the partners in the mating pool is also done randomly. Here, in Table B, we selected the partner of string 1 to be the 2nd string and the partner of the 4th string to be the 2nd string. The crossover points for the first-second and second-fourth strings have been selected after the 0th and 2nd bit positions respectively in Table B. The second generation of the population, without mutation in the first generation, is presented in Table C.

Table A

Table B

Table C
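To make the cycle concrete, here is a small C++ sketch of one GA generation for f(x) = x^2 on 5-bit strings. The initial chromosomes, population size, and rates are illustrative assumptions, not the values from the tables above.

#include <cstdlib>
#include <iostream>

// One GA generation for maximizing f(x) = x*x, x encoded as a 5-bit string (0..31).
int fitness(unsigned x) { return x * x; }

int main() {
    const int N = 4, BITS = 5;
    unsigned pop[N] = { 13, 24, 8, 19 };          // arbitrary initial chromosomes

    // Selection: roulette wheel, proportional to fitness.
    unsigned mating[N];
    int total = 0;
    for (int i = 0; i < N; ++i) total += fitness(pop[i]);
    for (int i = 0; i < N; ++i) {
        int r = rand() % total, acc = 0;
        for (int j = 0; j < N; ++j) {
            acc += fitness(pop[j]);
            if (r < acc) { mating[i] = pop[j]; break; }
        }
    }

    // Crossover: single random point per pair; swap the low-order tail bits.
    for (int i = 0; i < N; i += 2) {
        int point = 1 + rand() % (BITS - 1);
        unsigned mask = (1u << point) - 1;        // low 'point' bits
        unsigned a = mating[i], b = mating[i + 1];
        mating[i]     = (a & ~mask) | (b & mask);
        mating[i + 1] = (b & ~mask) | (a & mask);
    }

    // Mutation: flip each bit with a small probability.
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < BITS; ++k)
            if (rand() % 1000 < 1) mating[i] ^= (1u << k);

    for (int i = 0; i < N; ++i)
        std::cout << "x = " << mating[i] << "  f(x) = " << fitness(mating[i]) << "\n";
    return 0;
}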

A schema (schemata in plural form), hyperplane, or similarity template is a genetic pattern with fixed values of 1 or 0 at some designated bit positions. For example, S = 01*1**1 is a 7-bit schema with fixed values at 4 bits and don't-care values, represented by *, at the remaining 3 positions. Since 4 positions matter for this schema, we say that the schema contains 4 genes.

Deterministic Explanation of Holland's Observation

To explain Holland's observation in a deterministic manner, let us presume the following assumptions.

i) There are no recombinations or alterations to genes.
ii) Initially, a fraction f of the population possesses the schema S, and those individuals reproduce at a fixed rate r.
iii) All other individuals, lacking schema S, reproduce at a rate s < r.

Thus, with an initial population size of N, after t generations we find N f r^t individuals possessing schema S, and the population of the rest of the individuals is N (1 - f) s^t. Therefore, the fraction of the individuals with schema S is given by

    (N f r^t) / (N f r^t + N (1 - f) s^t) = (f r^t) / (f r^t + (1 - f) s^t)

For small t and f, the above fraction reduces to f (r/s)^t, which means the population having the schema S increases exponentially at a rate (r/s). A stochastic proof of the above property will be presented shortly, via a well-known theorem called the fundamental theorem of Genetic Algorithms.

Stochastic Explanation of Genetic Algorithms

For the presentation of the fundamental theorem of Genetic Algorithms, the following terminologies are defined in order.

Definition: The order of a schema H, denoted by O(H), is the number of fixed positions in the schema. For example, the order of the schema H = *001*1* is 4, since it contains 4 fixed positions.

Definition: The defining length of a schema H, denoted by d(H), is the distance between the first and the last fixed positions in it. For example, the schema *1*001 has a defining length d(H) = 6 - 2 = 4, while the d(H) of ***1** is zero.

Definition: The schemata defined over L-bit strings may be geometrically interpreted as hyperplanes in an L-dimensional hyperspace (a binary vector space), with each L-bit string representing one corner point in an n-dimensional cube.

Object: 9

Consider the following fuzzy sets:
A = {1, 0.4, 0.6, 0.3}
B = {0.3, 0.2, 0.6, 0.5}

Program to find union, intersection and complement of fuzzy sets

% Enter the two fuzzy sets
u = input('enter the first fuzzy set A');
v = input('enter the second fuzzy set B');
disp('Union of A and B');
w = max(u,v)
disp('Intersection of A and B');
p = min(u,v)
[m] = size(u);
disp('Complement of A');
q1 = ones(m) - u
[n] = size(v);
disp('Complement of B');
q2 = ones(n) - v

Output

enter the first fuzzy set A [1 0.4 0.6 0.3]
enter the second fuzzy set B [0.3 0.2 0.6 0.5]
Union of A and B
w = 1.0000  0.4000  0.6000  0.5000
Intersection of A and B
p = 0.3000  0.2000  0.6000  0.3000
Complement of A
q1 = 0  0.6000  0.4000  0.7000
Complement of B
q2 = 0.7000  0.8000  0.4000  0.5000

Object: 10

Write a MATLAB program for maximizing f(x) = x^2 using GA, where x ranges from 0 to 31. Perform 5 iterations only.

Program for Genetic Algorithm to maximize the function f(x) = x^2

clear all; clc;
% x ranges from 0 to 31; 2^5 = 32, so
% five bits are enough to represent x in binary representation
n = input('Enter no. of population in each iteration');
nit = input('Enter no. of iterations');
% Generate the initial population
[oldchrom] = initbp(n,5)
% The population in binary is converted to integer
FieldD = [5; 0; 31; 0; 0; 1; 1];
for i = 1:nit
    phen = bindecod(oldchrom, FieldD, 3);  % phen gives the integer value of the population
    % obtain fitness values
    sqx = phen .^ 2;
    sumsqx = sum(sqx);
    avsqx = sumsqx/n;
    maxsqx = max(sqx);
    pselect = sqx ./ sumsqx;
    sumpselect = sum(pselect);
    avpselect = sumpselect/n;
    maxpselect = max(pselect);
    % apply roulette wheel selection
    FitnV = sqx;
    Nsel = 4;
    newchrix = selrws(FitnV, Nsel);
    newchrom = oldchrom(newchrix, :);
    % perform crossover
    crossrate = 1;
    newchromc = recsp(newchrom, crossrate);
    % new population after crossover
    % perform mutation
    vlub = 0:31;
    mutrate = 0.001;
    newchromm = mutrandbin(newchromc, vlub, mutrate);  % new population after mutation
    disp('For iteration'); i
    disp('population'); oldchrom
    disp('X'); phen
    disp('f(X)'); sqx
    oldchrom = newchromm;
end
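Since selection favours strings with larger f(x) = x^2, over the 5 iterations the displayed average and maximum fitness should generally rise, and the population should drift toward strings decoding to values near x = 31 (where f(x) = 961 is maximal); the exact numbers depend on the randomly generated initial population.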
