
Mansour Naseri
Saeedeh Nazari
Mostafa Taghubi
Reza Zarei
Sara Sattarzedeh
Mehran Shakarami

Hopfield Networks

Elman Networks

Content

- Introduction (Hopfield Networks)
- Convergence
- MATLAB Toolbox

Introduction:

Hopfield Network

- Auto-association network
- Fully connected (clique) with symmetric weights
- Weight values based on the Hebbian principle
- Performance: must iterate a bit to converge on a pattern, but generally much less computation than in backpropagation networks
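The Hebbian principle above can be sketched in NumPy (a Python illustration, not the slides' MATLAB code; the pattern values are toy assumptions):

```python
import numpy as np

# Two bipolar (+1/-1) patterns to store, one per row.
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])

# Hebbian outer-product rule: w_ij = sum over patterns of x_i * x_j,
# with the self-connections zeroed out.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

assert np.allclose(W, W.T)                    # symmetric weights, as required
for x in patterns:                            # each stored pattern is a fixed
    assert np.array_equal(np.sign(W @ x), x)  # point of sign(W @ x)
print("stored patterns are fixed points")
```

Because the weights come from a symmetric outer product, the resulting network satisfies the symmetry condition the slide lists.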

Weights and Energy Function

Hopfield neurons may be discrete (TLU) or continuous (tanh, sigmoid, ...), with bipolar or unipolar activation values.

[Figure: a feedforward network (inputs -> outputs) contrasted with a feedback network, whose outputs are fed back to its inputs]
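For the discrete bipolar case, the energy function referred to above is E(s) = -1/2 sᵀWs - bᵀs. A minimal NumPy sketch (toy weights and illustrative names, not the slides' network):

```python
import numpy as np

def energy(W, b, s):
    """Hopfield energy E(s) = -1/2 * s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

# Toy 2-neuron network whose stable state is (1, -1).
W = np.array([[0., -1.], [-1., 0.]])
b = np.zeros(2)
stored = np.array([1., -1.])
other = np.array([1., 1.])
print(energy(W, b, stored))   # -1.0: the stored state sits at low energy
print(energy(W, b, other))    # 1.0: other states have higher energy
```

Stored patterns sit at minima of this function; the network's dynamics move downhill toward them.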

Method of Convergence

N = dimension of the patterns; the state space is an N-dimensional hypercube, and selected corners serve as the target points.

Example: a 2D state space. The existence of an equilibrium point is guaranteed, BUT not every equilibrium point is stable.

Convergence

The network stabilizes when the states of all the neurons stop changing. IMPORTANT PROPERTY: a Hopfield network will ALWAYS stabilize after finite time.
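The finite-time guarantee rests on an energy argument: each asynchronous flip can only lower (or keep) the energy, and the state space is finite. A NumPy sketch with random symmetric weights (an illustrative toy, not the slides' network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights satisfying the Hopfield conditions:
# symmetric, with zero diagonal.
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

# Track the energy along a run of asynchronous updates.
s = rng.choice([-1, 1], size=n)
energies = [energy(s)]
for _ in range(100):
    i = rng.integers(n)
    h = W[i] @ s
    if h != 0:
        s[i] = 1 if h > 0 else -1
    energies.append(energy(s))

# Energy never increases, so the finite state space forces convergence.
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
print("energy is non-increasing")
```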

Note: there is no escape from a local minimum except by changing the input.

The complement of each stored pattern is also a stable state, and a disturbed pattern converges back to a stored pattern.

Synchronous & Asynchronous Update

Synchronous:
- All neuron outputs are updated at once
- High-speed convergence
- Low guarantee of convergence to a stable equilibrium point

Asynchronous:
- Only one neuron output is updated at a time
- Low-speed convergence
- High guarantee of convergence to a stable equilibrium point
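The asynchronous rule can be sketched in NumPy (a Python sketch, not the MATLAB toolbox; the pattern and names are illustrative): one neuron at a time is set to the sign of its net input, and a disturbed pattern is pulled back to the stored one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hebbian weights storing one bipolar pattern.
stored = np.array([1, -1, 1, -1, 1])
W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0)

# Start from a disturbed copy of the pattern (one bit flipped).
s = stored.copy()
s[0] = -s[0]

# Asynchronous update: pick one neuron, set it to the sign of its net input.
for _ in range(200):
    i = rng.integers(len(s))
    h = W[i] @ s
    if h != 0:
        s[i] = 1 if h > 0 else -1

print(np.array_equal(s, stored))  # the flipped bit is repaired
```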

Feedforward Networks
- Solutions are known
- Weights are learned
- Evolves in the weight space
- Used for: prediction, classification, function approximation

Feedback Networks
- Solutions are unknown
- Weights are prescribed
- Evolves in the state space
- Used for: constraint satisfaction, optimization, feature matching

net = newhop(T)

T: an R×Q matrix of Q target vectors (values must be +1 or -1)

Given a set of target equilibrium points represented as a matrix T of vectors, newhop returns weights and biases for a recursive network.

Once the network has been designed, it can be tested with one or more input vectors; ideally, those input vectors are close to a target equilibrium point. The ability to run batches of trial input vectors quickly lets you check the design in a relatively short time. First you might check that the target equilibrium point vectors are indeed contained in the network. Then you could try other input vectors to determine the domains of attraction of the target equilibrium points and the locations of any spurious equilibrium points.
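That first check — verifying that the targets really are equilibrium points — can be sketched outside the toolbox too. This NumPy sketch uses a simple Hebbian design (an assumption for illustration; newhop uses a different design rule):

```python
import numpy as np

def hebbian_design(T):
    """Symmetric, zero-diagonal weights storing the +1/-1 columns of T."""
    W = T @ T.T / T.shape[1]
    np.fill_diagonal(W, 0)
    return W

def step(W, s):
    """One synchronous update; zero net input keeps the previous state."""
    h = W @ s
    return np.where(h == 0, s, np.sign(h)).astype(int)

T = np.array([[1, -1],
              [-1, 1]])          # two target columns
W = hebbian_design(T)
for k in range(T.shape[1]):
    t = T[:, k]
    assert np.array_equal(step(W, t), t)  # each target is a fixed point
print("targets are equilibrium points")
```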

Consider a Hopfield network with just two neurons. The target equilibrium points are defined to be stored in the network as the two columns of the matrix T.

T = [1 -1; -1 1];
net = newhop(T);
W = net.LW{1,1}
W =
    0.6925   -0.4694
   -0.4694    0.6925
b = net.b{1,1}
b =
     0
     0

Next test the design with the target vectors T to see if they are stored in the network. The targets are used as inputs for the simulation function sim.

Ai = T;
[Y,Pf,Af] = sim(net,2,[],Ai);   % 2 is the size of the batch
Y
Y =
     1    -1
    -1     1

Thus, the network has indeed been designed to be stable at its design points. Next you can try another input condition that is not a design point, such as

Ai = {[-0.9; -0.8; 0.7]};
[Y,Pf,Af] = sim(net,{1 5},{},Ai);
Y{1}

This produces

ans =
    -1
    -1
     1

We would like to obtain a Hopfield network that has the two stable points defined by the two target (column) vectors in T.

T = [+1 -1; -1 +1];

Here is a plot where the stable points are shown at the corners.

xlabel('a(1)');
ylabel('a(2)');
net = newhop(T);
[Y,Pf,Af] = net([],[],T);
Y
Y =
     1    -1
    -1     1

Here we define a random starting point and simulate the Hopfield network for 20 steps. It should reach one of its stable points.

We can make a plot of the Hopfield network's activity.

record = [cell2mat(a) cell2mat(y)];
start = cell2mat(a);
hold on
plot(start(1,1),start(2,1),'bx',record(1,:),record(2,:));

color = 'rgbmy';
for i = 1:25
    a = {rands(2,1)};
    [y,Pf,Af] = net({20},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot(start(1,1),start(2,1),'kx', ...
         record(1,:),record(2,:),color(rem(i,5)+1))
end

The Hopfield network designed here is shown to have undesired equilibrium points. However, these points are unstable: any noise in the system will move the network out of them.

T = [+1 -1; -1 +1];
plot(T(1,:),T(2,:),'r*')
axis([-1.1 1.1 -1.1 1.1])
title('Hopfield Network State Space')
xlabel('a(1)');
ylabel('a(2)');

Unfortunately, the network has undesired stable points at places other than the corners. We can see this when we simulate the Hopfield network for the five initial conditions, the columns of P.

These points are exactly between the two target stable points. The result is that they all move into the center of the state space, where an undesired stable point exists.

plot(0,0,'ko');
P = [-1.0 -0.5 0.0 +0.5 +1.0; -1.0 -0.5 0.0 +0.5 +1.0];
color = 'rgbmy';
for i = 1:5
    a = {P(:,i)};
    [y,Pf,Af] = net({1 50},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot(start(1,1),start(2,1),'kx', ...
         record(1,:),record(2,:),color(rem(i,5)+1))
    drawnow
end

We would like to obtain a Hopfield network that has the four stable points defined by the four target (column) vectors in T.

T = [+1 +1 -1 +1; ...
     -1 +1 +1 -1; ...
     -1 -1 -1 +1; ...
     +1 +1 +1 +1; ...
     -1 -1 +1 +1];
net = newhop(T);

Here we define 4 random starting points and simulate the Hopfield network for 50 steps.

Some initial conditions will lead to desired stable points. Others will lead to undesired stable points.

ans =
     1    -1     1     1     1
    -1     1    -1     1     1
     1    -1     1     1     1
     1     1    -1     1    -1

We would like to obtain a Hopfield network that has the two stable points defined by the two target (column) vectors in T.

T = [+1 +1; -1 +1; -1 -1];

Here is a plot where the stable points are shown at the corners.

axis([-1 1 -1 1 -1 1])
set(gca,'box','on');
axis manual;
hold on;
plot3(T(1,:),T(2,:),T(3,:),'r*')
title('Hopfield Network State Space')
xlabel('a(1)');
ylabel('a(2)');
zlabel('a(3)');
view([37.5 30]);

The function newhop creates a Hopfield network given the stable points T.

net = newhop(T);

Here we define a random starting point and simulate the Hopfield network for 10 steps. It should reach one of its stable points.

a = {rands(3,1)};
[y,Pf,Af] = net({1 10},{},a);

Now we simulate the Hopfield network for the following initial conditions, each a column vector of P. These points were exactly between the two target stable points, and the result is that they all move into the center of the state space, where an undesired stable point exists.

P = [ 1.0 -1.0 -0.5  1.00  1.00  0.0; ...
      0.0  0.0  0.0  0.00  0.00 -0.0; ...
     -1.0  1.0  0.5 -1.01 -1.00  0.0];
cla
plot3(T(1,:),T(2,:),T(3,:),'r*')
color = 'rgbmy';
for i = 1:6
    a = {P(:,i)};
    [y,Pf,Af] = net({1 10},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot3(start(1,1),start(2,1),start(3,1),'kx', ...
          record(1,:),record(2,:),record(3,:),color(rem(i,5)+1))
end

[Figure: recalled network output compared with the target pattern]

Programming


Example:

J = imnoise(I, 'salt & pepper', d)

[Figure: the original image and copies corrupted with 10%, 20%, 30%, and 40% salt & pepper noise]
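The denoising idea — store an image, corrupt it with salt & pepper noise, and let the network restore it — can be sketched end-to-end in NumPy. The tiny 5×5 bipolar "image", the 20% noise level, and all names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny bipolar "image" (a cross), flattened to a vector.
img = -np.ones((5, 5), dtype=int)
img[2, :] = 1
img[:, 2] = 1
x = img.ravel()

# Hebbian weights storing the image.
W = np.outer(x, x).astype(float)
np.fill_diagonal(W, 0)

# Salt & pepper noise: flip 20% of the pixels at random.
noisy = x.copy()
flip = rng.choice(x.size, size=x.size // 5, replace=False)
noisy[flip] *= -1

# Asynchronous recall: one neuron at a time follows its net input.
s = noisy.copy()
for _ in range(20 * x.size):
    i = rng.integers(x.size)
    h = W[i] @ s
    if h != 0:
        s[i] = 1 if h > 0 else -1

print(np.array_equal(s, x))  # the corrupted pixels are repaired
```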

