
Dynamical Systems and

Hopfield Networks
 A dynamical system is a process that is applied repeatedly to its own output and, in doing so, exhibits several interesting dynamics.
 The interesting dynamics can include fixed points, period-two cycles, period-n cycles, and so on.
 A fixed point is a period-one cycle.
 For example, repeated application of the cosine function on a calculator converges to the fixed point 0.7391, irrespective of the initial condition.
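This fixed-point iteration is easy to reproduce; a minimal Python sketch (the starting value 2.0 is arbitrary):

```python
import math

# Repeatedly apply cos to its own output; from any starting value the
# iterates converge to the unique fixed point x* = cos(x*) ~ 0.7391.
x = 2.0                      # arbitrary initial condition
for _ in range(200):
    x = math.cos(x)

print(round(x, 4))           # -> 0.7391
```

Because |d/dx cos(x)| < 1 near the fixed point, the iteration is a contraction there, which is why the initial condition does not matter.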
 A Hopfield network is a recurrent network which, due to its feedback connectivity, leads to a recurrence relationship among the variables.
 There are two broad types of Hopfield networks:
Discrete time networks and
Continuous time networks
Figure 1: Discrete-time Hopfield network
 Fig. 1 shows a discrete-time Hopfield network.
 Note that the feedback paths from the kth neuron connect to every other neuron except the kth neuron itself.
 A Hebbian learning rule is used to train the discrete-time Hopfield network to map the data.
 This network has two types of updates: synchronous and asynchronous.
 In the synchronous type update, all elements of the output vector are simultaneously updated.
 In the asynchronous mode, the elements of the output vector are updated one after the other.

 For the network in Fig. 1, element-wise net input is given by the formula:
net_k = Σ_{j=1, j≠k}^{n} w_kj v_j        (1)

 The same equation (1), written in vector form, reads:

net = W v        (2)

 where all quantities have their usual meaning.


 W is weight matrix, v is the vector of outputs.
 The weight matrix for the given connectivity looks like

W = [ 0     w_12  …  w_1n
      w_21  0     …  w_2n
       ⋮     ⋮     ⋱   ⋮
      w_n1  w_n2  …  0   ]        (3)

 where the self-connections do not exist and hence the diagonal entries are zero.
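As a concrete sketch (Python/NumPy; the two bipolar patterns are made-up examples, not from the lecture), the Hebbian rule builds W as a sum of outer products with the diagonal zeroed:

```python
import numpy as np

# Hebbian (outer-product) storage for a discrete Hopfield network:
# W = sum over stored patterns of v v^T, with the diagonal forced to
# zero so that no neuron feeds back onto itself. Patterns are bipolar.
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)

W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0.0)           # self-connections do not exist

v = patterns[0]
net = W @ v                        # vector form: net = W v
```

With these two patterns, sgn(net) reproduces the stored pattern, i.e. each stored pattern is a fixed point of the update.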


 For a Hopfield network, stability must be proved under the chosen update mode
(synchronous or asynchronous).
 We choose the Lyapunov method to prove that the chosen update is stable.
 In the Lyapunov method, we must demonstrate the existence of a (here quadratic) energy function,
bounded below, whose increment along the solution trajectories of the system is never positive.
 For the Hopfield network, in the asynchronous update mode where the elements of the v
vector are updated one after the other (or asynchronously updated) we show that the
Hopfield network is stable.
 The following function

E = -(1/2) v^T W v        (4)

is a Lyapunov function for the Hopfield network. The gradient of E (the "energy
function") with respect to v is:

∇_v E = -W v        (5)
The energy increment due to an asynchronous update of the kth element is then given by

ΔE = (∂E/∂v_k) Δv_k = -net_k Δv_k        (6)

If the neurons use the sign function sgn(·) as the activation function, so that the
updated output is v_k = sgn(net_k), then Δv_k is either zero or has the same sign as net_k, giving

net_k Δv_k ≥ 0        (7)

and it follows that the energy increment in eqn. (6) is never positive.


 Therefore the asynchronous update of the Hopfield network is stable in the sense of Lyapunov.
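This monotone decrease of the energy is easy to verify numerically; a sketch with an assumed random symmetric weight matrix (not a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed random symmetric weight matrix with zero diagonal and a
# random bipolar initial state; asynchronous sgn updates must never
# raise the energy E = -1/2 v^T W v of eqn. (4).
n = 16
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)

v = rng.choice([-1.0, 1.0], size=n)

def energy(v):
    return -0.5 * v @ W @ v

energies = [energy(v)]
for _ in range(5):                  # a few full asynchronous sweeps
    for k in range(n):              # one element at a time
        net_k = W[k] @ v            # eqn. (1)
        if net_k != 0.0:
            v[k] = np.sign(net_k)
        energies.append(energy(v))
```

Every recorded increment satisfies ΔE ≤ 0, matching eqns. (6) and (7).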

 Dynamical systems in general exhibit several interesting dynamics; apart from periodic cycles
of various orders, they can sometimes be chaotic.
 Deterministic systems are systems described through physical laws, and dynamical
systems are deterministic.
 Chaos is a phenomenon in which a dynamical system behaves like a random system while remaining deterministic.
 Random systems, as opposed to deterministic ones, are described through probabilistic measures.
 The above proof of stability is for discrete-time Hopfield networks.
 A continuous-time Hopfield network, on the other hand, is described by a differential equation.
 The energy function plays a key role in driving the discrete Hopfield network to one of its stable
states (or least energy states).
 Figure 9 shows how a Hopfield network, over a sequence of asynchronous updates, has learned
to map the digit 4.
 In the training procedure we set a 0 at white pixels and a 1 at black pixels.
 Therefore the discrete Hopfield can learn to store shapes – such as those of the letters of the alphabet.
 When disturbed letter data is presented to a trained Hopfield network, it converges over the
asynchronous updates to the correct letter corresponding to the disturbed data.
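A minimal recall sketch (Python/NumPy; the 5×5 pattern and the flipped pixels are illustrative choices, not the digit-4 image of Figure 9):

```python
import numpy as np

# Store one 5x5 bipolar "letter" with the Hebbian rule, flip a few
# pixels, then recall with asynchronous sgn updates.
target = np.array([1.0 if c == '#' else -1.0 for c in
                   "#####"
                   "#...."
                   "####."
                   "....#"
                   "####."])

W = np.outer(target, target)
np.fill_diagonal(W, 0.0)           # no self-connections

noisy = target.copy()
noisy[[0, 7, 13, 21]] *= -1.0      # disturb four pixels

v = noisy.copy()
for _ in range(3):                 # asynchronous sweeps
    for k in range(v.size):
        net_k = W[k] @ v
        if net_k != 0.0:
            v[k] = np.sign(net_k)
```

Over the sweeps the network settles back to the stored pattern: v equals target again, which is the associative-recall behavior described above.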

Figure 9
Self-Organizing Maps (SOM)

 The SOM is also called the self-organizing feature map.


 It was developed by Kohonen and hence is also called Kohonen's SOM.
 The weights of a one- or two-dimensional array of neurons are updated as the neurons
learn to map the features within the data, such as shapes of objects (pen, ball, pin, clip, etc.).

 It uses an enhanced version of the winner-take-all learning algorithm.


 In addition to updating the winner's weights, the algorithm also updates the weights of a (topological)
neighborhood of neurons.
 The neighborhood is typically square or hexagonal in shape.
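A single SOM training step can be sketched as follows (Python/NumPy; a 1-D grid with a Gaussian neighborhood is assumed, and the learning rate and neighborhood width are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)

# One Kohonen SOM step: find the winner (closest weight vector), then
# pull the winner AND its topological neighbours toward the input,
# weighted by a Gaussian neighbourhood function over the grid.
n_neurons, dim = 10, 2
weights = rng.random((n_neurons, dim))
lr, sigma = 0.5, 1.5               # assumed learning rate / width

def som_step(x):
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))                          # winner-take-all
    grid = np.arange(n_neurons)
    h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood
    weights[:] += lr * h[:, None] * (x - weights)           # move all, winner most
    return winner

x = np.array([0.9, 0.1])
before = np.linalg.norm(weights - x, axis=1).min()
win = som_step(x)
after = np.linalg.norm(weights - x, axis=1).min()
```

After the step the winning neuron lies strictly closer to the input, and its neighbours have moved part of the way with it, which is what produces topological ordering over many steps.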
FUZZY SETS AND FUZZY SYSTEMS

Fuzzy logic controller configuration:


A PI fuzzy controller

An example:
•Centroid method (defuzzification)
Membership functions:
•difference of two sigmoids:

[Figure: two membership-function plots; the right panel shows dsigmf with P = [5 2 5 7] over 0 ≤ x ≤ 10]

Sigmoid = f(x;a,c) = 1/(1 + exp(-a(x-c)))


Difference_Sigmoid = f1(x; a1, c1) - f2(x; a2, c2)
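The difference-of-sigmoids membership function with the figure's parameters P = [5 2 5 7] can be evaluated directly (Python/NumPy sketch):

```python
import numpy as np

# Difference of two sigmoids (dsigmf) with parameters
# P = [a1 c1 a2 c2] = [5 2 5 7] from the figure.
def sigmoid(x, a, c):
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def dsigmf(x, a1, c1, a2, c2):
    return sigmoid(x, a1, c1) - sigmoid(x, a2, c2)

x = np.linspace(0.0, 10.0, 101)
mu = dsigmf(x, 5.0, 2.0, 5.0, 7.0)
```

The membership value is close to 1 on the plateau between c1 = 2 and c2 = 7 and falls to 0 outside it, as in the right panel of the figure.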
Defuzzification methods:

There is no general “best defuzzification method”.


Centroid method is normally used but other methods may equally well be used.
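A centroid-defuzzification sketch (Python/NumPy; the triangular fuzzy set is an assumed example):

```python
import numpy as np

# Centroid (centre-of-gravity) defuzzification: the crisp output is
# the membership-weighted mean over the universe of discourse.
def centroid(x, mu):
    return np.sum(x * mu) / np.sum(mu)

x = np.linspace(0.0, 10.0, 101)
mu = np.maximum(0.0, 1.0 - np.abs(x - 6.0) / 2.0)  # assumed triangular set

crisp = centroid(x, mu)      # symmetric set -> centroid at its peak, 6
```

For a symmetric membership function the centroid coincides with the peak; for skewed or clipped sets (as in a fired rule base) it shifts toward the heavier side.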
Operations on Fuzzy Sets
Entropy of a fuzzy set:

Some membership functions shown over a single graph:


Various Fuzzy Implications:
Interpolation of Fuzzy Rules:
Interpolation of fuzzy rules asserts that, for a given function being modeled through
fuzzy logic, there exists a fuzzy inference system that can approximate the function
with arbitrary accuracy. These universal-approximation results give fuzzy logic a rigorous mathematical footing.
ANFIS or Adaptive Neuro-Fuzzy Inference System

This algorithm uses both neural network and fuzzy logic for solving problems.

It starts off by designing a fuzzy inference system (FIS) for the given problem. The FIS consists of
membership functions and a rule base, along with a defuzzification method.

A neural network then improves the shapes of the membership functions by using a backpropagation algorithm.
The FIS therefore gets refined by the ANN, and ANFIS hence arrives at a best feasible solution for the
problem at hand.

Connectionist Approach
A connectionist approach is to vary the neural network size and hidden layers in seeking the best solution
for the problem. A given ANN may solve the problem, but the connectionist seeks to improve that solution by trying
other combinations that solve the problem better, in the process designing a new algorithm.
Connectionism models mental or behavioral phenomena as the emergent processes of interconnected networks of
simple units.
Neuro-Fuzzy systems may be used for solving various kinds of control and monitoring problems.
 An expert system uses the available ANN, fuzzy, and neuro-fuzzy approaches (as well as support-vector machines)
for solving a problem.
 It returns the best result from among the results produced by the approaches applied. More generally, in artificial
intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.
 Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as if–
then rules rather than through conventional procedural code.

 Artificial intelligence: Artificial intelligence (AI) is the intelligence exhibited by machines.


 In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and
takes actions that maximize its chance of success at an arbitrary goal.
 The intelligence can be designed by using a database of knowledge on the one hand, and neuro, fuzzy, neuro-fuzzy,
and machine-learning systems on the other.

 Hybrid systems: these use both fuzzy (FIS) and connectionist (ANN) approaches for solving problems. ANFIS
is an example of a hybrid system.
Fuzzy and ANFIS Examples
