
Neural expert systems and neuro-fuzzy systems

Introduction

Neural expert systems

Neuro-fuzzy systems

ANFIS: Adaptive Neuro-Fuzzy Inference System

Summary

A hybrid intelligent system is one that combines at least two

intelligent technologies.

For example, combining a neural network with a fuzzy system

results in a hybrid neuro-fuzzy system.

The combination of:

probabilistic reasoning,

fuzzy logic,

neural networks and

evolutionary computation forms the core of soft computing.

Soft Computing is an emerging approach to building hybrid

intelligent systems capable of reasoning and learning in an

uncertain and imprecise environment.

Hybrid Systems

Although words are less precise than numbers, precision

carries a high cost.

We use words when there is a tolerance for imprecision.

Soft computing exploits the tolerance for uncertainty and

imprecision to achieve greater tractability and

robustness, and lower the cost of solutions.

We also use words when the available data is not precise

enough to use numbers.

This is often the case with complex problems, and while

hard computing fails to produce any solution, soft

computing is still capable of finding good solutions.

Using words rather than strict numbers

Lotfi Zadeh is reputed to have said that a good hybrid

would be British Police, German Mechanics, French

Cuisine, Swiss Banking and Italian Love.

But British Cuisine, German Police, French Mechanics,

Italian Banking and Swiss Love would be a bad one.

Likewise, a hybrid intelligent system can be good or bad; it depends on which components constitute the hybrid.

So our goal is to select the right components for building

a good hybrid system.

Comparison of Expert Systems, Fuzzy Systems, Neural Networks and Genetic Algorithms

Expert systems (ES), fuzzy systems (FS), neural networks (NN) and genetic algorithms (GA) are compared against the following criteria:

Knowledge representation

Uncertainty tolerance

Imprecision tolerance

Adaptability

Learning ability

Explanation ability

Knowledge discovery and data mining

Maintainability

* The terms used for grading are: bad, rather bad, and good.

Neural expert systems

Expert systems rely on logical inferences and decision trees and

focus on modelling human reasoning.

Neural networks rely on parallel data processing and focus

on modelling a human brain.

Expert systems treat the brain as a black-box.

Neural networks look at its structure and functions,

particularly at its ability to learn.

Knowledge in a rule-based expert system is represented by IF-

THEN production rules.

Knowledge in neural networks is stored as synaptic weights

between neurons.

In expert systems, knowledge can be divided into

individual rules and the user can see and

understand the piece of knowledge applied by the

system.

In neural networks, one cannot select a single

synaptic weight as a discrete piece of knowledge.

Here knowledge is embedded in the entire

network;

it cannot be broken into individual pieces, and

any change of a synaptic weight may lead to

unpredictable results.

A neural network is, in fact, a black-box for its

user.

Can we combine advantages of expert systems

and neural networks to create a more powerful

and effective expert system?

A hybrid system that combines a neural network and

a rule-based expert system is called a neural expert

system (or a connectionist expert system).

Basic structure of a neural expert system

Inference Engine

Neural Knowledge Base Rule Extraction

Explanation Facilities

User Interface

User

Rule: IF - THEN

Training Data

New

Data

The heart of a neural expert system is the

inference engine.

The inference engine controls the information

flow in the system and initiates inference over

the neural knowledge base.

A neural inference engine also ensures

approximate reasoning.

The inference engine

Approximate reasoning

In a rule-based expert system, the inference engine compares the

condition part of each rule with data given in the database.

When the IF part of the rule matches the data in the

database, the rule is fired and its THEN part is executed.

Precise matching is required: the inference engine cannot cope with noisy or incomplete data.

Neural expert systems use a trained neural network in place of

the knowledge base.

The input data does not have to precisely match the data that

was used in network training.

This ability is called approximate reasoning.

Rule extraction

Neurons in the network are connected by links,

each of which has a numerical weight attached to it.

The weights in a trained neural network determine

the strength or importance of the associated neuron

inputs.

The neural knowledge base

[Figure: a single-layer network. Inputs: Wings, Tail, Beak, Feathers and Engine, each set to +1, 0 or −1. Output neurons: Rule 1 (Bird), Rule 2 (Plane) and Rule 3 (Glider). The weights are:

Rule 1 (Bird): Wings −0.8, Tail −0.2, Beak 2.2, Feathers 2.8, Engine −1.1
Rule 2 (Plane): Wings −0.7, Tail −0.1, Beak 0.0, Feathers −1.6, Engine 1.9
Rule 3 (Glider): Wings −1.6, Tail −1.1, Beak −1.0, Feathers −2.9, Engine −1.3]

If we set each input of the input layer to either +1 (true), −1 (false), or 0 (unknown), we can give a semantic interpretation for the activation of any output neuron.

For example, if the object has Wings (+1), Beak (+1) and Feathers (+1), but does not have Engine (−1), then we can conclude that this object is Bird (+1):

X_Rule1 = 1·(−0.8) + 0·(−0.2) + 1·2.2 + 1·2.8 + (−1)·(−1.1) = 5.3 > 0

Y_Rule1 = Y_Bird = +1

How does the system distinguish a bird from an airplane?

We can similarly conclude that this object is not

Plane:

X_Rule2 = 1·(−0.7) + 0·(−0.1) + 1·0.0 + 1·(−1.6) + (−1)·1.9 = −4.2 < 0

Y_Rule2 = Y_Plane = −1

and not Glider:

X_Rule3 = 1·(−1.6) + 0·(−1.1) + 1·(−1.0) + 1·(−2.9) + (−1)·(−1.3) = −4.2 < 0

Y_Rule3 = Y_Glider = −1

By attaching a corresponding question to each input

neuron, we can enable the system to prompt the user

for initial values of the input variables:

Neuron: Wings

Question: Does the object have wings?

Neuron: Tail

Question: Does the object have a tail?

Neuron: Beak

Question: Does the object have a beak?

Neuron: Feathers

Question: Does the object have feathers?

Neuron: Engine

Question: Does the object have an engine?

An inference can be made if the known net

weighted input to a neuron is greater than the

sum of the absolute values of the weights of

the unknown inputs.

Σ_{i ∈ KNOWN} x_i w_i > Σ_{j ∉ KNOWN} |w_j|

where i runs over the known inputs, j over the unknown inputs, and n is the number of neuron inputs.

Example:

Enter initial value for the input Feathers:

> +1

KNOWN = 1 · 2.8 = 2.8
UNKNOWN = |−0.8| + |−0.2| + |2.2| + |−1.1| = 4.3
KNOWN < UNKNOWN

Enter initial value for the input Beak:

> +1

KNOWN = 1 · 2.8 + 1 · 2.2 = 5.0
UNKNOWN = |−0.8| + |−0.2| + |−1.1| = 2.1
KNOWN > UNKNOWN

CONCLUDE: Bird is TRUE
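The stopping rule above can be sketched in code. This is an illustrative sketch only: the function and dictionary names are mine, not from the slides, and only the Bird-rule weights from the example are used.

```python
# Hypothetical sketch of the neural expert system's inference stopping rule.
# An inference can be made once the known weighted input outweighs the sum
# of the absolute weights of the still-unknown inputs.

WEIGHTS_BIRD = {"wings": -0.8, "tail": -0.2, "beak": 2.2,
                "feathers": 2.8, "engine": -1.1}

def try_infer(weights, known):
    """known maps an input name to +1 or -1; missing inputs are unknown."""
    known_sum = sum(x * weights[name] for name, x in known.items())
    unknown_sum = sum(abs(w) for name, w in weights.items() if name not in known)
    if known_sum > unknown_sum:
        return True           # conclusion is TRUE
    if -known_sum > unknown_sum:
        return False          # conclusion is FALSE
    return None               # undecided: prompt the user for another input

print(try_infer(WEIGHTS_BIRD, {"feathers": +1}))              # None (2.8 < 4.3)
print(try_infer(WEIGHTS_BIRD, {"feathers": +1, "beak": +1}))  # True (5.0 > 2.1)
```

With Feathers alone the system keeps asking questions; once Beak is also known, the Bird conclusion fires.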

An example of a multi-layer knowledge base

[Figure: a multi-layer network. The input layer (a1–a5, input weights 1.0) feeds a conjunction layer (R1–R5), then a disjunction layer (b1–b3), then a second conjunction layer (R6–R8) and a second disjunction layer (c1, c2). The links into the disjunction layers carry the rule certainty factors.]

Rule 1: IF a1 AND a3 THEN b1 (0.8)
Rule 2: IF a1 AND a4 THEN b1 (0.2)
Rule 3: IF a2 AND a5 THEN b2 (−0.1)
Rule 4: IF a3 AND a4 THEN b3 (0.9)
Rule 5: IF a5 THEN b3 (0.6)
Rule 6: IF b1 AND b3 THEN c1 (0.7)
Rule 7: IF b2 THEN c1 (0.1)
Rule 8: IF b2 AND b3 THEN c2 (0.9)

Neuro-fuzzy systems

Fuzzy logic and neural networks are natural

complementary tools in building intelligent systems.

While neural networks are low-level computational

structures that perform well when dealing with raw data,

fuzzy logic deals with reasoning on a higher level, using

linguistic information acquired from domain experts.

However, fuzzy systems lack the ability to learn and

cannot adjust themselves to a new environment.

On the other hand, although neural networks can

learn, they are opaque to the user.

Neuro-fuzzy systems

Integrated neuro-fuzzy systems can combine the

parallel computation and learning abilities of

neural networks with the human-like knowledge

representation and explanation abilities of fuzzy

systems.

As a result, neural networks become more

transparent, while fuzzy systems become capable

of learning.

Synergy of Neural and Fuzzy

A neuro-fuzzy system is a neural network which is

functionally equivalent to a fuzzy inference model.

A neuro-fuzzy system can be trained to develop IF-THEN fuzzy rules and determine membership functions

for input and output variables of the system.

Expert knowledge can be incorporated into the structure

of the neuro-fuzzy system.

At the same time, the connectionist structure avoids

fuzzy inference, which entails a substantial

computational burden.

The structure of a neuro-fuzzy system is similar to

a multi-layer neural network.

In general, a neuro-fuzzy system has:

input and output layers,

and three hidden layers

that represent membership functions and

fuzzy rules.

Neuro-fuzzy system

[Figure: a five-layer neuro-fuzzy system. Layer 1: inputs x1 and x2. Layer 2: fuzzification neurons A1–A3 (for x1) and B1–B3 (for x2). Layer 3: rule neurons R1–R6. Layer 4: output membership neurons C1 and C2, reached through weights wR1–wR6. Layer 5: a defuzzification neuron producing the crisp output y.]

Each layer in the neuro-fuzzy system is associated

with a particular step in the fuzzy inference process.

Layer 1 is the input layer. Each neuron in this layer

transmits external crisp signals directly to the next

layer. That is,

Layer 2 is the fuzzification layer.

Neurons in this layer represent fuzzy sets used in the

antecedents of fuzzy rules.

A fuzzification neuron receives a crisp input and determines the degree to which this input belongs to the neuron's fuzzy set.

y_i^(1) = x_i^(1)



1. The activation function of a membership neuron is set to the function that specifies the neuron's fuzzy set.

2. We use triangular sets, and therefore, the activation

functions for the neurons in Layer 2 are set to the

triangular membership functions.

3. A triangular membership function can be specified by

two parameters {a, b} as follows:

y_i^(2) = 0,                        if x_i^(2) ≤ a − b/2
y_i^(2) = 1 − 2 |x_i^(2) − a| / b,  if a − b/2 < x_i^(2) < a + b/2
y_i^(2) = 0,                        if x_i^(2) ≥ a + b/2
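A minimal sketch of this triangular membership function (the function name is mine, not from the slides):

```python
def triangle(x, a, b):
    """Triangular membership function with centre a and base width b."""
    if x <= a - b / 2 or x >= a + b / 2:
        return 0.0                       # outside the support
    return 1.0 - 2.0 * abs(x - a) / b    # linear slopes up to 1 at x = a

print(triangle(4.0, a=4.0, b=6.0))   # 1.0 at the centre
print(triangle(5.5, a=4.0, b=6.0))   # 0.5 halfway down the slope
print(triangle(8.0, a=4.0, b=6.0))   # 0.0 outside the support
```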

Triangular activation functions

[Figure: (a) effect of parameter a — moving a from 4 to 4.5 (with b = 6) shifts the triangle along the x-axis; (b) effect of parameter b — changing b from 6 to 4 (with a = 4) narrows the triangle.]


Layer 3 is the fuzzy rule layer.

1. Each neuron in this layer corresponds to a single fuzzy rule.

2. A fuzzy rule neuron receives inputs from the fuzzification neurons that represent

fuzzy sets in the rule antecedents.

3. For instance, neuron R1, which corresponds to Rule 1, receives inputs from neurons

A1 and B1.


In a neuro-fuzzy system, intersection can be

implemented by the product operator.

Thus, the output of neuron i in Layer 3 is obtained

as:

y_i^(3) = x_{1i}^(3) × x_{2i}^(3) × ··· × x_{ki}^(3)

y_{R1}^(3) = μ_{A1} × μ_{B1} = μ_{R1}


Layer 4 is the output membership layer. Neurons in this layer represent fuzzy

sets used in the consequent of fuzzy rules.

An output membership neuron combines all its inputs by using the fuzzy

operation union.

This operation can be implemented by the probabilistic OR. That is,

y_i^(4) = x_{1i}^(4) ⊕ x_{2i}^(4) ⊕ ··· ⊕ x_{li}^(4)

y_{C1}^(4) = μ_{R3} ⊕ μ_{R6} = μ_{C1}

The value of μ_{C1} represents the integrated firing strength of fuzzy rule neurons R3 and R6.
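The two operators can be sketched as follows. The function names are mine; the ⊕ union is the probabilistic OR, a + b − ab, and the intersection is the plain product, both applied pairwise across any number of inputs:

```python
from functools import reduce

def fuzzy_and(*mus):
    """Rule-layer intersection: product of the antecedent membership degrees."""
    return reduce(lambda a, b: a * b, mus)

def fuzzy_or(*mus):
    """Output-membership union: probabilistic OR, a + b - a*b, pairwise."""
    return reduce(lambda a, b: a + b - a * b, mus)

mu_r3, mu_r6 = 0.2, 0.5          # illustrative firing strengths
print(fuzzy_and(mu_r3, mu_r6))   # 0.1
print(fuzzy_or(mu_r3, mu_r6))    # 0.6
```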



Layer 5 is the defuzzification layer. Each neuron in this layer represents a single

output of the neuro-fuzzy system. It takes the output fuzzy sets clipped by the

respective integrated firing strengths and combines them into a single fuzzy

set.

Neuro-fuzzy systems can apply standard defuzzification methods, including

the centroid technique.

We will use the sum-product composition method.


The sum-product composition calculates the crisp

output as the weighted average of the centroids of

all output membership functions.

For example, the weighted average of the centroids of

the clipped fuzzy sets C1 and C2 is calculated as,

y = (μ_{C1} · a_{C1} · b_{C1} + μ_{C2} · a_{C2} · b_{C2}) / (μ_{C1} · b_{C1} + μ_{C2} · b_{C2})

where a_{C1}, a_{C2} are the centres and b_{C1}, b_{C2} the widths of the clipped fuzzy sets C1 and C2.
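A minimal sketch of the sum-product composition (the function name is mine and the numbers are illustrative, not from the slides):

```python
# Each clipped output set contributes its firing strength mu, its centroid a,
# and its width b; the crisp output is the b-and-mu-weighted centroid average.
def sum_product(clipped):
    """clipped: list of (mu, a, b) triples, one per clipped output set."""
    num = sum(mu * a * b for mu, a, b in clipped)
    den = sum(mu * b for mu, a, b in clipped)
    return num / den

# Two clipped sets C1 and C2 with illustrative parameters:
y = sum_product([(0.1, 20.0, 10.0), (0.7, 70.0, 10.0)])
print(y)   # 63.75
```

Because C2 fires seven times more strongly than C1, the crisp output lands much closer to C2's centroid.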

How does a neuro-fuzzy system learn?

A neuro-fuzzy system is essentially a multi-layer

neural network,

and thus it can apply standard learning algorithms

developed for neural networks,

including the back-propagation algorithm.

When a training input-output example is presented to the system, the

back-propagation algorithm computes the system output and compares it

with the desired output of the training example.

The error is propagated backwards through the network from the

output layer to the input layer.

The neuron activation functions are modified as the error is

propagated.

To determine the necessary modifications, the back-propagation

algorithm differentiates the activation functions of the neurons.

Let us demonstrate how a neuro-fuzzy system works on a simple example.

Training patterns

[Table: the XOR training patterns — input pairs (x1, x2) with desired output y.]

The data set is used for training the five-rule neuro-fuzzy system shown below.

[Figure: (a) the five-rule system — inputs x1 and x2, each fuzzified into sets S and L, five rule neurons with weights wR1–wR5, and a single output y; (b) the weights wR1–wR5 over 50 epochs of training, settling at approximately 0, 0.61, 0.72, 0.79 and 0.99.]

Five-rule neuro-fuzzy system

Suppose that fuzzy IF-THEN rules incorporated into the

system structure are supplied by a domain expert.

Prior or existing knowledge can dramatically expedite

the system training.

Besides, if the quality of training data is poor, the expert

knowledge may be the only way to come to a solution at all.

However, experts do occasionally make mistakes, and

thus some rules used in a neuro-fuzzy system may be

false or redundant.

Therefore, a neuro-fuzzy system should also be capable

of identifying bad rules.

Good and Bad Rules from Experts

Given input and output linguistic values, a neuro-fuzzy system

can automatically generate a complete set of fuzzy IF-THEN

rules.

Let us create the system for the XOR example.

This system consists of 2 × 2 × 2 = 8 rules.

Because expert knowledge is not embodied in the system

this time, we set all initial weights between Layer 3 and

Layer 4 to 0.5.

After training we can eliminate all rules whose certainty

factors are less than some sufficiently small number, say 0.1.

As a result, we obtain the same set of four fuzzy IF-THEN

rules that represents the XOR operation.

Example: A neuro-fuzzy system for XOR
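The pruning step just described can be sketched as below. The final weight values, and especially their assignment to particular rule names, are illustrative assumptions rather than values read off the figure:

```python
# Illustrative pruning of trained rule certainty factors (the mapping of
# values to rule names here is assumed, not given in the slides).
trained_weights = {"R1": 0.78, "R2": 0.00, "R3": 0.69, "R4": 0.62,
                   "R5": 0.00, "R6": 0.00, "R7": 0.00, "R8": 0.80}

# Keep only rules whose certainty factor is at least the 0.1 threshold.
surviving = {rule: w for rule, w in trained_weights.items() if w >= 0.1}
print(sorted(surviving))   # four rules remain
```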

Eight-rule neuro-fuzzy system for XOR

[Figure: (a) the eight-rule system — inputs x1 and x2 fuzzified into sets S and L, eight rule neurons, and output y; (b) the weights wR1–wR8 over 50 epochs of training — four weights converge to zero while the other four settle at approximately 0.62, 0.69, 0.78 and 0.80.]

Neuro-fuzzy systems: summary

The combination of fuzzy logic and neural networks constitutes a

powerful means for designing intelligent systems.

Domain knowledge can be put into a neuro-fuzzy system by

human experts in the form of linguistic variables and fuzzy rules.

When a representative set of examples is available, a neuro-fuzzy

system can automatically transform it into a robust set of fuzzy

IF-THEN rules, and thereby reduce our dependence on expert

knowledge when building intelligent systems.

ANFIS: Adaptive Neuro-Fuzzy Inference System

The Sugeno fuzzy model was proposed for generating

fuzzy rules from a given input-output data set.

A typical Sugeno fuzzy rule is expressed in the following

form:

IF x1 is A1 AND x2 is A2 AND ... AND xm is Am
THEN y = f(x1, x2, ..., xm)

where x1, x2, ..., xm are input variables, and A1, A2, ..., Am are fuzzy sets.

When y is a constant, we obtain a zero-order

Sugeno fuzzy model in which the consequent of

a rule is specified by a singleton.

When y is a first-order polynomial, i.e.

y = k0 + k1 x1 + k2 x2 + ... + km xm

we obtain a first-order Sugeno fuzzy model.

Orders of Sugeno fuzzy model
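A first-order consequent is just an affine function of the inputs; a minimal sketch (the function name is mine):

```python
def sugeno_consequent(k, xs):
    """Evaluate y = k0 + k1*x1 + ... + km*xm for k = [k0, k1, ..., km]."""
    return k[0] + sum(ki * xi for ki, xi in zip(k[1:], xs))

# A zero-order model is the special case with only k0 (a singleton):
print(sugeno_consequent([5.0], []))                    # 5.0
# A first-order model with two inputs:
print(sugeno_consequent([1.0, 2.0, 3.0], [4.0, 5.0]))  # 1 + 8 + 15 = 24.0
```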

[Figure: a six-layer ANFIS. Layer 1: inputs x1 and x2. Layer 2: fuzzification neurons A1, A2 (for x1) and B1, B2 (for x2). Layer 3: rule neurons Π1–Π4. Layer 4: normalisation neurons N1–N4. Layer 5: defuzzification neurons, each also receiving x1 and x2 directly. Layer 6: a summation neuron producing the output y.]

Adaptive Neuro-Fuzzy Inference System

Layer 1 is the input layer. Neurons in this layer simply pass external crisp

signals to Layer 2.

Layer 2 is the fuzzification layer. Neurons in this layer perform fuzzification. In Jang's model, fuzzification neurons have a bell activation function.


Layer 3 is the rule layer.

Each neuron in this layer corresponds to a single Sugeno-type fuzzy rule.

A rule neuron receives inputs from the respective fuzzification neurons and

calculates the firing strength of the rule it represents.


In an ANFIS, the conjunction of the rule antecedents is

evaluated by the operator product. Thus, the output

of neuron i in Layer 3 is obtained as,

y_i^(3) = ∏_{j=1}^{k} x_{ji}^(3)

y_{Π1}^(3) = μ_{A1} × μ_{B1} = μ_1

where the value of μ_1 represents the firing strength, or the truth value, of Rule 1.


Layer 4 is the normalisation layer. Each neuron in

this layer receives inputs from all neurons in the

rule layer, and calculates the normalised firing

strength of a given rule.


The normalised firing strength is the ratio of the

firing strength of a given rule to the sum of firing

strengths of all rules. It represents the contribution

of a given rule to the final result. Thus, the output

of neuron i in Layer 4 is determined as,

y_i^(4) = x_{ii}^(4) / Σ_{j=1}^{n} x_{ji}^(4) = μ_i / Σ_{j=1}^{n} μ_j = μ̄_i

y_{N1}^(4) = μ_1 / (μ_1 + μ_2 + μ_3 + μ_4) = μ̄_1
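The normalisation step above is a one-liner; a minimal sketch (the function name is mine):

```python
def normalise(strengths):
    """Layer-4 operation: each firing strength divided by the sum of all."""
    total = sum(strengths)
    return [s / total for s in strengths]

# Four illustrative firing strengths; the outputs sum to 1.
print(normalise([2.0, 1.0, 1.0, 4.0]))   # [0.25, 0.125, 0.125, 0.5]
```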

Layer 5 is the defuzzification layer. Each neuron

in this layer is connected to the respective

normalisation neuron, and also receives initial

inputs, x1 and x2. A defuzzification neuron

calculates the weighted consequent value of a

given rule as,

y_i^(5) = x_i^(5) [k_{i0} + k_{i1} x1 + k_{i2} x2] = μ̄_i [k_{i0} + k_{i1} x1 + k_{i2} x2]

where x_i^(5) is the input and y_i^(5) is the output of defuzzification neuron i in Layer 5, and k_{i0}, k_{i1} and k_{i2} are the consequent parameters of rule i.

Layer 6 is represented by a single summation

neuron.

This neuron calculates the sum of outputs of all

defuzzification neurons and produces the overall

ANFIS output, y,

y = Σ_{i=1}^{n} x_i^(6) = Σ_{i=1}^{n} μ̄_i [k_{i0} + k_{i1} x1 + k_{i2} x2]
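Putting Layers 1–6 together, here is a hedged forward-pass sketch. All parameter values are made up for illustration, and the bell membership function follows the generalised-bell form used in Jang's model; the function names and rule ordering are assumptions of mine:

```python
def bell(x, a, b, c):
    """Generalised bell membership function with width a, slope b, centre c."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x1, x2, mfs1, mfs2, consequents):
    # Layer 2: fuzzification of each input
    mu1 = [bell(x1, *p) for p in mfs1]               # A1, A2
    mu2 = [bell(x2, *p) for p in mfs2]               # B1, B2
    # Layer 3: rule firing strengths (product of antecedents)
    strengths = [m1 * m2 for m1 in mu1 for m2 in mu2]
    # Layer 4: normalisation
    total = sum(strengths)
    norm = [s / total for s in strengths]
    # Layers 5-6: weighted first-order consequents, then summation
    return sum(n * (k0 + k1 * x1 + k2 * x2)
               for n, (k0, k1, k2) in zip(norm, consequents))

mfs = [(2.0, 2.0, 0.0), (2.0, 2.0, 5.0)]               # two bells per input
ks = [(1, 0, 0), (2, 0, 0), (3, 0, 0), (4, 0, 0)]      # zero-order consequents
print(anfis_forward(2.5, 2.5, mfs, mfs, ks))           # ≈ 2.5 by symmetry
```

At x1 = x2 = 2.5, midway between the two bell centres, all four rules fire equally, so the output is the plain average of the four consequents.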

Can an ANFIS deal with problems where we

do not have any prior knowledge of the rule

consequent parameters?

It is not necessary to have any prior

knowledge of rule consequent

parameters.

An ANFIS learns these parameters and

tunes membership functions.

Learning in the ANFIS model

An ANFIS uses a hybrid learning algorithm that

combines the least-squares estimator and the gradient

descent method.

In the ANFIS training algorithm, each epoch is composed of a forward pass and a backward pass.

In the forward pass, a training set of input patterns

(an input vector) is presented to the ANFIS, neuron

outputs are calculated on the layer-by-layer basis,

and rule consequent parameters are identified.

The rule consequent parameters are identified by the

least-squares estimator.

In the Sugeno-style fuzzy inference, an output, y, is a

linear function.

Thus, given the values of the membership parameters and

a training set of P input-output patterns, we can form P

linear equations in terms of the consequent parameters as:

μ̄1(1) f1(1) + μ̄2(1) f2(1) + ... + μ̄n(1) fn(1) = yd(1)
μ̄1(2) f1(2) + μ̄2(2) f2(2) + ... + μ̄n(2) fn(2) = yd(2)
. . .
μ̄1(p) f1(p) + μ̄2(p) f2(p) + ... + μ̄n(p) fn(p) = yd(p)
. . .
μ̄1(P) f1(P) + μ̄2(P) f2(P) + ... + μ̄n(P) fn(P) = yd(P)

In matrix notation, we have

yd = A k

where yd is a P × 1 desired output vector,

yd = [yd(1) yd(2) ... yd(p) ... yd(P)]^T

A is a P × n(1 + m) matrix whose p-th row is

[μ̄1(p)  μ̄1(p) x1(p)  ...  μ̄1(p) xm(p)  ...  μ̄n(p)  μ̄n(p) x1(p)  ...  μ̄n(p) xm(p)]

and k is an n(1 + m) × 1 vector of unknown consequent parameters,

k = [k10 k11 k12 ... k1m  k20 k21 k22 ... k2m  ...  kn0 kn1 kn2 ... knm]^T
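For the simplest possible case — one rule and one input, so the normalised firing strength is 1 and y = k0 + k1·x — the least-squares step reduces to a straight-line fit. A self-contained sketch via the 2×2 normal equations (variable names and data are mine, for illustration):

```python
# Solve A^T A k = A^T yd for k = [k0, k1] in the one-rule, one-input case.
xs = [0.0, 1.0, 2.0, 3.0]
yd = [1.0, 3.0, 5.0, 7.0]          # generated by y = 1 + 2x

n = len(xs)
sx, sy = sum(xs), sum(yd)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, yd))

det = n * sxx - sx * sx            # determinant of the 2x2 normal matrix
k0 = (sy * sxx - sx * sxy) / det
k1 = (n * sxy - sx * sy) / det
print(k0, k1)   # 1.0 2.0 -- the generating parameters are recovered
```

With more rules and inputs the same computation is done on the full P × n(1 + m) matrix A, typically via a pseudo-inverse.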

As soon as the consequent parameters are established, we compute an actual network output vector, y, and determine the error vector,

e = yd − y

In the backward pass, the back-propagation algorithm

is applied.

The error signals are propagated back, and the

antecedent parameters are updated according to

the chain rule.

Back-propagation algorithm

In the ANFIS training algorithm suggested by Jang, both

antecedent parameters and consequent parameters are

optimised.

In the forward pass, the consequent parameters are

adjusted while the antecedent parameters remain

fixed.

In the backward pass, the antecedent parameters are

tuned while the consequent parameters are kept fixed.

ANFIS training algorithm of Jang

Function approximation using the ANFIS model

In this example, an ANFIS is used to follow a

trajectory of the non-linear function defined by

the equation

First, we choose an appropriate architecture for

the ANFIS. An ANFIS must have two inputs x1

and x2 and one output y.

Thus, in our example, the ANFIS is defined by

four rules, and has the structure shown below.

y = cos(2 x1) / e^{x2}

An ANFIS model with

four rules

[Figure: the four-rule ANFIS — two fuzzification neurons per input (A1, A2 and B1, B2), rule neurons Π1–Π4, normalisation neurons N1–N4, four defuzzification neurons and a summation neuron producing y.]

The ANFIS training data includes 101 training

samples.

They are represented by a 101 × 3 matrix [x1 x2 yd], where x1 and x2 are input vectors, and yd is a desired output vector.

The first input vector, x1, starts at 0, increments by 0.1

and ends at 10.

The second input vector, x2, is created by taking the sine of each element of vector x1, with the elements of the desired output vector, yd, determined by the function equation.
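The training set just described can be reconstructed as follows, assuming the target function y = cos(2·x1)/e^(x2) from the function approximation example:

```python
import math

# Build the 101 x 3 training matrix: x1 steps from 0 to 10 by 0.1,
# x2 = sin(x1), and yd = cos(2*x1) / exp(x2).
data = []
for i in range(101):
    x1 = i * 0.1
    x2 = math.sin(x1)
    yd = math.cos(2 * x1) / math.exp(x2)
    data.append((x1, x2, yd))

print(len(data))   # 101
print(data[0])     # (0.0, 0.0, 1.0)
```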

Learning in an ANFIS with two membership

functions assigned to each input (one epoch)

[Figure: 3-D surface of the training data and the ANFIS output over x1 and x2 after one epoch.]

Learning in an ANFIS with two membership

functions assigned to each input (100 epochs)

[Figure: 3-D surface of the training data and the ANFIS output after 100 epochs.]

We can achieve some improvement, but much

better results are obtained when we assign three

membership functions to each input variable.

In this case, the ANFIS model will have nine

rules, as shown in figure below.

[Figure: the nine-rule ANFIS — three fuzzification neurons per input (A1–A3 and B1–B3), rule neurons Π1–Π9, normalisation neurons N1–N9, nine defuzzification neurons and a summation neuron producing y.]

An ANFIS model with nine rules

Learning in an ANFIS with three membership

functions assigned to each input (one epoch)

[Figure: training data and ANFIS output after one epoch, with three membership functions per input.]

Learning in an ANFIS with three membership

functions assigned to each input (100 epochs)

[Figure: training data and ANFIS output after 100 epochs, with three membership functions per input.]

Initial and final membership functions of the ANFIS

[Figure: (a) initial membership functions for x1 and x2; (b) membership functions after 100 epochs of training.]
