
Neural Fuzzy Systems

Robert Fullér
Donner Visiting Professor

Åbo Akademi University


ISBN 951-650-624-0, ISSN 0358-5654

Åbo, 1995
Contents
0.1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1 Fuzzy Systems 11
1.1 An introduction to fuzzy logic . . . . . . . . . . . . . . . . . . 11
1.2 Operations on fuzzy sets . . . . . . . . . . . . . . . . . . . . . 23
1.3 Fuzzy relations . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.1 The extension principle . . . . . . . . . . . . . . . . . . 42
1.3.2 Metrics for fuzzy numbers . . . . . . . . . . . . . . . . 55
1.3.3 Fuzzy implications . . . . . . . . . . . . . . . . . . . . 58
1.3.4 Linguistic variables . . . . . . . . . . . . . . . . . . . . 63
1.4 The theory of approximate reasoning . . . . . . . . . . . . . . 66
1.5 An introduction to fuzzy logic controllers . . . . . . . . . . . . 87
1.5.1 Defuzzification methods . . . . . . . . . . . . . . . . . 95
1.5.2 Inference mechanisms . . . . . . . . . . . . . . . . . . . 99
1.5.3 Construction of data base and rule base of FLC . . . . 106
1.5.4 Ball and beam problem . . . . . . . . . . . . . . . . . . 113
1.6 Aggregation in fuzzy system modeling . . . . . . . . . . . . . 117
1.6.1 Averaging operators . . . . . . . . . . . . . . . . . . . 120
1.7 Fuzzy screening systems . . . . . . . . . . . . . . . . . . . . . 133
1.8 Applications of fuzzy systems . . . . . . . . . . . . . . . . . . 141

2 Artificial Neural Networks 157
2.1 The perceptron learning rule . . . . . . . . . . . . . . . . . . . 157
2.2 The delta learning rule . . . . . . . . . . . . . . . . . . . . . . 170
2.2.1 The delta learning rule with semilinear activation function . . 178
2.3 The generalized delta learning rule . . . . . . . . . . . . . . . 184
2.3.1 Effectivity of neural networks . . . . . . . . . . . . . . 188
2.4 Winner-take-all learning . . . . . . . . . . . . . . . . . . . . . 191
2.5 Applications of artificial neural networks . . . . . . . . . . . . 197

3 Fuzzy Neural Networks 206
3.1 Integration of fuzzy logic and neural networks . . . . . . . . . 206
3.1.1 Fuzzy neurons . . . . . . . . . . . . . . . . . . . . . . . 212
3.2 Hybrid neural nets . . . . . . . . . . . . . . . . . . . . . . . . 223
3.2.1 Computation of fuzzy logic inferences by hybrid neural net . . 236
3.3 Trainable neural nets for fuzzy IF-THEN rules . . . . . . . . . 245
3.3.1 Implementation of fuzzy rules by regular FNN of Type 2 . . . 254
3.3.2 Implementation of fuzzy rules by regular FNN of Type 3 . . . 258
3.4 Tuning fuzzy control parameters by neural nets . . . . . . . . 264
3.5 Fuzzy rule extraction from numerical data . . . . . . . . . . . 274
3.6 Neuro-fuzzy classifiers . . . . . . . . . . . . . . . . . . . . . . 279
3.7 FULLINS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
3.8 Applications of fuzzy neural systems . . . . . . . . . . . . . . 295

4 Appendix 317
4.1 Case study: A portfolio problem . . . . . . . . . . . . . . . . . 317
4.2 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
0.1 Preface
These lecture notes contain the material of the course on Neural Fuzzy
Systems delivered by the author at the Turku Center for Computer Science in
1995.

Fuzzy sets were introduced by Zadeh (1965) as a means of representing and
manipulating data that was not precise, but rather fuzzy. Fuzzy logic provides
an inference morphology that enables approximate human reasoning capabilities
to be applied to knowledge-based systems. The theory of fuzzy logic provides
mathematical tools to capture the uncertainties associated with human
cognitive processes, such as thinking and reasoning. The conventional
approaches to knowledge representation lack the means for representing the
meaning of fuzzy concepts. As a consequence, the approaches based on first
order logic and classical probability theory do not provide an appropriate
conceptual framework for dealing with the representation of commonsense
knowledge, since such knowledge is by its nature both lexically imprecise and
noncategorical.

The development of fuzzy logic was motivated in large measure by the need for
a conceptual framework which can address the issue of uncertainty and lexical
imprecision.

Some of the essential characteristics of fuzzy logic relate to the following [120].

• In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
• In fuzzy logic, everything is a matter of degree.
• In fuzzy logic, knowledge is interpreted as a collection of elastic or, equivalently, fuzzy constraints on a collection of variables.
• Inference is viewed as a process of propagation of elastic constraints.
• Any logical system can be fuzzified.

There are two main characteristics of fuzzy systems that give them better
performance for specific applications.
• Fuzzy systems are suitable for uncertain or approximate reasoning, especially for systems whose mathematical model is difficult to derive.
• Fuzzy logic allows decision making with estimated values under incomplete or uncertain information.

Artificial neural systems can be considered as simplified mathematical models
of brain-like systems and they function as parallel distributed computing
networks. However, in contrast to conventional computers, which are programmed
to perform a specific task, most neural networks must be taught, or trained.
They can learn new associations, new functional dependencies and new patterns.

The study of brain-style computation has its roots over 50 years ago in the
work of McCulloch and Pitts (1943) and slightly later in Hebb's famous
Organization of Behavior (1949). The early work in artificial intelligence was
torn between those who believed that intelligent systems could best be built
on computers modeled after brains, and those like Minsky and Papert who
believed that intelligence was fundamentally symbol processing of the kind
readily modeled on the von Neumann computer. For a variety of reasons, the
symbol-processing approach became the dominant theme in artificial
intelligence. The 1980s showed a rebirth of interest in neural computing:
Hopfield (1985) provided the mathematical foundation for understanding the
dynamics of an important class of networks; Rumelhart and McClelland (1986)
introduced the backpropagation learning algorithm for complex, multi-layer
networks and thereby provided an answer to one of the most severe criticisms
of the original perceptron work.
Perhaps the most important advantage of neural networks is their adaptiv-
ity. Neural networks can automatically adjust their weights to optimize their
behavior as pattern recognizers, decision makers, system controllers, predic-
tors, etc. Adaptivity allows the neural network to perform well even when
the environment or the system being controlled varies over time. There are
many control problems that can benefit from continual nonlinear modeling
and adaptation.

While fuzzy logic performs an inference mechanism under cognitive uncertainty,
computational neural networks offer exciting advantages, such as learning,
adaptation, fault-tolerance, parallelism and generalization. A brief
comparative study between fuzzy systems and neural networks in their
operations in the context of knowledge acquisition, uncertainty, reasoning and
adaptation is presented in the following table [58]:

Skills                          Fuzzy Systems               Neural Nets
Knowledge     Inputs            Human experts               Sample sets
acquisition   Tools             Interaction                 Algorithms
Uncertainty   Information       Quantitative and            Quantitative
                                qualitative
              Cognition         Decision making             Perception
Reasoning     Mechanism         Heuristic search            Parallel computations
              Speed             Low                         High
Adaptation    Fault-tolerance   Low                         Very high
              Learning          Induction                   Adjusting weights
Natural       Implementation    Explicit                    Implicit
language      Flexibility       High                        Low

Table 0.1 Properties of fuzzy systems and neural networks.
To enable a system to deal with cognitive uncertainties in a manner more
like humans, one may incorporate the concept of fuzzy logic into the neural
networks. The resulting hybrid system is called fuzzy neural, neural fuzzy,
neuro-fuzzy or fuzzy-neuro network.
Neural networks are used to tune the membership functions of fuzzy systems
that are employed as decision-making systems for controlling equipment.
Although fuzzy logic can encode expert knowledge directly using rules with
linguistic labels, it usually takes a lot of time to design and tune the
membership functions which quantitatively define these linguistic labels.
Neural network learning techniques can automate this process and substantially
reduce development time and cost while improving performance.

In theory, neural networks and fuzzy systems are equivalent in that they are
convertible, yet in practice each has its own advantages and disadvantages.
For neural networks, the knowledge is automatically acquired by the
backpropagation algorithm, but the learning process is relatively slow and
analysis of the trained network is difficult (black box). Neither is it
possible to extract structural knowledge (rules) from the trained neural
network, nor can we integrate special information about the problem into the
neural network in order to simplify the learning procedure.

Fuzzy systems are more favorable in that their behavior can be explained based
on fuzzy rules and thus their performance can be adjusted by tuning the rules.
But since, in general, knowledge acquisition is difficult and also the
universe of discourse of each input variable needs to be divided into several
intervals, applications of fuzzy systems are restricted to fields where expert
knowledge is available and the number of input variables is small.
To overcome the problem of knowledge acquisition, neural networks are ex-
tended to automatically extract fuzzy rules from numerical data.
Cooperative approaches use neural networks to optimize certain parameters
of an ordinary fuzzy system, or to preprocess data and extract fuzzy (control)
rules from data.
The basic processing elements of neural networks are called artificial
neurons, or simply neurons. The signal flow from the neuron inputs, x_j, is
considered to be unidirectional as indicated by arrows, as is a neuron's
output signal flow. Consider the simple neural net in Figure 0.1. All signals
and weights are real numbers. The input neurons do not change the input
signals, so their output is the same as their input. The signal x_i interacts
with the weight w_i to produce the product p_i = w_i x_i, i = 1, ..., n. The
input information p_i is aggregated, by addition, to produce the input

net = p_1 + ... + p_n = w_1 x_1 + ... + w_n x_n

to the neuron. The neuron uses its transfer function f, which could be a
sigmoidal function,

f(t) = 1/(1 + e^(−t)),

to compute the output

y = f(net) = f(w_1 x_1 + ... + w_n x_n).

This simple neural net, which employs multiplication, addition, and a
sigmoidal f, will be called a regular (or standard) neural net.
Figure 0.1 A simple neural net with inputs x_1, ..., x_n, weights w_1, ..., w_n and output y = f(⟨w, x⟩).
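For illustration, a minimal Python sketch of such a regular neuron follows; the particular weights, inputs and the choice of the logistic sigmoid are only illustrative assumptions.

import math

def regular_neuron(w, x):
    """Output of a regular (standard) neuron: y = f(w_1*x_1 + ... + w_n*x_n)."""
    net = sum(wi * xi for wi, xi in zip(w, x))    # aggregation by addition
    return 1.0 / (1.0 + math.exp(-net))           # sigmoidal transfer function

# Example with two crisp inputs and weights
print(regular_neuron([0.5, -1.0], [2.0, 1.0]))    # f(0.5*2 - 1*1) = f(0) = 0.5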
If we employ other operations, like a t-norm or a t-conorm, to combine the
incoming data to a neuron, we obtain what we call a hybrid neural net. These
modifications lead to a fuzzy neural architecture based on fuzzy arithmetic
operations. A hybrid neural net may not use multiplication, addition, or a
sigmoidal function (because the results of these operations are not
necessarily in the unit interval).

A hybrid neural net is a neural net with crisp signals and weights and a crisp
transfer function. However, (i) we can combine x_i and w_i using a t-norm,
t-conorm, or some other continuous operation; (ii) we can aggregate the p_i's
with a t-norm, t-conorm, or any other continuous function; (iii) f can be any
continuous function from input to output.

We emphasize here that all inputs, outputs and weights of a hybrid neural net
are real numbers taken from the unit interval [0, 1]. A processing element of
a hybrid neural net is called a fuzzy neuron.
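To make the idea concrete, here is a small Python sketch of one possible fuzzy neuron in which the inputs and weights are combined by the product t-norm and aggregated by the maximum t-conorm; these particular operator choices are only an assumption for illustration.

def fuzzy_neuron(w, x):
    """A hybrid neuron: combine w_i and x_i by a t-norm, aggregate the p_i by a t-conorm."""
    t_norm = lambda a, b: a * b             # product t-norm (PAND)
    t_conorm = lambda a, b: max(a, b)       # maximum t-conorm (MAX)
    p = [t_norm(wi, xi) for wi, xi in zip(w, x)]
    y = p[0]
    for pi in p[1:]:
        y = t_conorm(y, pi)                 # aggregation; the transfer function is the identity here
    return y

print(fuzzy_neuron([0.7, 0.4], [0.9, 1.0]))   # max(0.63, 0.4) = 0.63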
It is well-known that regular nets are universal approximators, i.e. they can
approximate any continuous function on a compact set to arbitrary accuracy.
In a discrete fuzzy expert system one inputs a discrete approximation to the
fuzzy sets and obtains a discrete approximation to the output fuzzy set.
Usually discrete fuzzy expert systems and fuzzy controllers are continuous
mappings. Thus we can conclude that given a continuous fuzzy expert sys-
tem, or continuous fuzzy controller, there is a regular net that can uniformly
approximate it to any degree of accuracy on compact sets. The problem with
this result is that it is non-constructive and does not tell you how to build
the net.

Hybrid neural nets can be used to implement fuzzy IF-THEN rules in a
constructive way. Though hybrid neural nets cannot directly use the standard
error backpropagation algorithm for learning, they can be trained by steepest
descent methods to learn the parameters of the membership functions
representing the linguistic terms in the rules.

The direct fuzzification of conventional neural networks is to extend the
connection weights and/or inputs and/or desired outputs (or targets) to fuzzy
numbers. This extension is summarized in Table 0.2.
Fuzzy neural net Weights Inputs Targets
Type 1 crisp fuzzy crisp
Type 2 crisp fuzzy fuzzy
Type 3 fuzzy fuzzy fuzzy
Type 4 fuzzy crisp fuzzy
Type 5 crisp crisp fuzzy
Type 6 fuzzy crisp crisp
Type 7 fuzzy fuzzy crisp
Table 0.2 Direct fuzzification of neural networks.

Fuzzy neural networks of Type 1 are used in the classification of a fuzzy
input vector into a crisp class. The networks of Type 2, 3 and 4 are used to
implement fuzzy IF-THEN rules. However, the last three types in Table 0.2 are
unrealistic:

• In Type 5, outputs are always real numbers because both inputs and weights are real numbers.
• In Type 6 and 7, the fuzzification of weights is not necessary because targets are real numbers.

A regular fuzzy neural network is a neural network with fuzzy signals and/or
fuzzy weights, a sigmoidal transfer function, and all operations defined by
Zadeh's extension principle. Consider the simple regular fuzzy neural net in
Figure 0.2.
Figure 0.2 A simple regular fuzzy neural net with fuzzy inputs X_1, ..., X_n, fuzzy weights W_1, ..., W_n and output Y = f(W_1 X_1 + ... + W_n X_n).
All signals and weights are fuzzy numbers. The input neurons do not change the
input signals, so their output is the same as their input. The signal X_i
interacts with the weight W_i to produce the product P_i = W_i X_i,
i = 1, ..., n, where we use the extension principle to compute P_i. The input
information P_i is aggregated, by standard extended addition, to produce the
input

net = P_1 + ... + P_n = W_1 X_1 + ... + W_n X_n

to the neuron. The neuron uses its transfer function f, which is a sigmoidal
function, to compute the output

Y = f(net) = f(W_1 X_1 + ... + W_n X_n),

where the membership function of the output fuzzy set Y is computed by the
extension principle.
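As an informal illustration, the level-set (α-cut) form of these operations can be used to approximate the output fuzzy set Y numerically. The Python sketch below assumes nonnegative triangular fuzzy numbers for the weights and signals (so that the extended product of an α-cut reduces to the product of its endpoints) and exploits the fact that the sigmoid is monotone increasing; all concrete numbers are made up for the example.

import math

def tri_cut(a, left, right, gamma):
    """γ-cut [a - (1-γ)·left, a + (1-γ)·right] of the triangular fuzzy number (a, left, right)."""
    return (a - (1 - gamma) * left, a + (1 - gamma) * right)

def fuzzy_net_output(W, X, gamma):
    """γ-level set of Y = f(W_1 X_1 + ... + W_n X_n) for nonnegative triangular W_i, X_i."""
    lo, hi = 0.0, 0.0
    for w, x in zip(W, X):
        wl, wu = tri_cut(*w, gamma)
        xl, xu = tri_cut(*x, gamma)
        lo += wl * xl                     # endpoint products are valid for nonnegative numbers
        hi += wu * xu
    f = lambda t: 1.0 / (1.0 + math.exp(-t))
    return (f(lo), f(hi))                 # a monotone f maps the interval endpoint-wise

# W_1 = (1, 0.2, 0.2) and X_1 = (2, 0.5, 0.5) are illustrative values only
print(fuzzy_net_output([(1, 0.2, 0.2)], [(2, 0.5, 0.5)], gamma=0.5))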
The main disadvantage of regular fuzzy neural networks is that they are not
universal approximators. Therefore we must abandon the extension principle if
we are to obtain a universal approximator.

A hybrid fuzzy neural network is a neural network with fuzzy signals and/or
fuzzy weights. However, (i) we can combine X_i and W_i using a t-norm,
t-conorm, or some other continuous operation; (ii) we can aggregate the P_i's
with a t-norm, t-conorm, or any other continuous function; (iii) f can be any
function from input to output.

Buckley and Hayashi [28] showed that hybrid fuzzy neural networks are
universal approximators, i.e. they can approximate any continuous fuzzy
function on a compact domain.

These lecture notes are organized in four chapters. The first chapter deals
with inference mechanisms in fuzzy expert systems. The second chapter provides
a brief description of the learning rules of feedforward multi-layer
supervised neural networks, and of Kohonen's unsupervised learning algorithm
for the classification of input patterns. In the third chapter we explain the
basic principles of fuzzy neural hybrid systems. In the fourth chapter we
present some exercises for the reader.
Chapter 1
Fuzzy Systems
1.1 An introduction to fuzzy logic
Fuzzy sets were introduced by Zadeh [113] as a means of representing and
manipulating data that was not precise, but rather fuzzy.
Just as there is a strong relationship between Boolean logic and the concept
of a subset, there is a similar strong relationship between fuzzy logic and
fuzzy subset theory.

In classical set theory, a subset A of a set X can be defined by its
characteristic function χ_A as a mapping from the elements of X to the
elements of the set {0, 1},

χ_A : X → {0, 1}.

This mapping may be represented as a set of ordered pairs, with exactly one
ordered pair present for each element of X. The first element of the ordered
pair is an element of the set X, and the second element is an element of the
set {0, 1}. The value zero is used to represent non-membership, and the value
one is used to represent membership. The truth or falsity of the statement

"x is in A"

is determined by the ordered pair (x, χ_A(x)). The statement is true if the
second element of the ordered pair is 1, and the statement is false if it is 0.

Similarly, a fuzzy subset A of a set X can be defined as a set of ordered
pairs, each with the first element from X, and the second element from the
interval [0, 1], with exactly one ordered pair present for each element of X.
This defines a mapping, μ_A, between elements of the set X and values in the
interval [0, 1]. The value zero is used to represent complete non-membership,
the value one is used to represent complete membership, and values in between
are used to represent intermediate degrees of membership. The set X is
referred to as the universe of discourse for the fuzzy subset A. Frequently,
the mapping μ_A is described as a function, the membership function of A. The
degree to which the statement

"x is in A"

is true is determined by finding the ordered pair (x, μ_A(x)). The degree of
truth of the statement is the second element of the ordered pair. It should be
noted that the terms membership function and fuzzy subset get used
interchangeably.

Definition 1.1.1 [113] Let X be a nonempty set. A fuzzy set A in X is
characterized by its membership function

μ_A : X → [0, 1],

and μ_A(x) is interpreted as the degree of membership of element x in fuzzy
set A for each x ∈ X.

It is clear that A is completely determined by the set of tuples

A = {(x, μ_A(x)) | x ∈ X}.

Frequently we will write simply A(x) instead of μ_A(x). The family of all
fuzzy (sub)sets in X is denoted by F(X). Fuzzy subsets of the real line are
called fuzzy quantities.
If X = {x_1, ..., x_n} is a finite set and A is a fuzzy set in X then we often
use the notation

A = μ_1/x_1 + ... + μ_n/x_n

where the term μ_i/x_i, i = 1, ..., n, signifies that μ_i is the grade of
membership of x_i in A and the plus sign represents the union.

Example 1.1.1 Suppose we want to define the set of natural numbers close to 1.
This can be expressed by

A = 0.0/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.0/4.

Figure 1.1 A discrete membership function for "x is close to 1".

Example 1.1.2 The membership function of the fuzzy set of real numbers close
to 1 can be defined as

A(t) = exp(−β(t − 1)²),

where β is a positive real number.

Figure 1.2 A membership function for "x is close to 1".

Example 1.1.3 Assume someone wants to buy a cheap car. Cheap can be
represented as a fuzzy set on a universe of prices, and depends on his purse.
For instance, from Fig. 1.3, cheap is roughly interpreted as follows:

• Below 3000$ cars are considered as cheap, and prices make no real difference to the buyer's eyes.
• Between 3000$ and 4500$, a variation in the price induces a weak preference in favor of the cheapest car.
• Between 4500$ and 6000$, a small variation in the price induces a clear preference in favor of the cheapest car.
• Beyond 6000$ the costs are too high (out of consideration).

Figure 1.3 Membership function of "cheap".
Definition 1.1.2 (support) Let A be a fuzzy subset of X; the support of A,
denoted supp(A), is the crisp subset of X whose elements all have nonzero
membership grades in A:

supp(A) = {x ∈ X | A(x) > 0}.

Definition 1.1.3 (normal fuzzy set) A fuzzy subset A of a classical set X is
called normal if there exists an x ∈ X such that A(x) = 1. Otherwise A is
subnormal.

Definition 1.1.4 (α-cut) An α-level set of a fuzzy set A of X is a non-fuzzy
set denoted by [A]^α and is defined by

[A]^α = {t ∈ X | A(t) ≥ α}   if α > 0,
[A]^α = cl(supp A)           if α = 0,

where cl(supp A) denotes the closure of the support of A.

Example 1.1.4 Assume X = {−2, −1, 0, 1, 2, 3, 4} and

A = 0.0/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.0/4;

in this case

[A]^α = {−1, 0, 1, 2, 3}   if 0 < α ≤ 0.3,
[A]^α = {0, 1, 2}          if 0.3 < α ≤ 0.6,
[A]^α = {1}                if 0.6 < α ≤ 1.
Definition 1.1.5 (convex fuzzy set) A fuzzy set A of X is called convex if
[A]^α is a convex subset of X for all α ∈ [0, 1].

Figure 1.4 An α-cut of a triangular fuzzy number.
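For a discrete fuzzy set the α-cuts can be read off directly from the membership grades; the short Python sketch below (added only for illustration) reproduces the α-level sets of the fuzzy set from Example 1.1.4.

A = {-2: 0.0, -1: 0.3, 0: 0.6, 1: 1.0, 2: 0.6, 3: 0.3, 4: 0.0}

def alpha_cut(fuzzy_set, alpha):
    """The α-level set {x : A(x) >= α} of a discrete fuzzy set (for α > 0)."""
    return sorted(x for x, mu in fuzzy_set.items() if mu >= alpha)

print(alpha_cut(A, 0.3))   # [-1, 0, 1, 2, 3]
print(alpha_cut(A, 0.6))   # [0, 1, 2]
print(alpha_cut(A, 1.0))   # [1]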
In many situations people are only able to characterize numeric information
imprecisely. For example, people use terms such as about 5000, near zero, or
essentially bigger than 5000. These are examples of what are called fuzzy
numbers. Using the theory of fuzzy subsets we can represent these fuzzy
numbers as fuzzy subsets of the set of real numbers. More exactly,

Definition 1.1.6 (fuzzy number) A fuzzy number A is a fuzzy set of the real
line with a normal, (fuzzy) convex and continuous membership function of
bounded support. The family of fuzzy numbers will be denoted by F.

Definition 1.1.7 (quasi fuzzy number) A quasi fuzzy number A is a fuzzy set of
the real line with a normal, fuzzy convex and continuous membership function
satisfying the limit conditions

lim_{t→−∞} A(t) = 0,   lim_{t→∞} A(t) = 0.

Figure 1.5 Fuzzy number.

Let A be a fuzzy number. Then [A]^α is a closed convex (compact) subset of IR
for all α ∈ [0, 1]. Let us introduce the notations

a_1(α) = min [A]^α,   a_2(α) = max [A]^α.

In other words, a_1(α) denotes the left-hand side and a_2(α) denotes the
right-hand side of the α-cut. It is easy to see that

if α ≤ β then [A]^β ⊂ [A]^α.

Furthermore, the left-hand side function

a_1 : [0, 1] → IR

is monotone increasing and lower semicontinuous, and the right-hand side
function

a_2 : [0, 1] → IR

is monotone decreasing and upper semicontinuous. We shall use the notation

[A]^α = [a_1(α), a_2(α)].

The support of A is the open interval (a_1(0), a_2(0)).

Figure 1.5a The support of A is (a_1(0), a_2(0)).

If A is not a fuzzy number then there exists an α ∈ [0, 1] such that [A]^α is
not a convex subset of IR.

Figure 1.6 Not a fuzzy number.
Definition 1.1.8 (triangular fuzzy number) A fuzzy set A is called a
triangular fuzzy number with peak (or center) a, left width α > 0 and right
width β > 0 if its membership function has the following form

A(t) = 1 − (a − t)/α   if a − α ≤ t ≤ a,
A(t) = 1 − (t − a)/β   if a ≤ t ≤ a + β,
A(t) = 0               otherwise,

and we use the notation A = (a, α, β). It can easily be verified that

[A]^γ = [a − (1 − γ)α, a + (1 − γ)β],   ∀γ ∈ [0, 1].

The support of A is (a − α, a + β).

Figure 1.7 Triangular fuzzy number.

A triangular fuzzy number with center a may be seen as a fuzzy quantity

"x is approximately equal to a".
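The following Python sketch (an illustrative addition, not part of the text) evaluates the membership function of a triangular fuzzy number A = (a, α, β) and its γ-level set.

def triangular(a, alpha, beta):
    """Membership function of the triangular fuzzy number (a, alpha, beta)."""
    def A(t):
        if a - alpha <= t <= a:
            return 1 - (a - t) / alpha
        if a <= t <= a + beta:
            return 1 - (t - a) / beta
        return 0.0
    return A

def triangular_cut(a, alpha, beta, gamma):
    """γ-cut [a - (1-γ)α, a + (1-γ)β]."""
    return (a - (1 - gamma) * alpha, a + (1 - gamma) * beta)

A = triangular(2.0, 1.0, 1.0)               # "x is approximately equal to 2"
print(A(1.5), A(2.0), A(3.5))               # 0.5 1.0 0.0
print(triangular_cut(2.0, 1.0, 1.0, 0.5))   # (1.5, 2.5)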
Definition 1.1.9 (trapezoidal fuzzy number) A fuzzy set A is called a
trapezoidal fuzzy number with tolerance interval [a, b], left width α and
right width β if its membership function has the following form

A(t) = 1 − (a − t)/α   if a − α ≤ t ≤ a,
A(t) = 1               if a ≤ t ≤ b,
A(t) = 1 − (t − b)/β   if b ≤ t ≤ b + β,
A(t) = 0               otherwise,

and we use the notation A = (a, b, α, β). It can easily be shown that

[A]^γ = [a − (1 − γ)α, b + (1 − γ)β],   ∀γ ∈ [0, 1].

The support of A is (a − α, b + β).

Figure 1.8 Trapezoidal fuzzy number.

A trapezoidal fuzzy number may be seen as a fuzzy quantity

"x is approximately in the interval [a, b]".

Definition 1.1.10 (LR-representation of fuzzy numbers) Any fuzzy number A ∈ F
can be described as

A(t) = L((a − t)/α)   if t ∈ [a − α, a],
A(t) = 1              if t ∈ [a, b],
A(t) = R((t − b)/β)   if t ∈ [b, b + β],
A(t) = 0              otherwise,

where [a, b] is the peak or core of A,

L : [0, 1] → [0, 1],   R : [0, 1] → [0, 1]

are continuous and non-increasing shape functions with L(0) = R(0) = 1 and
R(1) = L(1) = 0. We call this fuzzy interval of LR-type and refer to it by

A = (a, b, α, β)_LR.

The support of A is (a − α, b + β).

Figure 1.9 Fuzzy number of type LR with nonlinear reference functions.

Definition 1.1.11 (quasi fuzzy number of type LR) Any quasi fuzzy number
A ∈ F(IR) can be described as

A(t) = L((a − t)/α)   if t ≤ a,
A(t) = 1              if t ∈ [a, b],
A(t) = R((t − b)/β)   if t ≥ b,

where [a, b] is the peak or core of A,

L : [0, ∞) → [0, 1],   R : [0, ∞) → [0, 1]

are continuous and non-increasing shape functions with L(0) = R(0) = 1 and

lim_{t→∞} L(t) = 0,   lim_{t→∞} R(t) = 0.
Let A = (a, b, α, β)_LR be a fuzzy number of type LR. If a = b then we use the
notation

A = (a, α, β)_LR

and say that A is a quasi-triangular fuzzy number. Furthermore, if
L(x) = R(x) = 1 − x then instead of A = (a, b, α, β)_LR we simply write

A = (a, b, α, β).

Figure 1.10 Nonlinear and linear reference functions.

Definition 1.1.12 (subsethood) Let A and B be fuzzy subsets of a classical set
X. We say that A is a subset of B if A(t) ≤ B(t), ∀t ∈ X.

Figure 1.10a A is a subset of B.

Definition 1.1.13 (equality of fuzzy sets) Let A and B be fuzzy subsets of a
classical set X. A and B are said to be equal, denoted A = B, if A ⊂ B and
B ⊂ A. We note that A = B if and only if A(x) = B(x) for all x ∈ X.
Example 1.1.5 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.0/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.0/4
B = 0.1/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

It is easy to check that A ⊂ B holds.

Definition 1.1.14 (empty fuzzy set) The empty fuzzy subset of X is defined as
the fuzzy subset ∅ of X such that ∅(x) = 0 for each x ∈ X.

It is easy to see that ∅ ⊂ A holds for any fuzzy subset A of X.

Definition 1.1.15 The largest fuzzy set in X, called the universal fuzzy set
in X, denoted by 1_X, is defined by 1_X(t) = 1, ∀t ∈ X.

It is easy to see that A ⊂ 1_X holds for any fuzzy subset A of X.

Figure 1.11 The graph of the universal fuzzy subset in X = [0, 10].

Definition 1.1.16 (fuzzy point) Let A be a fuzzy number. If supp(A) = {x_0}
then A is called a fuzzy point and we use the notation A = x̄_0.

Figure 1.11a Fuzzy point.

Let A = x̄_0 be a fuzzy point. It is easy to see that
[A]^α = [x_0, x_0] = {x_0}, ∀α ∈ [0, 1].

Exercise 1.1.1 Let X = [0, 2] be the universe of discourse of the fuzzy number
A defined by the membership function A(t) = 1 − t if t ∈ [0, 1] and A(t) = 0
otherwise. Interpret A linguistically.

Exercise 1.1.2 Let A = (a, b, α, β)_LR and A' = (a', b', α', β')_LR be fuzzy
numbers of type LR. Give necessary and sufficient conditions for the
subsethood of A in A'.

Exercise 1.1.3 Let A = (a, α) be a symmetrical triangular fuzzy number.
Calculate [A]^γ as a function of a and α.

Exercise 1.1.4 Let A = (a, α, β) be a triangular fuzzy number. Calculate [A]^γ
as a function of a, α and β.

Exercise 1.1.5 Let A = (a, b, α, β) be a trapezoidal fuzzy number. Calculate
[A]^γ as a function of a, b, α and β.

Exercise 1.1.6 Let A = (a, b, α, β)_LR be a fuzzy number of type LR. Calculate
[A]^γ as a function of a, b, α, β, L and R.
1.2 Operations on fuzzy sets
In this section we extend the classical set theoretic operations from ordinary
set theory to fuzzy sets. We note that all those operations which are exten-
sions of crisp concepts reduce to their usual meaning when the fuzzy subsets
have membership degrees that are drawn from {0, 1}. For this reason, when
extending operations to fuzzy sets we use the same symbol as in set theory.
Let A and B be fuzzy subsets of a nonempty (crisp) set X.

Definition 1.2.1 (intersection) The intersection of A and B is defined as

(A ∩ B)(t) = min{A(t), B(t)} = A(t) ∧ B(t),   ∀t ∈ X.

Example 1.2.1 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.6/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Then A ∩ B has the following form

A ∩ B = 0.1/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.2/4.

Figure 1.12 Intersection of two triangular fuzzy numbers.

Definition 1.2.2 (union) The union of A and B is defined as

(A ∪ B)(t) = max{A(t), B(t)} = A(t) ∨ B(t),   ∀t ∈ X.

Example 1.2.2 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.6/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Then A ∪ B has the following form

A ∪ B = 0.6/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.4/4.

Figure 1.13 Union of two triangular fuzzy numbers.

Definition 1.2.3 (complement) The complement of a fuzzy set A is defined as

(¬A)(t) = 1 − A(t).

A closely related pair of properties which hold in ordinary set theory are the
law of excluded middle

A ∨ ¬A = X

and the law of noncontradiction

A ∧ ¬A = ∅.

It is clear that ¬1_X = ∅ and ¬∅ = 1_X; however, the laws of excluded middle
and noncontradiction are not satisfied in fuzzy logic.

Lemma 1.2.1 The law of excluded middle is not valid. Let A(t) = 1/2, ∀t ∈ IR;
then it is easy to see that

(¬A ∨ A)(t) = max{1 − 1/2, 1/2} = 1/2 ≠ 1.

Lemma 1.2.2 The law of noncontradiction is not valid. Let A(t) = 1/2, ∀t ∈ IR;
then it is easy to see that

(¬A ∧ A)(t) = min{1 − 1/2, 1/2} = 1/2 ≠ 0.

However, fuzzy logic does satisfy De Morgan's laws:

¬(A ∧ B) = ¬A ∨ ¬B,   ¬(A ∨ B) = ¬A ∧ ¬B.
Triangular norms were introduced by Schweizer and Sklar [91] to model
distances in probabilistic metric spaces. In fuzzy set theory triangular norms
are extensively used to model the logical connective and.

Definition 1.2.4 (Triangular norm.) A mapping

T : [0, 1] × [0, 1] → [0, 1]

is a triangular norm (t-norm for short) iff it is symmetric, associative,
non-decreasing in each argument and T(a, 1) = a, for all a ∈ [0, 1]. In other
words, any t-norm T satisfies the properties:

T(x, y) = T(y, x)                           (symmetricity)
T(x, T(y, z)) = T(T(x, y), z)               (associativity)
T(x, y) ≤ T(x', y') if x ≤ x' and y ≤ y'    (monotonicity)
T(x, 1) = x, ∀x ∈ [0, 1]                    (one identity)

All t-norms may be extended, through associativity, to n > 2 arguments. The
t-norm MIN is automatically extended and

PAND(a_1, ..., a_n) = a_1 a_2 ⋯ a_n,

LAND(a_1, ..., a_n) = max{ a_1 + ... + a_n − n + 1, 0 }.

A t-norm T is called strict if T is strictly increasing in each argument.

minimum            MIN(a, b) = min{a, b}
Lukasiewicz        LAND(a, b) = max{a + b − 1, 0}
probabilistic      PAND(a, b) = ab
weak               WEAK(a, b) = min{a, b} if max{a, b} = 1, and 0 otherwise
Hamacher           HAND_γ(a, b) = ab / (γ + (1 − γ)(a + b − ab)), γ ≥ 0
Dubois and Prade   DAND_α(a, b) = ab / max{a, b, α}, α ∈ (0, 1)
Yager              YAND_p(a, b) = 1 − min{1, [(1 − a)^p + (1 − b)^p]^(1/p)}, p > 0

Table 1.1 Basic t-norms.
Triangular conorms are extensively used to model the logical connective or.

Definition 1.2.5 (Triangular conorm.) A mapping

S : [0, 1] × [0, 1] → [0, 1]

is a triangular co-norm (t-conorm for short) if it is symmetric, associative,
non-decreasing in each argument and S(a, 0) = a, for all a ∈ [0, 1]. In other
words, any t-conorm S satisfies the properties:

S(x, y) = S(y, x)                           (symmetricity)
S(x, S(y, z)) = S(S(x, y), z)               (associativity)
S(x, y) ≤ S(x', y') if x ≤ x' and y ≤ y'    (monotonicity)
S(x, 0) = x, ∀x ∈ [0, 1]                    (zero identity)

If T is a t-norm then the equality S(a, b) := 1 − T(1 − a, 1 − b) defines a
t-conorm and we say that S is derived from T.

maximum        MAX(a, b) = max{a, b}
Lukasiewicz    LOR(a, b) = min{a + b, 1}
probabilistic  POR(a, b) = a + b − ab
strong         STRONG(a, b) = max{a, b} if min{a, b} = 0, and 1 otherwise
Hamacher       HOR_γ(a, b) = (a + b − (2 − γ)ab) / (1 − (1 − γ)ab), γ ≥ 0
Yager          YOR_p(a, b) = min{1, (a^p + b^p)^(1/p)}, p > 0

Table 1.2 Basic t-conorms.
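The basic t-norms and t-conorms of Tables 1.1 and 1.2 are straightforward to implement; the short Python sketch below collects a few of them for illustration (the particular test values are arbitrary).

t_norms = {
    "min":  lambda a, b: min(a, b),                  # MIN
    "land": lambda a, b: max(a + b - 1.0, 0.0),      # Lukasiewicz
    "pand": lambda a, b: a * b,                      # probabilistic
}
t_conorms = {
    "max": lambda a, b: max(a, b),                   # MAX
    "lor": lambda a, b: min(a + b, 1.0),             # Lukasiewicz
    "por": lambda a, b: a + b - a * b,               # probabilistic
}

a, b = 0.6, 0.7
for name, T in t_norms.items():
    print(name, T(a, b))     # every t-norm value lies below min{a, b} (Lemma 1.2.3)
for name, S in t_conorms.items():
    print(name, S(a, b))     # every t-conorm value lies above max{a, b} (Lemma 1.2.4)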
Lemma 1.2.3 Let T be a t-norm. Then the following statement holds:

WEAK(x, y) ≤ T(x, y) ≤ min{x, y},   ∀x, y ∈ [0, 1].

Proof. From monotonicity, symmetricity and the extremal condition we get

T(x, y) ≤ T(x, 1) ≤ x,   T(x, y) = T(y, x) ≤ T(y, 1) ≤ y.

This means that T(x, y) ≤ min{x, y}.

Lemma 1.2.4 Let S be a t-conorm. Then the following statement holds:

max{a, b} ≤ S(a, b) ≤ STRONG(a, b),   ∀a, b ∈ [0, 1].

Proof. From monotonicity, symmetricity and the extremal condition we get

S(x, y) ≥ S(x, 0) ≥ x,   S(x, y) = S(y, x) ≥ S(y, 0) ≥ y.

This means that S(x, y) ≥ max{x, y}.

Lemma 1.2.5 T(a, a) = a holds for any a ∈ [0, 1] if and only if T is the
minimum norm.

Proof. If T(a, b) = MIN(a, b) then T(a, a) = a holds obviously. Suppose
T(a, a) = a for any a ∈ [0, 1], and let a ≤ b ≤ 1. We can obtain the following
expression using monotonicity of T:

a = T(a, a) ≤ T(a, b) ≤ min{a, b}.

From commutativity of T it follows that

a = T(a, a) ≤ T(b, a) ≤ min{b, a}.

These inequalities show that T(a, b) = min{a, b} for any a, b ∈ [0, 1].

Lemma 1.2.6 The distributive law of T on the max operator holds for any
a, b, c ∈ [0, 1]:

T(max{a, b}, c) = max{T(a, c), T(b, c)}.

The operation intersection can be defined with the help of triangular norms.

Definition 1.2.6 (t-norm-based intersection) Let T be a t-norm. The
T-intersection of A and B is defined as

(A ∩ B)(t) = T(A(t), B(t)),   ∀t ∈ X.

Example 1.2.3 Let T(x, y) = max{x + y − 1, 0} be the Lukasiewicz t-norm. Then
we have

(A ∩ B)(t) = max{A(t) + B(t) − 1, 0},   ∀t ∈ X.

Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.0/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.0/4
B = 0.1/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Then A ∩ B has the following form

A ∩ B = 0.0/(−2) + 0.0/(−1) + 0.5/0 + 1.0/1 + 0.6/2 + 0.0/3 + 0.0/4.

The operation union can be defined with the help of triangular conorms.

Definition 1.2.7 (t-conorm-based union) Let S be a t-conorm. The S-union of A
and B is defined as

(A ∪ B)(t) = S(A(t), B(t)),   ∀t ∈ X.

Example 1.2.4 Let S(x, y) = min{x + y, 1} be the Lukasiewicz t-conorm. Then we
have

(A ∪ B)(t) = min{A(t) + B(t), 1},   ∀t ∈ X.

Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.0/(−2) + 0.3/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.0/4
B = 0.1/(−2) + 0.3/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Then A ∪ B has the following form

A ∪ B = 0.1/(−2) + 0.6/(−1) + 1.0/0 + 1.0/1 + 1.0/2 + 0.6/3 + 0.2/4.

In general, the law of excluded middle and the noncontradiction principle are
not satisfied by the t-norms and t-conorms defining the intersection and union
operations. However, the Lukasiewicz t-norm and t-conorm do satisfy these
properties.

Lemma 1.2.7 If T(x, y) = LAND(x, y) = max{x + y − 1, 0} then the law of
noncontradiction is valid.

Proof. Let A be a fuzzy set in X. Then from the definition of t-norm-based
intersection we get

(A ∩ ¬A)(t) = LAND(A(t), 1 − A(t)) = max{A(t) + 1 − A(t) − 1, 0} = 0,   ∀t ∈ X.

Lemma 1.2.8 If S(x, y) = LOR(x, y) = min{1, x + y} then the law of excluded
middle is valid.

Proof. Let A be a fuzzy set in X. Then from the definition of t-conorm-based
union we get

(A ∪ ¬A)(t) = LOR(A(t), 1 − A(t)) = min{A(t) + 1 − A(t), 1} = 1,   ∀t ∈ X.

Exercise 1.2.1 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.5/(−2) + 0.4/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.7/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Suppose that their intersection is defined by the probabilistic t-norm
PAND(a, b) = ab. What is then the membership function of A ∩ B?

Exercise 1.2.2 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.5/(−2) + 0.4/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.7/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Suppose that their union is defined by the probabilistic t-conorm
POR(a, b) = a + b − ab. What is then the membership function of A ∪ B?

Exercise 1.2.3 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.7/(−2) + 0.4/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.2/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Suppose that their intersection is defined by Hamacher's t-norm with γ = 0.
What is then the membership function of A ∩ B?

Exercise 1.2.4 Let A and B be fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4}.

A = 0.7/(−2) + 0.4/(−1) + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4
B = 0.1/(−2) + 0.2/(−1) + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4

Suppose that their union is defined by Hamacher's t-conorm with γ = 0. What is
then the membership function of A ∪ B?

Exercise 1.2.5 Show that if γ ≤ γ' then HAND_γ(x, y) ≥ HAND_γ'(x, y) holds for
all x, y ∈ [0, 1], i.e. the family HAND_γ is monotone decreasing in γ.
1.3 Fuzzy relations
A classical relation can be considered as a set of tuples, where a tuple is an
ordered pair. A binary tuple is denoted by (u, v), an example of a ternary
tuple is (u, v, w) and an example of an n-ary tuple is (x_1, ..., x_n).

Definition 1.3.1 (classical n-ary relation) Let X_1, ..., X_n be classical
sets. The subsets of the Cartesian product X_1 × ⋯ × X_n are called n-ary
relations. If X_1 = ⋯ = X_n and R ⊂ X^n then R is called an n-ary relation
in X.

Let R be a binary relation in IR. Then the characteristic function of R is
defined as

χ_R(u, v) = 1 if (u, v) ∈ R, and 0 otherwise.

Example 1.3.1 Let X be the domain of men {John, Charles, James} and Y the
domain of women {Diana, Rita, Eva}; then the relation "married to" on X × Y
is, for example,

{(Charles, Diana), (John, Eva), (James, Rita)}.

Example 1.3.2 Consider the following relation: (u, v) ∈ R iff u ∈ [a, b] and
v ∈ [0, c]:

χ_R(u, v) = 1 if (u, v) ∈ [a, b] × [0, c], and 0 otherwise.

Figure 1.14 Graph of a crisp relation.

Let R be a binary relation in a classical set U. Then

Definition 1.3.2 (reflexivity) R is reflexive if ∀u ∈ U : (u, u) ∈ R.

Definition 1.3.3 (anti-reflexivity) R is anti-reflexive if ∀u ∈ U : (u, u) ∉ R.

Definition 1.3.4 (symmetricity) R is symmetric if (u, v) ∈ R implies
(v, u) ∈ R.

Definition 1.3.5 (anti-symmetricity) R is anti-symmetric if (u, v) ∈ R and
(v, u) ∈ R imply u = v.

Definition 1.3.6 (transitivity) R is transitive if (u, v) ∈ R and (v, w) ∈ R
imply (u, w) ∈ R, ∀u, v, w ∈ U.

Example 1.3.3 Consider the classical inequality relations on the real line IR.
It is clear that ≤ is reflexive, anti-symmetric and transitive, while < is
anti-reflexive, anti-symmetric and transitive.

Other important classes of binary relations are the following:

Definition 1.3.7 (equivalence) R is an equivalence relation if R is reflexive,
symmetric and transitive.

Definition 1.3.8 (partial order) R is a partial order relation if it is
reflexive, anti-symmetric and transitive.

Definition 1.3.9 (total order) R is a total order relation if it is a partial
order and (u, v) ∈ R or (v, u) ∈ R hold for any u and v.

Example 1.3.4 Let us consider the binary relation "subset of". It is clear
that it is a partial order relation. The relation ≤ on natural numbers is a
total order relation.

Example 1.3.5 Consider the relation "mod 3" on natural numbers

{(m, n) | (n − m) mod 3 ≡ 0}.

This is an equivalence relation.

Definition 1.3.10 (fuzzy relation) Let X and Y be nonempty sets. A fuzzy
relation R is a fuzzy subset of X × Y. In other words, R ∈ F(X × Y). If X = Y
then we say that R is a binary fuzzy relation in X.

Let R be a binary fuzzy relation on IR. Then R(u, v) is interpreted as the
degree of membership of (u, v) in R.

Example 1.3.6 A simple example of a binary fuzzy relation on U = {1, 2, 3},
called "approximately equal", can be defined as

R(1, 1) = R(2, 2) = R(3, 3) = 1,
R(1, 2) = R(2, 1) = R(2, 3) = R(3, 2) = 0.8,
R(1, 3) = R(3, 1) = 0.3.

The membership function of R is given by

R(u, v) = 1 if u = v;  0.8 if |u − v| = 1;  0.3 if |u − v| = 2.

In matrix notation it can be represented as

R =
        1     2     3
  1     1    0.8   0.3
  2    0.8    1    0.8
  3    0.3   0.8    1
Fuzzy relations are very important because they can describe interactions
between variables. Let R and G be two binary fuzzy relations on X × Y.

Definition 1.3.11 (intersection) The intersection of R and G is defined by

(R ∩ G)(u, v) = min{R(u, v), G(u, v)} = R(u, v) ∧ G(u, v),   (u, v) ∈ X × Y.

Note that R : X × Y → [0, 1], i.e. the domain of R is the whole Cartesian
product X × Y.

Definition 1.3.12 (union) The union of R and G is defined by

(R ∪ G)(u, v) = max{R(u, v), G(u, v)} = R(u, v) ∨ G(u, v),   (u, v) ∈ X × Y.

Example 1.3.7 Let us define two binary relations R = "x is considerably
smaller than y" and G = "x is very close to y":

R =
        y1    y2    y3    y4
  x1   0.5   0.1   0.1   0.7
  x2    0    0.8    0     0
  x3   0.9    1    0.7   0.8

G =
        y1    y2    y3    y4
  x1   0.4    0    0.9   0.6
  x2   0.9   0.4   0.5   0.7
  x3   0.3    0    0.8   0.5

The intersection of R and G means that "x is considerably smaller than y" and
"x is very close to y":

(R ∩ G)(x, y) =
        y1    y2    y3    y4
  x1   0.4    0    0.1   0.6
  x2    0    0.4    0     0
  x3   0.3    0    0.7   0.5

The union of R and G means that "x is considerably smaller than y" or "x is
very close to y":

(R ∪ G)(x, y) =
        y1    y2    y3    y4
  x1   0.5   0.1   0.9   0.7
  x2   0.9   0.8   0.5   0.7
  x3   0.9    1    0.8   0.8
Consider a classical relation R on IR:

R(u, v) = 1 if (u, v) ∈ [a, b] × [0, c], and 0 otherwise.

It is clear that the projection (or shadow) of R on the X-axis is the closed
interval [a, b] and its projection on the Y-axis is [0, c].

Definition 1.3.13 (projection of classical relations) Let R be a classical
relation on X × Y. The projection of R on X, denoted by Π_X(R), is defined as

Π_X(R) = {x ∈ X | ∃y ∈ Y such that (x, y) ∈ R};

similarly, the projection of R on Y, denoted by Π_Y(R), is defined as

Π_Y(R) = {y ∈ Y | ∃x ∈ X such that (x, y) ∈ R}.

Definition 1.3.14 (projection of binary fuzzy relations) Let R be a binary
fuzzy relation on X × Y. The projection of R on X is a fuzzy subset of X,
denoted by Π_X(R), defined as

Π_X(R)(x) = sup{R(x, y) | y ∈ Y},

and the projection of R on Y is a fuzzy subset of Y, denoted by Π_Y(R),
defined as

Π_Y(R)(y) = sup{R(x, y) | x ∈ X}.

If R is fixed then instead of Π_X(R)(x) we write simply Π_X(x).

Example 1.3.8 Consider the fuzzy relation R = "x is considerably smaller
than y":

R =
        y1    y2    y3    y4
  x1   0.5   0.1   0.1   0.7
  x2    0    0.8    0     0
  x3   0.9    1    0.7   0.8

Then the projection on X means that

• x1 is assigned the highest membership degree from the tuples (x1, y1), (x1, y2), (x1, y3), (x1, y4), i.e. Π_X(x1) = 0.7, which is the maximum of the first row;
• x2 is assigned the highest membership degree from the tuples (x2, y1), (x2, y2), (x2, y3), (x2, y4), i.e. Π_X(x2) = 0.8, which is the maximum of the second row;
• x3 is assigned the highest membership degree from the tuples (x3, y1), (x3, y2), (x3, y3), (x3, y4), i.e. Π_X(x3) = 1, which is the maximum of the third row.
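Computationally, the projection of a finite fuzzy relation is just a row-wise (or column-wise) maximum; the following illustrative Python sketch reproduces the projections of the relation in Example 1.3.8.

R = [
    [0.5, 0.1, 0.1, 0.7],   # row x1
    [0.0, 0.8, 0.0, 0.0],   # row x2
    [0.9, 1.0, 0.7, 0.8],   # row x3
]

proj_X = [max(row) for row in R]                                        # sup over y of R(x, y)
proj_Y = [max(R[i][j] for i in range(len(R))) for j in range(len(R[0]))]  # sup over x of R(x, y)

print(proj_X)   # [0.7, 0.8, 1.0]
print(proj_Y)   # [0.9, 1.0, 0.7, 0.8]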
Definition 1.3.15 (Cartesian product of fuzzy sets) The Cartesian product of
two fuzzy sets A ∈ F(X) and B ∈ F(Y) is defined by

(A × B)(u, v) = min{A(u), B(v)},   (u, v) ∈ X × Y.

It is clear that the Cartesian product of two fuzzy sets A ∈ F(X) and
B ∈ F(Y) is a binary fuzzy relation in X × Y, i.e.

A × B ∈ F(X × Y).

Figure 1.15 Cartesian product of fuzzy numbers A and B.

Assume A and B are normal fuzzy sets. An interesting property of A × B is that

Π_Y(A × B) = B   and   Π_X(A × B) = A.

Really,

Π_X(x) = sup{(A × B)(x, y) | y ∈ Y} = sup{min{A(x), B(y)} | y ∈ Y}
       = min{A(x), sup{B(y) | y ∈ Y}} = A(x).

Similarly to the one-dimensional case, intersection and union operations on
fuzzy relations can be defined via t-norms and t-conorms, respectively.

Definition 1.3.16 (t-norm-based intersection) Let T be a t-norm and let R and
G be binary fuzzy relations in X × Y. Their T-intersection is defined by

(R ∩ G)(u, v) = T(R(u, v), G(u, v)),   (u, v) ∈ X × Y.

Definition 1.3.17 (t-conorm-based union) Let S be a t-conorm and let R and G
be binary fuzzy relations in X × Y. Their S-union is defined by

(R ∪ G)(u, v) = S(R(u, v), G(u, v)),   (u, v) ∈ X × Y.

Definition 1.3.18 (sup-min composition) Let R ∈ F(X × Y) and G ∈ F(Y × Z). The
sup-min composition of R and G, denoted by R ∘ G, is defined as

(R ∘ G)(u, w) = sup_{v∈Y} min{R(u, v), G(v, w)}.

It is clear that R ∘ G is a binary fuzzy relation in X × Z.

Example 1.3.9 Consider two fuzzy relations R = "x is considerably smaller
than y" and G = "y is very close to z":

R =
        y1    y2    y3    y4
  x1   0.5   0.1   0.1   0.7
  x2    0    0.8    0     0
  x3   0.9    1    0.7   0.8

G =
        z1    z2    z3
  y1   0.4   0.9   0.3
  y2    0    0.4    0
  y3   0.9   0.5   0.8
  y4   0.6   0.7   0.5

Then their sup-min composition is

R ∘ G =
        z1    z2    z3
  x1   0.6   0.7   0.5
  x2    0    0.4    0
  x3   0.7   0.9   0.7

i.e., the composition of R and G is nothing else but the classical product of
the matrices R and G, with the difference that instead of addition we use
maximum and instead of multiplication we use the minimum operator. For
example,

(R ∘ G)(x1, z1) = max{0.5 ∧ 0.4, 0.1 ∧ 0, 0.1 ∧ 0.9, 0.7 ∧ 0.6} = 0.6,
(R ∘ G)(x1, z2) = max{0.5 ∧ 0.9, 0.1 ∧ 0.4, 0.1 ∧ 0.5, 0.7 ∧ 0.7} = 0.7,
(R ∘ G)(x1, z3) = max{0.5 ∧ 0.3, 0.1 ∧ 0, 0.1 ∧ 0.8, 0.7 ∧ 0.5} = 0.5.
Definition 1.3.19 (sup-T composition) Let T be a t-norm and let R ∈ F(X × Y)
and G ∈ F(Y × Z). The sup-T composition of R and G, denoted by R ∘ G, is
defined as

(R ∘ G)(u, w) = sup_{v∈Y} T(R(u, v), G(v, w)).

Following Zadeh [115] we can define the sup-min composition of a fuzzy set and
a fuzzy relation as follows.

Definition 1.3.20 Let C ∈ F(X) and R ∈ F(X × Y). The membership function of
the composition of a fuzzy set C and a fuzzy relation R is defined by

(C ∘ R)(y) = sup_{x∈X} min{C(x), R(x, y)},   ∀y ∈ Y.

The composition of a fuzzy set C and a fuzzy relation R can be considered as
the shadow of the relation R on the fuzzy set C.

Figure 1.16 Composition of a fuzzy number and a fuzzy relation.

In the above definition we can use any t-norm for modeling the compositional
operator.

Definition 1.3.21 Let T be a t-norm, C ∈ F(X) and R ∈ F(X × Y). The membership
function of the composition of a fuzzy set C and a fuzzy relation R is defined
by

(C ∘ R)(y) = sup_{x∈X} T(C(x), R(x, y)),   ∀y ∈ Y.

For example, if PAND(x, y) = xy is the product t-norm then the sup-T
composition of a fuzzy set C and a fuzzy relation R is defined by

(C ∘ R)(y) = sup_{x∈X} PAND(C(x), R(x, y)) = sup_{x∈X} C(x)R(x, y),

and if LAND(x, y) = max{0, x + y − 1} is the Lukasiewicz t-norm then we get

(C ∘ R)(y) = sup_{x∈X} LAND(C(x), R(x, y)) = sup_{x∈X} max{0, C(x) + R(x, y) − 1}

for all y ∈ Y.

Example 1.3.10 Let A and B be fuzzy numbers and let R = A × B be a fuzzy
relation. Observe the following property of composition:

A ∘ R = A ∘ (A × B) = B,   B ∘ R = B ∘ (A × B) = A.

This fact can be interpreted as follows: if A and B are related by A × B, then
the composition of A and A × B is exactly B, and the composition of B and
A × B is exactly A.

Example 1.3.11 Let C be a fuzzy set in the universe of discourse {1, 2, 3} and
let R be a binary fuzzy relation in {1, 2, 3}. Assume that
C = 0.2/1 + 1/2 + 0.2/3 and

R =
        1     2     3
  1     1    0.8   0.3
  2    0.8    1    0.8
  3    0.3   0.8    1

Using Definition 1.3.20 we get

C ∘ R = (0.2/1 + 1/2 + 0.2/3) ∘ R = 0.8/1 + 1/2 + 0.8/3.

Example 1.3.12 Let C be a fuzzy set in the universe of discourse [0, 1] and
let R be a binary fuzzy relation in [0, 1]. Assume that C(x) = x and
R(x, y) = 1 − |x − y|. Using the definition of sup-min composition (1.3.20) we
get

(C ∘ R)(y) = sup_{x∈[0,1]} min{x, 1 − |x − y|} = (1 + y)/2

for all y ∈ [0, 1].
Example 1.3.13 Let C be a fuzzy set in the universe of discourse {1, 2, 3} and
let R be a binary fuzzy relation in {1, 2, 3}. Assume that
C = 1/1 + 0.2/2 + 1/3 and

R =
        1     2     3
  1    0.4   0.8   0.3
  2    0.8   0.4   0.8
  3    0.3   0.8    0

Then the sup-PAND composition of C and R is calculated by

C ∘ R = (1/1 + 0.2/2 + 1/3) ∘ R = 0.4/1 + 0.8/2 + 0.3/3.
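The same computation works for the composition of a finite fuzzy set with a fuzzy relation under any t-norm; the following Python sketch, added only for illustration, reproduces the sup-min result of Example 1.3.11 and the sup-product result of Example 1.3.13.

def compose_set_relation(C, R, t_norm):
    """(C o R)(y) = sup_x T(C(x), R(x, y)) for finite universes."""
    n_cols = len(R[0])
    return [max(t_norm(C[i], R[i][j]) for i in range(len(C))) for j in range(n_cols)]

R1 = [[1.0, 0.8, 0.3], [0.8, 1.0, 0.8], [0.3, 0.8, 1.0]]
print(compose_set_relation([0.2, 1.0, 0.2], R1, min))                   # [0.8, 1.0, 0.8]

R2 = [[0.4, 0.8, 0.3], [0.8, 0.4, 0.8], [0.3, 0.8, 0.0]]
print(compose_set_relation([1.0, 0.2, 1.0], R2, lambda a, b: a * b))    # [0.4, 0.8, 0.3]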
1.3.1 The extension principle
In order to use fuzzy numbers and relations in any intelligent system we must
be able to perform arithmetic operations with these fuzzy quantities. In
particular, we must be able to add, subtract, multiply and divide with fuzzy
quantities. The process of doing these operations is called fuzzy arithmetic.

We shall first introduce an important concept from fuzzy set theory called the
extension principle. We then use it to provide for these arithmetic operations
on fuzzy numbers.

In general the extension principle plays a fundamental role in enabling us to
extend any point operations to operations involving fuzzy sets. In the
following we define this principle.

Definition 1.3.22 (extension principle) Assume X and Y are crisp sets and let
f be a mapping from X to Y,

f : X → Y,

such that for each x ∈ X, f(x) = y ∈ Y. Assume A is a fuzzy subset of X;
using the extension principle, we can define f(A) as a fuzzy subset of Y such
that

f(A)(y) = sup_{x ∈ f⁻¹(y)} A(x)   if f⁻¹(y) ≠ ∅,
f(A)(y) = 0                        otherwise,                          (1.1)

where f⁻¹(y) = {x ∈ X | f(x) = y}.

It should be noted that if f is strictly increasing (or strictly decreasing)
then (1.1) turns into

f(A)(y) = A(f⁻¹(y))   if y ∈ Range(f),
f(A)(y) = 0           otherwise,

where Range(f) = {y ∈ Y | ∃x ∈ X such that f(x) = y}.
Figure 1.17 Extension of a monotone increasing function.
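Numerically, the extension principle can be approximated by sampling the domain, applying f pointwise and, for each image value, taking the maximum of the membership grades that map to it. The Python sketch below (an illustrative addition with an arbitrary discretization) does this for f(x) = x² and a symmetric triangular fuzzy number.

def extend(f, xs, A, decimals=6):
    """Approximate f(A)(y) = sup{A(x) : f(x) = y} on a sampled domain xs."""
    image = {}
    for x in xs:
        y = round(f(x), decimals)                 # group x's with (numerically) equal images
        image[y] = max(image.get(y, 0.0), A(x))
    return image

A = lambda x: max(0.0, 1.0 - abs(x - 1.0))        # triangular fuzzy number (1, 1, 1)
xs = [i / 100.0 for i in range(-100, 301)]        # sample of the region around the support
fA = extend(lambda x: x * x, xs, A)
print(fA[1.0], fA[4.0])                           # approximately 1.0 at y = 1, 0.0 at y = 4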
Example 1.3.14 Let f(x) = x² and let A ∈ F be a symmetric triangular fuzzy
number with membership function

A(x) = 1 − |a − x|/α   if |a − x| ≤ α,
A(x) = 0               otherwise.

Then using the extension principle we get

f(A)(y) = A(√y) if y ≥ 0, and 0 otherwise,

that is

f(A)(y) = 1 − |a − √y|/α   if |a − √y| ≤ α and y ≥ 0,
f(A)(y) = 0                otherwise.

Figure 1.18 The quadratic image of a symmetric triangular fuzzy number.

Example 1.3.15 Let f(x) = 1/(1 + e^(−x)) be a sigmoidal function and let A be
a fuzzy number. Then from

f⁻¹(y) = ln(y/(1 − y))   if 0 ≤ y ≤ 1,
f⁻¹(y) = 0               otherwise,

it follows that

f(A)(y) = A(ln(y/(1 − y)))   if 0 ≤ y ≤ 1,
f(A)(y) = 0                  otherwise.

Example 1.3.16 Let λ ≠ 0 be a real number and let f(x) = λx be a linear
function. Suppose A ∈ F is a fuzzy number. Then using the extension principle
we obtain

f(A)(y) = sup{A(x) | λx = y} = A(y/λ).

Figure 1.19 The fuzzy number λA for λ = 2.

For λ = 0 we get

f(A)(y) = (0 · A)(y) = sup{A(x) | 0 · x = y} = 0 if y ≠ 0, and 1 if y = 0.

That is, 0 · A = 0̄ for all A ∈ F.

Figure 1.20 0 · A is equal to 0̄.

If f(x) = λx and A ∈ F then we will write f(A) = λA. Especially, if λ = −1
then we have

(−1A)(x) = (−A)(x) = A(−x),   ∀x ∈ IR.

Figure 1.21 Fuzzy number A and −A.
The extension principle can be generalized to n-place functions.

Definition 1.3.23 (sup-min extension of n-place functions) Let
X_1, X_2, ..., X_n and Y be a family of sets. Assume f is a mapping from the
Cartesian product X_1 × X_2 × ⋯ × X_n into Y, that is, for each n-tuple
(x_1, ..., x_n) such that x_i ∈ X_i, we have

f(x_1, x_2, ..., x_n) = y ∈ Y.

Let A_1, ..., A_n be fuzzy subsets of X_1, ..., X_n, respectively; then the
extension principle allows for the evaluation of f(A_1, ..., A_n). In
particular, f(A_1, ..., A_n) = B, where B is a fuzzy subset of Y such that

f(A_1, ..., A_n)(y) = sup{ min{A_1(x_1), ..., A_n(x_n)} | x ∈ f⁻¹(y) }   if f⁻¹(y) ≠ ∅,
f(A_1, ..., A_n)(y) = 0   otherwise.

For n = 2 the extension principle reads

f(A_1, A_2)(y) = sup{ A_1(x_1) ∧ A_2(x_2) | f(x_1, x_2) = y }.

Example 1.3.17 (extended addition) Let f : X × X → X be defined as

f(x_1, x_2) = x_1 + x_2,

i.e. f is the addition operator. Suppose A_1 and A_2 are fuzzy subsets of X.
Then using the extension principle we get

f(A_1, A_2)(y) = sup_{x_1+x_2=y} min{A_1(x_1), A_2(x_2)},

and we use the notation f(A_1, A_2) = A_1 + A_2.

Example 1.3.18 (extended subtraction) Let f : X × X → X be defined as

f(x_1, x_2) = x_1 − x_2,

i.e. f is the subtraction operator. Suppose A_1 and A_2 are fuzzy subsets of
X. Then using the extension principle we get

f(A_1, A_2)(y) = sup_{x_1−x_2=y} min{A_1(x_1), A_2(x_2)},

and we use the notation f(A_1, A_2) = A_1 − A_2.
We note that from the equality

sup_{x_1−x_2=y} min{A_1(x_1), A_2(x_2)} = sup_{x_1+x_2=y} min{A_1(x_1), (−A_2)(x_2)}

it follows that A_1 − A_2 = A_1 + (−A_2) holds. However, if A ∈ F is a fuzzy
number then

(A − A)(y) = sup_{x_1−x_2=y} min{A(x_1), A(x_2)},   y ∈ IR,

is not equal to the fuzzy number 0̄, where 0̄(t) = 1 if t = 0 and 0̄(t) = 0
otherwise.

Figure 1.22 The membership function of A − A.
Example 1.3.19 Let f : X × X → X be defined as

f(x_1, x_2) = λ_1 x_1 + λ_2 x_2.

Suppose A_1 and A_2 are fuzzy subsets of X. Then using the extension principle
we get

f(A_1, A_2)(y) = sup_{λ_1 x_1 + λ_2 x_2 = y} min{A_1(x_1), A_2(x_2)},

and we use the notation f(A_1, A_2) = λ_1 A_1 + λ_2 A_2.

Example 1.3.20 (extended multiplication) Let f : X × X → X be defined as

f(x_1, x_2) = x_1 x_2,

i.e. f is the multiplication operator. Suppose A_1 and A_2 are fuzzy subsets
of X. Then using the extension principle we get

f(A_1, A_2)(y) = sup_{x_1 x_2 = y} min{A_1(x_1), A_2(x_2)},

and we use the notation f(A_1, A_2) = A_1 A_2.

Example 1.3.21 (extended division) Let f : X × X → X be defined as

f(x_1, x_2) = x_1 / x_2,

i.e. f is the division operator. Suppose A_1 and A_2 are fuzzy subsets of X.
Then using the extension principle we get

f(A_1, A_2)(y) = sup_{x_1/x_2 = y, x_2 ≠ 0} min{A_1(x_1), A_2(x_2)},

and we use the notation f(A_1, A_2) = A_1 / A_2.
Definition 1.3.24 Let X ≠ ∅ and Y ≠ ∅ be crisp sets and let f be a function
from F(X) to F(Y). Then f is called a fuzzy function (or mapping) and we use
the notation

f : F(X) → F(Y).

It should be noted, however, that a fuzzy function is not necessarily defined
by Zadeh's extension principle. It can be any function which maps a fuzzy set
A ∈ F(X) into a fuzzy set B := f(A) ∈ F(Y).

Definition 1.3.25 Let X ≠ ∅ and Y ≠ ∅ be crisp sets. A fuzzy mapping
f : F(X) → F(Y) is said to be monotone increasing if A, A' ∈ F(X) and A ⊂ A'
imply f(A) ⊂ f(A').

Theorem 1.3.1 Let X ≠ ∅ and Y ≠ ∅ be crisp sets. Then every fuzzy mapping
f : F(X) → F(Y) defined by the extension principle is monotone increasing.

Proof. Let A, A' ∈ F(X) such that A ⊂ A'. Then using the definition of the
sup-min extension principle we get

f(A)(y) = sup_{x ∈ f⁻¹(y)} A(x) ≤ sup_{x ∈ f⁻¹(y)} A'(x) = f(A')(y)

for all y ∈ Y.

Lemma 1.3.1 Let A, B ∈ F be fuzzy numbers and let f(A, B) = A + B be defined
by the sup-min extension principle. Then f is monotone increasing.

Proof. Let A, A', B, B' ∈ F such that A ⊂ A' and B ⊂ B'. Then using the
definition of the sup-min extension principle we get

(A + B)(z) = sup_{x+y=z} min{A(x), B(y)} ≤ sup_{x+y=z} min{A'(x), B'(y)} = (A' + B')(z),

which ends the proof.

The following lemma can be proved in a similar way.

Lemma 1.3.2 Let A, B ∈ F be fuzzy numbers, let λ_1, λ_2 be real numbers and
let

f(A, B) = λ_1 A + λ_2 B

be defined by the sup-min extension principle. Then f is a monotone increasing
fuzzy function.
Let A = (a_1, a_2, α_1, α_2)_LR and B = (b_1, b_2, β_1, β_2)_LR be fuzzy
numbers of LR-type. Using the (sup-min) extension principle we can verify the
following rules for addition and subtraction of fuzzy numbers of LR-type:

A + B = (a_1 + b_1, a_2 + b_2, α_1 + β_1, α_2 + β_2)_LR,
A − B = (a_1 − b_2, a_2 − b_1, α_1 + β_2, α_2 + β_1)_LR.

Furthermore, if λ ∈ IR is a real number then λA can be represented as

λA = (λa_1, λa_2, λα_1, λα_2)_LR         if λ ≥ 0,
λA = (λa_2, λa_1, |λ|α_2, |λ|α_1)_LR     if λ < 0.

In particular, if A = (a_1, a_2, α_1, α_2) and B = (b_1, b_2, β_1, β_2) are
fuzzy numbers of trapezoidal form then

A + B = (a_1 + b_1, a_2 + b_2, α_1 + β_1, α_2 + β_2),
A − B = (a_1 − b_2, a_2 − b_1, α_1 + β_2, α_2 + β_1).

If A = (a, α_1, α_2) and B = (b, β_1, β_2) are fuzzy numbers of triangular
form then

A + B = (a + b, α_1 + β_1, α_2 + β_2),
A − B = (a − b, α_1 + β_2, α_2 + β_1),

and if A = (a, α) and B = (b, β) are fuzzy numbers of symmetrical triangular
form then

A + B = (a + b, α + β),
A − B = (a − b, α + β),
λA = (λa, |λ|α).

The above results can be generalized to linear combinations of fuzzy numbers.
Lemma 1.3.3 Let A_i = (a_i, α_i) be a fuzzy number of symmetrical triangular
form and let λ_i be a real number, i = 1, ..., n. Then their linear
combination

λ_1 A_1 + ⋯ + λ_n A_n

can be represented as

λ_1 A_1 + ⋯ + λ_n A_n = (λ_1 a_1 + ⋯ + λ_n a_n, |λ_1|α_1 + ⋯ + |λ_n|α_n).

Figure 1.23 Addition of triangular fuzzy numbers.
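These rules are straightforward to code; the Python sketch below (illustrative only) adds and scales symmetric triangular fuzzy numbers represented as (center, width) pairs.

def add(A, B):
    """(a, alpha) + (b, beta) = (a + b, alpha + beta) for symmetric triangular fuzzy numbers."""
    return (A[0] + B[0], A[1] + B[1])

def scale(lam, A):
    """lambda * (a, alpha) = (lambda * a, |lambda| * alpha)."""
    return (lam * A[0], abs(lam) * A[1])

A, B = (2.0, 1.0), (3.0, 0.5)
print(add(A, B))                 # (5.0, 1.5)
print(scale(-2.0, A))            # (-4.0, 2.0)
print(add(A, scale(-1.0, A)))    # (0.0, 2.0): A - A is not the crisp number 0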
Assume A_i = (a_i, α), i = 1, ..., n, are fuzzy numbers of symmetrical
triangular form and λ_i ∈ [0, 1] such that λ_1 + ... + λ_n = 1. Then their
convex linear combination can be represented as

λ_1 A_1 + ⋯ + λ_n A_n = (λ_1 a_1 + ⋯ + λ_n a_n, λ_1 α + ⋯ + λ_n α)
                      = (λ_1 a_1 + ⋯ + λ_n a_n, α).

Let A and B be fuzzy numbers with [A]^α = [a_1(α), a_2(α)] and
[B]^α = [b_1(α), b_2(α)]. Then it can easily be shown that

[A + B]^α = [a_1(α) + b_1(α), a_2(α) + b_2(α)],
[−A]^α = [−a_2(α), −a_1(α)],
[A − B]^α = [a_1(α) − b_2(α), a_2(α) − b_1(α)],
[λA]^α = [λa_1(α), λa_2(α)]   if λ ≥ 0,
[λA]^α = [λa_2(α), λa_1(α)]   if λ < 0,

for all α ∈ [0, 1], i.e. any α-level set of the extended sum of two fuzzy
numbers is equal to the sum of their α-level sets. The following two theorems
show that this property is valid for any continuous function.
Theorem 1.3.2 [87] Let f : X X be a continuous function and let A be
fuzzy numbers. Then
[f(A)]

= f([A]

)
where f(A) is dened by the extension principle (1.1) and
f([A]

) = {f(x) | x [A]

}.
If [A]

= [a
1
(), a
2
()] and f is monoton increasing then from the above
theorem we get
[f(A)]

= f([A]

) = f([a
1
(), a
2
()]) = [f(a
1
()), f(a
2
())].
Theorem 1.3.3 [87] Let f : X X X be a continuous function and let
A and B be fuzzy numbers. Then
[f(A, B)]

= f([A]

, [B]

)
where
f([A]

, [B]

) = {f(x
1
, x
2
) | x
1
[A]

, x
2
[B]

}.
Let f(x, y) = xy and let [A]

= [a
1
(), a
2
()] and [B]

= [b
1
(), b
2
()] be
two fuzzy numbers. Applying Theorem 1.3.3 we get
[f(A, B)]

= f([A]

, [B]

) = [A]

[B]

.
However the equation
[AB]

= [A]

[B]

= [a
1
()b
1
(), a
2
()b
2
()]
holds if and only if A and B are both nonnegative, i.e. A(x) = B(x) = 0 for
x 0.
51
A
B
fuzzy max
If B is nonnegative then we have
[A]

[B]

= [min{a
1
()b
1
(), a
1
()b
2
()}, max{a
2
()b
1
(), a
2
()b
2
()]
In the general case we obtain a very complicated expression for the level sets of the product AB:

[A]^α [B]^α = [min{a1(α)b1(α), a1(α)b2(α), a2(α)b1(α), a2(α)b2(α)}, max{a1(α)b1(α), a1(α)b2(α), a2(α)b1(α), a2(α)b2(α)}].

The above properties of the extended operations addition, subtraction and multiplication by a scalar of fuzzy numbers of type LR are often used in fuzzy neural networks.
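A small Python sketch of the level-set formula for the product quoted above; the triangular level-set helper and the numerical example are illustrative assumptions, not part of the lecture notes.

```python
# Alpha-cut (interval) arithmetic for the product AB of two fuzzy numbers.
def product_cut(a1, a2, b1, b2):
    """[A]^alpha [B]^alpha for fuzzy numbers of arbitrary sign."""
    candidates = (a1 * b1, a1 * b2, a2 * b1, a2 * b2)
    return min(candidates), max(candidates)

def triangular_cut(a, alpha_left, alpha_right, level):
    """Level set of the triangular number (a, alpha_left, alpha_right)."""
    return a - (1.0 - level) * alpha_left, a + (1.0 - level) * alpha_right

if __name__ == "__main__":
    level = 0.5
    A = triangular_cut(2.0, 1.0, 1.0, level)    # [1.5, 2.5]
    B = triangular_cut(-3.0, 1.0, 1.0, level)   # [-3.5, -2.5]
    print(product_cut(*A, *B))                  # (-8.75, -3.75)
```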
Definition 1.3.26 (fuzzy max) Let f(x, y) = max{x, y} and let [A]^α = [a1(α), a2(α)] and [B]^α = [b1(α), b2(α)] be two fuzzy numbers. Applying Theorem 1.3.3 we get

[f(A, B)]^α = f([A]^α, [B]^α) = max{[A]^α, [B]^α} = [a1(α) ∨ b1(α), a2(α) ∨ b2(α)]

and we use the notation max{A, B}.
Figure 1.24 Fuzzy max of triangular fuzzy numbers.
Definition 1.3.27 (fuzzy min) Let f(x, y) = min{x, y} and let [A]^α = [a1(α), a2(α)] and [B]^α = [b1(α), b2(α)] be two fuzzy numbers. Applying Theorem 1.3.3 we get

[f(A, B)]^α = f([A]^α, [B]^α) = min{[A]^α, [B]^α} = [a1(α) ∧ b1(α), a2(α) ∧ b2(α)]

and we use the notation min{A, B}.
Figure 1.25 Fuzzy min of triangular fuzzy numbers.
The fuzzy max and min are commutative and associative operations. Fur-
thermore, if A, B and C are fuzzy numbers then
max{A, min{B, C}} = min{max{A, B}, max{A, C}}
min{A, max{B, C}} = max{min{A, B}, min{A, C}}
i.e. min and max are distributive.
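A short sketch of the level-set formulas in Definitions 1.3.26 and 1.3.27; the sample level sets are arbitrary illustrative values.

```python
# Fuzzy max/min computed level set by level set:
# [max{A,B}]^alpha = [a1 v b1, a2 v b2],  [min{A,B}]^alpha = [a1 ^ b1, a2 ^ b2].
def fuzzy_max_cut(A_cut, B_cut):
    (a1, a2), (b1, b2) = A_cut, B_cut
    return max(a1, b1), max(a2, b2)

def fuzzy_min_cut(A_cut, B_cut):
    (a1, a2), (b1, b2) = A_cut, B_cut
    return min(a1, b1), min(a2, b2)

if __name__ == "__main__":
    A_cut, B_cut = (1.0, 3.0), (2.0, 2.5)   # some alpha-level sets
    print(fuzzy_max_cut(A_cut, B_cut))      # (2.0, 3.0)
    print(fuzzy_min_cut(A_cut, B_cut))      # (1.0, 2.5)
```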
In the definition of the extension principle one can use any t-norm for modeling the compositional operator.

Definition 1.3.28 (sup-T extension principle) Let T be a t-norm and let f be a mapping from X1 × X2 × ··· × Xn to Y, and let A1, ..., An be fuzzy subsets of X1, ..., Xn. Using the extension principle, we can define f(A1, ..., An) as a fuzzy subset of Y such that

f(A1, ..., An)(y) = sup{T(A1(x1), ..., An(xn)) | x = (x1, ..., xn) ∈ f^{-1}(y)}   if f^{-1}(y) ≠ ∅

f(A1, ..., An)(y) = 0   otherwise
Example 1.3.22 Let PAND(u, v) = uv be the product t-norm and let f(x1, x2) = x1 + x2 be the addition operation on the real line. If A and B are fuzzy numbers then their sup-T extended sum, denoted by A ⊕ B, is defined by

(A ⊕ B)(y) = sup_{x1+x2=y} PAND(A(x1), B(x2)) = sup_{x1+x2=y} A(x1)B(x2).
Example 1.3.23 Let T(u, v) = LAND(u, v) = max{0, u + v − 1} be the Lukasiewicz t-norm and let f(x1, x2) = x1 + x2 be the addition operation on the real line. If A and B are fuzzy numbers then their sup-T extended sum, denoted by A ⊕ B, is defined by

(A ⊕ B)(y) = sup_{x1+x2=y} LAND(A(x1), B(x2)) = sup_{x1+x2=y} max{0, A(x1) + B(x2) − 1}.
The reader can find some results on t-norm-based operations on fuzzy numbers in [45, 46, 52].
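The following brute-force Python sketch evaluates the sup-T extended sum of Examples 1.3.22 and 1.3.23 on a grid; the grid resolution and the triangular memberships are illustrative assumptions.

```python
# Numerical sup-T extended sum on a grid.
import numpy as np

def tri(a, alpha):
    return lambda x: np.maximum(0.0, 1.0 - np.abs(x - a) / alpha)

def sup_t_sum(A, B, t_norm, xs):
    """(A (+) B)(y) = sup_{x1 + x2 = y} T(A(x1), B(x2)), y restricted to xs."""
    return np.array([np.max(t_norm(A(xs), B(y - xs))) for y in xs])

PAND = lambda u, v: u * v                       # product t-norm
LAND = lambda u, v: np.maximum(0.0, u + v - 1)  # Lukasiewicz t-norm

if __name__ == "__main__":
    xs = np.linspace(-10, 10, 2001)
    A, B = tri(1.0, 2.0), tri(3.0, 2.0)
    prod_sum = sup_t_sum(A, B, PAND, xs)
    luka_sum = sup_t_sum(A, B, LAND, xs)
    print(prod_sum[np.argmin(np.abs(xs - 4.0))])  # close to 1 at y = 1 + 3
    print(luka_sum[np.argmin(np.abs(xs - 5.0))])  # smaller than the sup-min value
```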
Exercise 1.3.1 Let A1 = (a1, α) and A2 = (a2, α) be fuzzy numbers of symmetric triangular form. Compute analytically the membership function of their product-sum, A1 ⊕ A2, defined by

(A1 ⊕ A2)(y) = sup_{x1+x2=y} PAND(A1(x1), A2(x2)) = sup_{x1+x2=y} A1(x1)A2(x2).
1.3.2 Metrics for fuzzy numbers
Let A and B be fuzzy numbers with [A]^α = [a1(α), a2(α)] and [B]^α = [b1(α), b2(α)]. We metricize the set of fuzzy numbers by the following metrics.

Hausdorff distance

D(A, B) = sup_{α∈[0,1]} max{|a1(α) − b1(α)|, |a2(α) − b2(α)|},

i.e. D(A, B) is the maximal distance between the α-level sets of A and B.

Figure 1.26 Hausdorff distance between symmetric triangular fuzzy numbers A and B.
C∞ distance

C∞(A, B) = ||A − B||∞ = sup{|A(u) − B(u)| : u ∈ IR},

i.e. C∞(A, B) is the maximal distance between the membership grades of A and B.
Figure 1.27 C∞(A, B) = 1 whenever the supports of A and B are disjunctive.
Hamming distance. Suppose A and B are fuzzy sets in X. Then their Hamming distance, denoted by H(A, B), is defined by

H(A, B) = ∫_X |A(x) − B(x)| dx.

Discrete Hamming distance. Suppose A and B are discrete fuzzy sets

A = μ1/x1 + ... + μn/xn,   B = ν1/x1 + ... + νn/xn.

Then their Hamming distance is defined by

H(A, B) = Σ_{j=1}^n |μj − νj|.
It should be noted that D(A, B) is a better measure of similarity than C∞(A, B), because C∞(A, B) ≤ 1 holds even though the supports of A and B are very far from each other.
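A numerical sketch of the three metrics above for triangular fuzzy numbers; the grid sizes and the membership/level-set helpers are assumptions made for the illustration only.

```python
# Hausdorff distance D, C-infinity distance and Hamming distance on a grid.
import numpy as np

def tri(a, alpha):
    member = lambda x: np.maximum(0.0, 1.0 - np.abs(x - a) / alpha)
    cut = lambda lvl: (a - (1.0 - lvl) * alpha, a + (1.0 - lvl) * alpha)
    return member, cut

def hausdorff_D(cut_A, cut_B, levels):
    return max(max(abs(cut_A(l)[0] - cut_B(l)[0]),
                   abs(cut_A(l)[1] - cut_B(l)[1])) for l in levels)

def c_infinity(A, B, xs):
    return np.max(np.abs(A(xs) - B(xs)))

def hamming(A, B, xs):
    return np.trapz(np.abs(A(xs) - B(xs)), xs)

if __name__ == "__main__":
    A, cutA = tri(0.0, 1.0)
    B, cutB = tri(1.0, 1.0)
    xs = np.linspace(-3, 4, 7001)
    levels = np.linspace(0, 1, 101)
    print(hausdorff_D(cutA, cutB, levels))  # 1.0 (= |a - b| for equal spreads)
    print(c_infinity(A, B, xs))             # 1.0 even though the supports overlap
    print(hamming(A, B, xs))                # about 1.5
```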
Definition 1.3.29 Let f be a fuzzy function from F to F. Then f is said to be continuous in metric D if for every ε > 0 there exists δ > 0 such that if

D(A, B) ≤ δ

then

D(f(A), f(B)) ≤ ε.

In a similar way we can define the continuity of fuzzy functions in metric C∞.

Definition 1.3.30 Let f be a fuzzy function from F(IR) to F(IR). Then f is said to be continuous in metric C∞ if for every ε > 0 there exists δ > 0 such that if

C∞(A, B) ≤ δ

then

C∞(f(A), f(B)) ≤ ε.

We note that in the definition of continuity in metric C∞ the domain and the range of f can be the family of all fuzzy subsets of the real line, while in the case of continuity in metric D the domain and the range of f is the set of fuzzy numbers.
Exercise 1.3.2 Let f(x) = sin x and let A = (a, α) be a fuzzy number of symmetric triangular form. Calculate the membership function of the fuzzy set f(A).

Exercise 1.3.3 Let B1 = (b1, β1) and B2 = (b2, β2) be fuzzy numbers of symmetric triangular form. Calculate the α-level set of their product B1 B2.

Exercise 1.3.4 Let B1 = (b1, β1) and B2 = (b2, β2) be fuzzy numbers of symmetric triangular form. Calculate the α-level set of their fuzzy max max{B1, B2}.

Exercise 1.3.5 Let B1 = (b1, β1) and B2 = (b2, β2) be fuzzy numbers of symmetric triangular form. Calculate the α-level set of their fuzzy min min{B1, B2}.

Exercise 1.3.6 Let A = (a, α) and B = (b, β) be fuzzy numbers of symmetric triangular form. Calculate the distances D(A, B), H(A, B) and C∞(A, B) as a function of a, b, α and β.
Exercise 1.3.7 Let A = (a, α1, α2) and B = (b, β1, β2) be fuzzy numbers of triangular form. Calculate the distances D(A, B), H(A, B) and C∞(A, B) as a function of a, b, α1, α2, β1 and β2.

Exercise 1.3.8 Let A = (a1, a2, α1, α2) and B = (b1, b2, β1, β2) be fuzzy numbers of trapezoidal form. Calculate the distances D(A, B), H(A, B) and C∞(A, B).

Exercise 1.3.9 Let A = (a1, a2, α1, α2)_LR and B = (b1, b2, β1, β2)_LR be fuzzy numbers of type LR. Calculate the distances D(A, B), H(A, B) and C∞(A, B).
Exercise 1.3.10 Let A and B be discrete fuzzy subsets of X = {−2, −1, 0, 1, 2, 3, 4},

A = 0.7/−2 + 0.4/−1 + 0.6/0 + 1.0/1 + 0.6/2 + 0.3/3 + 0.4/4

B = 0.1/−2 + 0.2/−1 + 0.9/0 + 1.0/1 + 1.0/2 + 0.3/3 + 0.2/4.

Calculate the Hamming distance between A and B.
1.3.3 Fuzzy implications
Let p = "x is in A" and q = "y is in B" be crisp propositions, where A and B are crisp sets for the moment. The implication p → q is interpreted as ¬(p ∧ ¬q). The full interpretation of the material implication p → q is that the degree of truth of p → q quantifies to what extent q is at least as true as p, i.e.

τ(p → q) = 1 if τ(p) ≤ τ(q), and 0 otherwise,

where τ(.) denotes the truth value of a proposition.

τ(p)  τ(q)  τ(p → q)
 1     1       1
 0     1       1
 0     0       1
 1     0       0

Table 1.3 Truth table for the material implication.
Example 1.3.24 Let p = x is bigger than 10 and let q = x is bigger than 9.
It is easy to see that p q is true, because it can never happen that x is
bigger than 10 and at the same time x is not bigger than 9.
Consider the implication statement: if pressure is high then volume is small. The membership function of the fuzzy set A = big pressure,

A(u) = 1 if u ≥ 5,  A(u) = 1 − (5 − u)/4 if 1 ≤ u ≤ 5,  A(u) = 0 otherwise,

can be interpreted as

x is in the fuzzy set big pressure with grade of membership zero, for all 0 ≤ x ≤ 1

2 is in the fuzzy set big pressure with grade of membership 0.25

4 is in the fuzzy set big pressure with grade of membership 0.75
x is in the fuzzy set big pressure with grade of membership one, for all x ≥ 5
The membership function of the fuzzy set B, small volume,

B(v) = 1 if v ≤ 1,  B(v) = 1 − (v − 1)/4 if 1 ≤ v ≤ 5,  B(v) = 0 otherwise,

can be interpreted as

y is in the fuzzy set small volume with grade of membership zero, for all y ≥ 5

4 is in the fuzzy set small volume with grade of membership 0.25

2 is in the fuzzy set small volume with grade of membership 0.75

y is in the fuzzy set small volume with grade of membership one, for all y ≤ 1
Figure 1.28 x is big pressure and y is small volume.
If p is a proposition of the form "x is A" where A is a fuzzy set, for example, big pressure, and q is a proposition of the form "y is B", for example, small volume, then one encounters the following problem: how to define the membership function of the fuzzy implication A → B? It is clear that (A → B)(x, y) should be defined pointwise, i.e. (A → B)(x, y) should be a function of A(x) and B(y). That is

(A → B)(u, v) = I(A(u), B(v)).
We shall use the notation

(A → B)(u, v) = A(u) → B(v).

In our interpretation A(u) is considered as the truth value of the proposition "u is big pressure", and B(v) is considered as the truth value of the proposition "v is small volume":

u is big pressure → v is small volume ≡ A(u) → B(v).

One possible extension of the material implication to implications with intermediate truth values is

A(u) → B(v) = 1 if A(u) ≤ B(v), and 0 otherwise.

This implication operator is called Standard Strict.

4 is big pressure → 1 is small volume = A(4) → B(1) = 0.75 → 1 = 1

However, it is easy to see that this fuzzy implication operator is not appropriate for real-life applications. Namely, let A(u) = 0.8 and B(v) = 0.8. Then we have

A(u) → B(v) = 0.8 → 0.8 = 1.

Let us suppose that there is a small error of measurement or a small rounding error of digital computation in the value of B(v), and instead of 0.8 we have to proceed with 0.7999. Then from the definition of the Standard Strict implication operator it follows that

A(u) → B(v) = 0.8 → 0.7999 = 0.

This example shows that small changes in the input can cause a big deviation in the output, i.e. our system is very sensitive to rounding errors of digital computation and small errors of measurement.

A smoother extension of the material implication operator can be derived from the equation

X → Y = sup{Z | X ∩ Z ⊆ Y}

where X, Y and Z are classical sets.
Using the above principle we can define the following fuzzy implication operator

A(u) → B(v) = sup{z | min{A(u), z} ≤ B(v)},

that is,

A(u) → B(v) = 1 if A(u) ≤ B(v), and B(v) otherwise.

This operator is called Gödel implication. Using the definitions of negation and union of fuzzy subsets, the material implication p → q = ¬p ∨ q can be extended by

A(u) → B(v) = max{1 − A(u), B(v)}.

This operator is called Kleene-Dienes implication.

In many practical applications one uses Mamdani's implication operator to model the causal relationship between fuzzy variables. This operator simply takes the minimum of the truth values of the fuzzy predicates

A(u) → B(v) = min{A(u), B(v)}.

It is easy to see that this is not a correct extension of material implication, because 0 → 0 yields zero. However, in knowledge-based systems we are usually not interested in rules in which the antecedent part is false.
There are three important classes of fuzzy implication operators:

S-implications: defined by

x → y = S(n(x), y)

where S is a t-conorm and n is a negation on [0, 1]. These implications arise from the Boolean formalism p → q = ¬p ∨ q. Typical examples of S-implications are the Lukasiewicz and Kleene-Dienes implications.

R-implications: obtained by residuation of a continuous t-norm T, i.e.

x → y = sup{z ∈ [0, 1] | T(x, z) ≤ y}.

These implications arise from the Intuitionistic Logic formalism. Typical examples of R-implications are the Gödel and Gaines implications.

t-norm implications: if T is a t-norm then

x → y = T(x, y).

Although these implications do not verify the properties of material implication, they are used as models of implication in many applications of fuzzy logic. Typical examples of t-norm implications are the Mamdani (x → y = min{x, y}) and Larsen (x → y = xy) implications.
The most often used fuzzy implication operators are listed in the following table.

Name                         Definition
Early Zadeh                  x → y = max{1 − x, min(x, y)}
Lukasiewicz                  x → y = min{1, 1 − x + y}
Mamdani                      x → y = min{x, y}
Larsen                       x → y = xy
Standard Strict              x → y = 1 if x ≤ y, 0 otherwise
Gödel                        x → y = 1 if x ≤ y, y otherwise
Gaines                       x → y = 1 if x ≤ y, y/x otherwise
Kleene-Dienes                x → y = max{1 − x, y}
Kleene-Dienes-Lukasiewicz    x → y = 1 − x + xy
Yager                        x → y = y^x

Table 1.4 Fuzzy implication operators.
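The operators of Table 1.4 translate directly into code. The following sketch is a plain transcription (the final loop only replays the sensitivity example discussed above and is illustrative).

```python
# Fuzzy implication operators on truth values x, y in [0, 1].
def early_zadeh(x, y):     return max(1 - x, min(x, y))
def lukasiewicz(x, y):     return min(1.0, 1 - x + y)
def mamdani(x, y):         return min(x, y)
def larsen(x, y):          return x * y
def standard_strict(x, y): return 1.0 if x <= y else 0.0
def goedel(x, y):          return 1.0 if x <= y else y
def gaines(x, y):          return 1.0 if x <= y else y / x
def kleene_dienes(x, y):   return max(1 - x, y)
def kd_lukasiewicz(x, y):  return 1 - x + x * y
def yager(x, y):           return y ** x

if __name__ == "__main__":
    for op in (standard_strict, goedel, kleene_dienes, lukasiewicz):
        # the sensitivity example from the text: A(u) = 0.8, B(v) = 0.8 vs 0.7999
        print(op.__name__, op(0.8, 0.8), op(0.8, 0.7999))
```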
1.3.4 Linguistic variables
The use of fuzzy sets provides a basis for a systematic way for the manipu-
lation of vague and imprecise concepts. In particular, we can employ fuzzy
sets to represent linguistic variables. A linguistic variable can be regarded
either as a variable whose value is a fuzzy number or as a variable whose
values are dened in linguistic terms.
Denition 1.3.31 (linguistic variable) A linguistic variable is characterized
by a quintuple
(x, T(x), U, G, M)
in which
x is the name of variable;
T(x) is the term set of x, that is, the set of names of linguistic values
of x with each value being a fuzzy number dened on U;
G is a syntactic rule for generating the names of values of x;
and M is a semantic rule for associating with each value its meaning.
For example, if speed is interpreted as a linguistic variable, then its term set
T (speed) could be
T = {slow, moderate, fast, very slow, more or less fast, slightly slow, . . . }
where each term in T (speed) is characterized by a fuzzy set in a universe of
discourse U = [0, 100]. We might interpret
slow as a speed below about 40 mph
moderate as a speed close to 55 mph
fast as a speed above about 70 mph
These terms can be characterized as fuzzy sets whose membership functions
are shown in the gure below.
63
speed
slow
medium fast
40
55
70
1
NB
PB
PM
PS ZE
NS NM
-1
1
Figure 1.29 Values of linguistic variable speed.
In many practical applications we normalize the domain of inputs and use
the following type of fuzzy partition
NB (Negative Big), NM (Negative Medium)
NS (Negative Small), ZE (Zero)
PS (Positive Small), PM (Positive Medium)
PB (Positive Big)
Figure 1.30 A possible fuzzy partition of [−1, 1].
If A is a fuzzy set in X then we can modify the meaning of A with the help of words such as very, more or less, slightly, etc. For example, the membership functions of the fuzzy sets very A and more or less A can be defined by

(very A)(x) = (A(x))^2,   (more or less A)(x) = √A(x),   x ∈ X
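A tiny sketch of these hedges in Python; the S-shaped membership for "old" below is only an illustrative guess, not taken from the figures.

```python
# Linguistic hedges "very" and "more or less" applied to a membership function.
import math

def old(x):
    # assumed simple membership for "old": rises linearly between ages 30 and 60
    if x <= 30: return 0.0
    if x >= 60: return 1.0
    return (x - 30) / 30.0

very_old         = lambda x: old(x) ** 2          # concentration
more_or_less_old = lambda x: math.sqrt(old(x))    # dilation

if __name__ == "__main__":
    for age in (35, 45, 55):
        print(age, round(old(age), 2), round(very_old(age), 2),
              round(more_or_less_old(age), 2))
```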
64
old
very old
30
60
30
60
old
more or less old
Figure 1.31 Membership functions of fuzzy sets old and very old.
Figure 1.32 Membership function of fuzzy sets old and more or less old.
1.4 The theory of approximate reasoning
In 1979 Zadeh introduced the theory of approximate reasoning [118]. This
theory provides a powerful framework for reasoning in the face of imprecise
and uncertain information. Central to this theory is the representation of
propositions as statements assigning fuzzy sets as values to variables.
Suppose we have two interactive variables x X and y Y and the causal
relationship between x and y is completely known. Namely, we know that y
is a function of x
y = f(x)
Then we can make inferences easily:

premise       y = f(x)
fact          x = x'
consequence   y = f(x')

This inference rule says that if we have y = f(x) for all x ∈ X and we observe that x = x' then y takes the value f(x').
Figure 1.33 Simple crisp inference.
More often than not we do not know the complete causal link f between x and y; we only know the values of f(x) for some particular values of x:

ℜ1: If x = x1 then y = y1
also
ℜ2: If x = x2 then y = y2
also
. . .
also
ℜn: If x = xn then y = yn
Suppose that we are given an x' ∈ X and want to find a y' ∈ Y which corresponds to x' under the rule-base:

ℜ1: If x = x1 then y = y1
also
ℜ2: If x = x2 then y = y2
also
. . .
also
ℜn: If x = xn then y = yn
fact:          x = x'
consequence:   y = y'

This problem is frequently quoted as interpolation.
Let x and y be linguistic variables, e.g. "x is high" and "y is small". The basic problem of approximate reasoning is to find the membership function of the consequence C from the rule-base {ℜ1, ..., ℜn} and the fact A:

ℜ1: if x is A1 then y is C1,
ℜ2: if x is A2 then y is C2,
. . .
ℜn: if x is An then y is Cn
fact:          x is A
consequence:   y is C

In [118] Zadeh introduces a number of translation rules which allow us to represent some common linguistic statements in terms of propositions in our language. In the following we describe some of these translation rules.
Definition 1.4.1 Entailment rule:

x is A
A ⊂ B
x is B

Mary is very young
very young ⊂ young
Mary is young

Definition 1.4.2 Conjunction rule:

x is A
x is B
x is A ∩ B

pressure is not very high
pressure is not very low
pressure is not very high and not very low

Definition 1.4.3 Disjunction rule:

x is A
or x is B
x is A ∪ B

pressure is not very high
or pressure is not very low
pressure is not very high or not very low

Definition 1.4.4 Projection rule:

(x, y) have relation R
x is Π_X(R)

(x, y) have relation R
y is Π_Y(R)

(x, y) is close to (3, 2)
x is close to 3

(x, y) is close to (3, 2)
y is close to 2

Definition 1.4.5 Negation rule:

not (x is A)
x is ¬A

not (x is high)
x is not high
In fuzzy logic and approximate reasoning, the most important fuzzy impli-
cation inference rule is the Generalized Modus Ponens (GMP). The classical
Modus Ponens inference rule says:
premise       if p then q
fact          p
consequence   q

This inference rule can be interpreted as: if p is true and p → q is true then q is true.
The fuzzy implication inference is based on the compositional rule of inference
for approximate reasoning suggested by Zadeh [115].
Definition 1.4.6 (compositional rule of inference)

premise        if x is A then y is B
fact           x is A'
consequence:   y is B'

where the consequence B' is determined as a composition of the fact and the fuzzy implication operator

B' = A' ∘ (A → B),

that is,

B'(v) = sup_{u∈U} min{A'(u), (A → B)(u, v)},   v ∈ V.

The consequence B' is nothing else but the shadow of A → B on A'.
The Generalized Modus Ponens, which reduces to classical modus ponens when A' = A and B' = B, is closely related to the forward data-driven inference which is particularly useful in Fuzzy Logic Control.

In many practical cases, instead of sup-min composition we use sup-T composition, where T is a t-norm.

Definition 1.4.7 (sup-T compositional rule of inference)

premise        if x is A then y is B
fact           x is A'
consequence:   y is B'
where the consequence B' is determined as a composition of the fact and the fuzzy implication operator

B' = A' ∘ (A → B),

that is,

B'(v) = sup{T(A'(u), (A → B)(u, v)) | u ∈ U},   v ∈ V.

It is clear that T cannot be chosen independently of the implication operator.
Figure 1.34 A ∘ (A → B) = B.
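The compositional rule of inference is easy to evaluate numerically on discretized universes. The following sketch uses Mamdani's min implication; the universes and the triangular membership functions are illustrative assumptions.

```python
# Discretized GMP: B'(v) = sup_u min{A'(u), (A -> B)(u, v)} with Mamdani implication.
import numpy as np

U = np.linspace(0, 10, 101)
V = np.linspace(0, 10, 101)

tri = lambda a, alpha: lambda x: np.maximum(0.0, 1.0 - np.abs(x - a) / alpha)
A, B, A_prime = tri(5, 2), tri(3, 1), tri(6, 2)

# fuzzy relation (A -> B)(u, v) = min{A(u), B(v)}
R = np.minimum.outer(A(U), B(V))

# B'(v) = max_u min{A'(u), R(u, v)}
B_prime = np.max(np.minimum(A_prime(U)[:, None], R), axis=0)

print(float(B_prime.max()))                        # height of the conclusion (0.75 here)
print(float(B_prime[np.argmin(np.abs(V - 3))]))    # membership grade of v = 3
```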
The classical Modus Tollens inference rule says: if p → q is true and q is false then p is false. The Generalized Modus Tollens,

premise        if x is A then y is B
fact           y is B'
consequence:   x is A'
which reduces to Modus Tollens when B' = ¬B and A' = ¬A, is closely related to the backward goal-driven inference which is commonly used in expert systems, especially in the realm of medical diagnosis.

Suppose that A, B and A' are fuzzy numbers. The Generalized Modus Ponens should satisfy some rational properties.
Property 1.4.1 Basic property:
if x is A then y is B
x is A
y is B
if pressure is big then volume is small
pressure is big
volume is small
Figure 1.35 Basic property.
Property 1.4.2 Total indeterminance:

if x is A then y is B
x is ¬A
y is unknown

if pres. is big then volume is small
pres. is not big
volume is unknown
Figure 1.36 Total indeterminance.
Property 1.4.3 Subset:

if x is A then y is B
x is A' ⊂ A
y is B

if pres. is big then volume is small
pres. is very big
volume is small
Figure 1.37 Subset property.
Property 1.4.4 Superset:

if x is A then y is B
x is A'
y is B' ⊃ B
Figure 1.38 Superset property.
Suppose that A, B and A' are fuzzy numbers. We show that the Generalized Modus Ponens with Mamdani's implication operator does not satisfy all the four properties listed above.
Example 1.4.1 (The GMP with Mamdani implication)

if x is A then y is B
x is A'
y is B'

where the membership function of the consequence B' is defined by

B'(y) = sup{min{A'(x), A(x) ∧ B(y)} | x ∈ IR},   y ∈ IR.

Basic property. Let A' = A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A(x), min{A(x), B(y)}} = sup_x min{A(x), B(y)} = min{B(y), sup_x A(x)} = min{B(y), 1} = B(y).

So the basic property is satisfied.

Total indeterminance. Let A' = ¬A = 1 − A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{1 − A(x), min{A(x), B(y)}} = sup_x min{A(x), 1 − A(x), B(y)} = min{B(y), sup_x min{A(x), 1 − A(x)}} = min{B(y), 1/2} < 1,
this means that the total indeterminance property is not satisfied.
Subset. Let A' ⊂ A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A'(x), min{A(x), B(y)}} = sup_x min{A(x), A'(x), B(y)} = min{B(y), sup_x A'(x)} = min{B(y), 1} = B(y).

So the subset property is satisfied.

Superset. Let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A'(x), min{A(x), B(y)}} = sup_x min{A(x), A'(x), B(y)} ≤ B(y).

So the superset property of GMP is not satisfied by Mamdani's implication operator.
Figure 1.39 The GMP with Mamdani's implication operator.
Example 1.4.2 (The GMP with Larsen's product implication)

if x is A then y is B
x is A'
y is B'

where the membership function of the consequence B' is defined by

B'(y) = sup{min{A'(x), A(x)B(y)} | x ∈ IR},   y ∈ IR.

Basic property. Let A' = A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A(x), A(x)B(y)} = B(y).
So the basic property is satisfied.

Total indeterminance. Let A' = ¬A = 1 − A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{1 − A(x), A(x)B(y)} = B(y)/(1 + B(y)) < 1,

this means that the total indeterminance property is not satisfied.

Subset. Let A' ⊂ A and let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A'(x), A(x)B(y)} = B(y),

because the supremum is attained at a point where A'(x) = 1, and there A(x) = 1 as well. So the subset property is satisfied.

Superset. Let y ∈ IR be arbitrarily fixed. Then we have

B'(y) = sup_x min{A'(x), A(x)B(y)} ≤ B(y).

So, the superset property is not satisfied.
Figure 1.39a The GMP with Larsen's implication operator.
Suppose we are given one block of fuzzy rules of the form

ℜ1: if x is A1 then z is C1,
ℜ2: if x is A2 then z is C2,
. . .
ℜn: if x is An then z is Cn
fact:          x is A
consequence:   z is C
The i-th fuzzy rule from this rule-base,

ℜi: if x is Ai then z is Ci,

is implemented by a fuzzy implication Ri and is defined as

Ri(u, w) = Ai(u) → Ci(w).
There are two main approaches to determine the membership function of the consequence C.

Combine the rules first. In this approach, we first combine all the rules by an aggregation operator Agg into one rule which is used to obtain C from A:

R = Agg(ℜ1, ℜ2, ..., ℜn).

If the sentence connective also is interpreted as and then we get

R = ∩_{i=1}^n Ri,

that is

R(u, w) = min_{i=1,...,n} Ri(u, w) = min_{i=1,...,n} (Ai(u) → Ci(w)),

or, by using a t-norm T for modeling the connective and,

R(u, w) = T(R1(u, w), ..., Rn(u, w)).

If the sentence connective also is interpreted as or then we get

R = ∪_{i=1}^n Ri,

that is

R(u, w) = max_{i=1,...,n} Ri(u, w) = max_{i=1,...,n} (Ai(u) → Ci(w)),

or, by using a t-conorm S for modeling the connective or,

R(u, w) = S(R1(u, w), ..., Rn(u, w)).

Then we compute C from A by the compositional rule of inference as

C = A ∘ R = A ∘ Agg(R1, R2, ..., Rn).
Fire the rules first. Fire the rules individually, given A, and then combine their results into C.

We first compose A with each Ri, producing the intermediate results

C'_i = A ∘ Ri

for i = 1, ..., n, and then combine the C'_i componentwise into C' by some aggregation operator Agg:

C' = Agg(C'_1, ..., C'_n) = Agg(A ∘ R1, ..., A ∘ Rn).
We show that the sup-min compositional operator and the connective also interpreted as the union operator are commutative. Thus the consequence C inferred from the complete set of rules is equivalent to the aggregated result C' derived from the individual rules.
Lemma 1.4.1 Let

C = A ∘ ∪_{i=1}^n Ri

be defined by standard sup-min composition as

C(w) = sup_u min{A(u), max{R1(u, w), ..., Rn(u, w)}}

and let

C' = ∪_{i=1}^n (A ∘ Ri)

be defined by the sup-min composition as

C'(w) = max{sup_u A(u) ∧ R1(u, w), ..., sup_u A(u) ∧ Rn(u, w)}.

Then C(w) = C'(w) for all w from the universe of discourse W.

Proof. Using the distributivity of ∧ over ∨ we get

C(w) = sup_u {A(u) ∧ (R1(u, w) ∨ ... ∨ Rn(u, w))} = sup_u {(A(u) ∧ R1(u, w)) ∨ ... ∨ (A(u) ∧ Rn(u, w))} = max{sup_u A(u) ∧ R1(u, w), ..., sup_u A(u) ∧ Rn(u, w)} = C'(w).

Which ends the proof.
A similar statement holds for the sup-product compositional rule of inference, i.e. the sup-product compositional operator and the connective also as the union operator are commutative.

Lemma 1.4.2 Let

C = A ∘ ∪_{i=1}^n Ri

be defined by sup-product composition as

C(w) = sup_u A(u) max{R1(u, w), ..., Rn(u, w)}

and let

C' = ∪_{i=1}^n (A ∘ Ri)

be defined by the sup-product composition as

C'(w) = max{sup_u A(u)R1(u, w), ..., sup_u A(u)Rn(u, w)}.

Then C(w) = C'(w) holds for each w from the universe of discourse W.

Proof. Using the distributivity of multiplication over ∨ we have

C(w) = sup_u {A(u)(R1(u, w) ∨ ... ∨ Rn(u, w))} = sup_u {A(u)R1(u, w) ∨ ... ∨ A(u)Rn(u, w)} = max{sup_u A(u)R1(u, w), ..., sup_u A(u)Rn(u, w)} = C'(w).

Which ends the proof.
However, the sup-min compositional operator and the connective also interpreted as the intersection operator are not usually commutative. In this case the consequence C inferred from the complete set of rules is included in the aggregated result C' derived from the individual rules.

Lemma 1.4.3 Let

C = A ∘ ∩_{i=1}^n Ri

be defined by standard sup-min composition as

C(w) = sup_u min{A(u), min{R1(u, w), ..., Rn(u, w)}}

and let

C' = ∩_{i=1}^n (A ∘ Ri)

be defined by the sup-min composition as

C'(w) = min{sup_u {A(u) ∧ R1(u, w)}, ..., sup_u {A(u) ∧ Rn(u, w)}}.

Then C ⊂ C', i.e. C(w) ≤ C'(w) holds for all w from the universe of discourse W.

Proof. From the relationship

A ∘ ∩_{i=1}^n Ri ⊂ A ∘ Ri

for each i = 1, ..., n, we get

A ∘ ∩_{i=1}^n Ri ⊂ ∩_{i=1}^n (A ∘ Ri).

Which ends the proof.
A similar statement holds for the sup-t-norm compositional rule of inference, i.e. the sup-T compositional operator and the connective also interpreted as the intersection operator are not commutative. In this case the consequence C inferred from the complete set of rules is included in the aggregated result C' derived from the individual rules.

Lemma 1.4.4 Let

C = A ∘ ∩_{i=1}^n Ri

be defined by sup-T composition as

C(w) = sup_u T(A(u), min{R1(u, w), ..., Rn(u, w)})

and let

C' = ∩_{i=1}^n (A ∘ Ri)

be defined by the sup-T composition as

C'(w) = min{sup_u T(A(u), R1(u, w)), ..., sup_u T(A(u), Rn(u, w))}.

Then C ⊂ C', i.e. C(w) ≤ C'(w) holds for all w from the universe of discourse W.
Example 1.4.3 We illustrate Lemma 1.4.3 by a simple example. Assume we have two fuzzy rules of the form

ℜ1: if x is A1 then z is C1
ℜ2: if x is A2 then z is C2

where A1, A2 and C1, C2 are discrete fuzzy numbers of the universes of discourse {x1, x2} and {z1, z2}, respectively. Suppose that we input a fuzzy set A = a1/x1 + a2/x2 to the system and let

R1 =
        z1   z2
  x1     0    1
  x2     1    0

R2 =
        z1   z2
  x1     1    0
  x2     0    1

represent the fuzzy rules. We first compute the consequence C by

C = A ∘ (R1 ∩ R2).

Using the definition of intersection of fuzzy relations we get

R1 ∩ R2 =
        z1   z2
  x1     0    0
  x2     0    0

and therefore C = A ∘ (R1 ∩ R2) = 0/z1 + 0/z2, i.e. C is the empty fuzzy set.

Let us compute now the membership function of the consequence C' by

C' = (A ∘ R1) ∩ (A ∘ R2).

Using the definition of sup-min composition we get

(A ∘ R1)(z1) = max{a1 ∧ 0, a2 ∧ 1} = a2,   (A ∘ R1)(z2) = max{a1 ∧ 1, a2 ∧ 0} = a1.

So,

A ∘ R1 = a2/z1 + a1/z2

and, in the same way,

A ∘ R2 = a1/z1 + a2/z2.

Finally,

C' = (a2/z1 + a1/z2) ∩ (a1/z1 + a2/z2) = (a1 ∧ a2)/z1 + (a1 ∧ a2)/z2,

which means that C is a proper subset of C' whenever min{a1, a2} ≠ 0.
Suppose now that the fact of the GMP is given by a fuzzy singleton. Then
the process of computation of the membership function of the consequence
becomes very simple.
Figure 1.39b Fuzzy singleton.
For example, if we use Mamdani's implication operator in the GMP, then

rule 1:        if x is A1 then z is C1
fact:          x is x̄0
consequence:   z is C

where the membership function of the consequence C is computed as

C(w) = sup_u min{x̄0(u), (A1 → C1)(u, w)} = sup_u min{x̄0(u), min{A1(u), C1(w)}},   w ∈ W.

Observing that x̄0(u) = 0 for u ≠ x0, the supremum turns into a simple minimum

C(w) = min{x̄0(x0), A1(x0), C1(w)} = min{A1(x0), C1(w)},   w ∈ W.

Figure 1.40 Inference with Mamdani's implication operator.
and if we use the Gödel implication operator in the GMP then

C(w) = sup_u min{x̄0(u), (A1 → C1)(u, w)} = A1(x0) → C1(w).

So,

C(w) = 1 if A1(x0) ≤ C1(w), and C(w) = C1(w) otherwise.
Figure 1.41 Inference with Gödel implication operator.
Lemma 1.4.5 Consider one block of fuzzy rules of the form

ℜi: if x is Ai then z is Ci,   1 ≤ i ≤ n,

and suppose that the input to the system is a fuzzy singleton. Then the consequence C inferred from the complete set of rules is equal to the aggregated result C' derived from the individual rules. This statement holds for any kind of aggregation operator used to combine the rules.

Proof. Suppose that the input of the system A = x̄0 is a fuzzy singleton. On the one hand we have

C(w) = (A ∘ Agg[R1, ..., Rn])(w) = sup_u {x̄0(u) ∧ Agg[R1, ..., Rn](u, w)} = Agg[R1, ..., Rn](x0, w) = Agg[R1(x0, w), ..., Rn(x0, w)].

On the other hand

C'(w) = Agg[A ∘ R1, ..., A ∘ Rn](w) = Agg[sup_u min{x̄0(u), R1(u, w)}, ..., sup_u min{x̄0(u), Rn(u, w)}] = Agg[R1(x0, w), ..., Rn(x0, w)] = C(w).

Which ends the proof.
Consider one block of fuzzy rules of the form

ℜ = {Ai → Ci, 1 ≤ i ≤ n}

where Ai and Ci are fuzzy numbers.
Lemma 1.4.6 Suppose that the supports of the Ai are pairwise disjunctive:

supp(Ai) ∩ supp(Aj) = ∅, for i ≠ j.

If the implication operator is defined by

x → z = 1 if x ≤ z, and z otherwise

(Gödel implication), then

∩_{j=1}^n Ai ∘ (Aj → Cj) = Ci

holds for i = 1, ..., n.

Proof. Since the GMP with Gödel implication satisfies the basic property we get

Ai ∘ (Ai → Ci) = Ci.

From supp(Ai) ∩ supp(Aj) = ∅, for i ≠ j, it follows that

Ai ∘ (Aj → Cj) = 1,   i ≠ j,

where 1 is the universal fuzzy set. So,

∩_{j=1}^n Ai ∘ (Aj → Cj) = Ci ∧ 1 = Ci.

Which ends the proof.
Figure 1.41a Pairwise disjunctive supports.
Definition 1.4.8 The rule-base ℜ is said to be separated if the core of Ai, defined by

core(Ai) = {x | Ai(x) = 1},

is not contained in ∪_{j≠i} supp(Aj) for i = 1, ..., n.

This property means that deleting any of the rules from ℜ leaves a point x to which no rule applies. It means that every rule is useful.
Figure 1.41b Separated rule-base.
The following theorem shows that Lemma 1.4.6 remains valid for separated rule-bases.

Theorem 1.4.1 [23] Let ℜ be separated. If the implication is modelled by the Gödel implication operator then

∩_{j=1}^n Ai ∘ (Aj → Cj) = Ci

holds for i = 1, ..., n.

Proof. Since the Gödel implication satisfies the basic property of the GMP we get

Ai ∘ (Ai → Ci) = Ci.

Since core(Ai) ∩ supp(Aj) = ∅ for i ≠ j, there exists an element x̄ such that x̄ ∈ core(Ai) and x̄ ∉ supp(Aj), i ≠ j. That is, Ai(x̄) = 1 and Aj(x̄) = 0, i ≠ j. Applying the compositional rule of inference with the Gödel implication operator we get

(Ai ∘ (Aj → Cj))(z) = sup_x min{Ai(x), Aj(x) → Cj(z)} ≥ min{Ai(x̄), Aj(x̄) → Cj(z)} = min{1, 1} = 1,   i ≠ j,

for any z. So,

∩_{j=1}^n Ai ∘ (Aj → Cj) = Ci ∧ 1 = Ci.

Which ends the proof.
Exercise 1.4.1 Show that the GMP with Gödel implication operator satisfies properties (1)-(4).

Exercise 1.4.2 Show that the GMP with Lukasiewicz implication operator satisfies properties (1)-(4).

Exercise 1.4.3 Show that the statement of Lemma 1.4.6 also holds for the Lukasiewicz implication operator.

Exercise 1.4.4 Show that the statement of Theorem 1.4.1 also holds for the Lukasiewicz implication operator.
1.5 An introduction to fuzzy logic controllers
Conventional controllers are derived from control theory techniques based
on mathematical models of the open-loop process, called system, to be con-
trolled.
The purpose of the feedback controller is to guarantee a desired response of the output y. The process of keeping the output y close to the setpoint (reference input) y*, despite the presence of disturbances of the system parameters and measurement noise, is called regulation. The output of the controller (which is the input of the system) is the control action u. The general form of the discrete-time control law is

u(k) = f(e(k), e(k − 1), ..., e(k − τ), u(k − 1), ..., u(k − τ))   (1.2)

providing a control action that describes the relationship between the input and the output of the controller. In (1.2), e represents the error between the desired setpoint y* and the output of the system y; the parameter τ defines the order of the controller, and f is in general a nonlinear function.
Figure 1.42 A basic feedback control system.
Different control algorithms

proportional (P)
integral (I)
derivative (D)

and their combinations can be derived from the control law (1.2) for different values of the parameter τ and for different functions f.
Example 1.5.1 A conventional proportional-integral (PI) controller can be described by the function

u = Kp e + Ki ∫ e dt = ∫ (Kp ė + Ki e) dt

or by its differential form

du = (Kp ė + Ki e) dt.

The proportional term provides control action equal to some multiple of the error, while the integral term forces the steady state error to zero.

The discrete-time equivalent expression for the above PI controller is

u(k) = Kp e(k) + Ki Σ_{i=1}^τ e(i)

where τ defines the order of the controller.
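A minimal Python sketch of the discrete-time PI control law just quoted; the gains Kp, Ki and the toy first-order plant are illustrative assumptions, not values from the notes.

```python
# Discrete PI controller: u(k) = Kp e(k) + Ki * sum of past errors.
def pi_controller(Kp, Ki):
    total = 0.0
    def step(error):
        nonlocal total
        total += error                    # accumulate the error (integral term)
        return Kp * error + Ki * total
    return step

if __name__ == "__main__":
    controller = pi_controller(Kp=1.2, Ki=0.3)
    y, setpoint = 0.0, 1.0
    for k in range(20):
        u = controller(setpoint - y)
        y = 0.7 * y + 0.2 * u             # toy first-order plant
    print(round(y, 3))                    # settles near the setpoint
```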
The seminal work by L.A. Zadeh on fuzzy algorithms [115] introduced the
idea of formulating the control algorithm by logical rules.
In a fuzzy logic controller (FLC), the dynamic behavior of a fuzzy system is
characterized by a set of linguistic description rules based on expert knowl-
edge. The expert knowledge is usually of the form
IF (a set of conditions is satisfied) THEN (a set of consequences can be inferred).
Since the antecedents and the consequents of these IF-THEN rules are as-
sociated with fuzzy concepts (linguistic terms), they are often called fuzzy
conditional statements. In our terminology, a fuzzy control rule is a fuzzy
conditional statement in which the antecedent is a condition in its appli-
cation domain and the consequent is a control action for the system under
control.
Basically, fuzzy control rules provide a convenient way for expressing control
policy and domain knowledge. Furthermore, several linguistic variables might
be involved in the antecedents and the conclusions of these rules. When
this is the case, the system will be referred to as a multi-input-multi-output
(MIMO) fuzzy system. For example, in the case of two-input-single-output
(MISO) fuzzy systems, fuzzy control rules have the form

ℜ1: if x is A1 and y is B1 then z is C1
also
ℜ2: if x is A2 and y is B2 then z is C2
also
. . .
also
ℜn: if x is An and y is Bn then z is Cn

where x and y are the process state variables, z is the control variable, Ai, Bi and Ci are linguistic values of the linguistic variables x, y and z in the universes of discourse U, V and W, respectively, and an implicit sentence connective also links the rules into a rule set or, equivalently, a rule-base.
We can represent the FLC in a form similar to the conventional control law (1.2),

u(k) = F(e(k), e(k − 1), ..., e(k − τ), u(k − 1), ..., u(k − τ))   (1.3)

where the function F is described by a fuzzy rule-base. However it does not mean that the FLC is a kind of transfer function or difference equation. The knowledge-based nature of FLC dictates a limited usage of the past values of the error e and control u, because it is rather unreasonable to expect meaningful linguistic statements for e(k − 3), e(k − 4), ..., e(k − τ). A typical FLC describes the relationship between the change of the control

Δu(k) = u(k) − u(k − 1)

on the one hand, and the error e(k) and its change

Δe(k) = e(k) − e(k − 1)

on the other hand. Such a control law can be formalized as

Δu(k) = F(e(k), Δe(k))   (1.4)

and is a manifestation of the general FLC expression (1.3) with τ = 1. The actual output of the controller u(k) is obtained from the previous value of the control u(k − 1), which is updated by Δu(k):

u(k) = u(k − 1) + Δu(k).
This type of controller was suggested originally by Mamdani and Assilian in
1975 [81] and is called the Mamdani-type FLC. A prototypical rule-base of a
simple FLC realising the control law (1.4) is listed in the following
89
N
P
ZE
error

1
: If e is positive and e is near zero then u is positive
also

2
: If e is negative and e is near zero then u is negative
also

3
: If e is near zero and e is near zero then u is near zero
also

4
: If e is near zero and e is positive then u is positive
also

5
: If e is near zero and e is negative then u is negative
Figure 1.43 Membership functions for the error.
So, our task is to find a crisp control action z0 from the fuzzy rule-base and from the actual crisp inputs x0 and y0:

ℜ1: if x is A1 and y is B1 then z is C1
also
ℜ2: if x is A2 and y is B2 then z is C2
also
. . .
also
ℜn: if x is An and y is Bn then z is Cn
input:   x is x0 and y is y0
output:  z0
Of course, the inputs of fuzzy rule-based systems should be given by fuzzy
sets, and therefore, we have to fuzzify the crisp inputs. Furthermore, the
output of a fuzzy system is always a fuzzy set, and therefore to get crisp
value we have to defuzzify it.
90
Defuzzifier
fuzzy set in U
fuzzy set in V
Fuzzifier
Fuzzy
Inference
Engine
crisp x in U
crisp y in V
Fuzzy
Rule
Base
1
x
0
x
0
_
Fuzzy logic control systems usually consist of four major parts: fuzzification interface, fuzzy rule-base, fuzzy inference machine and defuzzification interface.

Figure 1.44 Fuzzy logic controller.

A fuzzification operator has the effect of transforming crisp data into fuzzy sets. In most of the cases we use fuzzy singletons as fuzzifiers

fuzzifier(x0) := x̄0

where x0 is a crisp input value from a process.

Figure 1.44a Fuzzy singleton as fuzzifier.
Suppose now that we have two input variables x and y. A fuzzy control rule

ℜi: if (x is Ai and y is Bi) then (z is Ci)

is implemented by a fuzzy implication Ri and is defined as

Ri(u, v, w) = [Ai(u) and Bi(v)] → Ci(w)

where the logical connective and is implemented by the Cartesian product, i.e.

[Ai(u) and Bi(v)] → Ci(w) = [Ai(u) × Bi(v)] → Ci(w) = min{Ai(u), Bi(v)} → Ci(w).
Of course, we can use any t-norm to model the logical connective and.
An FLC consists of a set of fuzzy control rules which are related by the dual concepts of fuzzy implication and the sup-t-norm compositional rule of inference. These fuzzy control rules are combined by using the sentence connective also. Since each fuzzy control rule is represented by a fuzzy relation, the overall behavior of a fuzzy system is characterized by these fuzzy relations. In other words, a fuzzy system can be characterized by a single fuzzy relation which is the combination of the individual rules by means of the sentence connective also. Symbolically, if we have the collection of rules

ℜ1: if x is A1 and y is B1 then z is C1
also
ℜ2: if x is A2 and y is B2 then z is C2
also
. . .
also
ℜn: if x is An and y is Bn then z is Cn

then the procedure for obtaining the fuzzy output of such a knowledge base consists of the following three steps:

Find the firing level of each of the rules.

Find the output of each of the rules.

Aggregate the individual rule outputs to obtain the overall system output.
To infer the output z from the given process states x, y and fuzzy relations Ri, we apply the compositional rule of inference:

ℜ1: if x is A1 and y is B1 then z is C1
ℜ2: if x is A2 and y is B2 then z is C2
. . .
ℜn: if x is An and y is Bn then z is Cn
fact:          x is x̄0 and y is ȳ0
consequence:   z is C

where the consequence is computed by

consequence = Agg(fact ∘ ℜ1, ..., fact ∘ ℜn).

That is,

C = Agg(x̄0 × ȳ0 ∘ R1, ..., x̄0 × ȳ0 ∘ Rn).

Taking into consideration that x̄0(u) = 0 for u ≠ x0 and ȳ0(v) = 0 for v ≠ y0, the computation of the membership function of C is very simple:

C(w) = Agg{A1(x0) × B1(y0) → C1(w), ..., An(x0) × Bn(y0) → Cn(w)}

for all w ∈ W.
The procedure for obtaining the fuzzy output of such a knowledge base can be formulated as follows.

The firing level of the i-th rule is determined by

Ai(x0) × Bi(y0).

The output of the i-th rule is calculated by

C'_i(w) := Ai(x0) × Bi(y0) → Ci(w)

for all w ∈ W.

The overall system output, C, is obtained from the individual rule outputs C'_i by

C(w) = Agg{C'_1(w), ..., C'_n(w)}

for all w ∈ W.
Example 1.5.2 If the sentence connective also is interpreted as anding the rules by using the minimum-norm then the membership function of the consequence is computed as

C = (x̄0 × ȳ0 ∘ R1) ∩ ... ∩ (x̄0 × ȳ0 ∘ Rn).

That is,

C(w) = min{A1(x0) × B1(y0) → C1(w), ..., An(x0) × Bn(y0) → Cn(w)}

for all w ∈ W.
Example 1.5.3 If the sentence connective also is interpreted as oring the rules by using the maximum-norm then the membership function of the consequence is computed as

C = (x̄0 × ȳ0 ∘ R1) ∪ ... ∪ (x̄0 × ȳ0 ∘ Rn).

That is,

C(w) = max{A1(x0) × B1(y0) → C1(w), ..., An(x0) × Bn(y0) → Cn(w)}

for all w ∈ W.
Example 1.5.4 Suppose that the Cartesian product and the implication operator are implemented by the t-norm T(u, v) = uv. If the sentence connective also is interpreted as oring the rules by using the maximum-norm then the membership function of the consequence is computed as

C = (x̄0 × ȳ0 ∘ R1) ∪ ... ∪ (x̄0 × ȳ0 ∘ Rn).

That is,

C(w) = max{A1(x0)B1(y0)C1(w), ..., An(x0)Bn(y0)Cn(w)}

for all w ∈ W.
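The three-step procedure above (firing levels, clipped rule outputs, aggregation by also) is straightforward to prototype. The sketch below uses min for "and" and for the implication and max for "also"; the rule-base and membership functions are illustrative assumptions.

```python
# Two-input, single-output rule firing and aggregation for crisp inputs x0, y0.
import numpy as np

W = np.linspace(0, 10, 101)
tri = lambda a, alpha: lambda x: np.maximum(0.0, 1.0 - np.abs(x - a) / alpha)

# each rule: (A_i, B_i, C_i)
rules = [
    (tri(2, 2), tri(8, 2), tri(2, 2)),   # if x is small and y is big   then z is small
    (tri(8, 2), tri(2, 2), tri(8, 2)),   # if x is big   and y is small then z is big
]

def infer(x0, y0):
    C = np.zeros_like(W)
    for A, B, Cm in rules:
        firing = min(float(A(x0)), float(B(y0)))        # A_i(x0) ^ B_i(y0)
        C = np.maximum(C, np.minimum(firing, Cm(W)))    # "also" = max over the rules
    return C

print(float(infer(3.0, 7.0).max()))   # 0.5, the first rule dominates
```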
1.5.1 Defuzzification methods
The output of the inference process so far is a fuzzy set, specifying a possi-
bility distribution of control action. In the on-line control, a nonfuzzy (crisp)
control action is usually required. Consequently, one must defuzzify the fuzzy
control action (output) inferred from the fuzzy control algorithm, namely:
z0 = defuzzifier(C),

where z0 is the nonfuzzy control output and defuzzifier is the defuzzification operator.

Definition 1.5.1 (defuzzification) Defuzzification is a process to select a representative element from the fuzzy output C inferred from the fuzzy control algorithm.
The most often used defuzzification operators are the following.

Center-of-Area/Gravity. The defuzzified value of a fuzzy set C is defined as its fuzzy centroid:

z0 = ∫_W z C(z) dz / ∫_W C(z) dz.

The calculation of the Center-of-Area defuzzified value is simplified if we consider a finite universe of discourse W and thus a discrete membership function C(w):

z0 = Σ_j z_j C(z_j) / Σ_j C(z_j).

Center-of-Sums, Center-of-Largest-Area.

First-of-Maxima. The defuzzified value of a fuzzy set C is its smallest maximizing element, i.e.

z0 = min{z | C(z) = max_w C(w)}.
Figure 1.45 First-of-Maxima defuzzication method.
Middle-of-Maxima. The defuzzified value of a discrete fuzzy set C is defined as the mean of all values of the universe of discourse having maximal membership grades:

z0 = (1/N) Σ_{j=1}^N z_j

where {z_1, ..., z_N} is the set of elements of the universe W which attain the maximum value of C. If C is not discrete then the defuzzified value of C is defined as

z0 = ∫_G z dz / ∫_G dz

where G denotes the set of maximizing elements of C.
Figure 1.46 Middle-of-Maxima defuzzication method.
Max-Criterion. This method chooses an arbitrary value from the set of maximizing elements of C, i.e.

z0 ∈ {z | C(z) = max_w C(w)}.
96
C
z
0
Height defuzzification. The elements of the universe of discourse W that have membership grades lower than a certain level λ are completely discounted, and the defuzzified value z0 is calculated by applying the Center-of-Area method to those elements of W that have membership grades not less than λ:

z0 = ∫_{[C]^λ} z C(z) dz / ∫_{[C]^λ} C(z) dz,

where [C]^λ denotes the λ-level set of C as usual.
Example 1.5.5 [128] Consider a fuzzy controller steering a car in a way to avoid obstacles. If an obstacle occurs right ahead, the plausible control action depicted in Figure 1.46a could be interpreted as "turn right or left". Both the Center-of-Area and the Middle-of-Maxima defuzzification methods result in a control action "drive ahead straightforward", which causes an accident.

Figure 1.46a Undesired result by the Center-of-Area and Middle-of-Maxima defuzzification methods.

A suitable defuzzification method would have to choose between the different control actions (choose one of the two triangles in the figure) and then transform the fuzzy set into a crisp value.
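A small sketch of three of the defuzzification operators above on a discretized output universe; the sample plateau-topped fuzzy set C is an illustrative assumption.

```python
# Center-of-Area, First-of-Maxima and Middle-of-Maxima on a discrete universe.
import numpy as np

def center_of_area(z, C):
    return float(np.sum(z * C) / np.sum(C))

def first_of_maxima(z, C):
    return float(z[np.argmax(C)])            # smallest maximizing element

def middle_of_maxima(z, C):
    maximizers = z[np.isclose(C, C.max())]
    return float(maximizers.mean())

if __name__ == "__main__":
    z = np.linspace(0, 10, 1001)
    C = np.clip(np.minimum((z - 1) / 3.0, (8 - z) / 2.0), 0, 1)  # trapezoid-like set
    print(center_of_area(z, C))     # about 4.7
    print(first_of_maxima(z, C))    # 4.0
    print(middle_of_maxima(z, C))   # 5.0
```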
Exercise 1.5.1 Let the overall system output, C, have the following membership function

C(x) = x^2 if 0 ≤ x ≤ 1,   C(x) = 2 − √x if 1 ≤ x ≤ 4,   C(x) = 0 otherwise.

Compute the defuzzified value of C using the Center-of-Area and the Height defuzzification (with λ = 0.7) methods.
Exercise 1.5.2 Let C = (a, b, α) be a triangular fuzzy number. Compute the defuzzified value of C using the Center-of-Area and Middle-of-Maxima methods.

Figure 1.46b z0 is the defuzzified value of C.
Exercise 1.5.3 Let C = (a, b, α, β) be a trapezoidal fuzzy number. Compute the defuzzified value of C using the Center-of-Area and Middle-of-Maxima methods.

Figure 1.46c z0 is the defuzzified value of C.

Exercise 1.5.4 Let C = (a, b, α, β)_LR be a fuzzy number of type LR. Compute the defuzzified value of C using the Center-of-Area and Middle-of-Maxima methods.
1.5.2 Inference mechanisms
We present four well-known inference mechanisms in fuzzy logic control systems. For simplicity we assume that we have two fuzzy control rules of the form

ℜ1: if x is A1 and y is B1 then z is C1
also
ℜ2: if x is A2 and y is B2 then z is C2
fact:          x is x̄0 and y is ȳ0
consequence:   z is C
Mamdani. The fuzzy implication is modelled by Mamdani's minimum operator and the sentence connective also is interpreted as oring the propositions and defined by the max operator.

The firing levels of the rules, denoted by αi, i = 1, 2, are computed by

α1 = A1(x0) ∧ B1(y0),   α2 = A2(x0) ∧ B2(y0).

The individual rule outputs are obtained by

C'_1(w) = α1 ∧ C1(w),   C'_2(w) = α2 ∧ C2(w).

Then the overall system output is computed by oring the individual rule outputs

C(w) = C'_1(w) ∨ C'_2(w) = (α1 ∧ C1(w)) ∨ (α2 ∧ C2(w)).

Finally, to obtain a deterministic control action, we employ any defuzzification strategy.
99
u
v
C1
w
u
xo
v
C2
w
yo
min
A1
A2
B1
B2
Figure 1.47 Making inferences with Mamdani's implication operator.
Tsukamoto. All linguistic terms are supposed to have monotonic membership functions.

The firing levels of the rules, denoted by αi, i = 1, 2, are computed by

α1 = A1(x0) ∧ B1(y0),   α2 = A2(x0) ∧ B2(y0).

In this mode of reasoning the individual crisp control actions z1 and z2 are computed from the equations

α1 = C1(z1),   α2 = C2(z2)

and the overall crisp control action is expressed as

z0 = (α1 z1 + α2 z2) / (α1 + α2),

i.e. z0 is computed by the discrete Center-of-Gravity method.
100
u v
w
u
xo
v
yo
w min
A1
B2
C1
A2
B1
C2
0.3
0.6
0.8
0.6
z2 = 4
z1 = 8
0.7
0.3
If we have n rules in our rule-base then the crisp control action is computed as

z0 = Σ_{i=1}^n αi zi / Σ_{i=1}^n αi,

where αi is the firing level and zi is the (crisp) output of the i-th rule, i = 1, ..., n.
Figure 1.48 Tsukamoto's inference mechanism.
Example 1.5.6 We illustrate Tsukamoto's reasoning method by the following simple example

ℜ1: if x is A1 and y is B1 then z is C1
also
ℜ2: if x is A2 and y is B2 then z is C2
fact:          x is x̄0 and y is ȳ0
consequence:   z is C

Then according to Fig. 1.48 we see that

A1(x0) = 0.7,   B1(y0) = 0.3,

therefore, the firing level of the first rule is

α1 = min{A1(x0), B1(y0)} = min{0.7, 0.3} = 0.3,

and from

A2(x0) = 0.6,   B2(y0) = 0.8

it follows that the firing level of the second rule is

α2 = min{A2(x0), B2(y0)} = min{0.6, 0.8} = 0.6.

The individual rule outputs z1 = 8 and z2 = 4 are derived from the equations

C1(z1) = 0.3,   C2(z2) = 0.6

and the crisp control action is

z0 = (8 × 0.3 + 4 × 0.6)/(0.3 + 0.6) = 4.8/0.9 ≈ 5.33.
Sugeno. Sugeno and Takagi use the following architecture [93]:

ℜ1: if x is A1 and y is B1 then z1 = a1 x + b1 y
also
ℜ2: if x is A2 and y is B2 then z2 = a2 x + b2 y
fact:          x is x̄0 and y is ȳ0
consequence:   z0

The firing levels of the rules are computed by

α1 = A1(x0) ∧ B1(y0),   α2 = A2(x0) ∧ B2(y0),

then the individual rule outputs are derived from the relationships

z*_1 = a1 x0 + b1 y0,   z*_2 = a2 x0 + b2 y0,

and the crisp control action is expressed as

z0 = (α1 z*_1 + α2 z*_2) / (α1 + α2).
If we have n rules in our rule-base then the crisp control action is computed as

z0 = Σ_{i=1}^n αi z*_i / Σ_{i=1}^n αi,

where αi denotes the firing level of the i-th rule, i = 1, ..., n.
Example 1.5.7 We illustrate Sugeno's reasoning method by the following simple example

ℜ1: if x is BIG and y is SMALL then z1 = x + y
also
ℜ2: if x is MEDIUM and y is BIG then z2 = 2x − y
fact:     x0 is 3 and y0 is 2
conseq.:  z0

Then according to Fig. 1.49 we see that

μ_BIG(x0) = μ_BIG(3) = 0.8,   μ_SMALL(y0) = μ_SMALL(2) = 0.2,

therefore, the firing level of the first rule is

α1 = min{μ_BIG(x0), μ_SMALL(y0)} = min{0.8, 0.2} = 0.2,

and from

μ_MEDIUM(x0) = μ_MEDIUM(3) = 0.6,   μ_BIG(y0) = μ_BIG(2) = 0.9

it follows that the firing level of the second rule is

α2 = min{μ_MEDIUM(x0), μ_BIG(y0)} = min{0.6, 0.9} = 0.6.

The individual rule outputs are computed as

z*_1 = x0 + y0 = 3 + 2 = 5,   z*_2 = 2x0 − y0 = 2 × 3 − 2 = 4,

so the crisp control action is

z0 = (5 × 0.2 + 4 × 0.6)/(0.2 + 0.6) = 4.25.
Figure 1.49 Sugeno's inference mechanism.
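A few lines of Python reproduce the Sugeno computation of Example 1.5.7, with the membership grades 0.8, 0.2, 0.6, 0.9 read off Figure 1.49.

```python
# Sugeno (Takagi-Sugeno) inference: firing-level-weighted average of the
# crisp consequent values.
def sugeno(firings, outputs):
    return sum(a * z for a, z in zip(firings, outputs)) / sum(firings)

x0, y0 = 3.0, 2.0
alpha1 = min(0.8, 0.2)        # BIG(x0) ^ SMALL(y0)
alpha2 = min(0.6, 0.9)        # MEDIUM(x0) ^ BIG(y0)

z1 = x0 + y0                  # rule 1 consequent: z1 = x + y
z2 = 2 * x0 - y0              # rule 2 consequent: z2 = 2x - y

print(sugeno([alpha1, alpha2], [z1, z2]))   # 4.25
```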
Larsen. The fuzzy implication is modelled by Larsen's product operator and the sentence connective also is interpreted as oring the propositions and defined by the max operator. Let us denote by αi the firing level of the i-th rule, i = 1, 2:

α1 = A1(x0) ∧ B1(y0),   α2 = A2(x0) ∧ B2(y0).

Then the membership function of the inferred consequence C is pointwise given by

C(w) = (α1 C1(w)) ∨ (α2 C2(w)).

To obtain a deterministic control action, we employ any defuzzification strategy.

If we have n rules in our rule-base then the consequence C is computed as

C(w) = ∨_{i=1}^n (αi Ci(w))

where αi denotes the firing level of the i-th rule, i = 1, ..., n.
104
u v
C1
w
u
xo
v
C2
w
yo
A1
A2
B1
B2
min
Figure 1.50 Making inferences with Larsen's product operation rule.
1.5.3 Construction of data base and rule base of FLC
The knowledge base of a fuzzy logic controller consists of two components, namely, a data base and a fuzzy rule base. The concepts associated with a data base are used to characterize fuzzy control rules and fuzzy data manipulation in an FLC. These concepts are subjectively defined and based on experience and engineering judgment. It should be noted that the correct choice of the membership functions of a linguistic term set plays an essential role in the success of an application.

Drawing heavily on [79], we discuss some of the important aspects relating to the construction of the data base and rule base in an FLC.
Data base strategy.

The data base strategy is concerned with the supports on which the primary fuzzy sets are defined. The union of these supports should cover the related universe of discourse in relation to some level set ε. This property of an FLC is called ε-completeness. In general, we choose the level ε at the crossover point, implying that we have a strong belief in the positive sense of the fuzzy control rules which are associated with the FLC.

Figure 1.50a An ε-complete fuzzy partition of [0, 1] with ε = 0.5.
In this sense, a dominant rule always exists and is associated with the
degree of belief greater than 0.5. In the extreme case, two dominant
rules are activated with equal belief 0.5.
Discretization/normalization of universes of discourse.

Discretization of universes of discourse is frequently referred to as quantization. In effect, quantization discretizes a universe into a certain number of segments (quantization levels). Each segment is labeled as a generic element, and forms a discrete universe. A fuzzy set is then defined by assigning grade of membership values to each generic element of the new discrete universe.

In the case of an FLC with continuous universes, the number of quantization levels should be large enough to provide an adequate approximation and yet small enough to save memory storage. The choice of quantization levels has an essential influence on how fine a control can be obtained.

For example, if a universe is quantized for every five units of measurement instead of ten units, then the controller is twice as sensitive to the observed variables.

A look-up table based on discrete universes, which defines the output of a controller for all possible combinations of the input signals, can be implemented by off-line processing in order to shorten the running time of the controller.
Range              NB    NM    NS    ZE    PS    PM    PB
x ≤ −3             1.0   0.3   0.0   0.0   0.0   0.0   0.0
−3 < x ≤ −1.6      0.7   0.7   0.0   0.0   0.0   0.0   0.0
−1.6 < x ≤ −0.8    0.3   1.0   0.3   0.0   0.0   0.0   0.0
−0.8 < x ≤ −0.4    0.0   0.7   0.7   0.0   0.0   0.0   0.0
−0.4 < x ≤ −0.2    0.0   0.3   1.0   0.3   0.0   0.0   0.0
−0.2 < x ≤ −0.1    0.0   0.0   0.7   0.7   0.0   0.0   0.0
−0.1 < x ≤ 0.1     0.0   0.0   0.3   1.0   0.3   0.0   0.0
0.1 < x ≤ 0.2      0.0   0.0   0.0   0.7   0.7   0.0   0.0
0.2 < x ≤ 0.4      0.0   0.0   0.0   0.3   1.0   0.3   0.0
0.4 < x ≤ 0.8      0.0   0.0   0.0   0.0   0.7   0.7   0.0
0.8 < x ≤ 1.6      0.0   0.0   0.0   0.0   0.3   1.0   0.3
1.6 < x ≤ 3.0      0.0   0.0   0.0   0.0   0.0   0.7   0.7
3.0 ≤ x            0.0   0.0   0.0   0.0   0.0   0.3   1.0

Table 1.5 Quantization.
Figure 1.51 Discretization of the universe of discourse.
However, these findings have a purely empirical nature and so far no formal analysis tools exist for studying how the quantization affects controller performance. This explains the preference for continuous domains, since quantization is a source of instability and oscillation problems.
Fuzzy partition of the input and output spaces.
A linguistic variable in the antecedent of a fuzzy control rule forms
a fuzzy input space with respect to a certain universe of discourse,
while that in the consequent of the rule forms a fuzzy output space. In
general, a linguistic variable is associated with a term set, with each
term in the term set defined on the same universe of discourse. A fuzzy
partition, then, determines how many terms should exist in a term set.
The primary fuzzy sets usually have a meaning, such as NB, NM, NS,
ZE, PS, PM and PB.
Figure 1.52 A coarse fuzzy partition of the input space.
Since a normalized universe implies the knowledge of the input/output
space via appropriate scale mappings, a well-formed term set can be
achieved as shown. If this is not the case, or a nonnormalized universe
is used, the terms could be asymmetrical and unevenly distributed in
the universe. Furthermore, the cardinality of a term set in a fuzzy input
space determines the maximum number of fuzzy control rules that we
can construct.
Figure 1.53 A finer fuzzy partition of [−1, 1].
In the case of two-input-one-output fuzzy systems, if the linguistic variables x and y can take 7 different values, respectively, the maximum rule number is 7 × 7. It should be noted that the fuzzy partition of the fuzzy input/output space is not deterministic and has no unique solution. A heuristic cut-and-trial procedure is usually needed to find the optimal fuzzy partition.
Completeness.
Intuitively, a fuzzy control algorithm should always be able to infer a
proper control action for every state of process. This property is called
completeness. The completeness of an FLC relates to its data base,
rule base, or both.
Choice of the membership functions of the primary fuzzy sets.

There are two methods used for defining fuzzy sets, depending on whether the universe of discourse is discrete or continuous: functional and numerical.

Functional. A functional definition expresses the membership function of a fuzzy set in a functional form, typically a bell-shaped function, triangle-shaped function, trapezoid-shaped function, etc.

Such functions are used in FLC because they lend themselves to manipulation through the use of fuzzy arithmetic. The functional definition can readily be adapted to a change in the normalization of a universe. Either a numerical definition or a functional definition may be used to assign the grades of membership, based on the subjective criteria of the decision.
Figure 1.54 Bell-shaped membership function.
Numerical. In this case, the grade of membership function of a fuzzy set is represented as a vector of numbers whose dimension depends on the degree of discretization. The membership function of each primary fuzzy set then has the form

A(x) ∈ {0.3, 0.7, 1.0}.
Rule base.
A fuzzy system is characterized by a set of linguistic statements based
on expert knowledge. The expert knowledge is usually in the form of
IF-THEN rules, which are easily implemented by fuzzy conditional
statements in fuzzy logic. The collection of fuzzy control rules that are
expressed as fuzzy conditional statements forms the rule base or the
rule set of an FLC.
Choice of process state (input) variables and control (output)
variables of fuzzy control rules.
Fuzzy control rules are more conveniently formulated in linguistic rather
than numerical terms. Typically, the linguistic variables in an FLC are
the state, state error, state error derivative, state error integral, etc.
Source and derivation of fuzzy control rules.
There are four modes of derivation of fuzzy control rules.
Expert Experience and Control Engineering Knowledge
Fuzzy control rules have the form of fuzzy conditional statements
that relate the state variables in the antecedent and process con-
trol variables in the consequents. In this connection, it should
be noted that in our daily life most of the information on which
our decisions are based is linguistic rather than numerical in na-
ture. Seen in this perspective, fuzzy control rules provide a natural
framework for the characterization of human behavior and decision
analysis. Many experts have found that fuzzy control rules
provide a convenient way to express their domain knowledge.
Operator's Control Actions
In many industrial man-machine control systems, the input-output
relations are not known with sufficient precision to make it possible
to employ classical control theory for modeling and simulation.
And yet skilled human operators can control such systems quite
successfully without having any quantitative models in mind. In
effect, a human operator employs - consciously or subconsciously -
a set of fuzzy IF-THEN rules to control the process.
As was pointed out by Sugeno, to automate such processes, it
is expedient to express the operator's control rules as fuzzy IF-
THEN rules employing linguistic variables. In practice, such rules
can be deduced from the observation of a human controller's actions
in terms of the input-output operating data.
Fuzzy Model of a Process
In the linguistic approach, the linguistic description of the dy-
namic characteristics of a controlled process may be viewed as a
fuzzy model of the process. Based on the fuzzy model, we can gen-
erate a set of fuzzy control rules for attaining optimal performance
of a dynamic system.
The set of fuzzy control rules forms the rule base of an FLC.
Although this approach is somewhat more complicated, it yields
better performance and reliability, and provides an FLC.
Learning
Many fuzzy logic controllers have been built to emulate human
decision-making behavior, but few are focused on human learning,
namely, the ability to create fuzzy control rules and to modify
them based on experience. A very interesting example of a fuzzy
rule based system which has a learning capability is Sugeno's fuzzy
car. Sugeno's fuzzy car can be trained to park by itself.
Types of fuzzy control rules.
Consistency, interactivity, completeness of fuzzy control rules.
Decision making logic: Definition of a fuzzy implication, Interpretation of
the sentence connective and, Interpretation of the sentence connective also,
Definitions of a compositional operator, Inference mechanism.
1.5.4 Ball and beam problem
We illustrate the applicability of fuzzy logic control systems by the ball and
beam problem.
The ball and beam system can be found in many undergraduate control
laboratories. The beam is made to rotate in a vertical plane by applying a
torque at the center of rotation and the ball is free to roll along the beam. We
require that the ball remain in contact with the beam. Let x = (r, ṙ, θ, θ̇)^T
be the state of the system, and y = r be the output of the system. Then the
system can be represented by the state-space model
(ẋ_1, ẋ_2, ẋ_3, ẋ_4)^T = (x_2, B(x_1 x_4^2 − G sin x_3), x_4, 0)^T + (0, 0, 0, 1)^T u,
y = x_1
where the control u is the angular acceleration θ̈ of the beam. The purpose of control is to
determine u(x) such that the closed-loop system output y will converge to
zero from certain initial conditions. The input-output linearization algorithm
determines the control law u(x) as follows: for state x compute
v(x) = −α_3 φ_4(x) − α_2 φ_3(x) − α_1 φ_2(x) − α_0 φ_1(x)
where φ_1 = x_1, φ_2 = x_2, φ_3(x) = −BG sin x_3, φ_4(x) = −BG x_4 cos x_3
and the α_i are chosen so that
s^4 + α_3 s^3 + α_2 s^2 + α_1 s + α_0
is a Hurwitz polynomial. Compute a(x) = −BG cos x_3 and b(x) = BG x_4^2 sin x_3;
then u(x) = (v(x) − b(x))/a(x).
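To make the computation concrete, here is a minimal Python sketch of the linearizing control law above. The values of B, G and the α_i coefficients are illustrative assumptions, not taken from the text; the α_i are picked so that the polynomial above is Hurwitz (all roots at s = -1).

```python
import numpy as np

# Assumed example constants; alpha = (alpha_0, alpha_1, alpha_2, alpha_3)
# gives s^4 + 4s^3 + 6s^2 + 4s + 1 = (s + 1)^4, which is Hurwitz.
B, G = 0.7143, 9.81
alpha = (1.0, 4.0, 6.0, 4.0)

def linearizing_control(x):
    x1, x2, x3, x4 = x
    phi = (x1,                          # phi_1
           x2,                          # phi_2
           -B * G * np.sin(x3),         # phi_3
           -B * G * x4 * np.cos(x3))    # phi_4
    v = -sum(a * p for a, p in zip(alpha, phi))
    a_x = -B * G * np.cos(x3)
    b_x = B * G * x4 ** 2 * np.sin(x3)
    return (v - b_x) / a_x              # u(x) = (v(x) - b(x)) / a(x)

print(linearizing_control((0.2, 0.0, 0.1, 0.0)))
```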
Figure 1.55 The beam and ball problem.
Wang and Mendel [98] use the following four common-sense linguistic
control rules for the beam and ball problem:

ℜ_1: if x_1 is positive and x_2 is near zero and x_3 is positive
and x_4 is near zero then u is negative

ℜ_2: if x_1 is positive and x_2 is near zero and x_3 is negative
and x_4 is near zero then u is positive big

ℜ_3: if x_1 is negative and x_2 is near zero and x_3 is positive
and x_4 is near zero then u is negative big

ℜ_4: if x_1 is negative and x_2 is near zero and x_3 is negative
and x_4 is near zero then u is positive

where all fuzzy numbers have Gaussian membership functions, e.g. the value
near zero of the linguistic variable x_2 is defined by exp(−x^2/2).
Figure 1.56 Gaussian membership function for near zero.
Using the Stone-Weierstrass theorem Wang [99] showed that fuzzy logic con-
trol systems of the form

ℜ_i: if x is A_i and y is B_i then z is C_i,   i = 1, . . . , n

with

Gaussian membership functions
A_i(u) = exp( −(1/2) ((u − α_{i1})/σ_{i1})^2 ),
B_i(v) = exp( −(1/2) ((v − α_{i2})/σ_{i2})^2 ),
C_i(w) = exp( −(1/2) ((w − α_{i3})/σ_{i3})^2 ),

Singleton fuzzifier
fuzzifier(x) := x,   fuzzifier(y) := y,

Product fuzzy conjunction
[A_i(u) and B_i(v)] = A_i(u) B_i(v),

Product fuzzy implication (Larsen implication)
[A_i(u) and B_i(v)] → C_i(w) = A_i(u) B_i(v) C_i(w),

Centroid defuzzification method [80]
z = ( Σ_{i=1}^n α_{i3} A_i(x) B_i(y) ) / ( Σ_{i=1}^n A_i(x) B_i(y) ),
where α_{i3} is the center of C_i,

are universal approximators, i.e. they can approximate any continuous func-
tion on a compact set to arbitrary accuracy. Namely, he proved the following
theorem.
Theorem 1.5.1 For a given real-valued continuous function g on the compact
set U and arbitrary ε > 0, there exists a fuzzy logic control system with
output function f such that
sup_{x ∈ U} |g(x) − f(x)| ≤ ε.
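As an illustration of the fuzzy logic control systems appearing in Theorem 1.5.1, the following sketch evaluates the closed-form output z for Gaussian antecedents with product inference and centroid defuzzification. The rule parameters and function names are hypothetical examples, not part of the original text.

```python
import numpy as np

def gaussian(u, center, width):
    # Gaussian membership value exp(-0.5*((u - center)/width)^2)
    return np.exp(-0.5 * ((u - center) / width) ** 2)

def gaussian_product_fls(x, y, rules):
    # rules: list of ((a1, s1), (a2, s2), c3) with Gaussian antecedents
    # (center, width) for x and y, and c3 the center of the consequent C_i.
    # z = sum_i c3 * A_i(x) * B_i(y) / sum_i A_i(x) * B_i(y)
    num = den = 0.0
    for (a1, s1), (a2, s2), c3 in rules:
        w = gaussian(x, a1, s1) * gaussian(y, a2, s2)   # product conjunction
        num += c3 * w
        den += w
    return num / den

# three hypothetical rules covering "negative", "zero", "positive"
rules = [((-1.0, 0.5), (-1.0, 0.5), -1.0),
         (( 0.0, 0.5), ( 0.0, 0.5),  0.0),
         (( 1.0, 0.5), ( 1.0, 0.5),  1.0)]
print(gaussian_product_fls(0.3, -0.2, rules))
```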
Castro in 1995 [15] showed that Mamdani's fuzzy logic controllers

ℜ_i: if x is A_i and y is B_i then z is C_i,   i = 1, . . . , n

with

Symmetric triangular membership functions
A_i(u) = 1 − |a_i − u|/α_i if |a_i − u| ≤ α_i, and 0 otherwise,
B_i(v) = 1 − |b_i − v|/β_i if |b_i − v| ≤ β_i, and 0 otherwise,
C_i(w) = 1 − |c_i − w|/γ_i if |c_i − w| ≤ γ_i, and 0 otherwise,

Singleton fuzzifier
fuzzifier(x_0) := x_0,

Minimum norm fuzzy conjunction
[A_i(u) and B_i(v)] = min{A_i(u), B_i(v)},

Minimum-norm fuzzy implication
[A_i(u) and B_i(v)] → C_i(w) = min{A_i(u), B_i(v), C_i(w)},

Maximum t-conorm rule aggregation
Agg(ℜ_1, ℜ_2, . . . , ℜ_n) = max(ℜ_1, ℜ_2, . . . , ℜ_n),

Centroid defuzzification method
z = ( Σ_{i=1}^n c_i min{A_i(x), B_i(y)} ) / ( Σ_{i=1}^n min{A_i(x), B_i(y)} ),
where c_i is the center of C_i,

are also universal approximators.
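A corresponding sketch, under the same caveat that all rule parameters are invented for illustration, shows the Mamdani-type controller of Castro's result: symmetric triangular membership functions, minimum-norm conjunction and the centroid formula quoted above.

```python
def triangular(u, center, spread):
    # symmetric triangular membership: 1 - |center - u|/spread inside the support
    d = abs(center - u)
    return 1.0 - d / spread if d <= spread else 0.0

def mamdani_controller(x, y, rules):
    # rules: list of ((a, alpha), (b, beta), c) - triangular antecedents for x, y
    # and c the center of the triangular consequent C_i.
    # z = sum_i c_i * min(A_i(x), B_i(y)) / sum_i min(A_i(x), B_i(y))
    num = den = 0.0
    for (a, alpha), (b, beta), c in rules:
        w = min(triangular(x, a, alpha), triangular(y, b, beta))
        num += c * w
        den += w
    return num / den if den > 0 else 0.0

rules = [((-1.0, 1.0), (-1.0, 1.0), -1.0),
         (( 0.0, 1.0), ( 0.0, 1.0),  0.0),
         (( 1.0, 1.0), ( 1.0, 1.0),  1.0)]
print(mamdani_controller(0.4, -0.1, rules))
```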
1.6 Aggregation in fuzzy system modeling
Many applications of fuzzy set theory involve the use of a fuzzy rule base
to model complex and perhaps ill-dened systems. These applications in-
clude fuzzy logic control, fuzzy expert systems and fuzzy systems modeling.
Typical of these situations are a set of n rules of the form

ℜ_1: if x is A_1 then y is C_1 also
ℜ_2: if x is A_2 then y is C_2 also
. . . also
ℜ_n: if x is A_n then y is C_n

The fuzzy inference process consists of the following four-step algorithm [107]:

Determination of the relevance or matching of each rule to the current
input value.

Determination of the output of each rule as a fuzzy subset of the output
space. We shall denote these individual rule outputs as R_j.

Aggregation of the individual rule outputs to obtain the overall fuzzy
system output as a fuzzy subset of the output space. We shall denote
this overall output as R.

Selection of some action based upon the output set.

Our purpose here is to investigate the requirements for the operations that
can be used to implement this reasoning process. We are particularly con-
cerned with the third step, the rule output aggregation.
Let us look at the process for combining the individual rule outputs. A basic
assumption we shall make is that the operation is pointwise and likewise.
By pointwise we mean that for every y, R(y) just depends upon R_j(y),
j = 1, . . . , n. By likewise we mean that the process used to combine the R_j
is the same for all of the y.
Let us denote the pointwise process we use to combine the individual rule
outputs as
F(y) = Agg(R_1(y), . . . , R_n(y)).
In the above Agg is called the aggregation operator and the R_j(y) are the
arguments. More generally, we can consider this as an operator
a = Agg(a_1, . . . , a_n)
where the a_i and a are values from the membership grade space, normally
the unit interval.
Let us look at the minimal requirements associated with Agg. We first note
that the combination of the individual rule outputs should be independent
of the choice of indexing of the rules. This implies that a required property
that we must associate with the Agg operator is that of commutativity: the
indexing of the arguments does not matter. We note that the commutativ-
ity property allows us to represent the arguments of the Agg operator as an
unordered collection of possibly duplicate values; such an object is a bag.
For an individual rule output R_j, the membership grade R_j(y) indicates
the degree or strength to which this rule suggests that y is the appropriate
solution. In particular, if for a pair of elements y′ and y″ it is the case that
R_j(y′) ≥ R_j(y″),
then we are saying that rule j is preferring y′ as the system output over y″.
From this we can reasonably conclude that if all rules prefer y′ over y″ as
output then the overall system output should prefer y′ over y″. This obser-
vation requires us to impose a monotonicity condition on the Agg operation.
In particular, if
R_j(y′) ≥ R_j(y″)
for all j, then
R(y′) ≥ R(y″).
There appears one other condition we need to impose upon the aggregation
operator. Assume that there exists some rule whose firing level is zero. The
implication of this is that the rule provides no information regarding what
should be the output of the system, so it should not affect the final R. The first
observation we can make is that whatever output this rule provides should
not make any distinction between the potential outputs. Thus, we see
that the aggregation operator needs an identity element.
In summary, we see that the aggregation operator Agg must satisfy three
conditions: commutativity, monotonicity, and possession of a fixed identity. These
conditions are based on three requirements: that the indexing of the rules
be unimportant, a positive association between individual rule output and
total system output, and that non-firing rules play no role in the decision process.
These operators are called MICA (Monotonic Identity Commutative Ag-
gregation) operators [107]. MICA operators are the most general class for
aggregation in fuzzy modeling. They include t-norms, t-conorms, averaging
and compensatory operators.
Assume X is a set of elements. A bag drawn from X is any collection of
elements which is contained in X. A bag is different from a subset in that
it allows multiple copies of the same element. A bag is similar to a set in
that the ordering of the elements in the bag does not matter. If A is a bag
consisting of a, b, c, d we denote this as A = < a, b, c, d >. Assume A and B
are two bags. We denote the sum of the bags by
C = A ⊕ B
where C is the bag consisting of the members of both A and B.
Example 1.6.1 Let A = < a, b, c, d > and B = < b, c, c >; then
A ⊕ B = < a, b, c, d, b, c, c >.
In the following we let Bag(X) indicate the set of all bags of the set X.
Definition 1.6.1 A function
F : Bag(X) → X
is called a bag mapping from Bag(X) into the set X.
An important property of bag mappings is that they are commutative in
the sense that the ordering of the elements does not matter.
Definition 1.6.2 Assume A = < a_1, . . . , a_n > and B = < b_1, . . . , b_n > are
two bags of the same cardinality n. If the elements in A and B can be indexed
in such a way that a_i ≥ b_i for all i then we shall denote this A ≥ B.
Definition 1.6.3 (MICA operator) A bag mapping
M : Bag([0, 1]) → [0, 1]
is called a MICA operator if it has the following two properties:
If A ≥ B then M(A) ≥ M(B) (monotonicity);
For every bag A there exists an element u ∈ [0, 1], called the identity
of A, such that if C = A ⊕ < u > then M(C) = M(A) (identity).
Thus the MICA operator is endowed with two properties in addition to the
inherent commutativity of the bag operator, monotonicity and identity:
The requirement of monotonicity appears natural for an aggregation
operator in that it provides some connection between the arguments
and the aggregated value.
The property of identity allows us to have the facility for aggregating
data which does not aect the overall result. This becomes useful for
enabling us to include importances among other characteristics.
Fuzzy set theory provides a host of attractive aggregation connectives for
integrating membership values representing uncertain information. These
connectives can be categorized into the following three classes: union, inter-
section and compensation connectives.
Union produces a high output whenever any one of the input values repre-
senting degrees of satisfaction of dierent features or criteria is high. Inter-
section connectives produce a high output only when all of the inputs have
high values. Compensative connectives have the property that a higher de-
gree of satisfaction of one of the criteria can compensate for a lower degree
of satisfaction of another criteria to a certain extent.
In this sense, union connectives provide full compensation and intersection
connectives provide no compensation.
1.6.1 Averaging operators
In a decision process the idea of trade-offs corresponds to viewing the global
evaluation of an action as lying between the worst and the best local ratings.
This occurs in the presence of conflicting goals, when a compensation be-
tween the corresponding compatibilities is allowed.
Averaging operators realize trade-offs between objectives, by allowing a pos-
itive compensation between ratings.
Definition 1.6.4 (averaging operator) An averaging (or mean) operator M
is a function
M : [0, 1] × [0, 1] → [0, 1]
satisfying the following properties:
M(x, x) = x for all x ∈ [0, 1] (idempotency);
M(x, y) = M(y, x) for all x, y ∈ [0, 1] (commutativity);
M(0, 0) = 0, M(1, 1) = 1 (extremal conditions);
M(x, y) ≤ M(x′, y′) if x ≤ x′ and y ≤ y′ (monotonicity);
M is continuous.
Lemma 1.6.1 If M is an averaging operator then
min{x, y} ≤ M(x, y) ≤ max{x, y} for all x, y ∈ [0, 1].
Proof. From the idempotency and monotonicity of M it follows that
min{x, y} = M(min{x, y}, min{x, y}) ≤ M(x, y)
and
M(x, y) ≤ M(max{x, y}, max{x, y}) = max{x, y},
which ends the proof. The interesting properties of averaging operators are the following
[25]:
Property 1.6.1 A strictly increasing averaging operator cannot be associa-
tive.
Property 1.6.2 The only associative averaging operators are defined by
M(x, y, α) = med(x, y, α) =
y if x ≤ y ≤ α,
α if x ≤ α ≤ y,
x if α ≤ x ≤ y,
where α ∈ (0, 1).
An important family of averaging operators is formed by the quasi-arithmetic
means
M(a_1, . . . , a_n) = f^{-1}( (1/n) Σ_{i=1}^n f(a_i) ).
This family has been characterized by Kolmogorov as being the class of all
decomposable continuous averaging operators.
Example 1.6.2 For example, the quasi-arithmetic mean of a_1 and a_2 is de-
fined by
M(a_1, a_2) = f^{-1}( (f(a_1) + f(a_2))/2 ).
The next table shows the most often used mean operators.
Name                      M(x, y)
harmonic mean             2xy/(x + y)
geometric mean            √(xy)
arithmetic mean           (x + y)/2
dual of geometric mean    1 − √((1 − x)(1 − y))
dual of harmonic mean     (x + y − 2xy)/(2 − x − y)
median                    med(x, y, α), α ∈ (0, 1)
generalized p-mean        ((x^p + y^p)/2)^{1/p}, p ≥ 1

Table 1.6 Mean operators.
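A short sketch may help connect the table to the quasi-arithmetic mean formula: each of the means above arises from a particular generator f. The generator choices below are the standard ones; the function names are illustrative.

```python
import math

def quasi_arithmetic_mean(values, f, f_inv):
    # M(a_1, ..., a_n) = f^{-1}( (1/n) * sum_i f(a_i) )
    return f_inv(sum(f(a) for a in values) / len(values))

x = [0.4, 0.9]
# arithmetic mean: f(t) = t
print(quasi_arithmetic_mean(x, lambda t: t, lambda t: t))        # 0.65
# geometric mean: f(t) = ln t
print(quasi_arithmetic_mean(x, math.log, math.exp))              # 0.6
# harmonic mean: f(t) = 1/t
print(quasi_arithmetic_mean(x, lambda t: 1 / t, lambda t: 1 / t))
# generalized p-mean with p = 2
print(quasi_arithmetic_mean(x, lambda t: t ** 2, math.sqrt))
```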
The process of information aggregation appears in many applications related
to the development of intelligent systems. One sees aggregation in neural
networks, fuzzy logic controllers, vision systems, expert systems and multi-
criteria decision aids. In [104] Yager introduced a new aggregation technique
based on the ordered weighted averaging (OWA) operators.
Definition 1.6.5 An OWA operator of dimension n is a mapping
F : IR^n → IR
that has an associated n-dimensional weighting vector W = (w_1, w_2, . . . , w_n)^T
such that w_i ∈ [0, 1], 1 ≤ i ≤ n, and Σ_{i=1}^n w_i = 1. Furthermore,
F(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j
where b_j is the j-th largest element of the bag < a_1, . . . , a_n >.
Example 1.6.3 Assume W = (0.4, 0.3, 0.2, 0.1)^T; then
F(0.7, 1, 0.2, 0.6) = 0.4 × 1 + 0.3 × 0.7 + 0.2 × 0.6 + 0.1 × 0.2 = 0.75.
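A minimal sketch of Definition 1.6.5, reproducing Example 1.6.3; the function name is illustrative.

```python
def owa(weights, values):
    # Ordered weighted averaging: weights are applied to the values
    # sorted in descending order (the re-ordering step).
    b = sorted(values, reverse=True)
    return sum(w * bj for w, bj in zip(weights, b))

# Example 1.6.3: W = (0.4, 0.3, 0.2, 0.1)
print(owa((0.4, 0.3, 0.2, 0.1), (0.7, 1.0, 0.2, 0.6)))  # 0.75
```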
A fundamental aspect of this operator is the re-ordering step; in particular,
an aggregate a_i is not associated with a particular weight w_i, but rather a
weight is associated with a particular ordered position of the aggregates.
When we view the OWA weights as a column vector we shall find it convenient
to refer to the weights with the low indices as weights at the top and those
with the higher indices as weights at the bottom.
It is noted that different OWA operators are distinguished by their weighting
function. In [104] Yager pointed out three important special cases of OWA
aggregations:
F^*: In this case W = W^* = (1, 0, . . . , 0)^T and
F^*(a_1, . . . , a_n) = max_{i=1,...,n}(a_i),

F_*: In this case W = W_* = (0, 0, . . . , 1)^T and
F_*(a_1, . . . , a_n) = min_{i=1,...,n}(a_i),

F_A: In this case W = W_A = (1/n, . . . , 1/n)^T and
F_A(a_1, . . . , a_n) = (1/n) Σ_{i=1}^n a_i.
A number of important properties can be associated with the OWA operators.
We shall now discuss some of these.
For any OWA operator F,
F_*(a_1, . . . , a_n) ≤ F(a_1, . . . , a_n) ≤ F^*(a_1, . . . , a_n).
Thus the upper and lower star OWA operators are its boundaries. From the
above it becomes clear that for any F,
min_i(a_i) ≤ F(a_1, . . . , a_n) ≤ max_i(a_i).
The OWA operator can be seen to be commutative. Let {a_1, . . . , a_n} be a
bag of aggregates and let {d_1, . . . , d_n} be any permutation of the a_i. Then
for any OWA operator
F(a_1, . . . , a_n) = F(d_1, . . . , d_n).
A third characteristic associated with these operators is monotonicity. As-
sume a_i and c_i are a collection of aggregates, i = 1, . . . , n, such that for each
i, a_i ≤ c_i. Then
F(a_1, . . . , a_n) ≤ F(c_1, c_2, . . . , c_n)
where F is some fixed weight OWA operator.
Another characteristic associated with these operators is idempotency. If
a_i = a for all i then for any OWA operator
F(a_1, . . . , a_n) = a.
From the above we can see that the OWA operators have the basic properties
associated with an averaging operator.
Example 1.6.4 A window type OWA operator takes the average of the m
arguments about the center. For this class of operators we have
w_i = 0 if i < k,
w_i = 1/m if k ≤ i < k + m,
w_i = 0 if i ≥ k + m.
Figure 1.57 Window type OWA operator.
In order to classify OWA operators with regard to their location between and
and or, a measure of orness associated with any vector W was introduced by
Yager [104] as follows:
orness(W) = (1/(n − 1)) Σ_{i=1}^n (n − i) w_i.
It is easy to see that for any W, orness(W) is always in the unit interval.
Furthermore, note that the nearer W is to an or, the closer its measure is to
one; while the nearer it is to an and, the closer it is to zero.
Lemma 1.6.2 Let us consider the vectors W^* = (1, 0, . . . , 0)^T, W_* =
(0, 0, . . . , 1)^T and W_A = (1/n, . . . , 1/n)^T. Then it can easily be shown that
orness(W^*) = 1,
orness(W_*) = 0,
orness(W_A) = 0.5.
A measure of andness is defined as
andness(W) = 1 − orness(W).
Generally, an OWA operator with much of its nonzero weight near the top will
be an orlike operator,
orness(W) ≥ 0.5,
and when much of the weight is nonzero near the bottom, the OWA oper-
ator will be andlike,
andness(W) ≥ 0.5.
Example 1.6.5 Let W = (0.8, 0.2, 0.0)^T. Then
orness(W) = (1/2)(2 × 0.8 + 0.2) = 0.9
and
andness(W) = 1 − orness(W) = 1 − 0.9 = 0.1.
This means that the OWA operator defined by
F(a_1, a_2, a_3) = 0.8 b_1 + 0.2 b_2 + 0.0 b_3 = 0.8 b_1 + 0.2 b_2,
where b_j is the j-th largest element of the bag < a_1, a_2, a_3 >, is an orlike
aggregation.
The following theorem shows that as we move weight up the vector we in-
crease the orness, while moving weight down causes us to decrease orness(W).
Theorem 1.6.1 [105] Assume W and W′ are two n-dimensional OWA vec-
tors such that
W = (w_1, . . . , w_n)^T,  W′ = (w_1, . . . , w_j + ε, . . . , w_k − ε, . . . , w_n)^T
where ε > 0 and j < k. Then orness(W′) > orness(W).
Proof. From the definition of the measure of orness we get
orness(W′) = (1/(n − 1)) Σ_i (n − i) w′_i = (1/(n − 1)) [ Σ_i (n − i) w_i + (n − j)ε − (n − k)ε ],
that is,
orness(W′) = orness(W) + (1/(n − 1)) ε (k − j).
Since k > j, orness(W′) > orness(W).
In [104] Yager defined the measure of dispersion (or entropy) of an OWA
vector by
disp(W) = − Σ_i w_i ln w_i.
We can see that when using the OWA operator as an averaging operator,
disp(W) measures the degree to which we use all the aggregates equally.
If F is an OWA aggregation with weights w_i, the dual of F, denoted F̂, is an
OWA aggregation of the same dimension with weights ŵ_i, where
ŵ_i = w_{n−i+1}.
We can easily see that if F and F̂ are duals then
disp(F̂) = disp(F),
orness(F̂) = 1 − orness(F) = andness(F).
Thus if F is orlike its dual is andlike.
Example 1.6.6 Let W = (0.3, 0.2, 0.1, 0.4)^T. Then
Ŵ = (0.4, 0.1, 0.2, 0.3)^T
and
orness(F) = (1/3)(3 × 0.3 + 2 × 0.2 + 0.1) ≈ 0.466,
orness(F̂) = (1/3)(3 × 0.4 + 2 × 0.1 + 0.2) ≈ 0.533.
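The following sketch collects the orness, andness, dispersion and dual-vector formulas and reproduces the numbers of Example 1.6.6; helper names are illustrative.

```python
import math

def orness(w):
    n = len(w)
    return sum((n - i) * wi for i, wi in enumerate(w, start=1)) / (n - 1)

def andness(w):
    return 1.0 - orness(w)

def dispersion(w):
    # entropy-like measure; terms with w_i = 0 are taken as 0
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

def dual(w):
    # dual weights: w_hat_i = w_{n-i+1}
    return tuple(reversed(w))

W = (0.3, 0.2, 0.1, 0.4)
print(orness(W), orness(dual(W)))   # ~0.466 and ~0.533, as in Example 1.6.6
print(andness(W), dispersion(W))
```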
An important application of the OWA operators is in the area of quantifier
guided aggregations [104]. Assume
{A_1, . . . , A_n}
is a collection of criteria. Let x be an object such that for any criterion A_i,
A_i(x) ∈ [0, 1] indicates the degree to which this criterion is satisfied by x. If
we want to find out the degree to which x satisfies all the criteria, denoting
this by D(x), we get, following Bellman and Zadeh [2],
D(x) = min{A_1(x), . . . , A_n(x)}.
In this case we are essentially requiring x to satisfy A_1 and A_2 and . . . and
A_n.
If we desire to find out the degree to which x satisfies at least one of the
criteria, denoting this E(x), we get
E(x) = max{A_1(x), . . . , A_n(x)}.
In this case we are requiring x to satisfy A_1 or A_2 or . . . or A_n.
In many applications, rather than desiring that a solution satisfies one of these
extreme situations, all or at least one, we may require that x satisfies
most or at least half of the criteria. Drawing upon Zadeh's concept [119]
of linguistic quantifiers we can accomplish these kinds of quantifier guided
aggregations.
Definition 1.6.6 A quantifier Q is called
regular monotonically non-decreasing if
Q(0) = 0, Q(1) = 1, and if r_1 > r_2 then Q(r_1) ≥ Q(r_2);
regular monotonically non-increasing if
Q(0) = 1, Q(1) = 0, and if r_1 < r_2 then Q(r_1) ≥ Q(r_2);
Figure 1.58 Monotone linguistic quantifiers.
regular unimodal if
Q(0) = Q(1) = 0, Q(r) = 1 for a ≤ r ≤ b,
and if r_2 ≤ r_1 ≤ a then Q(r_1) ≥ Q(r_2), while if r_2 ≥ r_1 ≥ b then Q(r_2) ≤ Q(r_1).
Figure 1.58a Unimodal linguistic quantifier.
With a_i = A_i(x), the overall valuation of x is F_Q(a_1, . . . , a_n) where F_Q is an
OWA operator. The weights associated with this quantifier guided aggrega-
tion are obtained as follows:
w_i = Q(i/n) − Q((i − 1)/n),  i = 1, . . . , n. (1.5)
The next figure graphically shows the operation involved in determining the
OWA weights directly from the quantifier guiding the aggregation.
Figure 1.59 Determining weights from a quantifier.
Theorem 1.6.2 If we construct the w_i via the method (1.5) we always get
(1) Σ_i w_i = 1;
(2) w_i ∈ [0, 1]
for any function
Q : [0, 1] → [0, 1]
satisfying the conditions of a regular non-decreasing quantifier.
Proof. We first see that from the non-decreasing property Q(i/n) ≥ Q((i − 1)/n),
hence w_i ≥ 0, and since Q(r) ≤ 1, also w_i ≤ 1. Furthermore we see that
Σ_i w_i = Σ_i ( Q(i/n) − Q((i − 1)/n) ) = Q(n/n) − Q(0/n) = 1 − 0 = 1.
We call any function satisfying the conditions of a regular non-decreasing
quantifier an acceptable OWA weight generating function.
Let us look at the weights generated from some basic types of quantifiers.
The quantifier for all, Q_*, is defined such that
Q_*(r) = 0 for r < 1, and Q_*(r) = 1 for r = 1.
Using our method for generating weights,
w_i = Q_*(i/n) − Q_*((i − 1)/n),
we get
w_i = 0 for i < n, and w_i = 1 for i = n.
This is exactly what we previously denoted as W_*.
Figure 1.59a The quantifier all.
For the quantifier there exists we have
Q^*(r) = 0 for r = 0, and Q^*(r) = 1 for r > 0.
In this case we get
w_1 = 1 and w_i = 0 for i ≠ 1.
This is exactly what we denoted as W^*.
Figure 1.60 The quantifier there exists.
Consider next the quantifier defined by
Q(r) = r.
This is an identity or linear type quantifier. In this case we get
w_i = Q(i/n) − Q((i − 1)/n) = i/n − (i − 1)/n = 1/n.
This gives us the pure averaging OWA aggregation operator.
Figure 1.60a The identity quantifier.
Recapitulating, using the approach suggested by Yager, if we desire to calculate
F_Q(a_1, . . . , a_n)
for Q a regular non-decreasing quantifier we proceed as follows:
(1) Calculate
w_i = Q(i/n) − Q((i − 1)/n),
(2) Calculate
F_Q(a_1, . . . , a_n) = Σ_{i=1}^n w_i b_i
where b_i is the i-th largest of the a_j.
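A small sketch of this two-step procedure, assuming an example quantifier; the data values below reuse those of Exercise 1.6.4 purely for illustration.

```python
def quantifier_owa(Q, values):
    # Step (1): w_i = Q(i/n) - Q((i-1)/n); Step (2): OWA with those weights.
    n = len(values)
    w = [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]
    b = sorted(values, reverse=True)
    return sum(wi * bi for wi, bi in zip(w, b))

a = (0.6, 0.6, 0.4, 0.2)
print(quantifier_owa(lambda r: r, a))        # identity quantifier -> plain average 0.45
print(quantifier_owa(lambda r: r ** 2, a))   # Q(r) = r^2, as in Exercise 1.6.4
```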
Exercise 1.6.1 Let W = (0.4, 0.2, 0.1, 0.1, 0.2)^T. Calculate disp(W).
Exercise 1.6.2 Let W = (0.3, 0.3, 0.1, 0.1, 0.2)^T. Calculate orness(F), where
the OWA operator F is derived from W.
Exercise 1.6.3 Prove that 0 ≤ disp(W) ≤ ln(n) for any n-dimensional
weight vector W.
Exercise 1.6.4 Let Q(x) = x^2 be a linguistic quantifier. Assume the weights
of an OWA operator F are derived from Q. Calculate the value F(a_1, a_2, a_3, a_4)
for a_1 = a_2 = 0.6, a_3 = 0.4 and a_4 = 0.2. What is the orness measure of F?
Exercise 1.6.5 Let Q(x) = √x be a linguistic quantifier. Assume the weights
of an OWA operator F are derived from Q. Calculate the value F(a_1, a_2, a_3, a_4)
for a_1 = a_2 = 0.6, a_3 = 0.4 and a_4 = 0.2. What is the orness measure of F?
1.7 Fuzzy screening systems
In screening problems one usually starts with a large subset, X, of possible
alternative solutions. Each alternative is essentially represented by a minimal
amount of information supporting its appropriateness as the best solution.
This minimal amount of information provided by each alternative is used to
help select a subset A of X to be further investigated.
Two prototypical examples of this kind of problem can be mentioned.
Job selection problem. Here a large number of candidates, X, sub-
mit a resume, minimal information, in response to a job announcement. Based
upon these resumes a small subset of X, A, are called in for inter-
views. These interviews, which provide more detailed information, are
the basis for selecting the winning candidate from A.
Proposal selection problem. Here a large class of candidates, X,
submit preliminary proposals, minimal information. Based upon these
preliminary proposals a small subset of X, A, are requested to sub-
mit full detailed proposals. These detailed proposals are the basis for
selecting the winning candidate from A.
In the above examples the process of selecting the subset A, required to
provide further information, is called a screening process. In [106] Yager sug-
gests a technique, called fuzzy screening system, for managing this screening
process.
The kinds of screening problems described above, besides being character-
ized as decision making with minimal information, generally involve multiple
participants in the selection process. The people whose opinions must be con-
sidered in the selection process are called experts. Thus screening problems
are a class of multiple expert decision problems. In addition, each individ-
ual expert's decision is based upon the use of multiple criteria. So we have an
ME-MCDM (Multi Expert-Multi Criteria Decision Making) problem with
minimal information.
The fact that we have minimal information associated with each of the al-
ternatives complicates the problem because it limits the operations which
can be performed in the aggregation processes needed to combine the multi-
ple experts as well as the multiple criteria. The Arrow impossibility theorem [1] is a
reflection of this difficulty.
Yager [106] suggests an approach to the screening problem which allows for
the requisite aggregations but which respects the lack of detail provided by
the information associated with each alternative. The technique only re-
quires that preference information be expressed by elements drawn from
a scale that essentially only requires a linear ordering. This property al-
lows the experts to provide information about satisfactions in the form of
linguistic values such as high, medium, low. This ability to perform the
necessary operations while requiring only imprecise linguistic preference val-
uations enables the experts to comfortably use the kinds of minimally
informative sources of information about the objects described above. The
fuzzy screening system is a two stage process.
In the first stage, individual experts are asked to provide an evalua-
tion of the alternatives. This evaluation consists of a rating for each
alternative on each of the criteria.
In the second stage, the methodology introduced in [104] is used to
aggregate the individual experts' evaluations to obtain an overall lin-
guistic value for each object. This overall evaluation can then be used
by the decision maker as an aid in the selection process.
The problem consists of three components.
The first component is a collection
X = {X_1, . . . , X_p}
of alternative solutions from amongst which we desire to select some
subset to be investigated further.
The second component is a group
A = {A_1, . . . , A_r}
of experts or panelists whose opinion is solicited in screening the alterna-
tives.
The third component is a collection
C = {C_1, . . . , C_n}
of criteria which are considered relevant in the choice of the objects to
be further considered.
For each alternative each expert is required to provide his opinion. In
particular, for each alternative an expert is asked to evaluate how well that
alternative satisfies each of the criteria in the set C. These evaluations of
alternative satisfaction to criteria will be given in terms of elements from the
following scale S:

Outstanding (OU)   S_7
Very High (VH)     S_6
High (H)           S_5
Medium (M)         S_4
Low (L)            S_3
Very Low (VL)      S_2
None (N)           S_1

The use of such a scale provides a natural ordering, S_i > S_j if i > j, and the
maximum and minimum of any two scores are defined by
max(S_i, S_j) = S_i if S_i ≥ S_j,  min(S_i, S_j) = S_j if S_j ≤ S_i.
We shall denote the max by ∨ and the min by ∧. Thus for an alternative an
expert provides a collection of n values
{P_1, . . . , P_n}
where P_j is the rating of the alternative on the j-th criterion by the expert.
Each P_j is an element in the set of allowable scores S.
Assuming n = 5, a typical scoring for an alternative from one expert would
be:
(medium, low, outstanding, very high, outstanding).
Independent of this evaluation of alternative satisfaction to criteria each ex-
pert must assign a measure of importance to each of the criteria. An expert
uses the same scale, S, to provide the importance associated with the criteria.
The next step in the process is to find the overall valuation for an alternative
by a given expert.
In order to accomplish this overall evaluation, we use a methodology sug-
gested by Yager [102]. A crucial aspect of this approach is the taking of the
negation of the importances as
Neg(S_i) = S_{q−i+1}.
For the scale that we are using, we see that the negation operation provides
the following:
Neg(OU) = N
Neg(V H) = V L
Neg(H) = L
Neg(M) = M
Neg(L) = H
Neg(V L) = V H
Neg(N) = OU
Then the unit score of each alternative by each expert, denoted by U, is
calculated as follows:
U = min_j { Neg(I_j) ∨ P_j }  (1.6)
where I_j denotes the importance of the j-th criterion.
In the above, ∨ indicates the max operation. We note that (1.6) essentially
is an anding of the criteria satisfactions modified by the importance of the
criteria. The formula (1.6) can be seen as a measure of the degree to which
an alternative satisfies the following proposition:
All important criteria are satisfied.
Example 1.7.1 Consider some alternative with the following scores on five
criteria

Criteria:    C_1  C_2  C_3  C_4  C_5
Importance:  VH   VH   M    L    VL
Score:       M    L    OU   VH   OU

In this case we have
U = min{Neg(VH) ∨ M, Neg(VH) ∨ L, Neg(M) ∨ OU, Neg(L) ∨ VH, Neg(VL) ∨ OU}
  = min{VL ∨ M, VL ∨ L, M ∨ OU, H ∨ VH, VH ∨ OU} = min{M, L, OU, VH, OU} = L.
The essential reason for the low performance of this alternative is that it per-
formed low on the second criterion, which has a very high importance. The
formulation of Equation (1.6) can be seen as a generalization of a weighted
averaging. Linguistically, this formulation is saying that
If a criterion is important then an alternative should score well on it.
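A minimal sketch of this first-stage computation on the seven-point scale, reproducing Example 1.7.1. The encoding of the scale by integer indices and all helper names are implementation choices, not part of the method's description.

```python
# Seven-point linguistic scale S_1..S_7, encoded by index.
SCALE = ["N", "VL", "L", "M", "H", "VH", "OU"]   # S_1 ... S_7
q = len(SCALE)
idx = {s: i + 1 for i, s in enumerate(SCALE)}

def neg(s):
    # Neg(S_i) = S_{q-i+1}
    return SCALE[q - idx[s]]

def unit_score(importances, scores):
    # U = min_j { Neg(I_j) v P_j },  where v is the max on the scale
    vals = [max(idx[neg(I)], idx[P]) for I, P in zip(importances, scores)]
    return SCALE[min(vals) - 1]

# Example 1.7.1
print(unit_score(["VH", "VH", "M", "L", "VL"],
                 ["M",  "L",  "OU", "VH", "OU"]))   # -> "L"
```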
As a result of the first stage, we have for each alternative a collection of
evaluations
{X_{i1}, X_{i2}, . . . , X_{ir}}
where X_{ik} is the unit evaluation of the i-th alternative by the k-th expert.
In the second stage the technique for combining the experts' evaluations to
obtain an overall evaluation for each alternative is based upon the OWA
operators.
The first step in this process is for the decision making body to provide an
aggregation function which we shall denote as Q. This function can be seen
as a generalization of the idea of how many experts it feels need to agree
on an alternative for it to be acceptable to pass the screening process. In
particular, for each number i, where i runs from 1 to r, the decision making
body must provide a value Q(i) indicating how satisfied it would be in passing
an alternative that i of the experts were satisfied with. The values for Q(i)
should be drawn from the scale S described above.
It should be noted that Q should have certain characteristics to make it
rational:
As more experts agree, the decision maker's satisfaction or confidence
should increase:
Q(i) ≥ Q(j), i > j.
If all the experts are satisfied then his satisfaction should be the highest
possible:
Q(r) = Outstanding.
A number of special forms for Q are worth noting:
If the decision making body requires all experts to support an alternative
then we get
Q(i) = None for i < r,
Q(r) = Outstanding.
If the support of just one expert is enough to make an alternative worthy
of consideration then
Q(i) = Outstanding for all i.
If the support of at least m experts is needed for consideration then
Q(i) = None for i < m,
Q(i) = Outstanding for i ≥ m.
In order to define the function Q, we introduce the operation Int[a] as returning
the integer value that is closest to the number a. In the following, we shall
let q be the number of points on the scale and r be the number of experts
participating. The function which emulates the average is denoted as Q_A(k)
and is defined by
Q_A(k) = S_{b(k)}
where
b(k) = Int[ 1 + k (q − 1)/r ]
for all k = 0, 1, . . . , r.
We note that whatever the values of q and r, it is always the case that
Q_A(0) = S_1,  Q_A(r) = S_q.
As an example of this function, if r = 3 and q = 7 then
b(k) = Int[ 1 + k (6/3) ] = Int[1 + 2k]
and
Q_A(0) = S_1, Q_A(1) = S_3, Q_A(2) = S_5, Q_A(3) = S_7.
If r = 4 and q = 7 then
b(k) = Int[1 + 1.5 k]
and
Q_A(0) = S_1, Q_A(1) = S_3, Q_A(2) = S_4, Q_A(3) = S_6, Q_A(4) = S_7.
Having appropriately selected Q, we are now in the position to use the OWA
method for aggregating the expert opinions. Assume we have r experts, each
of which has a unit evaluation for the i-th project denoted X_{ik}.
The first step in the OWA procedure is to order the X_{ik}'s in descending
order; thus we shall denote B_j as the j-th highest score among the experts'
unit scores for the project. To find the overall evaluation for the i-th project,
denoted X_i, we calculate
X_i = max_j { Q(j) ∧ B_j }.
In order to appreciate the workings of this formulation we must realize that
B_j can be seen as the worst of the j top scores.
Q(j) can be seen as an indication of how important the decision
maker feels that the support of at least j experts is.
The term Q(j) ∧ B_j can be seen as a weighting of an object's j best
scores, B_j, and the decision maker requirement that j people support
the project, Q(j).
The max operator plays a role akin to the summation in the usual
numeric averaging procedure.
Example 1.7.2 Assume we have four experts, each providing a unit eval-
uation for project i obtained by the methodology discussed in the previous
section:
X_{i1} = M
X_{i2} = H
X_{i3} = H
X_{i4} = VH
Reordering these scores we get
B_1 = VH
B_2 = H
B_3 = H
B_4 = M
Furthermore, we shall assume that our decision making body chooses as its
aggregation function the average-like function Q_A. Then with r = 4 and scale
cardinality q = 7, we obtain:
Q_A(1) = L (S_3)
Q_A(2) = M (S_4)
Q_A(3) = VH (S_6)
Q_A(4) = OU (S_7)
We calculate the overall evaluation as
X_i = max{L ∧ VH, M ∧ H, VH ∧ H, OU ∧ M}
    = max{L, M, H, M}
    = H.
Thus the overall evaluation of this alternative is high.
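A companion sketch of the second stage, reproducing Example 1.7.2: the average-like quantifier Q_A and the overall evaluation X_i = max_j{Q_A(j) ∧ B_j}. The Int[·] rounding (halves rounded up) is an assumption chosen to match the q = 7, r = 4 values quoted above; all names are illustrative.

```python
SCALE = ["N", "VL", "L", "M", "H", "VH", "OU"]   # S_1 ... S_7, as before
q = len(SCALE)
idx = {s: i + 1 for i, s in enumerate(SCALE)}

def Q_A(k, r, q):
    # Q_A(k) = S_{b(k)},  b(k) = Int[1 + k(q-1)/r], halves rounded up
    b = int(1 + k * (q - 1) / r + 0.5)
    return SCALE[b - 1]

def overall_evaluation(unit_scores):
    # X_i = max_j { Q_A(j) ^ B_j },  B_j = j-th highest expert unit score
    r = len(unit_scores)
    B = sorted(unit_scores, key=lambda s: idx[s], reverse=True)
    vals = [min(idx[Q_A(j, r, q)], idx[Bj]) for j, Bj in enumerate(B, start=1)]
    return SCALE[max(vals) - 1]

# Example 1.7.2: expert unit evaluations M, H, H, VH  ->  overall "H"
print(overall_evaluation(["M", "H", "H", "VH"]))
```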
Using the methodology suggested thus far, we obtain for each alternative an
overall rating X_i. These ratings allow us to obtain an evaluation of all the
alternatives without resorting to a numeric scale. The decision making body
is now in the position to make its selection of alternatives that are to be passed
through the screening process. A level S* from the scale S is selected, and all
those alternatives that have an overall evaluation of S* or better are passed
to the next step in the decision process.
Exercise 1.7.1 Consider some alternative with the following scores on six
criteria

Criteria:    C_1  C_2  C_3  C_4  C_5  C_6
Importance:  H    VH   M    L    VL   M
Score:       L    VH   OU   VH   OU   M

Calculate the unit score of this alternative.
1.8 Applications of fuzzy systems
For the past few years, particularly in Japan, the USA and Germany, approxi-
mately 1,000 commercial and industrial fuzzy systems have been successfully
developed. The number of industrial and commercial applications worldwide
appears likely to increase significantly in the near future.
The first application of fuzzy logic is due to Mamdani of the University
of London, U.K., who in 1974 designed an experimental fuzzy control for
a steam engine. In 1980, a Danish company (F.L. Smidth & Co. A/S)
used fuzzy theory in cement kiln control. Three years later, Fuji Electric
Co., Ltd. (Japan) implemented fuzzy control of chemical injection for water
purication plants.
The first fuzzy controller was exhibited at the Second IFSA Congress in 1987.
This controller originated from Omron Corp., a Japanese company which be-
gan research in fuzzy logic in 1984 and has since applied for over 700 patents.
Also in 1987, the Sendai Subway Automatic Train Operations Controller, de-
signed by the Hitachi team, started operating in Sendai, Japan. The fuzzy
logic in this subway system makes the journey more comfortable with smooth
braking and acceleration. In 1989, Omron Corp. demonstrated fuzzy work-
stations at the Business Show in Harumi, Japan. Such a workstation is just
a RISC-based computer, equipped with a fuzzy inference board. This fuzzy
inference board is used to store and retrieve fuzzy information, and to make
fuzzy inferences.
Product Company
Washing Machine AEG, Sharp, Goldstar
Rice Cooker Goldstar
Cooker/Fryer Tefal
Microwave Oven Sharp
Electric Shaver Sharp
Refrigerator Whirlpool
Battery Charger Bosch
Vacuum Cleaner Philips, Siemens
Camcorders Canon, Sanyo, JVC
Transmission Control GM(Saturn), Honda, Mazda
Climate Control Ford
Temp control NASA in space shuttle
Credit Card GE Corporation
Table 1.7 Industrial applications of fuzzy logic controllers.
The application of fuzzy theory in consumer products started in 1990 in
Japan. An example is the fuzzy washing machine, which automatically
judges the material, the volume and the dirtiness of the laundry and chooses
the optimum washing program and water flow. Another example is the fuzzy
logic found in the electronic fuel injection controls and automatic cruise con-
trol systems of cars, making complex controls more efficient and easier to use.
Fuzzy logic is also being used in vacuum cleaners, camcorders, television sets
etc. In 1993, Sony introduced the Sony PalmTop, which uses a fuzzy logic
decision tree algorithm to perform handwritten (using a computer lightpen)
Kanji character recognition. For instance, if one would write 253, then the
Sony Palmtop can distinguish the number 5 from the letter S.
There are many products based on fuzzy logic in the market today. Most of
the consumer products in SEA/Japan advertise fuzzy logic based products
for consumers. We are beginning to see many automotive applications based
on fuzzy logic. Here are a few examples seen in the market; by no means does
this list include all possible fuzzy logic based products in the market.
The most successful domain has been in fuzzy control of various physical
or chemical characteristics such as temperature, electric current, flow of liq-
uid/gas, motion of machines, etc. Also, fuzzy systems can be obtained by
applying the principles of fuzzy sets and logic to other areas, for example,
fuzzy knowledge-based systems such as fuzzy expert systems which may use
fuzzy IF-THEN rules; fuzzy software engineering which may incorporate
fuzziness in programs and data; fuzzy databases which store and retrieve
fuzzy information; fuzzy pattern recognition which deals with fuzzy visual
or audio signals; applications to medicine, economics, and management prob-
lems which involve fuzzy information processing.
Year Number
1986 . . . 8
1987 . . . 15
1988 . . . 50
1989 . . . 100
1990 . . . 150
1991 . . . 300
1992 . . . 800
1993 . . . 1500
Table 1.8 The growing number of fuzzy logic applications [85].
When fuzzy systems are applied to appropriate problems, particularly the
type of problems described previously, their typical characteristics are faster
and smoother response than with conventional systems. This translates to
efficient and more comfortable operations for such tasks as controlling temperature
or cruising speed, for example. Furthermore, this will save energy,
reduce maintenance costs, and prolong machine life. In fuzzy systems, de-
scribing the control rules is usually simpler and easier, often requiring fewer
rules, and thus the systems execute faster than conventional systems. Fuzzy
systems often achieve tractability, robustness, and overall low cost. In turn,
all these contribute to better performance. In short, conventional methods
are good for simple problems, while fuzzy systems are suitable for complex
problems or applications that involve human descriptive or intuitive thinking.
However we have to note some problems and limitations of fuzzy systems
which include [85]
Stability: a major issue for fuzzy control.
There is no theoretical guarantee that a general fuzzy system does not
go chaotic and remains stable, although such a possibility appears,
from extensive experience, to be extremely slim.
Learning capability: Fuzzy systems lack capabilities of learning and
have no memory as stated previously.
This is why hybrid systems, particularly neuro-fuzzy systems, are be-
coming more and more popular for certain applications.
Determining or tuning good membership functions and fuzzy rules is
not always easy.
Even after extensive testing, it is difficult to say how many membership
functions are really required. Questions such as why a particular fuzzy
expert system needs so many rules, or when a developer can stop adding
more rules, are not easy to answer.
There exists a general misconception of the term fuzzy as meaning
imprecise or imperfect.
Many professionals think that fuzzy logic represents some magic with-
out firm mathematical foundation.
Verification and validation of a fuzzy expert system generally requires
extensive testing with hardware in the loop.
Such luxury may not be affordable by all developers.
The basic steps for developing a fuzzy system are the following
Determine whether a fuzzy system is a right choice for the problem. If
the knowledge about the system behavior is described in approximate
form or heuristic rules, then fuzzy is suitable. Fuzzy logic can also be
useful in understanding and simplifying the processing when the system
behavior requires a complicated mathematical model.
Identify inputs and outputs and their ranges. Range of sensor mea-
surements typically corresponds to the range of input variable, and the
range of control actions provides the range of output variable.
Define a primary membership function for each input and output pa-
rameter. The number of membership functions required is a choice of
the developer and depends on the system behavior.
Construct a rule base. It is up to the designer to determine how many
rules are necessary.
Verify that the rule base output is within its range for some sample inputs,
and further validate that this output is correct and proper according
to the rule base for the given set of inputs.
Several studies show that fuzzy logic is applicable in Management Science
(see e.g. [7]).
Bibliography
[1] K.J. Arrow, Social Choice and Individual Values (John Wiley &
Sons, New York, 1951).
[2] R.A.Bellman and L.A.Zadeh, Decision-making in a fuzzy environ-
ment, Management Sciences, Ser. B 17(1970) 141-164.
[3] D. Butnariu and E.P. Klement, Triangular Norm-Based Measures
and Games with Fuzzy Coalitions (Kluwer, Dordrecht, 1993).
[4] D. Butnariu, E.P. Klement and S. Zafrany, On triangular norm-
based propositional fuzzy logics, Fuzzy Sets and Systems, 69(1995)
241-255.
[5] E. Canestrelli and S. Giove, Optimizing a quadratic function with
fuzzy linear coecients, Control and Cybernetics, 20(1991) 25-36.
[6] E. Canestrelli and S. Giove, Bidimensional approach to fuzzy linear
goal programming, in: M. Delgado, J. Kacprzyk, J.L. Verdegay and
M.A. Vila eds., Fuzzy Optimization (Physical Verlag, Heildelberg,
1994) 234-245.
[7] C. Carlsson, On the relevance of fuzzy sets in management science
methodology, TIMS/Studies in the Management Sciences, 20(1984)
11-28.
[8] C. Carlsson, Fuzzy multiple criteria for decision support systems,
in: M.M. Gupta, A. Kandel and J.B. Kiszka eds., Approximate
Reasoning in Expert Systems (North-Holland, Amsterdam, 1985)
48-60.
[9] C. Carlsson and R.Fuller, Interdependence in fuzzy multiple objec-
tive programming, Fuzzy Sets and Systems 65(1994) 19-29.
[10] C. Carlsson and R. Fuller, Fuzzy if-then rules for modeling interde-
pendencies in FMOP problems, in: Proceedings of EUFIT94 Con-
ference, September 20-23, 1994 Aachen, Germany (Verlag der Au-
gustinus Buchhandlung, Aachen, 1994) 1504-1508.
[11] C. Carlsson and R. Fuller, Interdependence in Multiple Criteria
Decision Making, Technical Report, Institute for Advanced Man-
agement Systems Research, Åbo Akademi University, No. 1994/6.
[12] C. Carlsson and R.Fuller, Fuzzy reasoning for solving fuzzy mul-
tiple objective linear programs, in: R.Trappl ed., Cybernetics and
Systems 94, Proceedings of the Twelfth European Meeting on Cyber-
netics and Systems Research (World Scientic Publisher, London,
1994) 295-301.
[13] C. Carlsson and R.Fuller, Multiple Criteria Decision Making:
The Case for Interdependence, Computers & Operations Research
22(1995) 251-260.
[14] C. Carlsson and R. Fuller, On linear interdependences in MOP, in:
Proceedings of CIFT95 Workshop, June 8-10, 1995, Trento, Italy,
University of Trento, 1995 48-52.
[15] J.L. Castro, Fuzzy logic controllers are universal approximators,
IEEE Transactions on Syst. Man Cybernet., 25(1995) 629-635.
[16] S.M. Chen, A weighted fuzzy reasoning algorithm for medical di-
agnosis, Decision Support Systems, 11(1994) 37-43.
[17] E. Cox, The Fuzzy Systems Handbook. A Practitioner's Guide to
Building, Using, and Maintaining Fuzzy Systems (Academic Press,
New York, 1994).
[18] M. Delgado, E. Trillas, J.L. Verdegay and M.A. Vila, The general-
ized modus ponens with linguistic labels, in: Proceedings of the
Second International Conference on Fuzzy Logic and Neural Net-
works, Iizuka, Japan, 1990 725-729.
[19] M.Delgado, J. Kacprzyk, J.L.Verdegay and M.A.Vila eds., Fuzzy
Optimization (Physical Verlag, Heildelberg, 1994).
[20] J. Dombi, A general class of fuzzy operators, the DeMorgan class of
fuzzy operators and fuzziness measures induced by fuzzy operators,
Fuzzy Sets and Systems, 8(1982) 149-163.
[21] J. Dombi, Membership function as an evaluation, Fuzzy Sets and
Systems, 35(1990) 1-21.
[22] D. Driankov, H. Hellendoorn and M. Reinfrank, An Introduction to
Fuzzy Control (Springer Verlag, Berlin, 1993).
[23] D. Dubois, R. Martin-Clouaire and H. Prade, Practical computa-
tion in fuzzy logic, in: M.M. Gupta and T. Yamakawa eds., Fuzzy
Computing (Elsevier Science Publishing, Amsterdam, 1988) 11-34.
[24] D. Dubois and H. Prade, Fuzzy Sets and Systems: Theory and Ap-
plications (Academic Press, London, 1980).
[25] D. Dubois and H. Prade, Criteria aggregation and ranking of alter-
natives in the framework of fuzzy set theory, TIMS/Studies in the
Management Sciences, 20(1984) 209-240.
[26] D.Dubois and H.Prade, Possibility Theory (Plenum Press, New
York,1988).
[27] D.Dubois, H.Prade and R.R Yager eds., Readings in Fuzzy Sets for
Intelligent Systems (Morgan & Kaufmann, San Mateo, CA, 1993).
[28] G. W. Evans ed., Applications of Fuzzy Set Methodologies in Indus-
trial Engineering (Elsevier, Amsterdam, 1989).
[29] M. Fedrizzi, J. Kacprzyk and S. Zadrozny, An interactive multi-user
decision support system for consensus reaching processes using fuzzy
logic with linguistic quantiers, Decision Support Systems, 4(1988)
313-327.
[30] M. Fedrizzi and L. Mich, Decision using production rules, in: Proc.
of Annual Conference of the Operational Research Society of Italy,
September 18-10, Riva del Garda. Italy, 1991 118-121.
[31] M. Fedrizzi and R.Fuller, On stability in group decision support
systems under fuzzy production rules, in: R.Trappl ed., Proceed-
ings of the Eleventh European Meeting on Cybernetics and Systems
Research (World Scientic Publisher, London, 1992) 471-478.
[32] M. Fedrizzi and R.Fuller, Stability in possibilistic linear program-
ming problems with continuous fuzzy number parameters, Fuzzy
Sets and Systems, 47(1992) 187-191.
[33] M. Fedrizzi, M. Fedrizzi and W. Ostasiewicz, Towards fuzzy mod-
eling in economics, Fuzzy Sets and Systems, 54(1993) 259-268.
[34] M. Fedrizzi and R. Fuller, On stability in multiobjective possibilistic
linear programs, European Journal of Operational Research, 74(1994)
179-187.
[35] J.C. Fodor, A remark on constructing t-norms, Fuzzy Sets and Sys-
tems, 41(1991) 195-199.
[36] J.C. Fodor, On fuzzy implication operators, Fuzzy Sets and Systems,
42(1991) 293-300.
[37] J.C. Fodor, Strict preference relations based on weak t-norms, Fuzzy
Sets and Systems, 43(1991) 327-336.
[38] J.C. Fodor, Traces of fuzzy binary relations, Fuzzy Sets and Systems,
50(1992) 331-342.
[39] J.C. Fodor, An axiomatic approach to fuzzy preference modelling,
Fuzzy Sets and Systems, 52(1992) 47-52.
[40] J.C. Fodor and M. Roubens, Aggregation and scoring procedures
in multicriteria decision making methods, in: Proceedings of the
IEEE International Conference on Fuzzy Systems, San Diego, 1992
1261-1267.
[41] J.C. Fodor, Fuzzy connectives via matrix logic, Fuzzy Sets and Sys-
tems, 56(1993) 67-77.
[42] J.C. Fodor, A new look at fuzzy connectives, Fuzzy Sets and System,
57(1993) 141-148.
[43] J.C. Fodor and M. Roubens, Preference modelling and aggrega-
tion procedures with valued binary relations, in: R. Lowen and M.
Roubens eds., Fuzzy Logic: State of the Art (Kluwer, Dordrecht,
1993) 29-38.
[44] J.C. Fodor and M. Roubens, Fuzzy Preference Modelling and Mul-
ticriteria Decision Aid (Kluwer Academic Publisher, Dordrecht,
1994).
[45] R. Fuller and T. Keresztfalvi, On generalization of Nguyen's theo-
rem, Fuzzy Sets and Systems, 41(1991) 371-374.
[46] R. Fuller, On Hamacher-sum of triangular fuzzy numbers, Fuzzy
Sets and Systems, 42(1991) 205-212.
[47] R. Fuller, Well-posed fuzzy extensions of ill-posed linear equality
systems, Fuzzy Systems and Mathematics, 5(1991) 43-48.
[48] R. Fuller and B. Werners, The compositional rule of inference: in-
troduction, theoretical considerations, and exact calculation formu-
las, Working Paper, RWTH Aachen, Institut für Wirtschaftswis-
senschaften, No. 1991/7.
[49] R. Fuller, On law of large numbers for L-R fuzzy numbers, in:
R. Lowen and M. Roubens eds., Proceedings of the Fourth IFSA
Congress, Volume: Mathematics, Brussels, 1991 74-77.
[50] R. Fuller and H.-J. Zimmermann, On Zadeh's compositional rule
of inference, in: R. Lowen and M. Roubens eds., Proceedings of the
Fourth IFSA Congress, Volume: Artificial Intelligence, Brussels,
1991 41-44.
[51] R. Fuller and H.-J. Zimmermann, On computation of the compo-
sitional rule of inference under triangular norms, Fuzzy Sets and
Systems, 51(1992) 267-275.
[52] R. Fuller and T. Keresztfalvi, t-Norm-based addition of fuzzy inter-
vals, Fuzzy Sets and Systems, 51(1992) 155-159.
[53] R. Fuller and B. Werners, The compositional rule of inference with
several relations, in: B. Riecan and M. Duchon eds., Proceedings of
the International Conference on Fuzzy Sets and its Applications,
Liptovský Mikuláš, Czecho-Slovakia, February 17-21, 1992 (Math.
Inst. Slovak Academy of Sciences, Bratislava, 1992) 39-44.
[54] R. Fuller and H.-J. Zimmermann, Fuzzy reasoning for solving fuzzy
mathematical programming problems, Working Paper, RWTH
Aachen, Institut für Wirtschaftswissenschaften, No. 1992/01.
[55] R. Fuller and H.-J. Zimmermann, On Zadeh's compositional rule of
inference, in: R. Lowen and M. Roubens eds., Fuzzy Logic: State of
the Art, Theory and Decision Library, Series D (Kluwer Academic
Publisher, Dordrecht, 1993) 193-200.
[56] R. Fuller and H.-J. Zimmermann, Fuzzy reasoning for solving
fuzzy mathematical programming problems, Fuzzy Sets and Sys-
tems 60(1993) 121-133.
[57] R. Fuller and E. Triesch, A note on law of large numbers for fuzzy
variables, Fuzzy Sets and Systems, 55(1993).
[58] M.M. Gupta and D.H. Rao, On the principles of fuzzy neural net-
works, Fuzzy Sets and Systems, 61(1994) 1-18.
[59] H. Hamacher, H.Leberling and H.-J.Zimmermann, Sensitivity anal-
ysis in fuzzy linear programming, Fuzzy Sets and Systems, 1(1978)
269-281.
[60] H. Hellendoorn, Closure properties of the compositional rule of in-
ference, Fuzzy Sets and Systems, 35(1990) 163-183.
[61] F. Herrera, M. Kovács, and J. L. Verdegay, An optimum concept
for fuzzified linear programming problems: a parametric approach,
Tatra Mountains Mathematical Publications, 1(1992) 57-64.
[62] F. Herrera, J. L. Verdegay, and M. Kovács, A parametric approach
for (g,p)-fuzzified linear programming problems, J. of Fuzzy Math-
ematics, 1(1993) 699-713.
[63] F. Herrera, M. Kovács, and J. L. Verdegay, Optimality for fuzzi-
fied mathematical programming problems: a parametric approach,
Fuzzy Sets and Systems, 54(1993) 279-285.
[64] J. Kacprzyk, Group decision making with a fuzzy linguistic major-
ity, Fuzzy Sets and Systems, 18(1986) 105-118.
[65] O. Kaleva, Fuzzy dierential equations, Fuzzy Sets and Systems,
24(1987) 301-317.
[66] A. Kaufmann and M.M. Gupta, Introduction to Fuzzy Arithmetic:
Theory and Applications (Van Nostrand Reinhold, New York, 1991).
[67] T. Keresztfalvi and H. Rommelfanger, Multicriteria fuzzy optimization
based on Yager's parameterized t-norm, Foundations of Comp. and
Decision Sciences, 16(1991) 99-110.
[68] T. Keresztfalvi and H. Rommelfanger, Fuzzy linear programming with
t-norm based extended addition, Operations Research Proceedings
1991 (Springer-Verlag, Berlin, Heidelberg, 1992) 492-499.
[69] L.T. Kóczy, Fuzzy graphs in the evaluation and optimization of
networks, Fuzzy Sets and Systems, 46(1992) 307-319.
[70] L.T. Kóczy and K. Hirota, A fast algorithm for fuzzy inference by
compact rules, in: L.A. Zadeh and J. Kacprzyk eds., Fuzzy Logic
for the Management of Uncertainty (J. Wiley, New York, 1992) 297-
317.
[71] L.T. Kóczy, Approximate reasoning and control with sparse and/or
inconsistent fuzzy rule bases, in: B. Reusch ed., Fuzzy Logic Theorie
and Praxis, Springer, Berlin, 1993 42-65.
[72] L.T. Kóczy and K. Hirota, Ordering, distance and closeness of
fuzzy sets, Fuzzy Sets and Systems, 59(1993) 281-293.
[73] L.T. Kóczy, A fast algorithm for fuzzy inference by compact rules,
in: L.A. Zadeh and J. Kacprzyk eds., Fuzzy Logic for the Manage-
ment of Uncertainty (J. Wiley, New York, 1993) 297-317.
[74] B.Kosko, Neural networks and fuzzy systems, Prentice-Hall, New
Jersey, 1992.
[75] B. Kosko, Fuzzy systems as universal approximators, in: Proc.
IEEE 1992 Int. Conference Fuzzy Systems, San Diego, 1992 1153-
1162.
[76] M. Kovács and L.H. Tran, Algebraic structure of centered M-fuzzy
numbers, Fuzzy Sets and Systems, 39(1991) 91-99.
[77] M. Kovács, A stable embedding of ill-posed linear systems into fuzzy
systems, Fuzzy Sets and Systems, 45(1992) 305-312.
[78] J.R. Layne, K.M. Passino and S. Yurkovich, Fuzzy learning control for
antiskid braking systems, IEEE Transactions on Contr. Syst. Tech.,
1(1993) 122-129.
[79] C.-C. Lee, Fuzzy logic in control systems: Fuzzy logic controller -
Part I, IEEE Transactions on Syst., Man, Cybern., 20(1990) 419-
435.
[80] C.-C. Lee, Fuzzy logic in control systems: Fuzzy logic controller -
Part II, IEEE Transactions on Syst., Man, Cybern., 20(1990) 404-
418.
[81] E.H. Mamdani and S. Assilian, An experiment in linquistic synthesis
with a fuzzy logic controller. International Journal of Man-Machine
Studies 7(1975) 1-13.
[82] J.K. Mattila, On some logical points of fuzzy conditional decision
making, Fuzzy Sets and Systems, 20(1986) 137-145.
[83] G.F. Mauer, A fuzzy logic controller for an ABS braking system,
IEEE Transactions on Fuzzy Systems, 3(1995) 381-388.
[84] D. McNeil and P, Freiberger, Fuzzy Logic (Simon and Schuster, New
York, 1993).
[85] T. Munkata and Y.Jani, Fuzzy systems: An overview, Communica-
tions of ACM, 37(1994) 69-76.
[86] C. V. Negoita, Fuzzy Systems (Abacus Press, Turnbridge-Wells,
1981).
[87] H.T. Nguyen, A note on the extension principle for fuzzy sets, Jour-
nal of Mathematical Analysis and Applications, 64(1978) 369-380.
[88] S.A. Orlovsky, Calculus of Decomposable Properties. Fuzzy Sets and
Decisions (Allerton Press, 1994).
153
[89] A.R. Ralescu, A note on rule representation in expert systems, In-
formation Sciences, 38(1986) 193-203.
[90] H. Rommelfanger, Entscheiden bei Unscharfe: Fuzzy Decision
Support-Systeme (Springer Verlag, Berlin, 1988).
[91] B.Schweizer and A.Sklar, Associative functions and abstract semi-
groups, Publ. Math. Debrecen, 10(1963) 69-81.
[92] T. Sudkamp, Similarity, interpolation, and fuzzy rule construction,
Fuzzy Sets and Systems, 58(1993) 73-86.
[93] T.Takagi and M.Sugeno, Fuzzy identication of systems and its ap-
plications to modeling and control, IEEE Trans. Syst. Man Cyber-
net., 1985, 116-132.
[94] M. Sugeno, Industrial Applications of Fuzzy Control (North Hol-
land, Amsterdam, 1992).
[95] T. Tilli, Fuzzy Logik: Grundlagen, Anwendungen, Hard- und Soft-
ware (Franzis-Verlag, M unchen, 1992).
[96] T. Tilli, Automatisierung mit Fuzzy Logik (Franzis-Verlag,
M unchen, 1992).
[97] I.B. Turksen, Fuzzy normal forms, Fuzzy Sets and Systems, 69(1995)
319-346.
[98] L.-X. Wang and J.M. Mendel, Fuzzy basis functions, universal ap-
proximation, and orthogonal least-squares learning, IEEE Transac-
tions on Neural Networks, 3(1992) 807-814.
[99] L.-X. Wang, Fuzzy systems are universal approximators, in: Proc.
IEEE 1992 Int. Conference Fuzzy Systems, San Diego, 1992 1163-
1170.
[100] S. Weber, A general concept of fuzzy connectives, negations, and
implications based on t-norms and t-conorms, Fuzzy Sets and Sys-
tems, 11(9183) 115-134.
[101] R.R. Yager, Fuzzy decision making using unequal objectives, Fuzzy
Sets and Systems,1(1978) 87-95.
154
[102] R.R. Yager, A new methodology for ordinal multiple aspect deci-
sions based on fuzzy sets, Decision Sciences 12(1981) 589-600.
[103] R.R. Yager ed., Fuzzy Sets and Applications. Selected Papers by
L.A.Zadeh (John Wiley & Sons, New York, 1987).
[104] R.R.Yager, Ordered weighted averaging aggregation operators in
multi-criteria decision making, IEEE Trans. on Systems, Man and
Cybernetics, 18(1988) 183-190.
[105] R.R.Yager, Families of OWA operators, Fuzzy Sets and Systems,
59(1993) 125-148.
[106] R.R.Yager, Fuzzy Screening Systems, in: R.Lowen and M.Roubens
eds., Fuzzy Logic: State of the Art (Kluwer, Dordrecht, 1993) 251-
261.
[107] R.R.Yager, Aggregation operators and fuzzy systems modeling,
Fuzzy Sets and Systems, 67(1994) 129-145.
[108] R.R.Yager and D.Filev, Essentials of Fuzzy Modeling and Control
(Wiley, New York, 1994).
[109] T. Yamakawa and K. Sasaki, Fuzzy memory device, in: Proceedings
of 2nd IFSA Congress, Tokyo, Japan, 1987 551-555.
[110] T. Yamakawa, Fuzzy controller hardware system, in: Proceedings of
2nd IFSA Congress, Tokyo, Japan, 1987.
[111] T. Yamakawa, Fuzzy microprocessors - rule chip and defuzzier
chip, in: International Workshop on Fuzzy System Applications,
Iizuka, Japan, 1988 51-52.
[112] J. Yen, R. Langari and L.A. Zadeh eds., Industrial Applications of
Fuzzy Logic and Intelligent Systems (IEEE Press, New York, 1995).
[113] L.A. Zadeh, Fuzzy Sets, Information and Control, 8(1965) 338-353.
[114] L.A. Zadeh, Towards a theory of fuzzy systems, in: R.E. Kalman
and N. DeClaris eds., Aspects of Network and System Theory (Hort,
Rinehart and Winston, New York, 1971) 469-490.
155
[115] L.A. Zadeh, Outline of a new approach to the analysis of com-
plex systems and decision processes, IEEE Transanctins on Sys-
tems, Man and Cybernetics, 3(1973) 28-44.
[116] L.A. Zadeh, Concept of a linguistic variable and its application to
approximate reasoning, I, II, III, Information Sciences, 8(1975)
199-249, 301-357; 9(1975) 43-80.
[117] L.A. Zadeh, Fuzzy sets as a basis for a theory of possibility, Fuzzy
Sets and Systems, 1(1978) 3-28.
[118] L.A. Zadeh, A theory of approximate reasoning, In: J.Hayes,
D.Michie and L.I.Mikulich eds., Machine Intelligence, Vol.9 (Hal-
stead Press, New York, 1979) 149-194.
[119] L.A. Zadeh, A computational theory of dispositions, Int. Journal of
Intelligent Systems, 2(1987) 39-63.
[120] L.A. Zadeh, Knowledge representation in fuzzy logic, In: R.R.Yager
and L.A. Zadeh eds., An introduction to fuzzy logic applications in
intelligent systems (Kluwer Academic Publisher, Boston, 1992) 2-
25.
[121] H.-J. Zimmermann and P. Zysno, Latent connectives in human de-
cision making, Fuzzy Sets and Systems, 4(1980) 37-51.
[122] H.-J. Zimmermann, Fuzzy set theory and its applications (Kluwer,
Dordrecht, 1985).
[123] H.-J. Zimmermann, Fuzzy sets, Decision Making and Expert Sys-
tems (Kluwer Academic Publisher, Boston, 1987).
[124] H.-J.Zimmermann and B.Werners, Uncertainty representation in
knowledge-based systems, in: A.S. Jovanovic, K.F. Kussmal,
A.C. Lucia and P.P. Bonissone eds., Proc. of an Interna-
tional Course on Expert Systems in Structural Safety Assessment
Stuttgart, October 2-4, 1989, (Springer-Verlag, Berlin, Heidelberg,
1989) 151-166.
[125] H.-J.Zimmermann, Cognitive sciences, decision technology, and
fuzzy sets, Information Sciences, 57-58(1991) 287-295.
156
Chapter 2
Artificial Neural Networks
2.1 The perceptron learning rule
Artificial neural systems can be considered as simplified mathematical models of brain-like systems, and they function as parallel distributed computing networks. However, in contrast to conventional computers, which are programmed to perform specific tasks, most neural networks must be taught, or trained. They can learn new associations, new functional dependencies and new patterns. Although computers outperform both biological and artificial neural systems for tasks based on precise and fast arithmetic operations, artificial neural systems represent the promising new generation of information processing networks.
The study of brain-style computation has its roots over 50 years ago in the work of McCulloch and Pitts (1943) [19] and slightly later in Hebb's famous Organization of Behavior (1949) [11]. The early work in artificial intelligence was torn between those who believed that intelligent systems could best be built on computers modeled after brains, and those like Minsky and Papert (1969) [20] who believed that intelligence was fundamentally symbol processing of the kind readily modeled on the von Neumann computer. For a variety of reasons, the symbol-processing approach became the dominant theme in Artificial Intelligence in the 1970s. However, the 1980s showed a rebirth of interest in neural computing:

1982 Hopfield [14] provided the mathematical foundation for understanding the dynamics of an important class of networks.
1984 Kohonen [16] developed unsupervised learning networks for feature
mapping into regular arrays of neurons.
1986 Rumelhart and McClelland [22] introduced the backpropagation learn-
ing algorithm for complex, multilayer networks.
Beginning in 1986-87, many neural network research programs were initiated. The list of applications that can be solved by neural networks has expanded from small test-size examples to large practical tasks. Very-large-scale integrated neural network chips have been fabricated.
In the long term, we could expect that artificial neural systems will be used in applications involving vision, speech, decision making, and reasoning, but also as signal processors such as filters, detectors, and quality control systems.
Definition 2.1.1 [32] Artificial neural systems, or neural networks, are physical cellular systems which can acquire, store, and utilize experiential knowledge.
The knowledge is in the form of stable states or mappings embedded in
networks that can be recalled in response to the presentation of cues.
Figure 2.1 A multi-layer feedforward neural network.
The basic processing elements of neural networks are called artificial neurons, or simply neurons or nodes.
Each processing unit is characterized by an activity level (representing the state of polarization of a neuron), an output value (representing the firing rate of the neuron), a set of input connections (representing synapses on the cell and its dendrite), a bias value (representing an internal resting level of the neuron), and a set of output connections (representing a neuron's axonal projections). Each of these aspects of the unit is represented mathematically by real numbers. Thus, each connection has an associated weight (synaptic strength) which determines the effect of the incoming input on the activation level of the unit. The weights may be positive (excitatory) or negative (inhibitory).
Figure 2.1a A processing element with single output connection.
The signal flow from the neuron inputs, x_j, is considered to be unidirectional, as indicated by arrows, as is a neuron's output signal flow. The neuron output signal is given by the following relationship

o = f(<w, x>) = f(w^T x) = f\Big(\sum_{j=1}^{n} w_j x_j\Big)

where w = (w_1, \dots, w_n)^T \in IR^n is the weight vector. The function f(w^T x) is often referred to as an activation (or transfer) function. Its domain is the set of activation values, net, of the neuron model; we thus often use this function as f(net). The variable net is defined as the scalar product of the weight and input vectors

net = <w, x> = w^T x = w_1 x_1 + \cdots + w_n x_n

and in the simplest case the output value o is computed as

o = f(net) = \begin{cases} 1 & \text{if } w^T x \ge \theta \\ 0 & \text{otherwise,} \end{cases}
where \theta is called the threshold level, and this type of node is called a linear threshold unit.
Example 2.1.1 Suppose we have two Boolean inputs x_1, x_2 \in \{0, 1\}, one Boolean output o \in \{0, 1\} and the training set is given by the following input/output pairs

     x_1   x_2   o(x_1, x_2) = x_1 \land x_2
1.    1     1     1
2.    1     0     0
3.    0     1     0
4.    0     0     0

Then the learning problem is to find weights w_1 and w_2 and a threshold (or bias) value \theta such that the computed output of our network (which is given by the linear threshold function) is equal to the desired output for all examples. A straightforward solution is w_1 = w_2 = 1/2, \theta = 0.6. Really, from the equation

o(x_1, x_2) = \begin{cases} 1 & \text{if } x_1/2 + x_2/2 \ge 0.6 \\ 0 & \text{otherwise} \end{cases}

it follows that the output neuron fires if and only if both inputs are on.
Figure 2.2 A solution to the learning problem of the Boolean AND function.
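As a quick check of the solution above, the following small Python sketch (not part of the original notes) evaluates the linear threshold unit with w_1 = w_2 = 1/2 and \theta = 0.6 on the four training pairs of Example 2.1.1.

def ltu(x, w, theta):
    # linear threshold unit: fires (outputs 1) when the weighted sum reaches the threshold
    net = sum(wj * xj for wj, xj in zip(w, x))
    return 1 if net >= theta else 0

training_set = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w, theta = (0.5, 0.5), 0.6

for x, target in training_set:
    assert ltu(x, w, theta) == target   # the unit reproduces the AND function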
Example 2.1.2 Suppose we have two Boolean inputs x_1, x_2 \in \{0, 1\}, one Boolean output o \in \{0, 1\} and the training set is given by the following input/output pairs

     x_1   x_2   o(x_1, x_2) = x_1 \lor x_2
1.    1     1     1
2.    1     0     1
3.    0     1     1
4.    0     0     0

Then the learning problem is to find weights w_1 and w_2 and a threshold value \theta such that the computed output of our network is equal to the desired output for all examples. A straightforward solution is w_1 = w_2 = 1, \theta = 0.8. Really, from the equation

o(x_1, x_2) = \begin{cases} 1 & \text{if } x_1 + x_2 \ge 0.8 \\ 0 & \text{otherwise} \end{cases}

it follows that the output neuron fires if and only if at least one of the inputs is on.
The removal of the threshold from our network is very easy by increasing the dimension of the input patterns. Really, the identity

w_1 x_1 + \cdots + w_n x_n > \theta \iff w_1 x_1 + \cdots + w_n x_n - \theta \cdot 1 > 0

means that by adding an extra neuron to the input layer with fixed input value 1 and weight -\theta the value of the threshold becomes zero. This is why in the following we suppose that the thresholds are always equal to zero.
Figure 2.3 Removing the threshold.
We define now the scalar product of n-dimensional vectors, which plays a very important role in the theory of neural networks.

Definition 2.1.2 Let w = (w_1, \dots, w_n)^T and x = (x_1, \dots, x_n)^T be two vectors from IR^n. The scalar (or inner) product of w and x, denoted by <w, x> or w^T x, is defined by

<w, x> = w_1 x_1 + \cdots + w_n x_n = \sum_{j=1}^{n} w_j x_j

Another definition of the scalar product in the two-dimensional case is

<w, x> = \|w\|\|x\|\cos(w, x)

where \|\cdot\| denotes the Euclidean norm in the real plane, i.e.

\|w\| = \sqrt{w_1^2 + w_2^2}, \qquad \|x\| = \sqrt{x_1^2 + x_2^2}

Figure 2.4 w = (w_1, w_2)^T and x = (x_1, x_2)^T.
Lemma 2.1.1 The following property holds

<w, x> = w_1 x_1 + w_2 x_2 = \sqrt{w_1^2 + w_2^2}\,\sqrt{x_1^2 + x_2^2}\,\cos(w, x) = \|w\|\|x\|\cos(w, x)
Proof. Denoting by (w, 1\text{-st axis}) and (x, 1\text{-st axis}) the angles between the vectors and the first axis, we have

\cos(w, x) = \cos\big((w, 1\text{-st axis}) - (x, 1\text{-st axis})\big) = \cos(w, 1\text{-st axis})\cos(x, 1\text{-st axis}) + \sin(w, 1\text{-st axis})\sin(x, 1\text{-st axis}) = \frac{w_1 x_1}{\sqrt{w_1^2 + w_2^2}\sqrt{x_1^2 + x_2^2}} + \frac{w_2 x_2}{\sqrt{w_1^2 + w_2^2}\sqrt{x_1^2 + x_2^2}}

That is,

\|w\|\|x\|\cos(w, x) = \sqrt{w_1^2 + w_2^2}\,\sqrt{x_1^2 + x_2^2}\,\cos(w, x) = w_1 x_1 + w_2 x_2.
From \cos(\pi/2) = 0 it follows that <w, x> = 0 whenever w and x are perpendicular. If \|w\| = 1 (we say that w is normalized) then |<w, x>| is nothing else but the projection of x onto the direction of w. Really, if \|w\| = 1 then we get

<w, x> = \|w\|\|x\|\cos(w, x) = \|x\|\cos(w, x)
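A short Python sketch (added here only for illustration; the particular vectors w and x are made up) of the scalar product, the Euclidean norm and the projection interpretation:

import math

def scalar_product(w, x):
    return sum(wj * xj for wj, xj in zip(w, x))

def norm(v):
    return math.sqrt(scalar_product(v, v))

w = (0.6, 0.8)            # already normalized: ||w|| = 1
x = (2.0, 1.0)

cos_angle = scalar_product(w, x) / (norm(w) * norm(x))
projection = scalar_product(w, x)     # equals ||x|| * cos(w, x) since ||w|| = 1

print(cos_angle, projection, norm(x) * cos_angle)   # the last two values coincide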
The problem of learning in neural networks is simply the problem of finding a set of connection strengths (weights) which allow the network to carry out the desired computation. The network is provided with a set of example input/output pairs (a training set) and is to modify its connections in order to approximate the function from which the input/output pairs have been drawn. The networks are then tested for ability to generalize.

The error correction learning procedure is simple enough in conception. The procedure is as follows: During training an input is put into the network and flows through the network generating a set of values on the output units. Then, the actual output is compared with the desired target, and a match is computed. If the output and target match, no change is made to the net. However, if the output differs from the target a change must be made to some of the connections.
Figure 2.5 Projection of x onto the direction of w.
The perceptron learning rule, introduced by Rosenblatt [21], is a typical error
correction learning algorithm of single-layer feedforward networks with linear
threshold activation function.
Figure 2.6 Single-layer feedforward network.
Usually, w_{ij} denotes the weight from the j-th input unit to the i-th output unit and w_i denotes the weight vector of the i-th output node.

We are given a training set of input/output pairs

No.   input values                        desired output values
1.    x^1 = (x_1^1, \dots, x_n^1)         y^1 = (y_1^1, \dots, y_m^1)
...
K.    x^K = (x_1^K, \dots, x_n^K)         y^K = (y_1^K, \dots, y_m^K)

Our problem is to find weight vectors w_i such that

o_i(x^k) = sign(<w_i, x^k>) = y_i^k, \quad i = 1, \dots, m

for all training patterns k.

The activation function of the output nodes is the linear threshold function of the form

o_i(x) = sign(<w_i, x>) = \begin{cases} +1 & \text{if } <w_i, x> \ge 0 \\ -1 & \text{if } <w_i, x> < 0 \end{cases}
and the weight adjustments in the perceptron learning method are performed by

w_i := w_i + \eta(y_i^k - sign(<w_i, x^k>))x^k, \quad i = 1, \dots, m

w_{ij} := w_{ij} + \eta(y_i^k - sign(<w_i, x^k>))x_j^k, \quad j = 1, \dots, n

where \eta > 0 is the learning rate.

From this equation it follows that if the desired output is equal to the computed output, y_i^k = sign(<w_i, x^k>), then the weight vector of the i-th output node remains unchanged, i.e. w_i is adjusted if and only if the computed output, o_i(x^k), is incorrect. The learning stops when all the weight vectors remain unchanged during a complete training cycle.
Consider now a single-layer network with one output node. Then the input components of the training patterns can be classified into two disjoint classes

C_1 = \{x^k \mid y^k = 1\}, \qquad C_2 = \{x^k \mid y^k = -1\}

i.e. x belongs to class C_1 if there exists an input/output pair (x, 1), and x belongs to class C_2 if there exists an input/output pair (x, -1).

Taking into consideration the definition of the activation function it is easy to see that we are searching for a weight vector w such that

<w, x> \ge 0 for each x \in C_1, and <w, x> < 0 for each x \in C_2.

If such a vector exists then the problem is called linearly separable.
Summary 2.1.1 Perceptron learning algorithm.

Given are K training pairs arranged in the training set

(x^1, y^1), \dots, (x^K, y^K)

where x^k = (x_1^k, \dots, x_n^k), y^k = (y_1^k, \dots, y_m^k), k = 1, \dots, K.

Step 1 \eta > 0 is chosen

Step 2 Weights w_i are initialized at small random values, the running error E is set to 0, k := 1

Step 3 Training starts here. x^k is presented, x := x^k, y := y^k, and output o is computed

o_i(x) = sign(<w_i, x>), \quad i = 1, \dots, m

Step 4 Weights are updated

w_i := w_i + \eta(y_i - sign(<w_i, x>))x, \quad i = 1, \dots, m

Step 5 Cumulative cycle error is computed by adding the present error to E

E := E + \frac{1}{2}\|y - o\|^2

Step 6 If k < K then k := k + 1 and we continue the training by going back to Step 3, otherwise we go to Step 7

Step 7 The training cycle is completed. For E = 0 terminate the training session. If E > 0 then E is set to 0, k := 1 and we initiate a new training cycle by going to Step 3
The following theorem shows that if the problem has solutions then the perceptron learning algorithm will find one of them.

Theorem 2.1.1 (Convergence theorem) If the problem is linearly separable then the program will go to Step 3 only finitely many times.

Example 2.1.3 Illustration of the perceptron learning algorithm.

Consider the following training set

No.   input values                  desired output value
1.    x^1 = (1, 0, 1)^T              -1
2.    x^2 = (0, -1, -1)^T             1
3.    x^3 = (-1, -0.5, -1)^T          1

The learning constant is assumed to be \eta = 0.1. The initial weight vector is w^0 = (1, -1, 0)^T.

Then the learning according to the perceptron learning rule progresses as follows.
Step 1 Input x^1, desired output is -1:

<w^0, x^1> = (1, -1, 0)(1, 0, 1)^T = 1

Correction in this step is needed since y^1 = -1 \ne sign(1). We thus obtain the updated vector

w^1 = w^0 + 0.1(-1 - 1)x^1

Plugging in numerical values we obtain

w^1 = (1, -1, 0)^T - 0.2(1, 0, 1)^T = (0.8, -1, -0.2)^T

Step 2 Input is x^2, desired output is 1. For the present w^1 we compute the activation value

<w^1, x^2> = (0.8, -1, -0.2)(0, -1, -1)^T = 1.2

Correction is not performed in this step since 1 = sign(1.2), so we let w^2 := w^1.

Step 3 Input is x^3, desired output is 1.

<w^2, x^3> = (0.8, -1, -0.2)(-1, -0.5, -1)^T = -0.1

Correction in this step is needed since y^3 = 1 \ne sign(-0.1). We thus obtain the updated vector

w^3 = w^2 + 0.1(1 + 1)x^3

Plugging in numerical values we obtain

w^3 = (0.8, -1, -0.2)^T + 0.2(-1, -0.5, -1)^T = (0.6, -1.1, -0.4)^T

Step 4 Input x^1, desired output is -1:

<w^3, x^1> = (0.6, -1.1, -0.4)(1, 0, 1)^T = 0.2

Correction in this step is needed since y^1 = -1 \ne sign(0.2). We thus obtain the updated vector

w^4 = w^3 + 0.1(-1 - 1)x^1

Plugging in numerical values we obtain

w^4 = (0.6, -1.1, -0.4)^T - 0.2(1, 0, 1)^T = (0.4, -1.1, -0.6)^T

Step 5 Input is x^2, desired output is 1. For the present w^4 we compute the activation value

<w^4, x^2> = (0.4, -1.1, -0.6)(0, -1, -1)^T = 1.7

Correction is not performed in this step since 1 = sign(1.7), so we let w^5 := w^4.

Step 6 Input is x^3, desired output is 1.

<w^5, x^3> = (0.4, -1.1, -0.6)(-1, -0.5, -1)^T = 0.75

Correction is not performed in this step since 1 = sign(0.75), so we let w^6 := w^5.

This terminates the learning process, because

<w^6, x^1> = -0.2 < 0, \quad <w^6, x^2> = 1.7 > 0, \quad <w^6, x^3> = 0.75 > 0
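The same computation can be replayed in a few lines of Python; the sketch below (not from the original notes) implements the update rule of Summary 2.1.1 for one output node and reproduces the weight sequence of Example 2.1.3.

def sign(t):
    # linear threshold activation: +1 if t >= 0, -1 otherwise
    return 1 if t >= 0 else -1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# training set and initial weight vector of Example 2.1.3
patterns = [((1.0, 0.0, 1.0), -1), ((0.0, -1.0, -1.0), 1), ((-1.0, -0.5, -1.0), 1)]
w = [1.0, -1.0, 0.0]
eta = 0.1

changed = True
while changed:                        # repeat training cycles until no correction is made
    changed = False
    for x, y in patterns:
        o = sign(dot(w, x))
        if o != y:                    # correction step: w := w + eta*(y - o)*x
            w = [wj + eta * (y - o) * xj for wj, xj in zip(w, x)]
            changed = True

print(w)    # converges to (0.4, -1.1, -0.6), the vector w^6 obtained above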
Minsky and Papert [20] provided a very careful analysis of conditions under which the perceptron learning rule is capable of carrying out the required mappings. They showed that the perceptron cannot successfully solve the problem

     x_1   x_2   o(x_1, x_2)
1.    1     1     0
2.    1     0     1
3.    0     1     1
4.    0     0     0

This Boolean function is known in the literature as exclusive or (XOR). We will refer to the above function as the two-dimensional parity function.
Figure 2.7 Linearly nonseparable XOR problem.
The n-dimensional parity function is a binary Boolean function, which takes the value 1 if there is an odd number of 1's in the input vector, and zero otherwise. For example, the 3-dimensional parity function is defined as

     x_1   x_2   x_3   o(x_1, x_2, x_3)
1.    1     1     1     1
2.    1     1     0     0
3.    1     0     1     0
4.    1     0     0     1
5.    0     0     1     1
6.    0     1     1     0
7.    0     1     0     1
8.    0     0     0     0
2.2 The delta learning rule
The error correction learning procedure is simple enough in conception. The procedure is as follows: During training an input is put into the network and flows through the network generating a set of values on the output units. Then, the actual output is compared with the desired target, and a match is computed. If the output and target match, no change is made to the net. However, if the output differs from the target a change must be made to some of the connections.
Let us first recall the definition of the derivative of single-variable functions.

Definition 2.2.1 The derivative of f at (an interior point of its domain) x, denoted by f'(x), is defined by

f'(x) = \lim_{x_n \to x} \frac{f(x) - f(x_n)}{x - x_n}

Let us consider a differentiable function f: IR \to IR. The derivative of f at (an interior point of its domain) x is denoted by f'(x). If f'(x) > 0 then we say that f is increasing at x, if f'(x) < 0 then we say that f is decreasing at x, and if f'(x) = 0 then f can have a local maximum, minimum or inflection point at x.
Figure 2.8 Derivative of function f.
A differentiable function is always increasing in the direction of its derivative, and decreasing in the opposite direction. It means that if we want to find one of the local minima of a function f starting from a point x_0 then we should search for a second candidate on the right-hand side of x_0 if f'(x_0) < 0 (when f is decreasing at x_0) and on the left-hand side of x_0 if f'(x_0) > 0 (when f is increasing at x_0).

The equation for the line crossing the point (x_0, f(x_0)) is given by

\frac{y - f(x_0)}{x - x_0} = f'(x_0)

that is

y = f(x_0) + (x - x_0)f'(x_0)

The next approximation, denoted by x_1, is a solution to the equation

f(x_0) + (x - x_0)f'(x_0) = 0

which is

x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}

This idea can be applied successively, that is

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
Figure 2.9 The downhill direction is negative at x_0.
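A minimal Python sketch of this successive-approximation scheme (illustrative only; the function f(x) = x^2 - 2, its derivative and the starting point are invented here):

def iterate(f, fprime, x0, steps=10):
    # successive approximations x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# example: f(x) = x**2 - 2 with derivative f'(x) = 2x, starting from x0 = 2
root = iterate(lambda x: x * x - 2, lambda x: 2 * x, 2.0)
print(root)   # approaches sqrt(2) ~ 1.41421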
The above procedure is a typical descent method. In a descent method the next iteration w_{n+1} should satisfy the following property

f(w_{n+1}) < f(w_n)

i.e. the value of f at w_{n+1} is smaller than its previous value at w_n.

In the error correction learning procedure, each iteration of a descent method calculates the downhill direction (opposite of the direction of the derivative) at w_n, which means that for a sufficiently small \eta > 0 the inequality

f(w_n - \eta f'(w_n)) < f(w_n)

should hold, and we let w_{n+1} be the vector

w_{n+1} = w_n - \eta f'(w_n)

Let f: IR^n \to IR be a real-valued function. In a descent method, whatever is the next iteration, w_{n+1}, it should satisfy the property

f(w_{n+1}) < f(w_n)

i.e. the value of f at w_{n+1} is smaller than its value at the previous approximation w_n.

Each iteration of a descent method calculates a downhill direction (opposite of the direction of the derivative) at w_n, which means that for a sufficiently small \eta > 0 the inequality

f(w_n - \eta f'(w_n)) < f(w_n)

should hold, and we let w_{n+1} be the vector

w_{n+1} = w_n - \eta f'(w_n).
Let f: IR^n \to IR be a real-valued function and let e \in IR^n with \|e\| = 1 be a given direction. The derivative of f with respect to e at w is defined as

\partial_e f(w) = \lim_{t \to +0} \frac{f(w + te) - f(w)}{t}

If e = (0, \dots, 1, \dots, 0)^T, with the 1 in the i-th place, i.e. e is the i-th basic direction, then instead of \partial_e f(w) we write \partial_i f(w), which is defined by

\partial_i f(w) = \lim_{t \to +0} \frac{f(w_1, \dots, w_i + t, \dots, w_n) - f(w_1, \dots, w_i, \dots, w_n)}{t}

Figure 2.10 The derivative of f with respect to the direction e.
The gradient of f at w, denoted by f'(w), is defined by

f'(w) = (\partial_1 f(w), \dots, \partial_n f(w))^T

Example 2.2.1 Let f(w_1, w_2) = w_1^2 + w_2^2. Then the gradient of f is given by

f'(w) = 2w = (2w_1, 2w_2)^T.

The gradient vector always points to the uphill direction of f. The downhill (steepest descent) direction of f at w is the opposite of the uphill direction, i.e. the downhill direction is -f'(w), which is

(-\partial_1 f(w), \dots, -\partial_n f(w))^T.
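For the function of Example 2.2.1 a few steps along the downhill direction -f'(w) can be sketched in Python as follows (the learning rate 0.2 and the starting point are arbitrary choices, not from the notes):

def gradient(w):
    # gradient of f(w1, w2) = w1**2 + w2**2 is (2*w1, 2*w2)
    return [2.0 * wi for wi in w]

w = [1.0, -2.0]
eta = 0.2                      # arbitrary small learning rate

for _ in range(20):            # move in the downhill direction -f'(w)
    g = gradient(w)
    w = [wi - eta * gi for wi, gi in zip(w, g)]

print(w)                       # approaches the minimum at (0, 0)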
Definition 2.2.2 (linear activation function) A linear activation function is a mapping f: IR \to IR such that f(t) = t for all t \in IR.

Suppose we are given a single-layer network with n input units and m linear output units, i.e. the output of the i-th neuron can be written as

o_i = net_i = <w_i, x> = w_{i1}x_1 + \cdots + w_{in}x_n, \quad i = 1, \dots, m.

Assume we have the following training set

\{(x^1, y^1), \dots, (x^K, y^K)\}

where x^k = (x_1^k, \dots, x_n^k), y^k = (y_1^k, \dots, y_m^k), k = 1, \dots, K.
Figure 2.11 Single-layer feedforward network with m output units
The basic idea of the delta learning rule is to define a measure of the overall performance of the system and then to find a way to optimize that performance. In our network, we can define the performance of the system as

E = \sum_{k=1}^K E_k = \frac{1}{2}\sum_{k=1}^K \|y^k - o^k\|^2

That is

E = \frac{1}{2}\sum_{k=1}^K \sum_{i=1}^m (y_i^k - o_i^k)^2 = \frac{1}{2}\sum_{k=1}^K \sum_{i=1}^m (y_i^k - <w_i, x^k>)^2
where i indexes the output units; k indexes the input/output pairs to be learned; y_i^k indicates the target for a particular output unit on a particular pattern; o_i^k := <w_i, x^k> indicates the actual output for that unit on that pattern; and E is the total error of the system. The goal, then, is to minimize this function. It turns out, if the output functions are differentiable, that this problem has a simple solution: namely, we can assign a particular unit blame in proportion to the degree to which changes in that unit's activity lead to changes in the error. That is, we change the weights of the system in proportion to the derivative of the error with respect to the weights.

The rule for changing weights following presentation of input/output pair (x^k, y^k) is given by the gradient descent method, i.e. we minimize the quadratic error function by using the following iteration process

w_{ij} := w_{ij} - \eta\frac{\partial E_k}{\partial w_{ij}}

where \eta > 0 is the learning rate.
Let us compute now the partial derivative of the error function E_k with respect to w_{ij}

\frac{\partial E_k}{\partial w_{ij}} = \frac{\partial E_k}{\partial net_i^k}\frac{\partial net_i^k}{\partial w_{ij}} = -(y_i^k - o_i^k)x_j^k

where net_i^k = w_{i1}x_1^k + \cdots + w_{in}x_n^k.

That is,

w_{ij} := w_{ij} + \eta(y_i^k - o_i^k)x_j^k

for j = 1, \dots, n.

Definition 2.2.3 The error signal term, denoted by \delta_i^k and called delta, produced by the i-th output neuron is defined as

\delta_i^k = -\frac{\partial E_k}{\partial net_i^k} = (y_i^k - o_i^k)

For linear output units \delta_i^k is nothing else but the difference between the desired and computed output values of the i-th neuron.

So the delta learning rule can be written as

w_{ij} := w_{ij} + \eta\delta_i^k x_j^k
for i = 1, . . . , m and j = 1, . . . , n.
A key advantage of neural network systems is that these simple, yet powerful learning procedures can be defined, allowing the systems to adapt to their environments.
The essential character of such networks is that they map similar input pat-
terns to similar output patterns.
This characteristic is what allows these networks to make reasonable gener-
alizations and perform reasonably on patterns that have never before been
presented. The similarity of patterns in a connectionist system is determined
by their overlap. The overlap in such networks is determined outside the
learning system itself whatever produces the patterns. The standard delta
rule essentially implements gradient descent in sum-squared error for linear
activation functions.
It should be noted that the delta learning rule was introduced only recently
for neural network training by McClelland and Rumelhart [22]. This rule
parallels the discrete perceptron training rule. It also can be called the
continuous perceptron training rule.
Summary 2.2.1 The delta learning rule with linear activation functions.

Given are K training pairs arranged in the training set

\{(x^1, y^1), \dots, (x^K, y^K)\}

where x^k = (x_1^k, \dots, x_n^k) and y^k = (y_1^k, \dots, y_m^k), k = 1, \dots, K.

Step 1 \eta > 0, E_{max} > 0 are chosen

Step 2 Weights w_{ij} are initialized at small random values, k := 1, and the running error E is set to 0

Step 3 Training starts here. Input x^k is presented, x := x^k, y := y^k, and output o = (o_1, \dots, o_m)^T is computed

o_i = <w_i, x> = w_i^T x

for i = 1, \dots, m.

Step 4 Weights are updated

w_{ij} := w_{ij} + \eta(y_i - o_i)x_j
Step 5 Cumulative cycle error is computed by adding the present error to E

E := E + \frac{1}{2}\|y - o\|^2

Step 6 If k < K then k := k + 1 and we continue the training by going back to Step 3, otherwise we go to Step 7

Step 7 The training cycle is completed. For E < E_{max} terminate the training session. If E > E_{max} then E is set to 0 and we initiate a new training cycle by going back to Step 3
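A compact Python sketch of Summary 2.2.1 (for illustration only; the tiny linear training set at the bottom is invented):

import random

def delta_rule_linear(patterns, n_inputs, n_outputs, eta=0.1, e_max=1e-4, max_cycles=10000):
    # w[i][j] is the weight from input j to linear output unit i
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)] for _ in range(n_outputs)]
    for _ in range(max_cycles):
        E = 0.0
        for x, y in patterns:
            o = [sum(w[i][j] * x[j] for j in range(n_inputs)) for i in range(n_outputs)]
            for i in range(n_outputs):
                for j in range(n_inputs):
                    w[i][j] += eta * (y[i] - o[i]) * x[j]   # w_ij := w_ij + eta*(y_i - o_i)*x_j
            E += 0.5 * sum((y[i] - o[i]) ** 2 for i in range(n_outputs))
        if E < e_max:        # training cycle completed with small enough error
            break
    return w

# invented example: learn the linear mapping y = (x1 + x2, x1 - x2)
data = [((1.0, 0.0), (1.0, 1.0)), ((0.0, 1.0), (1.0, -1.0)), ((1.0, 1.0), (2.0, 0.0))]
print(delta_rule_linear(data, n_inputs=2, n_outputs=2))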
2.2.1 The delta learning rule with semilinear activation function
In many practical cases instead of linear activation functions we use semilinear ones. The next table shows the most often used types of activation functions.

Linear                   f(<w, x>) = w^T x
Piecewise linear         f(<w, x>) = 1 if <w, x> > 1;  <w, x> if |<w, x>| \le 1;  -1 if <w, x> < -1
Hard limiter             f(<w, x>) = sign(w^T x)
Unipolar sigmoidal       f(<w, x>) = 1/(1 + \exp(-w^T x))
Bipolar sigmoidal (1)    f(<w, x>) = \tanh(w^T x)
Bipolar sigmoidal (2)    f(<w, x>) = 2/(1 + \exp(-w^T x)) - 1

Table 2.1 Activation functions.
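For reference, the entries of Table 2.1 written as Python functions of the activation value net = w^T x (a sketch, not part of the notes):

import math

def linear(net):            return net
def piecewise_linear(net):  return 1.0 if net > 1 else (-1.0 if net < -1 else net)
def hard_limiter(net):      return 1.0 if net >= 0 else -1.0
def unipolar_sigmoid(net):  return 1.0 / (1.0 + math.exp(-net))
def bipolar_sigmoid_1(net): return math.tanh(net)
def bipolar_sigmoid_2(net): return 2.0 / (1.0 + math.exp(-net)) - 1.0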
The derivatives of sigmoidal activation functions are extensively used in learning algorithms.

If f is a bipolar sigmoidal activation function of the form

f(t) = \frac{2}{1 + \exp(-t)} - 1

then the following equality holds

f'(t) = \frac{2\exp(-t)}{(1 + \exp(-t))^2} = \frac{1}{2}(1 - f^2(t)).
Figure 2.12 Bipolar activation function.
If f is a unipolar sigmoidal activation function of the form

f(t) = \frac{1}{1 + \exp(-t)}

then f' satisfies the following equality

f'(t) = f(t)(1 - f(t)).
Figure 2.13 Unipolar activation function.
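Both identities are easy to verify numerically; the following Python fragment (an illustration, not from the notes) compares a central-difference estimate of f'(t) with the closed forms above:

import math

unipolar = lambda t: 1.0 / (1.0 + math.exp(-t))
bipolar  = lambda t: 2.0 / (1.0 + math.exp(-t)) - 1.0

h = 1e-6
for t in (-2.0, 0.0, 1.5):
    num_u = (unipolar(t + h) - unipolar(t - h)) / (2 * h)   # finite-difference derivative
    num_b = (bipolar(t + h) - bipolar(t - h)) / (2 * h)
    print(abs(num_u - unipolar(t) * (1 - unipolar(t))) < 1e-6,
          abs(num_b - 0.5 * (1 - bipolar(t) ** 2)) < 1e-6)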
We shall describe now the delta learning rule with semilinear activation func-
tion. For simplicity we explain the learning algorithm in the case of a single-
output network.
Figure 2.14 Single neuron network.
The output of the neuron is computed by the unipolar sigmoidal activation function

o(<w, x>) = \frac{1}{1 + \exp(-w^T x)}.

Suppose we are given the following training set

No.   input values                        desired output value
1.    x^1 = (x_1^1, \dots, x_n^1)         y^1
2.    x^2 = (x_1^2, \dots, x_n^2)         y^2
...
K.    x^K = (x_1^K, \dots, x_n^K)         y^K

The system first uses the input vector, x^k, to produce its own output vector, o^k, and then compares this with the desired output, y^k. Let

E_k = \frac{1}{2}(y^k - o^k)^2 = \frac{1}{2}(y^k - o(<w, x^k>))^2 = \frac{1}{2}\Big(y^k - \frac{1}{1 + \exp(-w^T x^k)}\Big)^2

be our measure of the error on input/output pattern k and let

E = \sum_{k=1}^K E_k

be our overall measure of the error.

The rule for changing weights following presentation of input/output pair k is given by the gradient descent method, i.e. we minimize the quadratic error function by using the following iteration process

w := w - \eta E_k'(w).

Let us compute now the gradient vector of the error function E_k at point w:

E_k'(w) = \frac{d}{dw}\Big[\frac{1}{2}\Big(y^k - \frac{1}{1 + \exp(-w^T x^k)}\Big)^2\Big] = -(y^k - o^k)o^k(1 - o^k)x^k

where o^k = 1/(1 + \exp(-w^T x^k)).
Therefore our learning rule for w is

w := w + \eta(y^k - o^k)o^k(1 - o^k)x^k

which can be written as

w := w + \eta\delta_k x^k

where \delta_k = (y^k - o^k)o^k(1 - o^k).
Summary 2.2.2 The delta learning rule with unipolar sigmoidal activation function.

Given are K training pairs arranged in the training set

\{(x^1, y^1), \dots, (x^K, y^K)\}

where x^k = (x_1^k, \dots, x_n^k) and y^k \in IR, k = 1, \dots, K.

Step 1 \eta > 0, E_{max} > 0 are chosen

Step 2 Weights w are initialized at small random values, k := 1, and the running error E is set to 0

Step 3 Training starts here. Input x^k is presented, x := x^k, y := y^k, and output o is computed

o = o(<w, x>) = \frac{1}{1 + \exp(-w^T x)}

Step 4 Weights are updated

w := w + \eta(y - o)o(1 - o)x

Step 5 Cumulative cycle error is computed by adding the present error to E

E := E + \frac{1}{2}(y - o)^2

Step 6 If k < K then k := k + 1 and we continue the training by going back to Step 3, otherwise we go to Step 7

Step 7 The training cycle is completed. For E < E_{max} terminate the training session. If E > E_{max} then E is set to 0 and we initiate a new training cycle by going back to Step 3
In this case, without hidden units, the error surface is shaped like a bowl with only one minimum, so gradient descent is guaranteed to find the best set of weights. With hidden units, however, it is not so obvious how to compute the derivatives, and the error surface is not concave upwards, so there is the danger of getting stuck in local minima.

We illustrate the delta learning rule with the bipolar sigmoidal activation function f(t) = 2/(1 + \exp(-t)) - 1.
Example 2.2.2 The delta learning rule with bipolar sigmoidal activation
function.
Given are K training pairs arranged in the training set

\{(x^1, y^1), \dots, (x^K, y^K)\}

where x^k = (x_1^k, \dots, x_n^k) and y^k \in IR, k = 1, \dots, K.

Step 1 \eta > 0, E_{max} > 0 are chosen

Step 2 Weights w are initialized at small random values, k := 1, and the running error E is set to 0

Step 3 Training starts here. Input x^k is presented, x := x^k, y := y^k, and output o is computed

o = o(<w, x>) = \frac{2}{1 + \exp(-w^T x)} - 1

Step 4 Weights are updated

w := w + \frac{1}{2}\eta(y - o)(1 - o^2)x

Step 5 Cumulative cycle error is computed by adding the present error to E

E := E + \frac{1}{2}(y - o)^2

Step 6 If k < K then k := k + 1 and we continue the training by going back to Step 3, otherwise we go to Step 7

Step 7 The training cycle is completed. For E < E_{max} terminate the training session. If E > E_{max} then E is set to 0 and we initiate a new training cycle by going back to Step 3
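A single-neuron Python sketch of Summary 2.2.2 (unipolar case) may make the training loop concrete; the two-pattern training set and all constants below are invented for illustration:

import math, random

def train_sigmoid_neuron(patterns, n_inputs, eta=1.0, e_max=0.005, max_cycles=10000):
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(max_cycles):
        E = 0.0
        for x, y in patterns:
            o = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, x))))
            # w := w + eta*(y - o)*o*(1 - o)*x
            w = [wj + eta * (y - o) * o * (1.0 - o) * xj for wj, xj in zip(w, x)]
            E += 0.5 * (y - o) ** 2
        if E < e_max:
            break
    return w

# invented patterns (the third component plays the role of the constant input 1)
data = [((0.0, 1.0, 1.0), 0.9), ((1.0, 0.0, 1.0), 0.1)]
print(train_sigmoid_neuron(data, n_inputs=3))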
2.3 The generalized delta learning rule
We now focus on generalizing the delta learning rule for feedforward layered neural networks. The architecture of the two-layer network considered below is shown in Figure 2.16. It has, strictly speaking, two layers of processing neurons. If, however, the layers of nodes are counted, then the network can also be labeled as a three-layer network. There is no agreement in the literature as to which approach is to be used to describe network architectures. In this text we will use the term layer in reference to the actual number of existing and processing neuron layers. Layers with neurons whose outputs are not directly accessible are called internal or hidden layers. Thus the network of Figure 2.16 is a two-layer network, which can be called a single hidden-layer network.
Figure 2.16 Layered neural network with two continuous perceptron layers.
The generalized delta rule is the most often used supervised learning algo-
rithm of feedforward multi-layer neural networks. For simplicity we consider
only a neural network with one hidden layer and one output node.
Figure 2.16a Two-layer neural network with one output node.
The measure of the error on an input/output training pattern (x^k, y^k) is defined by

E_k(W, w) = \frac{1}{2}(y^k - O^k)^2

where O^k is the computed output and the overall measure of the error is

E(W, w) = \sum_{k=1}^K E_k(W, w).

If an input vector x^k is presented to the network then it generates the following output

O^k = \frac{1}{1 + \exp(-W^T o^k)}

where o^k is the output vector of the hidden layer

o_l^k = \frac{1}{1 + \exp(-w_l^T x^k)}

and w_l denotes the weight vector of the l-th hidden neuron, l = 1, \dots, L.

The rule for changing weights following presentation of input/output pair k is given by the gradient descent method, i.e. we minimize the quadratic error function by using the following iteration process

W := W - \eta\frac{\partial E_k(W, w)}{\partial W}, \qquad w_l := w_l - \eta\frac{\partial E_k(W, w)}{\partial w_l},

for l = 1, \dots, L, where \eta > 0 is the learning rate.

By using the chain rule for derivatives of composed functions we get

\frac{\partial E_k(W, w)}{\partial W} = \frac{1}{2}\frac{\partial}{\partial W}\Big(y^k - \frac{1}{1 + \exp(-W^T o^k)}\Big)^2 = -(y^k - O^k)O^k(1 - O^k)o^k

i.e. the rule for changing weights of the output unit is

W := W + \eta(y^k - O^k)O^k(1 - O^k)o^k = W + \eta\delta_k o^k

that is

W_l := W_l + \eta\delta_k o_l^k,

for l = 1, \dots, L, and we have used the notation \delta_k = (y^k - O^k)O^k(1 - O^k).
Let us now compute the partial derivative of E_k with respect to w_l

\frac{\partial E_k(W, w)}{\partial w_l} = -(y^k - O^k)O^k(1 - O^k)W_l o_l^k(1 - o_l^k)x^k = -\delta_k W_l o_l^k(1 - o_l^k)x^k

i.e. the rule for changing weights of the hidden units is

w_l := w_l + \eta\delta_k W_l o_l^k(1 - o_l^k)x^k, \quad l = 1, \dots, L,

that is

w_{lj} := w_{lj} + \eta\delta_k W_l o_l^k(1 - o_l^k)x_j^k, \quad j = 1, \dots, n.
Summary 2.3.1 The generalized delta learning rule (error backpropagation learning)

We are given the training set

\{(x^1, y^1), \dots, (x^K, y^K)\}

where x^k = (x_1^k, \dots, x_n^k) and y^k \in IR, k = 1, \dots, K.

Step 1 \eta > 0, E_{max} > 0 are chosen

Step 2 Weights w are initialized at small random values, k := 1, and the running error E is set to 0

Step 3 Training starts here. Input x^k is presented, x := x^k, y := y^k, and output O is computed

O = \frac{1}{1 + \exp(-W^T o)}

where o_l is the output vector of the hidden layer

o_l = \frac{1}{1 + \exp(-w_l^T x)}

Step 4 Weights of the output unit are updated

W := W + \eta\delta o

where \delta = (y - O)O(1 - O).

Step 5 Weights of the hidden units are updated

w_l := w_l + \eta\delta W_l o_l(1 - o_l)x, \quad l = 1, \dots, L

Step 6 Cumulative cycle error is computed by adding the present error to E

E := E + \frac{1}{2}(y - O)^2

Step 7 If k < K then k := k + 1 and we continue the training by going back to Step 3, otherwise we go to Step 8

Step 8 The training cycle is completed. For E < E_{max} terminate the training session. If E > E_{max} then E := 0, k := 1 and we initiate a new training cycle by going back to Step 3
Exercise 2.3.1 Derive the backpropagation learning rule with bipolar sigmoidal activation function f(t) = 2/(1 + \exp(-t)) - 1.
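The following Python sketch (illustrative only) implements the generalized delta rule of Summary 2.3.1 for one hidden layer of L sigmoidal units and a single sigmoidal output; the XOR-like data at the end are invented, and whether the targets are reached depends on the random initialization, as the remark on local minima above suggests:

import math, random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def backprop(patterns, n_inputs, L, eta=0.5, cycles=20000):
    # w[l] is the weight vector of hidden unit l, W[l] its connection to the output unit
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_inputs)] for _ in range(L)]
    W = [random.uniform(-0.5, 0.5) for _ in range(L)]
    for _ in range(cycles):
        for x, y in patterns:
            o = [sigmoid(sum(w[l][j] * x[j] for j in range(n_inputs))) for l in range(L)]
            O = sigmoid(sum(W[l] * o[l] for l in range(L)))
            delta = (y - O) * O * (1.0 - O)
            for l in range(L):
                # hidden weights: w_lj := w_lj + eta*delta*W_l*o_l*(1-o_l)*x_j
                for j in range(n_inputs):
                    w[l][j] += eta * delta * W[l] * o[l] * (1.0 - o[l]) * x[j]
                # output weights: W_l := W_l + eta*delta*o_l
                W[l] += eta * delta * o[l]
    return w, W

# illustrative use: two inputs plus a constant-1 component acting as a bias
data = [((0.0, 0.0, 1.0), 0.1), ((0.0, 1.0, 1.0), 0.9),
        ((1.0, 0.0, 1.0), 0.9), ((1.0, 1.0, 1.0), 0.1)]
w, W = backprop(data, n_inputs=3, L=3)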
2.3.1 Effectivity of neural networks
Funahashi [8] showed that infinitely large neural networks with a single hidden layer are capable of approximating all continuous functions. Namely, he proved the following theorem.

Theorem 2.3.1 Let \phi(x) be a nonconstant, bounded and monotone increasing continuous function. Let K \subset IR^n be a compact set and

f: K \to IR

be a real-valued continuous function on K. Then for arbitrary \epsilon > 0, there exist an integer N and real constants w_i, w_{ij} such that

\tilde{f}(x_1, \dots, x_n) = \sum_{i=1}^N w_i\,\phi\Big(\sum_{j=1}^n w_{ij}x_j\Big)

satisfies

\|f - \tilde{f}\|_\infty = \sup_{x \in K}|f(x) - \tilde{f}(x)| \le \epsilon.

In other words, any continuous mapping can be approximated in the sense of uniform topology on K by input-output mappings of two-layer networks whose output functions for the hidden layer are \phi(x) and are linear for the output layer.
Figure 2.17 Funahashi's network.
The Stone-Weierstrass theorem from classical real analysis can be used to show that certain network architectures possess the universal approximation capability. By employing the Stone-Weierstrass theorem in the design of our networks, we also guarantee that the networks can compute certain polynomial expressions: if we are given networks exactly computing two functions, f_1 and f_2, then a larger network can exactly compute a polynomial expression of f_1 and f_2.

Theorem 2.3.2 (Stone-Weierstrass) Let domain K be a compact space of n dimensions, and let G be a set of continuous real-valued functions on K satisfying the following criteria:

1. The constant function f(x) = 1 is in G.

2. For any two points x_1 \ne x_2 in K, there is an f in G such that f(x_1) \ne f(x_2).

3. If f_1 and f_2 are two functions in G, then f_1 f_2 and \alpha_1 f_1 + \alpha_2 f_2 are in G for any two real numbers \alpha_1 and \alpha_2.

Then G is dense in C(K), the set of continuous real-valued functions on K. In other words, for any \epsilon > 0 and any function f in C(K), there exists a function g in G such that

\|f - g\|_\infty = \sup_{x \in K}|f(x) - g(x)| \le \epsilon.
The key to satisfying the Stone-Weierstrass theorem is to find functions that transform multiplication into addition so that products can be written as summations. There are at least three generic functions that accomplish this transformation: exponential functions, partial fractions, and step functions. The following networks satisfy the Stone-Weierstrass theorem.

Decaying-exponential networks. Exponential functions are basic to the process of transforming multiplication into addition in several kinds of networks:

\exp(x_1)\exp(x_2) = \exp(x_1 + x_2).
Let G be the set of all continuous functions that can be computed by arbitrarily large decaying-exponential networks on domain K = [0, 1]^n:

G = \Big\{ f(x_1, \dots, x_n) = \sum_{i=1}^N w_i\exp\Big(-\sum_{j=1}^n w_{ij}x_j\Big),\; w_i, w_{ij} \in IR \Big\}.

Then G is dense in C(K).
Fourier networks
Exponentiated-function networks
Modified logistic networks
Modified sigma-pi and polynomial networks. Let G be the set of all continuous functions that can be computed by arbitrarily large modified sigma-pi or polynomial networks on domain K = [0, 1]^n:

G = \Big\{ f(x_1, \dots, x_n) = \sum_{i=1}^N w_i\prod_{j=1}^n x_j^{w_{ij}},\; w_i, w_{ij} \in IR \Big\}.

Then G is dense in C(K).
Step functions and perceptron networks
Partial fraction networks
2.4 Winner-take-all learning
Unsupervised classification learning is based on clustering of input data. No a priori knowledge is assumed to be available regarding an input's membership in a particular class. Rather, gradually detected characteristics and a history of training will be used to assist the network in defining classes and possible boundaries between them.

Clustering is understood to be the grouping of similar objects and separating of dissimilar ones.

We discuss Kohonen's network [16], which classifies input vectors into one of the specified number of m categories, according to the clusters detected in the training set

\{x^1, \dots, x^K\}.

The learning algorithm treats the set of m weight vectors as variable vectors that need to be learned. Prior to the learning, the normalization of all (randomly chosen) weight vectors is required.
Figure 2.18 The winner-take-all learning network.
The weight adjustment criterion for this mode of training is the selection of w_r such that

\|x - w_r\| = \min_{i=1,\dots,m}\|x - w_i\|

The index r denotes the winning neuron number corresponding to the vector w_r, which is the closest approximation of the current input x. Using the equality

\|x - w_i\|^2 = <x - w_i, x - w_i> = <x, x> - 2<w_i, x> + <w_i, w_i> = \|x\|^2 - 2<w_i, x> + \|w_i\|^2 = \|x\|^2 - 2<w_i, x> + 1

we can infer that searching for the minimum of m distances corresponds to finding the maximum among the m scalar products

<w_r, x> = \max_{i=1,\dots,m}<w_i, x>

Taking into consideration that \|w_i\| = 1 for all i \in \{1, \dots, m\}, the scalar product <w_i, x> is nothing else but the projection of x on the direction of w_i. It is clear that the closer the vector w_i is to x, the bigger the projection of x on w_i.

Note that <w_r, x> is the activation value of the winning neuron, which has the largest value net_i, i = 1, \dots, m.
Figure 2.19 The winner weight is w_2.
When using the scalar product metric of similarity, the synaptic weight vectors should be modified accordingly so that they become more similar to the current input vector.

With the similarity criterion being \cos(w_i, x), the weight vector lengths should be identical for this training approach. However, their directions should be modified.

Intuitively, it is clear that a very long weight vector could lead to a very large output of its neuron even if there were a large angle between the weight vector and the pattern. This explains the need for weight normalization.
After the winning neuron has been identified and declared a winner, its weight must be adjusted so that the distance \|x - w_r\| is reduced in the current training step.

Thus, \|x - w_r\| must be reduced, preferably along the gradient direction in the weight space w_{r1}, \dots, w_{rn}
\frac{d\|x - w\|^2}{dw} = \frac{d}{dw}(<x - w, x - w>) = \frac{d}{dw}(<x, x> - 2<w, x> + <w, w>) =

\frac{d}{dw}(<x, x>) - \frac{d}{dw}(2<w, x>) + \frac{d}{dw}(<w, w>) =

-2\frac{d}{dw}(w_1x_1 + \cdots + w_nx_n) + \frac{d}{dw}(w_1^2 + \cdots + w_n^2) =

-2\Big(\frac{d}{dw_1}(w_1x_1 + \cdots + w_nx_n), \dots, \frac{d}{dw_n}(w_1x_1 + \cdots + w_nx_n)\Big)^T + \Big(\frac{d}{dw_1}(w_1^2 + \cdots + w_n^2), \dots, \frac{d}{dw_n}(w_1^2 + \cdots + w_n^2)\Big)^T =

-2(x_1, \dots, x_n)^T + 2(w_1, \dots, w_n)^T = -2(x - w)
It seems reasonable to reward the weights of the winning neuron with an increment of weight in the negative gradient direction, thus in the direction (x - w_r). We thus have

w_r := w_r + \eta(x - w_r)   (2.1)

where \eta is a small learning constant selected heuristically, usually between 0.1 and 0.7. The remaining weight vectors are left unaffected.
Summary 2.4.1 Kohonen's learning algorithm can be summarized in the following three steps

Step 1 w_r := w_r + \eta(x - w_r), o_r := 1 (r is the winner neuron)

Step 2 w_r := w_r/\|w_r\| (normalization)

Step 3 w_i := w_i, o_i := 0, i \ne r (losers are unaffected)
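A Python sketch of these three steps applied to one input vector (for illustration only; the weights and the input below are random/made up, and normalization of the input is assumed as in the text):

import math, random

def normalize(v):
    n = math.sqrt(sum(vi * vi for vi in v))
    return [vi / n for vi in v]

def winner_take_all_step(weights, x, eta=0.4):
    # find the winner r, i.e. the weight vector closest to x
    dist = [math.sqrt(sum((xj - wj) ** 2 for xj, wj in zip(x, w))) for w in weights]
    r = dist.index(min(dist))
    # Step 1: move the winner towards x, Step 2: renormalize it, Step 3: leave the losers alone
    weights[r] = normalize([wj + eta * (xj - wj) for wj, xj in zip(weights[r], x)])
    return r

weights = [normalize([random.gauss(0, 1) for _ in range(3)]) for _ in range(4)]
x = normalize([1.0, 2.0, 0.5])
r = winner_take_all_step(weights, x)
print(r, weights[r])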
It should be noted that from the identity

w_r := w_r + \eta(x - w_r) = (1 - \eta)w_r + \eta x

it follows that the updated weight vector is a convex linear combination of the old weight and the pattern vectors.
Figure 2.20 Updating the weight of the winner neuron.
At the end of the training process the final weight vectors point to the centers of gravity of the classes.

The network will only be trainable if classes/clusters of patterns are linearly separable from other classes by hyperplanes passing through the origin.

To ensure separability of clusters with a priori unknown numbers of training clusters, the unsupervised training can be performed with an excessive number of neurons, which provides a certain separability safety margin.
Figure 2.21 The final weight vectors point to the center of gravity of the classes.
During the training, some neurons are likely not to develop their weights, and
if their weights change chaotically, they will not be considered as indicative
of clusters.
Therefore such weights can be omitted during the recall phase, since their
output does not provide any essential clustering information. The weights of
remaining neurons should settle at values that are indicative of clusters.
Another learning extension is possible for this network when the proper class for some patterns is known a priori [29]. Although this means that the encoding of data into weights is then becoming supervised, this information accelerates the learning process significantly. Weight adjustments are computed in the supervised mode as in (2.1), i.e.

\Delta w_r := \eta(x - w_r)   (2.2)

and only for correct classifications. For improper clustering responses of the network, the weight adjustment carries the opposite sign compared to (2.2). That is, \eta > 0 for proper node responses, and \eta < 0 otherwise, in the supervised learning mode for the Kohonen layer.

Another modification of the winner-take-all learning rule is that both the winners' and losers' weights are adjusted in proportion to their level of responses. This is called leaky competitive learning and provides more subtle learning in the case for which clusters may be hard to distinguish.
2.5 Applications of artificial neural networks
There are large classes of problems that appear to be more amenable to solution by neural networks than by other available techniques. These tasks often involve ambiguity, such as that inherent in handwritten character recognition. Problems of this sort are difficult to tackle with conventional methods such as matched filtering or nearest neighbor classification, in part because the metrics used by the brain to compare patterns may not be very closely related to those chosen by an engineer designing a recognition system. Likewise, because reliable rules for recognizing a pattern are usually not at hand, fuzzy logic and expert system designers also face the difficult and sometimes impossible task of finding acceptable descriptions of the complex relations governing class inclusion. In trainable neural network systems, these relations are abstracted directly from training data. Moreover, because neural networks can be constructed with numbers of inputs and outputs ranging into thousands, they can be used to attack problems that require consideration of more input variables than could be feasibly utilized by most other approaches. It should be noted, however, that neural networks will not work well at solving problems for which sufficiently large and general sets of training data are not obtainable. Drawing heavily on [25] we provide a comprehensive list of applications of neural networks in Industry, Business and Science.
The telecommunications industry. Many neural network applications are under development in the telecommunications industry for solving problems ranging from control of a nationwide switching network to management of an entire telephone company. Other applications at the telephone circuit level turn out to be the most significant commercial applications of neural networks in the world today. Modems, commonly used for computer-to-computer communications and in every fax machine, have adaptive circuits for telephone line equalization and for echo cancellation.
Control of sound and vibration Active control of vibration and
noise is accomplished by using an adaptive actuator to generate equal
and opposite vibration and noise. This is being used in air-conditioning
systems, in automotive systems, and in industrial applications.
Particle accelerator beam control. The Stanford Linear Accelerator Center is now using adaptive techniques to cancel disturbances that diminish the positioning accuracy of opposing beams of positrons and electrons in a particle collider.
Credit card fraud detection. Several banks and credit card companies including American Express, Mellon Bank, First USA Bank, and others are currently using neural networks to study patterns of credit card usage and to detect transactions that are potentially fraudulent.
Machine-printed character recognition. Commercial products
performing machine-printed character recognition have been introduced
by a large number of companies and have been described in the litera-
ture.
Hand-printed character recognition. Hecht-Nielsen Corp.'s Quickstrokes Automated Data Entry System is being used to recognize handwritten forms at Avon's order-processing center and at the state of Wyoming's Department of Revenue. In the June 1992 issue of Systems Integration Business, Dennis Livingston reports that before implementing the system, Wyoming was losing an estimated $300,000 per year in interest income because so many checks were being deposited late. Cardiff Software offers a product called Teleform which uses Nestor's hand-printed character recognition system to convert a fax machine into an OCR scanner. Poqet Computer, now a subsidiary of Fujitsu, uses Nestor's NestorWriter neural network software to perform handwriting recognition for the pen-based PC it announced in January 1992 [26].
Cursive handwriting recognition. Neural networks have proved useful in the development of algorithms for on-line cursive handwriting recognition [23]. A recent startup company in Palo Alto, Lexicus, beginning with this basic technology, has developed an impressive PC-based cursive handwriting system.
Quality control in manufacturing. Neural networks are being used
in a large number of quality control and quality assurance programs
throughout industry. Applications include contaminant-level detection
from spectroscopy data at chemical plants and loudspeaker defect classification by CTS Electronics.
Event detection in particle accelerators.
Petroleum exploration. Oil companies including Arco and Texaco
are using neural networks to help determine the locations of under-
ground oil and gas deposits.
Medical applications. Commercial products by Neuromedical Sys-
tems Inc. are used for cancer screening and other medical applications
[28]. The company markets electrocardiograph and pap smear systems
that rely on neural network technology. The pap smear system, Papnet, is able to help cytotechnologists spot cancerous cells, drastically reducing false-negative classifications. The system is used by the U.S. Food and Drug Administration [7].
Financial forecasting and portfolio management. Neural networks are used for financial forecasting at a large number of investment firms and financial entities including Merrill Lynch & Co., Salomon Brothers, Shearson Lehman Brothers Inc., Citibank, and the World Bank. Using neural networks trained by genetic algorithms, Citibank's Andrew Colin claims to be able to earn 25% returns per year investing in the currency markets. A startup company, Promised Land Technologies, offers a $249 software package that is claimed to yield impressive annual returns [27].
Loan approval. Chase Manhattan Bank reportedly uses a hybrid
system utilizing pattern analysis and neural networks to evaluate cor-
porate loan risk. Robert Marose reports in the May 1990 issue of AI
Expert that the system, Creditview, helps loan ocers estimate the
credit worthiness of corporate loan candidates.
Real estate analysis
Marketing analysis. The Target Marketing System developed by
Churchill System is currently in use by Veratex Corp. to optimize
marketing strategy and cut marketing costs by removing unlikely future
customers from a list of potential customers [10].
Electric arc furnace electrode position control. Electric arc furnaces are used to melt scrap steel. The Intelligent Arc Furnace controller systems installed by Neural Applications Corp. are reportedly saving millions of dollars per year per furnace in increased furnace throughput and reduced electrode wear and electricity consumption. The controller is currently being installed at furnaces worldwide.
Semiconductor process control. Kopin Corp. has used neural
networks to cut dopant concentration and deposition thickness errors
in solar cell manufacturing by more than a factor of two.
Chemical process control. Pavilion Technologies has developed a
neural network process control package, Process Insights, which is help-
ing Eastman Kodak and a number of other companies reduce waste,
improve product quality, and increase plant throughput [9]. Neural net-
work models are being used to perform sensitivity studies, determine
process set points, detect faults, and predict process performance.
Petroleum refinery process control.
Continuous-casting control during steel production
Food and chemical formulation optimization.
Nonlinear Applications on the Horizon. A large number of re-
search programs are developing neural network solutions that are either
likely to be used in the near future or, particularly in the case of mil-
itary applications, that may already be incorporated into products,
albeit unadvertised. This category is much larger than the foregoing,
so we present here only a few representative examples.
Fighter flight and battle pattern guidance.
Optical telescope focusing.
Automobile applications. Ford Motor Co., General Motors, and
other automobile manufacturers are currently researching the possibil-
ity of widespread use of neural networks in automobiles and in au-
tomobile production. Some of the areas that are yielding promising
results in the laboratory include engine fault detection and diagnosis,
antilock brake control, active-suspension control, and idle-speed con-
trol. General Motors is having preliminary success using neural net-
works to model subjective customer ratings of automobiles based on
their dynamic characteristics to help engineers tailor vehicles to the
market.
Electric motor failure prediction. Siemens has reportedly devel-
oped a neural network system that can accurately and inexpensively
predict failure of large induction motors.
Speech recognition. The Stanford Research Institute is currently
involved in research combining neural networks with hidden Markov
models and other technologies in a highly successful speaker indepen-
dent speech recognition system. The technology will most likely be
licensed to interested companies once perfected.
Biomedical applications. Neural networks are rapidly finding diverse applications in the biomedical sciences. They are being used widely in research on amino acid sequencing in RNA and DNA, ECG and EEG waveform classification, prediction of patients' reactions to drug treatments, prevention of anesthesia-related accidents, arrhythmia recognition for implantable defibrillators, patient mortality predictions, quantitative cytology, detection of breast cancer from mammograms, modeling schizophrenia, clinical diagnosis of lower-back pain, enhancement and classification of medical images, lung nodule detection, diagnosis of hepatic masses, prediction of pulmonary embolism likelihood from ventilation-perfusion lung scans, and the study of interstitial lung disease.
Drug development. One particularly promising area of medical re-
search involves the use of neural networks in predicting the medicinal
properties of substances without expensive, time-consuming, and often
inhumane animal testing.
Control of copiers. The Ricoh Corp. has successfully employed neural learning techniques for control of several voltages in copiers in order to preserve uniform copy quality despite changes in temperature, humidity, time since the last copy, time since the last change of toner cartridge, and other variables. These variables influence copy quality in highly nonlinear ways, which were learned through training of a backpropagation network.
The truck backer-upper. Vehicular control by artificial neural networks is a topic that has generated widespread interest. At Purdue
University, tests have been performed using neural networks to control
a model helicopter.
Perhaps the most important advantage of neural networks is their adaptiv-
ity. Neural networks can automatically adjust their parameters (weights) to
optimize their behavior as pattern recognizers, decision makers, system con-
trollers, predictors, and so on.
Self-optimization allows the neural network to design itself. The system
designer first defines the neural network architecture, determines how the net-
work connects to other parts of the system, and chooses a training method-
ology for the network. The neural network then adapts to the application.
Adaptivity allows the neural network to perform well even when the environ-
ment or the system being controlled varies over time. There are many control
problems that can benefit from continual nonlinear modeling and adaptation.
Neural networks, such as those used by Pavilion in chemical process control,
and by Neural Applications Corp. in arc furnace control, are ideally suited to
track problem solutions in changing environments. Additionally, with some
programmability, such as the choices regarding the number of neurons per
layer and number of layers, a practitioner can use the same neural network
in a wide variety of applications. Engineering time is thus saved.
Another example of the advantages of self-optimization is in the field of
Expert Systems. In some cases, instead of obtaining a set of rules through
interaction between an experienced expert and a knowledge engineer, a neural
system can be trained with examples of expert behavior.
Bibliography
[1] I. Aleksander and H. Morton, An Introduction to Neural Computing (Chapman and Hall, 1990).
[2] J.A. Anderson and E. Rosenfeld, eds., Neurocomputing: Foundations of Research (MIT Press, Cambridge, MA, 1988).
[3] E. K. Blum and L. K. Li, Approximation theory and feedforward
networks, Neural Networks, 4(1991) 511-515.
[4] P. Cardaliaguet, Approximation of a function and its derivative with
a neural network, Neural Networks, 5(1992) 207-220.
[5] N.E.Cotter, The Stone-Weierstrass theorem and its applications to
neural networks, IEEE Transactions on Neural Networks, 1(1990)
290-295.
[6] J.E. Dayhoff, Neural Network Architectures: An Introduction (Van Nostrand Reinhold, New York, 1990).
[7] A. Fuochi, Neural networks: No zealots yet but progress being
made, Comput. Can., (January 20, 1992).
[8] K. Funahashi, On the Approximate Realization of Continuous Map-
pings by Neural Networks, Neural Networks 2(1989) 183-192.
[9] C. Hall, Neural net technology: Ready for prime time? IEEE Expert
(December 1992) 2-4.
[10] D. Hammerstrom, Neural networks at work. IEEE Spectr. (June
1993) 26-32.
[11] D.O.Hebb, The Organization of Behavior (Wiley, New York, 1949).
[12] R.Hecht-Nielsen, Theory of the Backpropagation Neural Network,
Proceedings of International Conference on Neural Networks, Vol.
1, 1989 593-605.
[13] R.Hecht-Nielsen, Neurocomputing (Addison-Wesley Publishing Co.,
Reading, Mass., 1990).
[14] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci., 79(1982) 2554-2558.
[15] K. Hornik, M. Stinchcombe and H. White, Universal Approximation of an Unknown Mapping and Its Derivatives Using Multilayer Feedforward Networks, Neural Networks, 3(1990) 551-560.
[16] T.Kohonen, Self-organization and Associative Memory, (Springer-
Verlag, New York 1984).
[17] S.Y. Kung, Digital Neural Networks (Prentice Hall, Englewood Cliffs, NJ, 1993).
[18] V. Kurkova, Kolmogorov's theorem and multilayer neural networks, Neural Networks, 5(1992) 501-506.
[19] W.S. McCulloch and W.A. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys. 5(1943) 115-133.
[20] M. Minsky and S. Papert, Perceptrons (MIT Press, Cambridge,
Mass., 1969).
[21] F. Rosenblatt, The perceptron: A probabilistic model for information storage and organization in the brain, Psychological Review, 65(1958) 386-408.
[22] D.E.Rumelhart and J.L. McClelland and the PDP Research Group,
Parallel Distributed Processing: Explorations in the Microstructure
of Cognition (MIT Press/Bradford Books, Cambridge, Mass., 1986).
[23] D.E. Rumelhart, Theory to practice: A case study - recognizing cur-
sive handwriting. In Proceedings of the Third NEC Research Sym-
posium. SIAM, Philadelphia, Pa., 1993.
[24] D.E.Rumelhart, B.Widrow and M.A.Lehr, The basic ideas in neural
networks, Communications of ACM, 37(1994) 87-92.
[25] D.E.Rumelhart, B.Widrow and M.A.Lehr, Neural Networks: Ap-
plications in Industry, Business and Science, Communications
of ACM, 37(1994) 93-105.
[26] E.I. Schwartz and J.B. Treece, Smart programs go to work: How ap-
plied intelligence software makes decisions for the real world. Busi-
ness Week (Mar. 2, 1992) 97-105.
[27] E.I. Schwartz, Where neural networks are already at work: Putting
AI to work in the markets, Business Week (Nov. 2, 1992) 136-137.
[28] J. Shandle, Neural networks are ready for prime time, Elect. Des.,
(February 18, 1993), 51-58.
[29] P.I. Simpson, Artificial Neural Systems: Foundations, Paradigms, Applications, and Implementation (Pergamon Press, New York, 1990).
[30] P.D.Wasserman, Advanced Methods in Neural Computing, Van Nos-
trand Reinhold, New York 1993.
[31] H. White, Connectionist Nonparametric Regression: Multilayer
feedforward Networks Can Learn Arbitrary Mappings, Neural Net-
works 3(1990) 535-549.
[32] J.M. Zurada, Introduction to Artificial Neural Systems (West Publishing Company, New York, 1992).
Chapter 3
Fuzzy Neural Networks
3.1 Integration of fuzzy logic and neural networks
Hybrid systems combining fuzzy logic, neural networks, genetic algorithms, and expert systems are proving their effectiveness in a wide variety of real-world problems.
Every intelligent technique has particular computational properties (e.g. ability to learn, explanation of decisions) that make it suited for some problems and not for others. For example, while neural networks are good at recognizing patterns, they are not good at explaining how they reach their decisions. Fuzzy logic systems, which can reason with imprecise information, are good at explaining their decisions, but they cannot automatically acquire the rules they use to make those decisions. These limitations have been a central driving force behind the creation of intelligent hybrid systems, where two or more techniques are combined in a manner that overcomes the limitations of the individual techniques. Hybrid systems are also important when considering the varied nature of application domains. Many complex domains have several different component problems, each of which may require different types of processing. If a complex application has two distinct sub-problems, say a signal processing task and a serial reasoning task, then a neural network and an expert system, respectively, can be used for solving these separate tasks. The use of intelligent hybrid systems is growing rapidly, with successful applications in many areas including process control, engineering design, financial trading, credit evaluation, medical diagnosis, and cognitive simulation.
While fuzzy logic provides an inference mechanism under cognitive uncertainty, computational neural networks offer exciting advantages, such as learning, adaptation, fault-tolerance, parallelism and generalization.
To enable a system to deal with cognitive uncertainties in a man-
ner more like humans, one may incorporate the concept of fuzzy
logic into the neural networks.
The computational process envisioned for fuzzy neural systems is as follows. It starts with the development of a fuzzy neuron based on the understanding of biological neuronal morphologies, followed by learning mechanisms. This leads to the following three steps in a fuzzy neural computational process:
development of fuzzy neural models motivated by biological neurons;
models of synaptic connections which incorporate fuzziness into the neural network;
development of learning algorithms (that is, the method of adjusting the synaptic weights).
Two possible models of fuzzy neural systems are
In response to linguistic statements, the fuzzy interface block provides
an input vector to a multi-layer neural network. The neural network can
be adapted (trained) to yield desired command outputs or decisions.
Figure 3.1 The first model of a fuzzy neural system.
A multi-layered neural network drives the fuzzy inference mechanism.
Figure 3.2 The second model of a fuzzy neural system.
Neural networks are used to tune the membership functions of fuzzy systems that are employed as decision-making systems for controlling equipment. Although fuzzy logic can encode expert knowledge directly using rules with linguistic labels, it usually takes a lot of time to design and tune the membership functions which quantitatively define these linguistic labels. Neural network learning techniques can automate this process and substantially reduce development time and cost while improving performance.
In theory, neural networks and fuzzy systems are equivalent in that they are convertible, yet in practice each has its own advantages and disadvantages. For neural networks, the knowledge is automatically acquired by the backpropagation algorithm, but the learning process is relatively slow and analysis of the trained network is difficult (black box). Neither is it possible to extract structural knowledge (rules) from the trained neural network, nor can we integrate special information about the problem into the neural network in order to simplify the learning procedure. Fuzzy systems are more favorable in that their behavior can be explained based on fuzzy rules, and thus their performance can be adjusted by tuning the rules. But since, in general, knowledge acquisition is difficult and also the universe of discourse of each input variable needs to be divided into several intervals, applications of fuzzy systems are restricted to fields where expert knowledge is available and the number of input variables is small.
To overcome the problem of knowledge acquisition, neural networks are extended to automatically extract fuzzy rules from numerical data.

Cooperative approaches use neural networks to optimize certain parameters of an ordinary fuzzy system, or to preprocess data and extract fuzzy (control) rules from data.

Based upon the computational process involved in a fuzzy-neuro system, one may broadly classify fuzzy neural structures as feedforward (static) and feedback (dynamic).

A typical fuzzy-neuro system is Berenji's ARIC (Approximate Reasoning-based Intelligent Control) architecture [9]. It is a neural network model of a fuzzy controller that learns by updating its prediction of the physical system's behavior and fine-tunes a predefined control knowledge base.
Figure 3.3 Berenji's ARIC architecture.
This kind of architecture allows one to combine the advantages of neural networks and fuzzy controllers. The system is able to learn, and the knowledge used within the system has the form of fuzzy IF-THEN rules. By predefining these rules the system does not have to learn from scratch, so it learns faster than a standard neural control system.
ARIC consists of two coupled feed-forward neural networks, the Action-state Evaluation Network (AEN) and the Action Selection Network (ASN). The ASN is a multilayer neural network representation of a fuzzy controller. In fact, it consists of two separate nets, where the first one is the fuzzy inference part and the second one is a neural network that calculates p[t, t+1], a measure of confidence associated with the fuzzy inference value u(t+1), using the weights of time t and the system state of time t+1. A stochastic modifier combines the recommended control value u(t) of the fuzzy inference part and the so-called probability value p and determines the final output value

u'(t) = o(u(t), p[t, t+1])

of the ASN. The hidden units z_i of the fuzzy inference network represent the fuzzy rules, the input units x_j the rule antecedents, and the output unit u represents the control action, that is, the defuzzified combination of the conclusions of all rules (the outputs of the hidden units). In the input layer the system state variables are fuzzified. Only monotonic membership functions are used in ARIC, and the fuzzy labels used in the control rules are adjusted locally within each rule. The membership values of the antecedents of a rule are multiplied by weights attached to the connections from the input units to the hidden unit; the minimum of those values is the hidden unit's final input. In each hidden unit a special monotonic membership function representing the conclusion of the rule is stored. Because of the monotonicity of this function, the crisp output value belonging to the minimum membership value can easily be calculated by the inverse function. This value is multiplied by the weight of the connection from the hidden unit to the output unit. The output value is then calculated as a weighted average of all rule conclusions.

The AEN tries to predict the system behavior. It is a feed-forward neural network with one hidden layer that receives the system state as its input and an error signal r from the physical system as additional information. The output v[t, t'] of the network is viewed as a prediction of future reinforcement, depending on the weights of time t and the system state of time t', where t' may be t or t+1. Better states are characterized by higher reinforcements. The weight changes are determined by a reinforcement procedure that uses the outputs of the ASN and the AEN. The ARIC architecture was applied to cart-pole balancing, and it was shown that the system is able to solve this task [9].
3.1.1 Fuzzy neurons
Consider the simple neural net in Figure 3.4. All signals and weights are real numbers. The two input neurons do not change the input signals, so their output is the same as their input. The signal x_i interacts with the weight w_i to produce the product

p_i = w_i x_i, i = 1, 2.

The input information p_i is aggregated, by addition, to produce the input

net = p_1 + p_2 = w_1 x_1 + w_2 x_2

to the neuron. The neuron uses its transfer function f, which could be a sigmoidal function, f(x) = (1 + e^{-x})^{-1}, to compute the output

y = f(net) = f(w_1 x_1 + w_2 x_2).

This simple neural net, which employs multiplication, addition and a sigmoidal f, will be called a regular (or standard) neural net.
Figure 3.4 Simple neural net.
If we employ other operations like a t-norm, or a t-conorm, to combine the
incoming data to a neuron we obtain what we call a hybrid neural net.
These modifications lead to a fuzzy neural architecture based on fuzzy arithmetic operations. Let us express the inputs (which are usually membership degrees of a fuzzy concept) x_1, x_2 and the weights w_1, w_2 over the unit interval [0, 1]. A hybrid neural net may not use multiplication, addition, or a sigmoidal function, because the results of these operations are not necessarily in the unit interval.
Definition 3.1.1 A hybrid neural net is a neural net with crisp signals and weights and crisp transfer function. However,
we can combine x_i and w_i using a t-norm, t-conorm, or some other continuous operation,
we can aggregate p_1 and p_2 with a t-norm, t-conorm, or any other continuous function,
f can be any continuous function from input to output.
We emphasize here that all inputs, outputs and weights of a hybrid neural net are real numbers taken from the unit interval [0, 1]. A processing element of a hybrid neural net is called a fuzzy neuron. In the following we present some fuzzy neurons.
Definition 3.1.2 (AND fuzzy neuron [74, 75])
The signals x_i and w_i are combined by a triangular conorm S to produce

p_i = S(w_i, x_i), i = 1, 2.

The input information p_i is aggregated by a triangular norm T to produce the output

y = AND(p_1, p_2) = T(p_1, p_2) = T(S(w_1, x_1), S(w_2, x_2))

of the neuron. So, if T = min and S = max then the AND neuron realizes the min-max composition

y = min{w_1 ∨ x_1, w_2 ∨ x_2}.
Figure 3.5 AND fuzzy neuron.
213
x1
x2
w1
w2
y = S(T(w1, x1), T(w2, x2))
Definition 3.1.3 (OR fuzzy neuron [74, 75])
The signals x_i and w_i are combined by a triangular norm T to produce

p_i = T(w_i, x_i), i = 1, 2.

The input information p_i is aggregated by a triangular conorm S to produce the output

y = OR(p_1, p_2) = S(p_1, p_2) = S(T(w_1, x_1), T(w_2, x_2))

of the neuron.
Figure 3.6 OR fuzzy neuron.
So, if T = min and S = max then the OR neuron realizes the max-min composition

y = max{w_1 ∧ x_1, w_2 ∧ x_2}.
The AND and OR fuzzy neurons realize pure logic operations on the membership values. The role of the connections is to differentiate between the particular levels of impact that the individual inputs might have on the result of aggregation. We note that (i) the higher the value w_i, the stronger the impact of x_i on the output y of an OR neuron; (ii) the lower the value w_i, the stronger the impact of x_i on the output y of an AND neuron.

The range of the output value y for the AND neuron is computed by letting all x_i be equal to zero or one. In virtue of the monotonicity property of triangular norms, we obtain

y ∈ [T(w_1, w_2), 1]

and for the OR neuron one derives the boundaries

y ∈ [0, S(w_1, w_2)].
214
x1
x2
w1
w2
y=S(w1x1, w2 x2)
Definition 3.1.4 (Implication-OR fuzzy neuron [37, 39])
The signals x_i and w_i are combined by a fuzzy implication operator I to produce

p_i = I(w_i, x_i) = w_i → x_i, i = 1, 2.

The input information p_i is aggregated by a triangular conorm S to produce the output

y = I(p_1, p_2) = S(p_1, p_2) = S(w_1 → x_1, w_2 → x_2)

of the neuron.
Figure 3.7 Implication-OR fuzzy neuron.
Definition 3.1.5 (Kwan and Cai's fuzzy neuron [111])
The signal x_i interacts with the weight w_i to produce the product

p_i = w_i x_i, i = 1, . . . , n.

The input information p_i is aggregated by an aggregation function h to produce the input of the neuron

z = h(w_1 x_1, w_2 x_2, . . . , w_n x_n),

the state of the neuron is computed by

s = f(z − θ),

where f is an activation function and θ is the activating threshold. And the m outputs of the neuron are computed by

y_j = g_j(s), j = 1, . . . , m,

where g_j, j = 1, . . . , m, are the m output functions of the neuron, which represent the membership functions of the input pattern x_1, x_2, . . . , x_n in all the m fuzzy sets.

Figure 3.8 Kwan and Cai's fuzzy neuron.
Definition 3.1.6 (Kwan and Cai's max fuzzy neuron [111])
The signal x_i interacts with the weight w_i to produce the product

p_i = w_i x_i, i = 1, 2.

The input information p_i is aggregated by the maximum conorm

z = max{p_1, p_2} = max{w_1 x_1, w_2 x_2}

and the j-th output of the neuron is computed by

y_j = g_j(f(z − θ)) = g_j(f(max{w_1 x_1, w_2 x_2} − θ)),

where f is an activation function.

Figure 3.9 Kwan and Cai's max fuzzy neuron.
Definition 3.1.7 (Kwan and Cai's min fuzzy neuron [111])
The signal x_i interacts with the weight w_i to produce the product

p_i = w_i x_i, i = 1, 2.

The input information p_i is aggregated by the minimum norm

z = min{p_1, p_2} = min{w_1 x_1, w_2 x_2}

and the j-th output of the neuron is computed by

y_j = g_j(f(z − θ)) = g_j(f(min{w_1 x_1, w_2 x_2} − θ)),

where f is an activation function.

Figure 3.10 Kwan and Cai's min fuzzy neuron.
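As a quick illustration, the following Python sketch (my own, not from the text) evaluates a Kwan and Cai style max fuzzy neuron; the sigmoid activation, the threshold value and the two triangular output functions g_j are illustrative assumptions.

```python
import math

def sigmoid(t):
    # activation function f
    return 1.0 / (1.0 + math.exp(-t))

def triangular(u, center, width):
    # symmetric triangular membership function, used here as an output function g_j
    return max(0.0, 1.0 - abs(u - center) / width)

def max_fuzzy_neuron(x, w, theta, output_funcs, f=sigmoid):
    # p_i = w_i * x_i, aggregated by max; state s = f(z - theta); outputs y_j = g_j(s)
    z = max(wi * xi for wi, xi in zip(w, x))
    s = f(z - theta)
    return [g(s) for g in output_funcs]

# illustrative inputs, weights and threshold in [0, 1]
x, w, theta = [0.6, 0.3], [0.8, 0.5], 0.2
g1 = lambda s: triangular(s, center=0.4, width=0.3)
g2 = lambda s: triangular(s, center=0.7, width=0.3)
print(max_fuzzy_neuron(x, w, theta, [g1, g2]))
```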
It is well-known that regular nets are universal approximators, i.e. they can
approximate any continuous function on a compact set to arbitrary accuracy.
In a discrete fuzzy expert system one inputs a discrete approximation to the
fuzzy sets and obtains a discrete approximation to the output fuzzy set.
Usually discrete fuzzy expert systems and fuzzy controllers are continuous
mappings. Thus we can conclude that given a continuous fuzzy expert sys-
tem, or continuous fuzzy controller, there is a regular net that can uniformly
approximate it to any degree of accuracy on compact sets. The problem
with this result that it is non-constructive and only approximative. The
main problem is that the theorems are existence types and do not tell you
how to build the net.
Hybrid neural nets can be used to implement fuzzy IF-THEN rules in a
constructive way. Following Buckley & Hayashi [30], and, Keller, Yager &
Tahani [99] we will show how to construct hybrid neural nets which are
computationally equivalent to fuzzy expert systems and fuzzy controllers. It
should be noted that these hybrid nets are for computation and they do not
have to learn anything.
Though hybrid neural nets can not use directly the standard error backpropa-
gation algorithm for learning, they can be trained by steepest descent methods
217
to learn the parameters of the membership functions representing the linguis-
tic terms in the rules (supposing that the system output is a dierentiable
function of these parameters).
The direct fuzzification of conventional neural networks is to extend connection weights and/or inputs and/or fuzzy desired outputs (or targets) to fuzzy numbers. This extension is summarized in Table 3.1.
Fuzzy neural net Weights Inputs Targets
Type 1 crisp fuzzy crisp
Type 2 crisp fuzzy fuzzy
Type 3 fuzzy fuzzy fuzzy
Type 4 fuzzy crisp fuzzy
Type 5 crisp crisp fuzzy
Type 6 fuzzy crisp crisp
Type 7 fuzzy fuzzy crisp
Table 3.1 Direct fuzzication of neural networks.
Fuzzy neural networks (FNN) of Type 1 are used in the classification of a fuzzy input vector into a crisp class [84, 114]. The networks of Types 2, 3 and 4 are used to implement fuzzy IF-THEN rules [93, 95].
However, the last three types in Table 3.1 are unrealistic.
In Type 5, outputs are always real numbers because both inputs and
weights are real numbers.
In Types 6 and 7, the fuzzification of weights is not necessary because the targets are real numbers.
Definition 3.1.8 A regular fuzzy neural network is a neural network with fuzzy signals and/or fuzzy weights, a sigmoidal transfer function, and all operations defined by Zadeh's extension principle.
Consider the simple regular fuzzy neural net in Figure 3.11. All signals and weights are fuzzy numbers. The two input neurons do not change the input signals, so their output is the same as their input. The signal X_i interacts with the weight W_i to produce the product

P_i = W_i X_i, i = 1, 2,

where we use the extension principle to compute P_i. The input information P_i is aggregated, by standard extended addition, to produce the input

net = P_1 + P_2 = W_1 X_1 + W_2 X_2

to the neuron. The neuron uses its transfer function f, which is a sigmoidal function, to compute the output

Y = f(net) = f(W_1 X_1 + W_2 X_2),

where f(x) = (1 + e^{-x})^{-1}, and the membership function of the output fuzzy set Y is computed by the extension principle:

Y(y) = (W_1 X_1 + W_2 X_2)(f^{-1}(y)) if 0 ≤ y ≤ 1, and 0 otherwise,

where f^{-1}(y) = ln y − ln(1 − y).
Figure 3.11 Simple regular fuzzy neural net.
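For fuzzy-number signals and weights, the extended operations above can be approximated numerically by interval arithmetic on a finite set of α-cuts. The sketch below is my own illustration, not code from the text; it assumes triangular fuzzy numbers with nonnegative supports, so that interval multiplication reduces to endpoint products, and it returns the α-cuts of Y = f(W_1 X_1 + W_2 X_2).

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def tri_cut(a, alpha):
    # alpha-cut [left, right] of a triangular fuzzy number a = (center, left_spread, right_spread)
    c, l, r = a
    return (c - (1.0 - alpha) * l, c + (1.0 - alpha) * r)

def fuzzy_neuron_output_cuts(X, W, levels):
    # alpha-cuts of Y = f(W1*X1 + W2*X2), assuming all supports are nonnegative,
    # so interval endpoints of products and sums come from endpoints of the operands
    cuts = {}
    for alpha in levels:
        lo = hi = 0.0
        for Xi, Wi in zip(X, W):
            xl, xr = tri_cut(Xi, alpha)
            wl, wr = tri_cut(Wi, alpha)
            lo += wl * xl
            hi += wr * xr
        # f is increasing, so it maps interval endpoints to interval endpoints
        cuts[alpha] = (sigmoid(lo), sigmoid(hi))
    return cuts

# illustrative triangular fuzzy signals and weights: (center, left spread, right spread)
X = [(0.5, 0.2, 0.2), (1.0, 0.3, 0.3)]
W = [(0.8, 0.1, 0.1), (0.6, 0.2, 0.2)]
print(fuzzy_neuron_output_cuts(X, W, levels=[0.0, 0.5, 1.0]))
```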
Buckley and Hayashi [28] showed that regular fuzzy neural nets are monotonic, i.e. if X_1 ⊂ X'_1 and X_2 ⊂ X'_2 then

Y = f(W_1 X_1 + W_2 X_2) ⊂ Y' = f(W_1 X'_1 + W_2 X'_2),

where f is the sigmoid transfer function and all the operations are defined by Zadeh's extension principle.

This means that fuzzy neural nets based on the extension principle might be universal approximators only for continuous monotonic functions. If a fuzzy function is not monotonic, there is no hope of approximating it with a fuzzy neural net which uses the extension principle.

The following example shows a continuous fuzzy function which is non-monotonic. Therefore we must abandon the extension principle if we are to obtain a universal approximator.
Example 3.1.1 Let f : F → F be a fuzzy function defined by

f(A) = (D(A, 0̃), 1),

where A is a fuzzy number, 0̃ is a fuzzy point with center zero, D(A, 0̃) denotes the Hausdorff distance between A and 0̃, and (D(A, 0̃), 1) denotes a symmetric triangular fuzzy number with center D(A, 0̃) and width one.

We first show that f is continuous in the metric D. Let A_n ∈ F be a sequence of fuzzy numbers such that D(A_n, A) → 0 as n → ∞. Using the definition of the metric D, we have

D(f(A_n), f(A)) = D((D(A_n, 0̃), 1), (D(A, 0̃), 1)) = |D(A_n, 0̃) − D(A, 0̃)| ≤ D(A_n, A) + D(0̃, 0̃) = D(A_n, A),

which verifies the continuity of f in the metric D.
Figure 3.12 A and f(A).
Let A, A' ∈ F be such that A ⊂ A'. Then f(A) = (D(A, 0̃), 1) and f(A') = (D(A', 0̃), 1) are both symmetric triangular fuzzy numbers with different centers, so neither f(A) ⊂ f(A') nor f(A') ⊂ f(A) can occur.
Definition 3.1.9 A hybrid fuzzy neural network is a neural network with fuzzy signals and/or fuzzy weights. However, (i) we can combine X_i and W_i using a t-norm, t-conorm, or some other continuous operation; (ii) we can aggregate P_1 and P_2 with a t-norm, t-conorm, or any other continuous function; (iii) f can be any function from input to output.

Buckley and Hayashi [28] showed that hybrid fuzzy neural networks are universal approximators, i.e. they can approximate any continuous fuzzy function on a compact domain.
Figure 3.13 Simple hybrid fuzzy neural net for the compositional rule of inference.
Buckley, Hayashi and Czogala [22] showed that any continuous feedforward neural net can be approximated to any degree of accuracy by a discrete fuzzy expert system.

Assume that all the ξ_j in the input signals and all the y_i in the output from the neural net belong to [0, 1]. Therefore o = G(ξ), with ξ ∈ [0, 1]^n, o ∈ [0, 1]^m and G continuous, represents the net. Given any input (ξ) – output (o) pair for the net, we now show how to construct the corresponding rule in the fuzzy expert system. Define the fuzzy set A by A(j) = ξ_j, j = 1, . . . , n, and zero otherwise.

Figure 3.14 Definition of A.
Also let C(i) = o_i, i = 1, . . . , m, and zero otherwise.

Figure 3.15 Definition of C.

Then the rule obtained from the pair (ξ, o) is

ℜ(ξ): If x is A then z is C.

That is, in rule construction ξ is identified with A and o with C.

Theorem 3.1.1 [22] Given ε > 0, there exists a fuzzy expert system so that

‖F(u) − G(u)‖ ≤ ε for all u ∈ [0, 1]^n,

where F is the input-output function of the fuzzy expert system ℜ = {ℜ(ξ)}.
3.2 Hybrid neural nets
Drawing heavily on Buckley and Hayashi [23], we show how to construct hybrid neural nets that are computationally identical to discrete fuzzy expert systems and to the Sugeno and Expert system elementary fuzzy controllers. Hybrid neural nets employ more general operations (t-norms, t-conorms, etc.) in combining signals and weights for input to a neuron.

Consider a fuzzy expert system with one block of rules

ℜ_i : If x is A_i then y is B_i, 1 ≤ i ≤ n.

For simplicity we have only one clause in the antecedent, but our results easily extend to many clauses in the antecedent.

Given some data on x, say A', the fuzzy expert system comes up with its final conclusion 'y is B''. In computer applications we usually use discrete versions of the continuous fuzzy sets. Let [α_1, α_2] contain the support of all the A_i, plus the support of all the A' we might have as input to the system. Also, let [β_1, β_2] contain the support of all the B_i, plus the support of all the B' we can obtain as outputs from the system. Let M ≥ 2 and N be positive integers. Let

x_j = α_1 + (j − 1)(α_2 − α_1)/(M − 1)

for 1 ≤ j ≤ M, and

y_i = β_1 + (i − 1)(β_2 − β_1)/(N − 1)

for 1 ≤ i ≤ N. The discrete version of the system is to input

a' = (A'(x_1), . . . , A'(x_M))

and obtain output

b' = (B'(y_1), . . . , B'(y_N)).
Figure 3.16 A discrete version of fuzzy expert system.
We now need to describe the internal workings of the fuzzy expert system. There are two cases:

Case 1. Combine all the rules into one rule which is used to obtain b' from a'.

We first construct a fuzzy relation R_k to model rule

ℜ_k : If x is A_k, then y is B_k, 1 ≤ k ≤ n.

This is called modeling the implication, and there are many ways to do it. One takes the data A_k(x_i) and B_k(y_j) to obtain R_k(x_i, y_j) for each rule. One way to do this is

R_k(x_i, y_j) = min{A_k(x_i), B_k(y_j)}.

Then we combine all the R_k into one R, which may be performed in many different ways, and one procedure would be to intersect the R_k to get R. In any case, let

r_ij = R(x_i, y_j),

the value of R at the pair (x_i, y_j).
The method of computing b' from a' is called the compositional rule of inference. Let

λ_ij = a'_i ∗ r_ij,

where a'_i = A'(x_i) and ∗ is some method (usually a t-norm) of combining the data into λ_ij. Then set b' = (b'_1, . . . , b'_N) with

b'_j = Agg(λ_1j, . . . , λ_Mj), 1 ≤ j ≤ N,

for Agg a method of aggregating the information.

A hybrid neural net computationally the same as this fuzzy expert system is shown in Figure 3.17.

Figure 3.17 Combine the rules first.
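A small Python sketch of Case 1 (my own illustration, not from the text): the rules are modeled with Mamdani's min, the relations R_k are aggregated here by pointwise maximum (one of the possibilities; the text also mentions intersecting the R_k), and the output is computed by the compositional rule of inference with the min t-norm for ∗ and max for Agg.

```python
def build_relation(rules, xs, ys):
    # rules: list of (A_k, B_k) membership functions; R(x_i, y_j) = max_k min(A_k(x_i), B_k(y_j))
    R = [[0.0] * len(ys) for _ in xs]
    for A, B in rules:
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                R[i][j] = max(R[i][j], min(A(x), B(y)))
    return R

def compositional_inference(a_prime, R):
    # b'_j = max_i min(a'_i, r_ij)   (sup-min composition)
    N = len(R[0])
    return [max(min(a_prime[i], R[i][j]) for i in range(len(a_prime))) for j in range(N)]

# illustrative two-rule base on [0, 1]: "x small -> y small", "x big -> y big"
small = lambda u: max(0.0, 1.0 - 2.0 * u)
big = lambda u: max(0.0, 2.0 * u - 1.0)
xs = [i / 4 for i in range(5)]          # discretized input domain
ys = [j / 4 for j in range(5)]          # discretized output domain
R = build_relation([(small, small), (big, big)], xs, ys)
a_prime = [small(x) for x in xs]        # input "x is small"
print(compositional_inference(a_prime, R))
```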
We first combine the signals (a'_i) and the weights (r_i1), and then aggregate the data

a'_1 ∗ r_11, . . . , a'_M ∗ r_M1

using Agg, so the input to the neuron is b'_1. The transfer function is the identity function f(t) = t, t ∈ [0, 1], so that the output is b'_1. Similarly for all neurons, which implies the net gives b' from a'. The hybrid neural net in Figure 3.17 provides fast parallel computation for a discrete fuzzy expert system. However, it can get too large to be useful. For example, let [α_1, α_2] = [β_1, β_2] = [−10, 10] with discrete increments of 0.01, so that M = N ≈ 2000. Then there will be about 2000 input neurons, 2000² ≈ 4 million connections from the input nodes to the output nodes, and about 2000 output nodes.
Case 2. Fire the rules individually, given a', and combine their results into b'.
We compose a' with each R_k, producing the intermediate results

b'_k = (b'_k1, . . . , b'_kN),

and then combine all the b'_k into b'. One takes the data A_k(x_i) and B_k(y_j) to obtain R_k(x_i, y_j) for each rule. One way to do this is

R_k(x_i, y_j) = min{A_k(x_i), B_k(y_j)}.

In any case, let R_k(x_i, y_j) = r_kij. Then we have

λ_kij = a'_i ∗ r_kij

and

b'_kj = Agg(λ_k1j, . . . , λ_kMj).

The combination of the b'_k is done componentwise, so let

b'_j = Agg_1(b'_1j, . . . , b'_nj), 1 ≤ j ≤ N,

for some other aggregating operator Agg_1. A hybrid neural net computationally equal to this type of fuzzy expert system is shown in Figure 3.18. For simplicity we have drawn the figure for M = N = 2.
Figure 3.18 Fire the rules rst.
In the hidden layer the top two nodes operate as the first rule ℜ_1, and the bottom two nodes model the second rule ℜ_2. In the two output nodes: the top node, in which all weights are one, aggregates b'_11 and b'_21 using Agg_1 to produce b'_1; the bottom node, whose weights are also one, computes b'_2. Therefore, the hybrid neural net computes the same output b' from a' as the fuzzy expert system.
As in the previous case, this hybrid net quickly gets too big to be practical at this time. Suppose there are 10 rules and

[α_1, α_2] = [β_1, β_2] = [−10, 10]

with discrete increments of 0.01, so that M = N ≈ 2000. Then there will be: about 2000 input neurons, roughly 40 million (10 · 2000²) connections from the input nodes to the hidden layer, 20,000 neurons in the hidden layer, 10 · 2000 connections from the hidden layer to the output neurons, and about 2000 output nodes. And this hybrid net has only one clause in each rule's antecedent.
Buckley [19] identifies three basic types of elementary fuzzy controllers: Sugeno, Expert system, and Mamdani. We show how to build a hybrid neural net that is computationally identical to the Sugeno and Expert system fuzzy controllers. Actually, depending on how one computes the defuzzifier, the hybrid neural net could be only approximately the same as the Mamdani controller.
Sugeno control rules are of the type

ℜ_i : If e = A_i and Δe = B_i then z_i = α_i e + β_i (Δe) + γ_i,

where A_i, B_i, α_i, β_i and γ_i are all given, e is the error, and Δe is the change in error.

The input to the controller consists of values for e and Δe, and one first evaluates each rule's antecedent as

δ_i = T(A_i(e), B_i(Δe)),

where T is some t-norm. Next, we evaluate each conclusion given e and Δe as

z_i = α_i e + β_i (Δe) + γ_i.

The output of the controller is

z_0 = (δ_1 z_1 + · · · + δ_n z_n)/(δ_1 + · · · + δ_n).
A hybrid neural net computationally equivalent to the Sugeno controller is displayed in Figure 3.19. For simplicity, we have assumed only two rules. First consider the first hidden layer. The inputs to the top two neurons are α_1 e + β_1 (Δe) and α_2 e + β_2 (Δe). The transfer function in these two neurons is

f(x) = x + γ_i, i = 1, 2,

so their outputs are the z_i. All other neurons in Figure 3.19 have the linear activation function f(x) = x. The inputs to the bottom two nodes are

T(A_1(e), B_1(Δe)) and T(A_2(e), B_2(Δe)).

In the rest of the net all weights are equal to one. The output neuron produces z_0 because we aggregate the two input signals using division, (δ_1 z_1 + δ_2 z_2)/(δ_1 + δ_2).
Figure 3.19 Hybrid neural net as Sugeno controller.
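A compact Python sketch of the Sugeno controller computed by this net (my own illustration; the triangular membership functions, the min t-norm and the rule coefficients are illustrative assumptions, and the firing levels are written as delta as in the reconstruction above).

```python
def tri(u, center, width):
    # triangular membership function on the error / change-of-error domain
    return max(0.0, 1.0 - abs(u - center) / width)

def sugeno_controller(e, de, rules, t_norm=min):
    # each rule: (A, B, alpha, beta, gamma) with consequent z = alpha*e + beta*de + gamma
    num, den = 0.0, 0.0
    for A, B, alpha, beta, gamma in rules:
        delta = t_norm(A(e), B(de))          # firing level of the rule
        z = alpha * e + beta * de + gamma    # individual rule output
        num += delta * z
        den += delta
    return num / den if den > 0 else 0.0     # weighted-average defuzzification

# two illustrative rules over e, de in [-2, 2]
rules = [
    (lambda u: tri(u, -1.0, 1.0), lambda v: tri(v, -1.0, 1.0),  1.0,  0.5,  0.2),
    (lambda u: tri(u,  1.0, 1.0), lambda v: tri(v,  1.0, 1.0), -1.0, -0.5, -0.2),
]
print(sugeno_controller(-0.4, -0.2, rules))
```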
The fuzzy controller based on a fuzzy expert system was introduced by Buckley, Hayashi and Czogala in [22]. The fuzzy control rules are

ℜ_i : If e = A_i and Δe = B_i then the control action is C_i,

where the C_i are triangular shaped fuzzy numbers with centers c_i, e is the error, and Δe is the change in error. Given input values e and Δe, each rule is evaluated,
producing the δ_i given by

δ_i = T(A_i(e), B_i(Δe)).

Then δ_i is assigned to the rule's consequence C_i, and the controller takes all the data (δ_i, C_i) and defuzzifies to the output z_0. Let

z_0 = (δ_1 c_1 + · · · + δ_n c_n)/(δ_1 + · · · + δ_n).

A hybrid neural net computationally identical to this controller is shown in Figure 3.20 (again, for simplicity we assume only two control rules).
Figure 3.20 Hybrid neural net as a fuzzy expert system controller.
The operations in this hybrid net are similar to those in Figure 3.19.
As an example, we show how to construct a hybrid neural net (called an adaptive network by Jang [97]) which is functionally equivalent to Sugeno's inference mechanism.
Sugeno and Takagi use the following rules [93]:

ℜ_1 : if x is A_1 and y is B_1 then z_1 = a_1 x + b_1 y,
ℜ_2 : if x is A_2 and y is B_2 then z_2 = a_2 x + b_2 y.

The firing levels of the rules are computed by

α_1 = A_1(x_0) ∧ B_1(y_0), α_2 = A_2(x_0) ∧ B_2(y_0),

where the logical and can be modelled by any continuous t-norm, e.g. the product

α_1 = A_1(x_0) B_1(y_0), α_2 = A_2(x_0) B_2(y_0);

then the individual rule outputs are derived from the relationships

z_1 = a_1 x_0 + b_1 y_0, z_2 = a_2 x_0 + b_2 y_0,

and the crisp control action is expressed as

z_0 = (α_1 z_1 + α_2 z_2)/(α_1 + α_2) = β_1 z_1 + β_2 z_2,

where β_1 and β_2 are the normalized values of α_1 and α_2 with respect to the sum (α_1 + α_2), i.e.

β_1 = α_1/(α_1 + α_2), β_2 = α_2/(α_1 + α_2).

Figure 3.21 Sugeno's inference mechanism.
A hybrid neural net computationally identical to this type of reasoning is shown in Figure 3.22.
230
A1
A2
B1
B2
A1(x0)
A2(x0)
B1(y0)
B2(y0)
1
2
T
T
N
N
1
2

1z1
2z2
z
0
Layer 1 Layer 2 Layer 3 Layer 4 Layer 5
x0
y0
x0
x0 y0
y0
Figure 3.22 ANFIS architecture for Sugeno's reasoning method.
For simplicity, we have assumed only two rules, and two linguistic values for
each input variable.
Layer 1 The output of a node in this layer is the degree to which the given input satisfies the linguistic label associated with the node. Usually we choose bell-shaped membership functions

A_i(u) = exp[−(1/2)((u − a_i1)/b_i1)²],

B_i(v) = exp[−(1/2)((v − a_i2)/b_i2)²],

to represent the linguistic terms, where

{a_i1, a_i2, b_i1, b_i2}

is the parameter set. As the values of these parameters change, the bell-shaped functions vary accordingly, thus exhibiting various forms of membership functions for the linguistic labels A_i and B_i. In fact, any continuous membership functions, such as trapezoidal and triangular-shaped ones, are also qualified candidates for node functions in this layer. Parameters in this layer are referred to as premise parameters.
Layer 2 Each node computes the firing strength of the associated rule. The output of the top neuron is

α_1 = A_1(x_0) B_1(y_0) = A_1(x_0) ∧ B_1(y_0),

and the output of the bottom neuron is

α_2 = A_2(x_0) B_2(y_0) = A_2(x_0) ∧ B_2(y_0).

Both nodes in this layer are labeled by T, because we can choose other t-norms for modeling the logical and operator. The nodes of this layer are called rule nodes.
Layer 3 Every node in this layer is labeled by N to indicate the normalization of the firing levels. The output of the top neuron is the normalized (with respect to the sum of firing levels) firing level of the first rule,

β_1 = α_1/(α_1 + α_2),

and the output of the bottom neuron is the normalized firing level of the second rule,

β_2 = α_2/(α_1 + α_2).
Layer 4 The output of the top neuron is the product of the normalized firing level and the individual rule output of the first rule,

β_1 z_1 = β_1 (a_1 x_0 + b_1 y_0),

and the output of the bottom neuron is the product of the normalized firing level and the individual rule output of the second rule,

β_2 z_2 = β_2 (a_2 x_0 + b_2 y_0).

Layer 5 The single node in this layer computes the overall system output as the sum of all incoming signals, i.e.

z_0 = β_1 z_1 + β_2 z_2.
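The five layers can be traced in a few lines of Python. This forward-pass sketch is my own illustration (the Gaussian premise parameters and consequent coefficients are made-up values), following the layer equations above with the product t-norm.

```python
import math

def bell(u, a, b):
    # Layer 1: bell-shaped membership, exp(-1/2 ((u - a)/b)^2)
    return math.exp(-0.5 * ((u - a) / b) ** 2)

def anfis_forward(x0, y0, premise, consequent):
    # premise: [(a_i1, b_i1, a_i2, b_i2), ...] for each rule
    # consequent: [(a_i, b_i), ...] with z_i = a_i*x0 + b_i*y0
    alphas, zs = [], []
    for (a1, b1, a2, b2), (ca, cb) in zip(premise, consequent):
        alpha = bell(x0, a1, b1) * bell(y0, a2, b2)   # Layer 2: firing strength
        alphas.append(alpha)
        zs.append(ca * x0 + cb * y0)                  # individual rule output
    s = sum(alphas)
    betas = [a / s for a in alphas]                   # Layer 3: normalization
    weighted = [b * z for b, z in zip(betas, zs)]     # Layer 4
    return sum(weighted)                              # Layer 5: overall output z_0

premise = [(0.2, 0.3, 0.2, 0.3), (0.8, 0.3, 0.8, 0.3)]   # illustrative premise parameters
consequent = [(1.0, 1.0), (2.0, -1.0)]                   # illustrative consequent coefficients
print(anfis_forward(0.3, 0.6, premise, consequent))
```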
If a crisp training set {(x_k, y_k), k = 1, . . . , K} is given, then the parameters of the hybrid neural net (which determine the shape of the membership functions of the premises) can be learned by descent-type methods. This architecture and learning procedure is called ANFIS (adaptive-network-based fuzzy inference system) by Jang [97].

The error function for pattern k can be given by

E_k = (y_k − o_k)²,

where y_k is the desired output and o_k is the computed output of the hybrid neural net.
If the membership functions are of triangular form,

A_i(u) =
  1 − (a_i1 − u)/a_i2   if a_i1 − a_i2 ≤ u ≤ a_i1,
  1 − (u − a_i1)/a_i3   if a_i1 ≤ u ≤ a_i1 + a_i3,
  0                     otherwise,

B_i(v) =
  1 − (b_i1 − v)/b_i2   if b_i1 − b_i2 ≤ v ≤ b_i1,
  1 − (v − b_i1)/b_i3   if b_i1 ≤ v ≤ b_i1 + b_i3,
  0                     otherwise,

then we can start the learning process from the initial values (see Figure 3.23).

Generally, the initial values of the parameters are set in such a way that the membership functions along each axis satisfy ε-completeness, normality and convexity.
Figure 3.23 Two-input ANFIS with four fuzzy rules.
Nauck, Klawonn and Kruse [130] initialize the network with all rules that can be constructed out of all combinations of input and output membership functions. During the learning process all hidden nodes (rule nodes) that are not used or that produce counterproductive results are removed from the network. It should be noted, however, that these tuning methods have a weak point: the convergence of the tuning depends on the initial conditions.
Exercise 3.2.1 Construct a hybrid neural net implementing Tsukamoto's reasoning mechanism with two input variables, two linguistic values for each input variable and two fuzzy IF-THEN rules.

Exercise 3.2.2 Construct a hybrid neural net implementing Larsen's reasoning mechanism with two input variables, two linguistic values for each input variable and two fuzzy IF-THEN rules.

Exercise 3.2.3 Construct a hybrid neural net implementing Mamdani's reasoning mechanism with two input variables, two linguistic values for each input variable and two fuzzy IF-THEN rules.
3.2.1 Computation of fuzzy logic inferences by hybrid
neural net
Keller, Yager and Tahani [99] proposed the following hybrid neural network architecture for the computation of fuzzy logic inferences. Each basic network structure implements a single rule in the rule base of the form

If x_1 is A_1 and . . . and x_n is A_n then y is B.

The fuzzy sets which characterize the facts

x_1 is A'_1 and . . . and x_n is A'_n
are presented to the input layer of the network.
Let [M_1, M_2] contain the support of all the A_i, plus the support of all the A' we might have as input to the system. Also, let [N_1, N_2] contain the support of B, plus the support of all the B' we can obtain as outputs from the system. Let M ≥ 2 and N be positive integers. Let

ν_j = M_1 + (j − 1)(M_2 − M_1)/(M − 1),
τ_i = N_1 + (i − 1)(N_2 − N_1)/(N − 1)

for 1 ≤ i ≤ N and 1 ≤ j ≤ M. The fuzzy set A'_i is denoted by

A'_i = {a'_i1, . . . , a'_iM},

these values being the membership grades of A'_i at the sampled points {ν_1, . . . , ν_M} over its domain of discourse.
There are two variations of the activities in the antecedent clause checking layer. In both cases, each antecedent clause of the rule determines the weights. For the first variation, the weights w_ij are the fuzzy set complement of the antecedent clause, i.e., for the i-th clause

w_ij = 1 − a_ij.

The weights are chosen this way because the first layer of the hybrid neural net will generate a measure of disagreement between the input possibility distribution and the antecedent clause distribution. This is done so that, as the input moves away from the antecedent, the amount of disagreement will rise to one. Hence, if each node calculates the similarity between the input and the complement of the antecedent, then we will produce such a local measure of disagreement. The next layer combines this evidence.

The purpose of the node is to determine the amount of disagreement present between the antecedent clause and the corresponding input data. If the combination at the k-th node is denoted by d_k, then

d_k = max_j {w_kj ∗ a'_kj} = max_j {(1 − a_kj) ∗ a'_kj},

where ∗ corresponds to the operation of multiplication or minimum:

d_k^1 = max_j {(1 − a_kj) a'_kj}

or

d_k^2 = max_j min{(1 − a_kj), a'_kj}.

The second form for the antecedent clause checking layer uses the fuzzy sets A_k themselves as the weights, i.e. in this case

d_k^3 = max_j |a_kj − a'_kj|,

the sup-norm difference of the two functions A_k and A'_k.
We set the activation function to the identity, that is, the output of the node is the value obtained from the combination of inputs and weights.

The disagreement values for each node are combined at the next level to produce an overall level of disagreement between the antecedent clauses and the input data. The disagreement values provide inhibiting signals for the firing of the rule. The weights β_i on these links correspond to the importance of the various antecedent clauses. The combination node then computes

1 − t = 1 − max_i {β_i d_i}.

Another option is to compute t as the weighted sum of the β_i's and d_i's.
237
u1
d1
dn
n
1
b'1
b' N
a'11
w11
wn1
Clause combination layer
Antecedent clause checking layer
1 - max {i di }
u
N
a'
1M
a'
nM a'
n1
Figure 3.24 Hybrid neural network configuration for fuzzy logic inference.
The weights u_i on the output nodes carry the information from the consequent of the rule. If the proposition "y is B" is characterized by the discrete possibility distribution

B = {b_1, . . . , b_N},

where b_i = B(τ_i) for all τ_i in the domain of B, then

u_i = 1 − b_i.

Each output node forms the value

b'_i = 1 − u_i(1 − t) = 1 − (1 − b_i)(1 − t) = b_i + t − b_i t.

From this equation it is clear that if t = 0, then the rule fires with conclusion "y is B" exactly. On the other hand, if the total disagreement is one, then the conclusion of firing the rule is a possibility distribution composed entirely of 1's, hence the conclusion is "y is unknown".
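A minimal Python sketch of one such rule network (my own illustration, not code from the paper): it uses the d^1 clause-checking form, max combination with clause importances, and the output formula b'_i = b_i + t − b_i t; the importance weights and sampled membership values are made-up.

```python
def fire_rule(antecedents, inputs, importances, consequent):
    # antecedents: sampled membership vectors a_kj for each clause A_k
    # inputs: sampled membership vectors a'_kj for the facts A'_k
    # d_k = max_j (1 - a_kj) * a'_kj   (disagreement of clause k, form d^1)
    d = [max((1.0 - a) * ap for a, ap in zip(A, Ap))
         for A, Ap in zip(antecedents, inputs)]
    t = max(imp * dk for imp, dk in zip(importances, d))   # overall disagreement
    # output possibility distribution: b'_i = b_i + t - b_i * t
    return [b + t - b * t for b in consequent]

# single-clause example with a crisp antecedent: A' = A gives t = 0, so the rule fires with exactly B
A  = [0.0, 0.0, 1.0, 1.0, 0.0]
B  = [0.0, 0.3, 1.0, 0.3, 0.0]
print(fire_rule([A], [A], [1.0], B))        # -> B itself
# a conflicting input pushes every b'_i toward 1 ("y is unknown")
Ap = [1.0, 0.0, 0.0, 0.0, 0.0]
print(fire_rule([A], [Ap], [1.0], B))
```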
This network extends classical (crisp) logic to fuzzy logic, as shown by the following theorems. For simplicity, in each theorem we consider a single antecedent clause rule of the form

If x is A then y is B.

Suppose A is a crisp subset of its domain of discourse. Let us denote by χ_A the characteristic function of A, i.e.

χ_A(u) = 1 if u ∈ A, and 0 otherwise,

and let A be represented by {a_1, . . . , a_M}, where a_i = χ_A(ν_i).
Figure 3.25 Representation of a crisp subset A.
Theorem 3.2.1 [99] In the single antecedent clause rule, suppose A is a crisp subset of its domain of discourse. Then the fuzzy logic inference network produces the standard modus ponens result, i.e. if the input "x is A'" is such that A' = A, then the network results in "y is B".

Proof. Let A be represented by {a_1, . . . , a_M}, where a_i = χ_A(ν_i). Then from A' = A it follows that

d^1 = max_j {(1 − a_j) a'_j} = max_j {(1 − a_j) a_j} = 0,

since wherever a_j = 1 then 1 − a_j = 0, and vice versa. Similarly,

d^2 = max_j {(1 − a_j) ∧ a'_j} = max_j {(1 − a_j) ∧ a_j} = 0.

Finally, for d^3 we have

d^3 = max_j |a_j − a'_j| = max_j |a_j − a_j| = 0.

Hence, at the combination node, t = 0, and so the output layer will produce

b'_i = b_i + t − b_i t = b_i + t(1 − b_i) = b_i.
Theorem 3.2.2 [99] Consider the inference network which uses d^1 or d^2 for clause checking. Suppose that A and A' are proper crisp subsets of their domain of discourse, and let co(A) = {x | x ∉ A} denote the complement of A.

(i) If co(A) ∩ A' ≠ ∅, then the network produces the result "y is unknown", i.e. a possibility distribution for y equal to 1 everywhere.

(ii) If A' ⊂ A (i.e. A' is more specific than A), then the result is "y is B".

Proof. (i) Since co(A) ∩ A' ≠ ∅, there exists a point ν_i in the domain such that

χ_co(A)(ν_i) = χ_A'(ν_i) = 1.

In other words, the weight w_i = 1 − a_i = 1 and a'_i = 1. Hence

d^k = max_j {(1 − a_j) ∗ a'_j} = 1

for k = 1, 2. So we have t = 1 at the clause combination node, and

b'_i = b_i + t − b_i t = b_i + (1 − b_i) = 1.

(ii) Now suppose that A' ⊂ A. Then A' ∩ co(A) = ∅, and so d^1 = d^2 = 0, producing the result "y is B".
Theorem 3.2.3 [99] Consider the inference network which uses d^3 for clause checking. Suppose that A and A' are proper crisp subsets of their domain of discourse such that A' ≠ A. Then the network result is "y is unknown".

Proof. Since A' ≠ A, there exists a point ν_i such that

a_i = A(ν_i) ≠ A'(ν_i) = a'_i.

Then d^3 = max_j {|a_j − a'_j|} = 1, which ensures that the result is "y is unknown".
Theorem 3.2.4 [99] (monotonicity theorem) Consider the single clause inference network using d^1 or d^2 for clause checking. Suppose that A, A' and A'' are three fuzzy sets such that

A'' ⊂ A' ⊂ A.

Let the results of inference with inputs "x is A'" and "x is A''" be "y is B'" and "y is B''", respectively. Then

B'' ⊂ B',

that is, B'' is closer to B than B'.

Proof. For each ν_j in the domain of discourse of A,

0 ≤ a''_j = A''(ν_j) ≤ a'_j = A'(ν_j) ≤ a_j = A(ν_j).

Therefore, at the clause checking node,

(d^k)'' = max_j {(1 − a_j) ∗ a''_j} ≤ max_j {(1 − a_j) ∗ a'_j} = (d^k)'

for k = 1, 2. Hence, t'' ≤ t'. Finally,

b''_i = b_i + t'' − b_i t'' ≤ b_i + t' − b_i t' = b'_i.

Clearly, from the above equations, both b'_i and b''_i are larger than or equal to b_i. This completes the proof.
Intuitively, this theorem states that as the input becomes more specic, the
output converges to the consequent.
Having to discretize all the fuzzy sets in a fuzzy expert system can lead to an enormous hybrid neural net, as we have seen above. It is the use of a hybrid neural net that dictates the discretization, because it processes real numbers. We can obtain much smaller networks if we use fuzzy neural nets.

Drawing heavily on Buckley and Hayashi [30], we represent fuzzy expert systems as hybrid fuzzy neural networks. We recall that a hybrid fuzzy neural network is a neural network with fuzzy signals and/or fuzzy weights. However,
we can combine X_i and W_i using a t-norm, t-conorm, or some other continuous operation,
we can aggregate P_1 and P_2 with a t-norm, t-conorm, or any other continuous function,
f can be any function from input to output.
Suppose the fuzzy expert system has only one block of rules of the form

ℜ_i : If x = A_i then y is B_i, 1 ≤ i ≤ n.

The input to the system is "x is A'", with final conclusion "y is B'". We again consider two cases:
Case 1 Combine all rules into one rule and fire.

We first obtain a fuzzy relation R_k to model the implication in each rule, 1 ≤ k ≤ n. Let R_k(x, y) be some function of A_k(x) and B_k(y), for x in [M_1, M_2] and y in [N_1, N_2]. For example, this function can be Mamdani's min,

R_k(x, y) = min{A_k(x), B_k(y)}.

Then we combine all the R_k, 1 ≤ k ≤ n, into one fuzzy relation R on [M_1, M_2] × [N_1, N_2]. For example, R(x, y) could be the maximum of the R_k(x, y), 1 ≤ k ≤ n:

R(x, y) = max_k R_k(x, y).

Given A', we compute the output B' by the compositional rule of inference as

B' = A' ∘ R.

For example, one could have

B'(y) = sup_{M_1 ≤ x ≤ M_2} min{A'(x), R(x, y)}

for each y ∈ [N_1, N_2]. A hybrid fuzzy neural net, computationally the same as this fuzzy expert system, is shown in Figure 3.26.
Figure 3.26 Combine the rules.
There is only one neuron, with input weight equal to one. The transfer function (which maps fuzzy sets into fuzzy sets) inside the neuron is the fuzzy relation R. So we have input A' to the neuron, with output

B' = A' ∘ R.

We have obtained the simplest possible hybrid fuzzy neural net for the fuzzy expert system. The major drawback is that there is no hardware available to implement fuzzy neural nets.
Case 2 Fire the rules individually and then combine their results.

We first compose A' with each R_k to get B'_k, the conclusion of the k-th rule, and then combine all the B'_k into one final conclusion B'. Let B'_k be defined by the compositional rule of inference as

B'_k = A' ∘ R_k

for all y ∈ [N_1, N_2]. Then

B'(y) = Agg(B'_1(y), . . . , B'_n(y))

for some aggregation operator Agg.

A hybrid fuzzy neural net, the same as this fuzzy expert system, is displayed in Figure 3.27.
Figure 3.27 Fire rules rst.
All the weights are equal to one, and the fuzzy relations R_k are the transfer functions of the neurons in the hidden layer. The input signals to the output neuron are the B'_k, which are aggregated by Agg. The transfer function in the output neuron is the identity (no change) function.
3.3 Trainable neural nets for fuzzy IF-THEN
rules
In this section we present some methods for implementing fuzzy IF-THEN rules by trainable neural network architectures. Consider a block of fuzzy rules

ℜ_i : If x is A_i, then y is B_i,    (3.1)

where A_i and B_i are fuzzy numbers, i = 1, . . . , n.

Each rule in (3.1) can be interpreted as a training pattern for a multilayer neural network, where the antecedent part of the rule is the input and the consequence part of the rule is the desired output of the neural net.

The training set derived from (3.1) can be written in the form

{(A_1, B_1), . . . , (A_n, B_n)}.
If we are given a two-input-single-output (MISO) fuzzy system of the form

ℜ_i : If x is A_i and y is B_i, then z is C_i,

where A_i, B_i and C_i are fuzzy numbers, i = 1, . . . , n, then the input/output training pairs for the neural net are

{(A_i, B_i), C_i}, 1 ≤ i ≤ n.
If we are given a two-input-two-output (MIMO) fuzzy system of the form

ℜ_i : If x is A_i and y is B_i, then r is C_i and s is D_i,

where A_i, B_i, C_i and D_i are fuzzy numbers, i = 1, . . . , n, then the input/output training pairs for the neural net are

{(A_i, B_i), (C_i, D_i)}, 1 ≤ i ≤ n.
There are two main approaches to implementing fuzzy IF-THEN rules (3.1) by a standard error backpropagation network.

In the method proposed by Umano and Ezawa [166], a fuzzy set is represented by a finite number of its membership values.
Let [α_1, α_2] contain the support of all the A_i, plus the support of all the A' we might have as input to the system. Also, let [β_1, β_2] contain the support of all the B_i, plus the support of all the B' we can obtain as outputs from the system, i = 1, . . . , n. Let M ≥ 2 and N be positive integers. Let

x_j = α_1 + (j − 1)(α_2 − α_1)/(N − 1),
y_i = β_1 + (i − 1)(β_2 − β_1)/(M − 1)

for 1 ≤ i ≤ M and 1 ≤ j ≤ N.

A discrete version of the continuous training set consists of the input/output pairs

{(A_i(x_1), . . . , A_i(x_N)), (B_i(y_1), . . . , B_i(y_M))}

for i = 1, . . . , n.
Figure 3.28 Representation of a fuzzy number by membership values.
Using the notations a_ij = A_i(x_j) and b_ij = B_i(y_j), our fuzzy neural network turns into an N-input and M-output crisp network, which can be trained by the generalized delta rule.
Figure 3.29 A network trained on membership values of fuzzy numbers.
Example 3.3.1 Assume our fuzzy rule base consists of three rules

ℜ_1 : If x is small then y is negative,
ℜ_2 : If x is medium then y is about zero,
ℜ_3 : If x is big then y is positive,

where the membership functions of the fuzzy terms are defined by

μ_small(u) = 1 − 2u if 0 ≤ u ≤ 1/2, and 0 otherwise,

μ_big(u) = 2u − 1 if 1/2 ≤ u ≤ 1, and 0 otherwise,

μ_medium(u) = 1 − 2|u − 1/2| if 0 ≤ u ≤ 1, and 0 otherwise.

Figure 3.30 Membership functions for small, medium and big.

μ_negative(u) = −u if −1 ≤ u ≤ 0, and 0 otherwise,

μ_about zero(u) = 1 − 2|u| if −1/2 ≤ u ≤ 1/2, and 0 otherwise,

μ_positive(u) = u if 0 ≤ u ≤ 1, and 0 otherwise.

Figure 3.31 Membership functions for negative, about zero and positive.
The training set derived from this rule base can be written in the form

{(small, negative), (medium, about zero), (big, positive)}.

Let [0, 1] contain the support of all the fuzzy sets we might have as input to the system. Also, let [−1, 1] contain the support of all the fuzzy sets we can obtain as outputs from the system. Let M = N = 5 and

x_j = (j − 1)/4

for 1 ≤ j ≤ 5, and

y_i = −1 + (i − 1)2/4 = −1 + (i − 1)/2 = −3/2 + i/2

for 1 ≤ i ≤ 5. Plugging in numerical values we get x_1 = 0, x_2 = 0.25, x_3 = 0.5, x_4 = 0.75 and x_5 = 1; and y_1 = −1, y_2 = −0.5, y_3 = 0, y_4 = 0.5 and y_5 = 1.
A discrete version of the continuous training set consists of three input/output pairs

{(a_11, . . . , a_15), (b_11, . . . , b_15)}
{(a_21, . . . , a_25), (b_21, . . . , b_25)}
{(a_31, . . . , a_35), (b_31, . . . , b_35)}

where

a_1j = μ_small(x_j), a_2j = μ_medium(x_j), a_3j = μ_big(x_j)

for j = 1, . . . , 5, and

b_1i = μ_negative(y_i), b_2i = μ_about zero(y_i), b_3i = μ_positive(y_i)

for i = 1, . . . , 5. Plugging in numerical values we obtain the following training set for a standard backpropagation network:

{(1, 0.5, 0, 0, 0), (1, 0.5, 0, 0, 0)}
{(0, 0.5, 1, 0.5, 0), (0, 0, 1, 0, 0)}
{(0, 0, 0, 0.5, 1), (0, 0, 0, 0.5, 1)}.
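The following short Python sketch (my own, not from the text) reproduces this discretization: it samples each antecedent and consequent membership function at the chosen points and emits the crisp input/output training pairs listed above.

```python
def mu_small(u):    return max(0.0, 1.0 - 2.0 * u) if 0.0 <= u <= 0.5 else 0.0
def mu_medium(u):   return max(0.0, 1.0 - 2.0 * abs(u - 0.5))
def mu_big(u):      return max(0.0, 2.0 * u - 1.0) if 0.5 <= u <= 1.0 else 0.0
def mu_negative(v): return -v if -1.0 <= v <= 0.0 else 0.0
def mu_about0(v):   return max(0.0, 1.0 - 2.0 * abs(v))
def mu_positive(v): return v if 0.0 <= v <= 1.0 else 0.0

rules = [(mu_small, mu_negative), (mu_medium, mu_about0), (mu_big, mu_positive)]
xs = [j / 4 for j in range(5)]            # x_1, ..., x_5 on [0, 1]
ys = [-1.0 + i / 2 for i in range(5)]     # y_1, ..., y_5 on [-1, 1]

# each fuzzy rule becomes one crisp training pattern of membership values
training_set = [([A(x) for x in xs], [B(y) for y in ys]) for A, B in rules]
for inp, out in training_set:
    print(inp, "->", out)
```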
Uehara and Fujise [165] use a finite number of α-level sets to represent fuzzy numbers. Let M ≥ 2 and let

α_j = (j − 1)/(M − 1), j = 1, . . . , M,

be a partition of [0, 1]. Let [A_i]^{α_j} denote the α_j-level set of the fuzzy number A_i,

[A_i]^{α_j} = {u | A_i(u) ≥ α_j} = [a^L_ij, a^R_ij]

for j = 1, . . . , M, and let [B_i]^{α_j} denote the α_j-level set of the fuzzy number B_i,

[B_i]^{α_j} = {u | B_i(u) ≥ α_j} = [b^L_ij, b^R_ij]

for j = 1, . . . , M. Then the discrete version of the continuous training set consists of the input/output pairs

{(a^L_i1, a^R_i1, . . . , a^L_iM, a^R_iM), (b^L_i1, b^R_i1, . . . , b^L_iM, b^R_iM)}

for i = 1, . . . , n.

Figure 3.32 Representation of a fuzzy number by α-level sets.
The number of inputs and outputs depends on the number of α-level sets considered. For example, in Figure 3.32, the fuzzy number A_i is represented by seven level sets, i.e. by a fourteen-dimensional vector of real numbers.
Example 3.3.2 Assume our fuzzy rule base consists of three rules

ℜ_1 : If x is small then y is small,
ℜ_2 : If x is medium then y is medium,
ℜ_3 : If x is big then y is big,

where the membership functions of the fuzzy terms are defined by

μ_small(u) =
  1 − (u − 0.2)/0.3   if 0.2 ≤ u ≤ 1/2,
  1                   if 0 ≤ u ≤ 0.2,
  0                   otherwise,

μ_big(u) =
  1 − (0.8 − u)/0.3   if 1/2 ≤ u ≤ 0.8,
  1                   if 0.8 ≤ u ≤ 1,
  0                   otherwise,

μ_medium(u) =
  1 − 4|u − 1/2|   if 0.25 ≤ u ≤ 0.75,
  0                otherwise.

Figure 3.33 Membership functions for small, medium and big.
Let M = 6 and let

α_j = (j − 1)/5, j = 1, . . . , 6,

be a partition of [0, 1]. Plugging in numerical values we get α_1 = 0, α_2 = 0.2, α_3 = 0.4, α_4 = 0.6, α_5 = 0.8 and α_6 = 1. Then the discrete version of the continuous training set consists of the following three input/output pairs
$$\{(a^L_{11}, a^R_{11}, \dots, a^L_{16}, a^R_{16}), (b^L_{11}, b^R_{11}, \dots, b^L_{16}, b^R_{16})\}$$
$$\{(a^L_{21}, a^R_{21}, \dots, a^L_{26}, a^R_{26}), (b^L_{21}, b^R_{21}, \dots, b^L_{26}, b^R_{26})\}$$
$$\{(a^L_{31}, a^R_{31}, \dots, a^L_{36}, a^R_{36}), (b^L_{31}, b^R_{31}, \dots, b^L_{36}, b^R_{36})\}$$
where

$$[a^L_{1j}, a^R_{1j}] = [b^L_{1j}, b^R_{1j}] = [small]^{\alpha_j}$$
$$[a^L_{2j}, a^R_{2j}] = [b^L_{2j}, b^R_{2j}] = [medium]^{\alpha_j}$$

and

$$[a^L_{3j}, a^R_{3j}] = [b^L_{3j}, b^R_{3j}] = [big]^{\alpha_j}.$$

It is easy to see that $a^L_{1j} = b^L_{1j} = 0$ and $a^R_{3j} = b^R_{3j} = 1$ for $1 \le j \le 6$. Plugging into numerical values we obtain the following training set

$$\{(0, 0.5, 0, 0.44, 0, 0.38, 0, 0.32, 0, 0.26, 0, 0.2), (0, 0.5, 0, 0.44, 0, 0.38, 0, 0.32, 0, 0.26, 0, 0.2)\}$$
$$\{(0.5, 1, 0.56, 1, 0.62, 1, 0.68, 1, 0.74, 1, 0.8, 1), (0.5, 1, 0.56, 1, 0.62, 1, 0.68, 1, 0.74, 1, 0.8, 1)\}$$
$$\{(0.25, 0.75, 0.3, 0.7, 0.35, 0.65, 0.4, 0.6, 0.45, 0.55, 0.5, 0.5), (0.25, 0.75, 0.3, 0.7, 0.35, 0.65, 0.4, 0.6, 0.45, 0.55, 0.5, 0.5)\}.$$
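The level-set representation can be generated the same way. The sketch below (helper names are ours; the trapezoid corner points encode the terms of Example 3.3.2) computes the α-cuts $[small]^{\alpha_j}$, $[medium]^{\alpha_j}$, $[big]^{\alpha_j}$ and concatenates them into the 12-dimensional vectors used above.

```python
# Sketch: build level-set training vectors for Example 3.3.2.
# Each fuzzy term is a trapezoid (left foot, left shoulder, right shoulder, right foot);
# its alpha-cut is obtained by linear interpolation on both slopes.

def alpha_cut(trap, alpha):
    lf, ls, rs, rf = trap
    return (lf + alpha * (ls - lf), rf - alpha * (rf - rs))

terms = {
    "small":  (0.0, 0.0, 0.2, 0.5),    # flat at 1 on [0, 0.2], down to 0 at 0.5
    "medium": (0.25, 0.5, 0.5, 0.75),  # triangular, centre 0.5
    "big":    (0.5, 0.8, 1.0, 1.0),    # up from 0.5, flat at 1 on [0.8, 1]
}

alphas = [j / 5 for j in range(6)]     # alpha_1 = 0, ..., alpha_6 = 1

for name, trap in terms.items():
    vector = [round(v, 2) for a in alphas for v in alpha_cut(trap, a)]
    # antecedent and consequent vectors coincide, since each rule maps a term to itself
    print(name, vector)
```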
Exercise 3.3.1 Assume our fuzzy rule base consists of three rules

$\Re_1$: If $x_1$ is small and $x_2$ is small then $y$ is small,

$\Re_2$: If $x_1$ is medium and $x_2$ is medium then $y$ is medium,

$\Re_3$: If $x_1$ is big and $x_2$ is big then $y$ is big,

where the membership functions of the fuzzy terms are defined by

$$\mu_{small}(u) = \begin{cases} 1 - (u - 0.2)/0.3 & \mbox{if } 0.2 \le u \le 1/2 \\ 1 & \mbox{if } 0 \le u \le 0.2 \\ 0 & \mbox{otherwise} \end{cases} \qquad
\mu_{big}(u) = \begin{cases} 1 - (0.8 - u)/0.3 & \mbox{if } 1/2 \le u \le 0.8 \\ 1 & \mbox{if } 0.8 \le u \le 1 \\ 0 & \mbox{otherwise} \end{cases}$$

$$\mu_{medium}(u) = \begin{cases} 1 - 4|u - 1/2| & \mbox{if } 0.25 \le u \le 0.75 \\ 0 & \mbox{otherwise} \end{cases}$$

Assume that $[0, 1]$ contains the support of all the fuzzy sets we might have as input and output for the system. Derive training sets for a standard backpropagation network from 10 selected membership values of the fuzzy terms.
Exercise 3.3.2 Assume our fuzzy rule base consists of three rules

$\Re_1$: If $x_1$ is small and $x_2$ is big then $y$ is small,

$\Re_2$: If $x_1$ is medium and $x_2$ is small then $y$ is medium,

$\Re_3$: If $x_1$ is big and $x_2$ is medium then $y$ is big,

where the membership functions of the fuzzy terms are defined by

$$\mu_{small}(u) = \begin{cases} 1 - (u - 0.2)/0.3 & \mbox{if } 0.2 \le u \le 1/2 \\ 1 & \mbox{if } 0 \le u \le 0.2 \\ 0 & \mbox{otherwise} \end{cases} \qquad
\mu_{big}(u) = \begin{cases} 1 - (0.8 - u)/0.3 & \mbox{if } 1/2 \le u \le 0.8 \\ 1 & \mbox{if } 0.8 \le u \le 1 \\ 0 & \mbox{otherwise} \end{cases}$$

$$\mu_{medium}(u) = \begin{cases} 1 - 4|u - 1/2| & \mbox{if } 0.25 \le u \le 0.75 \\ 0 & \mbox{otherwise} \end{cases}$$

Assume that $[0, 1]$ contains the support of all the fuzzy sets we might have as input and output for the system. Derive training sets for a standard backpropagation network from 10 selected values of α-level sets of the fuzzy terms.
3.3.1 Implementation of fuzzy rules by regular FNN of Type 2

Ishibuchi, Kwon and Tanaka [88] proposed an approach to implement fuzzy IF-THEN rules by training neural networks on fuzzy training patterns. Assume we are given the following fuzzy rules

$$\Re_p: \mbox{ If } x_1 \mbox{ is } A_{p1} \mbox{ and } \dots \mbox{ and } x_n \mbox{ is } A_{pn} \mbox{ then } y \mbox{ is } B_p$$

where $A_{pj}$ and $B_p$ are fuzzy numbers and $p = 1, \dots, m$.

The following training patterns can be derived from these fuzzy IF-THEN rules

$$\{(X_1, B_1), \dots, (X_m, B_m)\} \eqno(3.2)$$

where $X_p = (A_{p1}, \dots, A_{pn})$ denotes the antecedent part and the fuzzy target output $B_p$ is the consequent part of the rule.

Our learning task is to train a neural network from the fuzzy training pattern set (3.2) by a regular fuzzy neural network of Type 2 from Table 3.1.

Ishibuchi, Fujioka and Tanaka [82] propose the following extension of the standard backpropagation learning algorithm:
Figure 3.34 Regular fuzzy neural network architecture of Type 2.
Suppose that $X_p$, the $p$-th training pattern, is presented to the network. The output of the $i$-th hidden unit, $o_{pi}$, is computed as

$$o_{pi} = f\Big(\sum_{j=1}^n w_{ij} A_{pj}\Big).$$

For the output unit

$$O_p = f\Big(\sum_{i=1}^k w_i o_{pi}\Big)$$

where $f(t) = 1/(1 + \exp(-t))$ is a unipolar transfer function. It should be noted that the input-output relation of each unit is defined by the extension principle.
Figure 3.35 Fuzzy input-output relation of each neuron.
Let us denote the α-level sets of the computed output $O_p$ by

$$[O_p]^\alpha = [O^L_p(\alpha), O^R_p(\alpha)], \quad \alpha \in [0, 1]$$

where $O^L_p(\alpha)$ denotes the left-hand side and $O^R_p(\alpha)$ the right-hand side of the α-level set of the computed output. Since $f$ is strictly monotone increasing we have

$$[O_p]^\alpha = \Big[f\Big(\sum_{i=1}^k w_i o_{pi}\Big)\Big]^\alpha
= \Big[f\Big(\sum_{i=1}^k [w_i o_{pi}]^L(\alpha)\Big), f\Big(\sum_{i=1}^k [w_i o_{pi}]^R(\alpha)\Big)\Big],$$
where

$$[o_{pi}]^\alpha = \Big[f\Big(\sum_{j=1}^n w_{ij} A_{pj}\Big)\Big]^\alpha
= \Big[f\Big(\sum_{j=1}^n [w_{ij} A_{pj}]^L(\alpha)\Big), f\Big(\sum_{j=1}^n [w_{ij} A_{pj}]^R(\alpha)\Big)\Big].$$
Figure 3.36 An α-level set of the target output pattern $B_p$.
The α-level sets of the target output $B_p$ are denoted by

$$[B_p]^\alpha = [B^L_p(\alpha), B^R_p(\alpha)], \quad \alpha \in [0, 1]$$

where $B^L_p(\alpha)$ denotes the left-hand side and $B^R_p(\alpha)$ the right-hand side of the α-level set of the desired output.
A cost function to be minimized is defined for each α-level set as follows

$$e_p(\alpha) := e^L_p(\alpha) + e^R_p(\alpha)$$

where

$$e^L_p(\alpha) = \frac{1}{2}\big(B^L_p(\alpha) - O^L_p(\alpha)\big)^2, \qquad
e^R_p(\alpha) = \frac{1}{2}\big(B^R_p(\alpha) - O^R_p(\alpha)\big)^2,$$

i.e. $e^L_p(\alpha)$ denotes the error between the left-hand sides of the α-level sets of the desired and the computed outputs, and $e^R_p(\alpha)$ denotes the error between the right-hand sides of the α-level sets of the desired and the computed outputs.
Figure 3.37 An α-level set of the computed output pattern $O_p$.
Then the error function for the $p$-th training pattern is

$$e_p = \sum_{\alpha} e_p(\alpha) \eqno(3.3)$$

Theoretically this cost function satisfies the following property if we use an infinite number of α-level sets in (3.3):

$$e_p \to 0 \quad \mbox{if and only if} \quad O_p \to B_p.$$

From the cost function $e_p(\alpha)$ the following learning rules can be derived

$$w_i := w_i - \eta\frac{\partial e_p(\alpha)}{\partial w_i},$$

for $i = 1, \dots, k$, and

$$w_{ij} := w_{ij} - \eta\frac{\partial e_p(\alpha)}{\partial w_{ij}}$$

for $i = 1, \dots, k$ and $j = 1, \dots, n$. The Reader can find the exact calculation of the partial derivatives $\partial e_p(\alpha)/\partial w_i$ and $\partial e_p(\alpha)/\partial w_{ij}$ in [86], pp. 95-96.
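To make the α-level computation concrete, the following Python sketch (the interval helpers and toy numbers are ours, not from [82] or [86]) evaluates the forward pass of a Type 2 network for a single α-level: fuzzy inputs are given by their α-cuts, the weights are crisp reals, and products and sums are evaluated by interval arithmetic before the increasing sigmoid is applied endpoint-wise.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def scale(w, interval):
    """Interval [w*A]^alpha for a crisp weight w and an alpha-cut [lo, hi]."""
    lo, hi = interval
    return (min(w * lo, w * hi), max(w * lo, w * hi))

def add(i1, i2):
    return (i1[0] + i2[0], i1[1] + i2[1])

def unit_output(weights, input_cuts):
    """alpha-cut of f(sum_j w_j A_j); f is increasing, so apply it endpoint-wise."""
    net = (0.0, 0.0)
    for w, cut in zip(weights, input_cuts):
        net = add(net, scale(w, cut))
    return (sigmoid(net[0]), sigmoid(net[1]))

# toy example: two fuzzy inputs given by their alpha-cuts, 2 hidden units, 1 output
x_cuts = [(0.2, 0.4), (0.6, 0.9)]
W_hidden = [[1.0, -0.5], [0.3, 0.8]]   # w_ij
w_out = [0.7, -1.2]                    # w_i

hidden_cuts = [unit_output(w_row, x_cuts) for w_row in W_hidden]
O_alpha = unit_output(w_out, hidden_cuts)
print("alpha-cut of the computed output:", O_alpha)
```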
3.3.2 Implementation of fuzzy rules by regular FNN of Type 3

Following [95] we show how to implement fuzzy IF-THEN rules by regular fuzzy neural nets of Type 3 (fuzzy input/output signals and fuzzy weights) from Table 3.1. Assume we are given the following fuzzy rules

$$\Re_p: \mbox{ If } x_1 \mbox{ is } A_{p1} \mbox{ and } \dots \mbox{ and } x_n \mbox{ is } A_{pn} \mbox{ then } y \mbox{ is } B_p$$

where $A_{pj}$ and $B_p$ are fuzzy numbers and $p = 1, \dots, m$.

The following training patterns can be derived from these fuzzy IF-THEN rules

$$\{(X_1, B_1), \dots, (X_m, B_m)\}$$

where $X_p = (A_{p1}, \dots, A_{pn})$ denotes the antecedent part and the fuzzy target output $B_p$ is the consequent part of the rule.
The output of the $i$-th hidden unit, $o_{pi}$, is computed as

$$o_{pi} = f\Big(\sum_{j=1}^n W_{ij} A_{pj}\Big).$$

For the output unit

$$O_p = f\Big(\sum_{i=1}^k W_i o_{pi}\Big)$$

where $A_{pj}$ is a fuzzy input, $W_i$ and $W_{ij}$ are fuzzy weights of triangular form and $f(t) = 1/(1 + \exp(-t))$ is a unipolar transfer function.
Figure 3.38 Regular fuzzy neural network architecture of Type 3.
The fuzzy output of each unit is numerically calculated for the α-level sets of the fuzzy inputs and weights. Let us denote the α-level set of the computed output $O_p$ by

$$[O_p]^\alpha = [O^L_p(\alpha), O^R_p(\alpha)],$$

the α-level set of the target output $B_p$ by

$$[B_p]^\alpha = [B^L_p(\alpha), B^R_p(\alpha)],$$

the α-level sets of the weights of the output unit by

$$[W_i]^\alpha = [W^L_i(\alpha), W^R_i(\alpha)],$$

and the α-level sets of the weights of the hidden units by

$$[W_{ij}]^\alpha = [W^L_{ij}(\alpha), W^R_{ij}(\alpha)],$$

for $\alpha \in [0, 1]$, $i = 1, \dots, k$ and $j = 1, \dots, n$. Since $f$ is strictly monotone increasing we have

$$[o_{pi}]^\alpha = \Big[f\Big(\sum_{j=1}^n W_{ij} A_{pj}\Big)\Big]^\alpha
= \Big[f\Big(\sum_{j=1}^n [W_{ij} A_{pj}]^L(\alpha)\Big), f\Big(\sum_{j=1}^n [W_{ij} A_{pj}]^R(\alpha)\Big)\Big]$$
$$[O_p]^\alpha = \Big[f\Big(\sum_{i=1}^k W_i o_{pi}\Big)\Big]^\alpha
= \Big[f\Big(\sum_{i=1}^k [W_i o_{pi}]^L(\alpha)\Big), f\Big(\sum_{i=1}^k [W_i o_{pi}]^R(\alpha)\Big)\Big]$$
A cost function to be minimized is defined for each α-level set as follows

$$e_p(\alpha) := e^L_p(\alpha) + e^R_p(\alpha)$$

where

$$e^L_p(\alpha) = \frac{1}{2}\big(B^L_p(\alpha) - O^L_p(\alpha)\big)^2, \qquad
e^R_p(\alpha) = \frac{1}{2}\big(B^R_p(\alpha) - O^R_p(\alpha)\big)^2,$$

i.e. $e^L_p(\alpha)$ denotes the error between the left-hand sides of the α-level sets of the desired and the computed outputs, and $e^R_p(\alpha)$ denotes the error between the right-hand sides of the α-level sets of the desired and the computed outputs.
Then the error function for the $p$-th training pattern is

$$e_p = \sum_{\alpha} e_p(\alpha) \eqno(3.4)$$

Let us derive a learning algorithm for the fuzzy neural network from the error function $e_p(\alpha)$. Since the fuzzy weights of the hidden neurons are supposed to be of symmetric triangular form, they can be represented by three parameters $W_{ij} = (w^1_{ij}, w^2_{ij}, w^3_{ij})$, where $w^1_{ij}$ denotes the lower limit, $w^2_{ij}$ the center and $w^3_{ij}$ the upper limit of $W_{ij}$.
Figure 3.39 Representation of $W_{ij}$.
Similarly, the weights of the output neuron can be represented by three parameters $W_i = (w^1_i, w^2_i, w^3_i)$, where $w^1_i$ denotes the lower limit, $w^2_i$ the center and $w^3_i$ the upper limit of $W_i$.
Figure 3.40 Representation of $W_i$.
From the symmetry of $W_{ij}$ and $W_i$ it follows that

$$w^2_{ij} = \frac{w^1_{ij} + w^3_{ij}}{2}, \qquad w^2_i = \frac{w^1_i + w^3_i}{2}$$

for $1 \le j \le n$ and $1 \le i \le k$.
From the cost function $e_p(\alpha)$ the following weight adjustments can be derived

$$\Delta w^1_i(t) = -\eta\frac{\partial e_p(\alpha)}{\partial w^1_i} + \beta\,\Delta w^1_i(t - 1)$$
$$\Delta w^3_i(t) = -\eta\frac{\partial e_p(\alpha)}{\partial w^3_i} + \beta\,\Delta w^3_i(t - 1)$$

where $\eta$ is a learning constant, $\beta$ is a momentum constant and $t$ indexes the number of adjustments, for $i = 1, \dots, k$, and

$$\Delta w^1_{ij}(t) = -\eta\frac{\partial e_p(\alpha)}{\partial w^1_{ij}} + \beta\,\Delta w^1_{ij}(t - 1)$$
$$\Delta w^3_{ij}(t) = -\eta\frac{\partial e_p(\alpha)}{\partial w^3_{ij}} + \beta\,\Delta w^3_{ij}(t - 1)$$

where $\eta$ is a learning constant, $\beta$ is a momentum constant and $t$ indexes the number of adjustments, for $i = 1, \dots, k$ and $j = 1, \dots, n$.

The explicit calculation of the above derivatives can be found in ([95], pp. 291-292).
The fuzzy weight $W_{ij} = (w^1_{ij}, w^2_{ij}, w^3_{ij})$ is updated by the following rules

$$w^1_{ij}(t + 1) = w^1_{ij}(t) + \Delta w^1_{ij}(t)$$
$$w^3_{ij}(t + 1) = w^3_{ij}(t) + \Delta w^3_{ij}(t)$$
$$w^2_{ij}(t + 1) = \frac{w^1_{ij}(t + 1) + w^3_{ij}(t + 1)}{2},$$

for $i = 1, \dots, k$ and $j = 1, \dots, n$. The fuzzy weight $W_i = (w^1_i, w^2_i, w^3_i)$ is updated in a similar manner, i.e.

$$w^1_i(t + 1) = w^1_i(t) + \Delta w^1_i(t)$$
$$w^3_i(t + 1) = w^3_i(t) + \Delta w^3_i(t)$$
$$w^2_i(t + 1) = \frac{w^1_i(t + 1) + w^3_i(t + 1)}{2},$$

for $i = 1, \dots, k$.
After the adjustment of $W_i$ it can occur that its lower limit becomes larger than its upper limit. In this case, we use the following simple heuristics

$$w^1_i(t + 1) := \min\{w^1_i(t + 1), w^3_i(t + 1)\}$$
$$w^3_i(t + 1) := \max\{w^1_i(t + 1), w^3_i(t + 1)\}.$$

We employ the same heuristics for $W_{ij}$:

$$w^1_{ij}(t + 1) := \min\{w^1_{ij}(t + 1), w^3_{ij}(t + 1)\}$$
$$w^3_{ij}(t + 1) := \max\{w^1_{ij}(t + 1), w^3_{ij}(t + 1)\}.$$
Let us assume that $m$ input/output pairs

$$\{(X_1, B_1), \dots, (X_m, B_m)\},$$

where $X_p = (A_{p1}, \dots, A_{pn})$, are given as training data. We also assume that $M$ values of α-level sets are used for the learning of the fuzzy neural network.
Summary 3.3.1 In this case, the learning algorithm can be summarized as follows:

Step 1 Fuzzy weights are initialized at small random values, the running error $E$ is set to 0 and $E_{max} > 0$ is chosen.

Step 2 Repeat Step 3 for $\alpha = \alpha_1, \alpha_2, \dots, \alpha_M$.

Step 3 Repeat the following procedures for $p = 1, 2, \dots, m$. Propagate $X_p$ through the network and calculate the α-level set of the fuzzy output vector $O_p$. Adjust the fuzzy weights using the error function $e_p(\alpha)$.

Step 4 The cumulative cycle error is computed by adding the present error to $E$.

Step 5 The training cycle is completed. For $E < E_{max}$ terminate the training session. If $E > E_{max}$ then $E$ is set to 0 and we initiate a new training cycle by going back to Step 2.
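As a small illustration of the update scheme in Summary 3.3.1, the following Python sketch (our own names; the gradient values are passed in as numbers, since their exact expressions are given in [95]) performs one adjustment of a symmetric triangular fuzzy weight with momentum and applies the min/max heuristic that keeps the lower limit below the upper limit.

```python
# Sketch: one update of a symmetric triangular fuzzy weight W = (w1, w2, w3).
# grad1, grad3 stand for d e_p(alpha)/d w1 and d e_p(alpha)/d w3 (computed elsewhere).

def update_triangular_weight(w, prev_delta, grad1, grad3, eta=0.5, beta=0.9):
    w1, w2, w3 = w
    d1 = -eta * grad1 + beta * prev_delta[0]   # Delta w1(t)
    d3 = -eta * grad3 + beta * prev_delta[1]   # Delta w3(t)
    w1, w3 = w1 + d1, w3 + d3
    # heuristic: keep the lower limit below the upper limit
    w1, w3 = min(w1, w3), max(w1, w3)
    w2 = (w1 + w3) / 2                          # symmetry of the triangle
    return (w1, w2, w3), (d1, d3)

W = (0.1, 0.2, 0.3)
deltas = (0.0, 0.0)
W, deltas = update_triangular_weight(W, deltas, grad1=0.05, grad3=-0.02)
print(W)
```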
3.4 Tuning fuzzy control parameters by neural nets

Fuzzy inference is applied to various problems. For the implementation of a fuzzy controller it is necessary to determine membership functions representing the linguistic terms of the linguistic inference rules. For example, consider the linguistic term approximately one. Obviously, the corresponding fuzzy set should be a unimodal function reaching its maximum at the value one. Neither the shape, which could be triangular or Gaussian, nor the range, i.e. the support of the membership function, is uniquely determined by approximately one. Generally, a control expert has some idea about the range of the membership function, but he would not be able to argue about small changes of his specified range.
Figure 3.41 Gaussian membership function for x is approximately one.
Figure 3.42 Triangular membership function for x is approximately one.
Figure 3.43 Trapezoidal membership function for x is approximately one.
The effectiveness of fuzzy models representing nonlinear input-output relationships depends on the fuzzy partition of the input space. Therefore, the tuning of membership functions becomes an important issue in fuzzy control. Since this tuning task can be viewed as an optimization problem, neural networks and genetic algorithms [96] offer a possibility to solve it.

A straightforward approach is to assume a certain shape for the membership functions which depends on different parameters that can be learned by a neural network. This idea was carried out in [139], where the membership functions are assumed to be symmetric triangular functions depending on two parameters, one of them determining where the function reaches its maximum, the other giving the width of the support. Gaussian membership functions were used in [81].

Both approaches require a set of training data in the form of correct input-output tuples and a specification of the rules, including a preliminary definition of the corresponding membership functions.
We describe a simple method for learning membership functions of the antecedent and consequent parts of fuzzy IF-THEN rules. Suppose the unknown nonlinear mapping to be realized by fuzzy systems can be represented as

$$y^k = f(x^k) = f(x^k_1, \dots, x^k_n) \eqno(3.5)$$

for $k = 1, \dots, K$, i.e. we have the following training set

$$\{(x^1, y^1), \dots, (x^K, y^K)\}$$
For modeling the unknown mapping in (3.5), we employ simplified fuzzy IF-THEN rules of the following type

$$\Re_i: \mbox{ if } x_1 \mbox{ is } A_{i1} \mbox{ and } \dots \mbox{ and } x_n \mbox{ is } A_{in} \mbox{ then } y = z_i, \eqno(3.6)$$

$i = 1, \dots, m$, where $A_{ij}$ are fuzzy numbers of triangular form and $z_i$ are real numbers. In this context, the word simplified means that the individual rule outputs are given by crisp numbers, and therefore we can use their weighted sum (where the weights are the firing strengths of the corresponding rules) to obtain the overall system output.

Let $o^k$ be the output from the fuzzy system corresponding to the input $x^k$. Suppose the firing level of the $i$-th rule, denoted by $\alpha_i$, is defined by Larsen's product operator

$$\alpha_i = \prod_{j=1}^n A_{ij}(x^k_j)$$

(one can define other t-norms for modeling the logical connective and), and the output of the system is computed by the discrete center-of-gravity defuzzification method as

$$o^k = \sum_{i=1}^m \alpha_i z_i \Big/ \sum_{i=1}^m \alpha_i.$$
We define the measure of error for the $k$-th training pattern as usual:

$$E_k = \frac{1}{2}(o^k - y^k)^2$$

where $o^k$ is the computed output from the fuzzy system corresponding to the input pattern $x^k$ and $y^k$ is the desired output, $k = 1, \dots, K$.

The steepest descent method is used to learn $z_i$ in the consequent part of the fuzzy rule $\Re_i$. That is,

$$z_i(t + 1) = z_i(t) - \eta\frac{\partial E_k}{\partial z_i}
= z_i(t) - \eta(o^k - y^k)\frac{\alpha_i}{\alpha_1 + \dots + \alpha_m},$$

for $i = 1, \dots, m$, where $\eta$ is the learning constant and $t$ indexes the number of adjustments of $z_i$.
Suppose that every linguistic variable in (3.6) can have seven linguistic terms

$$\{NB, NM, NS, ZE, PS, PM, PB\}$$

and their membership functions are of triangular form characterized by three parameters (center, left width, right width). Of course, the membership functions representing the linguistic terms $\{NB, NM, NS, ZE, PS, PM, PB\}$ can vary from input variable to input variable, e.g. the linguistic term Negative Big can have at most $n$ different representations.
Figure 3.44 Initial linguistic terms for the input variables.
The parameters of triangular fuzzy numbers in the premises are also learned
by the steepest descent method.
We illustrate the above tuning process by a simple example. Consider two fuzzy rules of the form (3.6) with one input and one output variable

$$\Re_1: \mbox{ if } x \mbox{ is } A_1 \mbox{ then } y = z_1$$
$$\Re_2: \mbox{ if } x \mbox{ is } A_2 \mbox{ then } y = z_2$$

where the fuzzy terms $A_1$ (small) and $A_2$ (big) have sigmoid membership functions defined by

$$A_1(x) = \frac{1}{1 + \exp(-b_1(x - a_1))}, \qquad A_2(x) = \frac{1}{1 + \exp(-b_2(x - a_2))}$$

where $a_1$, $a_2$, $b_1$ and $b_2$ are the parameters of the premises.

Let $x$ be the input to the fuzzy system. The firing levels of the rules are computed by

$$\alpha_1 = A_1(x) = \frac{1}{1 + \exp(-b_1(x - a_1))}, \qquad \alpha_2 = A_2(x) = \frac{1}{1 + \exp(-b_2(x - a_2))}$$

and the output of the system is computed by the discrete center-of-gravity defuzzification method as

$$o = \frac{\alpha_1 z_1 + \alpha_2 z_2}{\alpha_1 + \alpha_2} = \frac{A_1(x)z_1 + A_2(x)z_2}{A_1(x) + A_2(x)}.$$
Suppose further that we are given a training set

$$\{(x^1, y^1), \dots, (x^K, y^K)\}$$

obtained from the unknown nonlinear function $f$.
Figure 3.44a Initial sigmoid membership functions.
Our task is to construct the two fuzzy rules with appropriate membership functions and consequent parts to generate the given input-output pairs. That is, we have to learn the following parameters:

$a_1$, $b_1$, $a_2$ and $b_2$, the parameters of the fuzzy numbers representing the linguistic terms small and big,

$z_1$ and $z_2$, the values of the consequent parts.
We define the measure of error for the $k$-th training pattern as usual:

$$E_k = E_k(a_1, b_1, a_2, b_2, z_1, z_2) = \frac{1}{2}\big(o^k(a_1, b_1, a_2, b_2, z_1, z_2) - y^k\big)^2$$

where $o^k$ is the computed output from the fuzzy system corresponding to the input pattern $x^k$ and $y^k$ is the desired output, $k = 1, \dots, K$.
The steepest descent method is used to learn $z_i$ in the consequent part of the $i$-th fuzzy rule. That is,

$$z_1(t + 1) = z_1(t) - \eta\frac{\partial E_k}{\partial z_1}
= z_1(t) - \eta\frac{\partial}{\partial z_1}E_k(a_1, b_1, a_2, b_2, z_1, z_2)
= z_1(t) - \eta(o^k - y^k)\frac{\alpha_1}{\alpha_1 + \alpha_2}
= z_1(t) - \eta(o^k - y^k)\frac{A_1(x^k)}{A_1(x^k) + A_2(x^k)}$$

$$z_2(t + 1) = z_2(t) - \eta\frac{\partial E_k}{\partial z_2}
= z_2(t) - \eta\frac{\partial}{\partial z_2}E_k(a_1, b_1, a_2, b_2, z_1, z_2)
= z_2(t) - \eta(o^k - y^k)\frac{\alpha_2}{\alpha_1 + \alpha_2}
= z_2(t) - \eta(o^k - y^k)\frac{A_2(x^k)}{A_1(x^k) + A_2(x^k)}$$

where $\eta > 0$ is the learning constant and $t$ indexes the number of adjustments of $z_i$.
In a similar manner we can find the shape parameters (center and slope) of the membership functions $A_1$ and $A_2$:

$$a_1(t + 1) = a_1(t) - \eta\frac{\partial E_k}{\partial a_1}, \qquad
b_1(t + 1) = b_1(t) - \eta\frac{\partial E_k}{\partial b_1},$$

$$a_2(t + 1) = a_2(t) - \eta\frac{\partial E_k}{\partial a_2}, \qquad
b_2(t + 1) = b_2(t) - \eta\frac{\partial E_k}{\partial b_2},$$

where $\eta > 0$ is the learning constant and $t$ indexes the number of adjustments of the parameters. We now show how to compute analytically the partial derivative of the error function $E_k$ with respect to $a_1$, the center of the fuzzy number $A_1$.
$$\frac{\partial E_k}{\partial a_1} = \frac{\partial}{\partial a_1}E_k(a_1, b_1, a_2, b_2, z_1, z_2)
= \frac{1}{2}\frac{\partial}{\partial a_1}\big(o^k(a_1, b_1, a_2, b_2, z_1, z_2) - y^k\big)^2
= (o^k - y^k)\frac{\partial o^k}{\partial a_1},$$

where

$$\frac{\partial o^k}{\partial a_1}
= \frac{\partial}{\partial a_1}\left[\frac{A_1(x^k)z_1 + A_2(x^k)z_2}{A_1(x^k) + A_2(x^k)}\right]$$

$$= \frac{\partial}{\partial a_1}\left[\left(\frac{z_1}{1 + \exp(-b_1(x^k - a_1))} + \frac{z_2}{1 + \exp(-b_2(x^k - a_2))}\right)\Big/\left(\frac{1}{1 + \exp(-b_1(x^k - a_1))} + \frac{1}{1 + \exp(-b_2(x^k - a_2))}\right)\right]$$

$$= \frac{\partial}{\partial a_1}\left[\frac{z_1[1 + \exp(-b_2(x^k - a_2))] + z_2[1 + \exp(-b_1(x^k - a_1))]}{2 + \exp(-b_1(x^k - a_1)) + \exp(-b_2(x^k - a_2))}\right]$$

$$= \frac{b_1 z_2 \epsilon_1(2 + \epsilon_1 + \epsilon_2) - b_1\epsilon_1\big(z_1(1 + \epsilon_2) + z_2(1 + \epsilon_1)\big)}{(2 + \epsilon_1 + \epsilon_2)^2}$$

where we used the notations $\epsilon_1 = \exp(-b_1(x^k - a_1))$ and $\epsilon_2 = \exp(-b_2(x^k - a_2))$.
The learning rules are simplified if we use the following fuzzy partition

$$A_1(x) = \frac{1}{1 + \exp(-b(x - a))}, \qquad A_2(x) = \frac{1}{1 + \exp(b(x - a))}$$

where $a$ and $b$ are the shared parameters of $A_1$ and $A_2$. In this case the equation

$$A_1(x) + A_2(x) = 1$$

holds for all $x$ from the domain of $A_1$ and $A_2$.
Figure 3.44b Symmetrical membership functions.
The weight adjustments are defined as follows

$$z_1(t + 1) = z_1(t) - \eta\frac{\partial E_k}{\partial z_1} = z_1(t) - \eta(o^k - y^k)A_1(x^k)$$

$$z_2(t + 1) = z_2(t) - \eta\frac{\partial E_k}{\partial z_2} = z_2(t) - \eta(o^k - y^k)A_2(x^k)$$

$$a(t + 1) = a(t) - \eta\frac{\partial E_k(a, b)}{\partial a}$$

$$b(t + 1) = b(t) - \eta\frac{\partial E_k(a, b)}{\partial b}$$
where

$$\frac{\partial E_k(a, b)}{\partial a} = (o^k - y^k)\frac{\partial o^k}{\partial a}
= (o^k - y^k)\frac{\partial}{\partial a}\big[z_1 A_1(x^k) + z_2 A_2(x^k)\big]
= (o^k - y^k)\frac{\partial}{\partial a}\big[z_1 A_1(x^k) + z_2(1 - A_1(x^k))\big]$$

$$= (o^k - y^k)(z_1 - z_2)\frac{\partial A_1(x^k)}{\partial a}
= -(o^k - y^k)(z_1 - z_2)\,b\,A_1(x^k)(1 - A_1(x^k))
= -(o^k - y^k)(z_1 - z_2)\,b\,A_1(x^k)A_2(x^k),$$

and

$$\frac{\partial E_k(a, b)}{\partial b} = (o^k - y^k)(z_1 - z_2)\frac{\partial A_1(x^k)}{\partial b}
= (o^k - y^k)(z_1 - z_2)(x^k - a)A_1(x^k)A_2(x^k).$$
Jang [97] showed that fuzzy inference systems with simplified fuzzy IF-THEN rules are universal approximators, i.e. they can approximate any continuous function on a compact set to arbitrary accuracy. This means that the more fuzzy terms (and consequently more rules) are used in the rule base, the closer the output of the fuzzy system is to the desired values of the function to be approximated.
A method which can cope with arbitrary membership functions for the input variables is proposed in [68, 162, 163]. The training data have to be divided into $r$ disjoint clusters $R_1, \dots, R_r$. Each cluster $R_i$ corresponds to a control rule $R_i$. Elements of the clusters are tuples of input-output values of the form $(x, y)$, where $x$ can be a vector $x = (x_1, \dots, x_n)$ of $n$ input variables. This means that the rules are not specified in terms of linguistic variables, but in the form of crisp input-output tuples.

A multilayer perceptron with $n$ input units, some hidden layers, and $r$ output units can be used to learn these clusters. The input data for this learning task are the input vectors of all clusters, i.e. the set

$$\{x \mid \exists i \ \exists y : (x, y) \in R_i\}.$$
The target output $t_{u_i}(x)$ for input $x$ at output unit $u_i$ is defined as

$$t_{u_i}(x) = \begin{cases} 1 & \mbox{if there exists } y \mbox{ such that } (x, y) \in R_i \\ 0 & \mbox{otherwise} \end{cases}$$
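Constructing these 0/1 targets from the clustered data is straightforward; a possible Python sketch (our own data layout: clusters as a list of lists of (x, y) tuples) is:

```python
# Sketch: build MLP training targets from r disjoint clusters of (x, y) tuples.
# Input x is a tuple of n real values; target t_i(x) = 1 iff x occurs in cluster R_i.

clusters = [
    [((0.1, 0.2), 0.3), ((0.2, 0.1), 0.4)],   # R_1
    [((0.8, 0.9), 1.2)],                      # R_2
]

inputs = sorted({x for cluster in clusters for (x, _) in cluster})
targets = {
    x: [1.0 if any(xc == x for (xc, _) in cluster) else 0.0 for cluster in clusters]
    for x in inputs
}
for x in inputs:
    print(x, targets[x])   # training pairs (x, t(x)) for the multilayer perceptron
```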
After the network has learned its weights, arbitrary values for $x$ can be taken as inputs. Then the output at output unit $u_i$ can be interpreted as the degree to which $x$ matches the antecedent of rule $R_i$, i.e. the function

$$x \mapsto o_{u_i}$$

is the membership function for the fuzzy set representing the linguistic term on the left-hand side of rule $R_i$.
In case of a Mamdani type fuzzy controller the same technique can be applied to the output variable, resulting in a neural network which determines the fuzzy sets for the right-hand sides of the rules.

For a Sugeno type fuzzy controller, where each rule yields a crisp output value together with a number specifying the matching degree for the antecedent of the rule, another technique can be applied. For each rule $R_i$ a neural network is trained with the input-output tuples of the set $R_i$. Thus these $r$ neural networks determine the crisp output values for the rules $R_1, \dots, R_r$.
These neural networks can also be used to eliminate unnecessary input variables in the input vector $x$ for the rules $R_1, \dots, R_r$, by neglecting one input variable in one of the rules and comparing the control result with the result obtained when the variable is not neglected. If the performance of the controller is not influenced by neglecting input variable $x_j$ in rule $R_i$, then $x_j$ is unnecessary for $R_i$ and can be left out.
ANFIS (Adaptive Neural Fuzzy Inference System) [97] is a great example of an architecture for tuning fuzzy system parameters from input/output pairs of data. The fuzzy inference process is implemented as a generalized neural network, which is then tuned by gradient descent techniques. It is capable of tuning antecedent parameters as well as consequent parameters of fuzzy rules which use a softened trapezoidal membership function. It has been applied to a variety of problems, including chaotic time series prediction and the IRIS cluster learning problem.

These tuning methods have a weak point: the convergence of tuning depends on the initial condition. Ishigami, Fukuda, Shibata and Arai [96] present a hybrid auto-tuning method of fuzzy inference using genetic algorithms and the generalized delta learning rule, which guarantees the optimal structure of the fuzzy model.
3.5 Fuzzy rule extraction from numerical data

Fuzzy systems and neural networks are widely used for function approximation. When comparing these two technologies, fuzzy systems are more favorable in that their behavior can be explained based on fuzzy rules and thus their performance can be adjusted by tuning the rules. But since, in general, knowledge acquisition is difficult and also the universe of discourse of each input variable needs to be divided into several intervals, applications of fuzzy systems are restricted to the fields where expert knowledge is available and the number of input variables is small. To overcome the problem of knowledge acquisition, several methods for extracting fuzzy rules from numerical data have been developed.

In the previous section we described how neural networks can be used to optimize certain parameters of a fuzzy rule base. We assumed that the fuzzy IF-THEN rules were already specified in linguistic form or as a crisp clustering of a set of correct input-output tuples.

If we are given a set of crisp input-output tuples we can try to extract fuzzy (control) rules from this set. This can either be done by fuzzy clustering methods [14] or by using neural networks.

The input vectors of the input-output tuples can be taken as inputs for a Kohonen self-organizing map, which can be interpreted in terms of linguistic variables [142]. The main idea of this interpretation is to refrain from the winner-take-all principle after the weights of the self-organizing map have been learned. Thus, instead of only determining the output unit $u_i$ that is the winner for a given input vector $x$, a matching degree $\mu_i(x)$ can be specified, yielding the degree to which $x$ satisfies the antecedent of the corresponding rule.

Finally, in order to obtain a Sugeno type controller, a crisp control output value has to be associated to each rule (output unit). Following the idea of the Sugeno type controller, we could choose the value

$$\sum_{(x,y)\in S} \mu_i(x)\,y \Big/ \sum_{(x,y)\in S} \mu_i(x)$$

where $S$ is the set of known input-output tuples for the controller and $i$ indexes the rules.

Another way to obtain a fuzzy clustering directly is to apply the modified Kohonen network proposed in [13].
Kosko uses another approach to generate fuzzy IF-THEN rules from existing data [107]. Kosko shows that fuzzy sets can be viewed as points in a multidimensional unit hypercube. This makes it possible to use fuzzy associative memories (FAM) to represent fuzzy rules. Special adaptive clustering algorithms allow these representations to be learned (AFAM).

In [156] fuzzy rules with variable fuzzy regions (hyperboxes) are extracted for classification problems. This approach is potentially applicable to problems having a high-dimensional input space. But because the overlap of hyperboxes of different classes must be resolved by dynamically expanding, splitting and contracting hyperboxes, the approach is difficult to apply to problems in which several classes overlap.

Abe and Lan [2] suggest a method for extracting fuzzy rules for pattern classification. The fuzzy rules with variable fuzzy regions are defined by activation hyperboxes, which show the existence region of data for a class, and inhibition hyperboxes, which inhibit the existence of data for that class. These rules are extracted directly from numerical data by recursively resolving overlaps between two classes.
Abe and Lan [3] present a method for extracting fuzzy rules directly from numerical data for function approximation. Suppose that the unknown function has a one-dimensional output $y$ and an $m$-dimensional input vector $x$. First we divide $[M_1, M_2]$, the universe of discourse of $y$, into $n$ intervals as follows:

$$[y_0, y_1], (y_1, y_2], \dots, (y_{n-1}, y_n]$$

where $y_0 = M_1$ and $y_n = M_2$. We call the $i$-th interval the output interval $i$. Using the input data whose outputs are in output interval $i$, we recursively define the input region that generates output in output interval $i$.

Namely, first we determine activation hyperboxes, which define the input region corresponding to output interval $i$, by calculating the minimum and maximum values of the input data for each output interval.
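This first step — computing the activation hyperbox of each output interval as the componentwise minimum and maximum of the corresponding input data — is easy to sketch in Python (the data and helper names below are illustrative, not taken from [3]):

```python
# Sketch: activation hyperboxes as componentwise min/max of the inputs
# whose outputs fall into each output interval.

def output_interval(y, y_grid):
    """Index i such that y lies in (y_{i-1}, y_i]; interval 1 also contains y_0."""
    for i in range(1, len(y_grid)):
        if y <= y_grid[i]:
            return i
    return len(y_grid) - 1

def activation_hyperboxes(data, y_grid):
    boxes = {}
    for x, y in data:
        i = output_interval(y, y_grid)
        lo, hi = boxes.get(i, (list(x), list(x)))
        boxes[i] = ([min(a, b) for a, b in zip(lo, x)],
                    [max(a, b) for a, b in zip(hi, x)])
    return boxes

data = [((0.1, 0.4), 0.2), ((0.3, 0.2), 0.3), ((0.7, 0.8), 0.9)]
y_grid = [0.0, 0.5, 1.0]          # two output intervals on [M1, M2] = [0, 1]
print(activation_hyperboxes(data, y_grid))
```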
If the activation hyperbox for output interval $i$ overlaps with the activation hyperbox for output interval $j$, the overlapping region is defined as an inhibition hyperbox. If input data for output intervals $i$ and/or $j$ exist in the inhibition hyperbox, we define within this inhibition hyperbox one or two additional activation hyperboxes; moreover, if two activation hyperboxes are defined and they overlap, we further define an additional inhibition hyperbox. This process is repeated until the overlap is resolved. Fig. 3.45 illustrates this process schematically.
Figure 3.45 Recursive denition of activation and inhibition hyperboxes.
Based on an activation hyperbox, or based on an activation hyperbox and its corresponding inhibition hyperbox (if generated), a fuzzy rule is defined. Fig. 3.46 shows a fuzzy system architecture, including a fuzzy inference net which calculates degrees of membership for the output intervals, and a defuzzifier. For an input vector $x$, the degrees of membership for output intervals 1 to $n$ are calculated in the inference net and then the output $y$ is computed by the defuzzifier using the degrees of membership as inputs.
Figure 3.46 Architecture of Abe and Lan's fuzzy inference system.
The fuzzy inference net consists of at most four layers. The inference net is sparsely connected: different output intervals have different units for the second to fourth layers, and there is no connection among units of different output intervals.

The second layer units consist of fuzzy rules which calculate the degrees of membership for an input vector $x$.

The third layer units take the maximum values of the inputs from the second layer, which are the degrees of membership generated by resolving overlaps between two output intervals. The number of third layer units for output interval $i$ is determined by the number of output intervals whose input spaces overlap with that of output interval $i$. Therefore, if there is no overlap between the input space of output interval $i$ and that of any other output interval, the network for output interval $i$ is reduced to two layers.

The fourth layer unit for output interval $i$ takes the minimum value among the maximum values generated by the preceding layer, each of which is associated with an overlap between two output intervals. Therefore, if output interval $i$ overlaps with only one output interval, the network for output interval $i$ is reduced to three layers. Calculation of the minimum in the fourth layer resolves overlaps among more than two output intervals. Thus, in the process of generating hyperboxes, we only need to resolve an overlap between two output intervals at a time.
3.6 Neuro-fuzzy classifiers

Conventional approaches to pattern classification involve clustering training samples and associating clusters to given categories. The complexity and limitations of previous mechanisms are largely due to the lack of an effective way of defining the boundaries among clusters. This problem becomes more intractable when the number of features used for classification increases. On the contrary, fuzzy classification assumes the boundary between two neighboring classes to be a continuous, overlapping area within which an object has partial membership in each class. This viewpoint not only reflects the reality of many applications in which categories have fuzzy boundaries, but also provides a simple representation of the potentially complex partition of the feature space. In brief, we use fuzzy IF-THEN rules to describe a classifier.

Assume that $K$ patterns $x_p = (x_{p1}, \dots, x_{pn})$, $p = 1, \dots, K$, are given from two classes, where $x_p$ is an $n$-dimensional crisp vector. Typical fuzzy classification rules for $n = 2$ are like

If $x_{p1}$ is small and $x_{p2}$ is very large then $x_p = (x_{p1}, x_{p2})$ belongs to Class $C_1$

If $x_{p1}$ is large and $x_{p2}$ is very small then $x_p = (x_{p1}, x_{p2})$ belongs to Class $C_2$

where $x_{p1}$ and $x_{p2}$ are the features of pattern (or object) $p$, and small and very large are linguistic terms characterized by appropriate membership functions.
The firing level of a rule

$$\Re_i: \mbox{ If } x_{p1} \mbox{ is } A_i \mbox{ and } x_{p2} \mbox{ is } B_i \mbox{ then } x_p = (x_{p1}, x_{p2}) \mbox{ belongs to Class } C_i$$

with respect to a given object $x_p$ is interpreted as the degree of belonging of $x_p$ to $C_i$. This firing level, denoted by $\alpha_i$, is usually determined as

$$\alpha_i = T(A_i(x_{p1}), B_i(x_{p2})),$$

where $T$ is a triangular norm modeling the logical connective and.

As such, a fuzzy rule gives a meaningful expression of the qualitative aspects of human recognition. Based on the result of pattern matching between rule antecedents and input signals, a number of fuzzy rules are triggered in parallel with various values of firing strength. Individually invoked actions are considered together with a combination logic. Furthermore, we want the system to have the learning ability of updating and fine-tuning itself based on newly incoming information.
The task of fuzzy classification is to generate an appropriate fuzzy partition of the feature space. In this context the word appropriate means that the number of misclassified patterns is very small or zero. Then the rule base should be optimized by deleting rules which are not used.

Consider the two-class classification problem shown in Figure 3.47. Suppose that the fuzzy partition for each input feature consists of three linguistic terms {small, medium, big} which are represented by triangular membership functions.

Both initial fuzzy partitions in Figure 3.47 satisfy 0.5-completeness for each input variable, and a pattern $x_p$ is classified into Class $j$ if there exists at least one rule for Class $j$ in the rule base whose firing strength (defined by the minimum t-norm) with respect to $x_p$ is bigger than or equal to 0.5. So a rule is created by finding, for a given input pattern $x_p$, the combination of fuzzy sets where each yields the highest degree of membership for the respective input feature; if this combination is not identical to the antecedents of an already existing rule then a new rule is created (a small sketch of this procedure is given below).

However, it can occur that if the fuzzy partition is not set up correctly, or if the number of linguistic terms for the input features is not large enough, then some patterns will be misclassified.
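A minimal Python sketch of this grid-based rule generation (our own triangular partition and toy data; not the exact procedure of any referenced paper) is:

```python
# Sketch: generate classification rules on a fuzzy grid by assigning each
# training pattern to the antecedent combination with the highest membership.

def tri(u, center, width):
    return max(0.0, 1.0 - abs(u - center) / width)

terms = {"small": (0.0, 0.5), "medium": (0.5, 0.5), "big": (1.0, 0.5)}

def best_term(u):
    return max(terms, key=lambda name: tri(u, *terms[name]))

patterns = [((0.1, 0.9), "C1"), ((0.2, 0.5), "C1"), ((0.5, 0.2), "C2"), ((0.6, 0.6), "C2")]

rules = {}
for (x1, x2), label in patterns:
    antecedent = (best_term(x1), best_term(x2))
    rules.setdefault(antecedent, label)       # keep the first class seen for this cell

for (t1, t2), label in rules.items():
    print(f"If x1 is {t1} and x2 is {t2} then Class {label}")
```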
280
A1
A2 A3
B1
B2
B3
1
1
1
1/2
1/2
x1
x2
Figure 3.47 Initial fuzzy partition with 9 fuzzy subspaces and 2 misclassified patterns. Closed and open circles represent the given patterns from Class 1 and Class 2, respectively.
The following 9 rules can be generated from the initial fuzzy partitions shown in Figure 3.47:

$\Re_1$: If $x_1$ is small and $x_2$ is big then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_2$: If $x_1$ is small and $x_2$ is medium then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_3$: If $x_1$ is small and $x_2$ is small then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_4$: If $x_1$ is big and $x_2$ is small then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_5$: If $x_1$ is big and $x_2$ is big then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_6$: If $x_1$ is medium and $x_2$ is small then $x = (x_1, x_2)$ belongs to Class $C_2$

$\Re_7$: If $x_1$ is medium and $x_2$ is medium then $x = (x_1, x_2)$ belongs to Class $C_2$

$\Re_8$: If $x_1$ is medium and $x_2$ is big then $x = (x_1, x_2)$ belongs to Class $C_2$

$\Re_9$: If $x_1$ is big and $x_2$ is medium then $x = (x_1, x_2)$ belongs to Class $C_2$
where we have used the linguistic terms small for $A_1$ and $B_1$, medium for $A_2$ and $B_2$, and big for $A_3$ and $B_3$.
However, the same rate of error can be reached by noticing that if $x_1$ is medium then the pattern $(x_1, x_2)$ belongs to Class 2, independently of the value of $x_2$, i.e. the following 7 rules provide the same classification result:

$\Re_1$: If $x_1$ is small and $x_2$ is big then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_2$: If $x_1$ is small and $x_2$ is medium then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_3$: If $x_1$ is small and $x_2$ is small then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_4$: If $x_1$ is big and $x_2$ is small then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_5$: If $x_1$ is big and $x_2$ is big then $x = (x_1, x_2)$ belongs to Class $C_1$

$\Re_6$: If $x_1$ is medium then $x = (x_1, x_2)$ belongs to Class $C_2$

$\Re_7$: If $x_1$ is big and $x_2$ is medium then $x = (x_1, x_2)$ belongs to Class $C_2$
Figure 3.47a is an example of a fuzzy partition (3 linguistic terms for the first input feature and 5 for the second) which classifies the patterns correctly.

Figure 3.47a Appropriate fuzzy partition with 15 fuzzy subspaces.

As another example, let us consider a two-class classification problem [94]. In Figure 3.48 closed and open rectangles represent the given patterns from Class 1 and Class 2, respectively.
Figure 3.48 A two-dimensional classication problem.
If one tries to classify all the given patterns by fuzzy rules based on a simple fuzzy grid, a fine fuzzy partition and ($6 \times 6 = 36$) rules are required.
Figure 3.49 Fuzzy partition with 36 fuzzy subspaces.
However, it is easy to see that the patterns from Figure 3.48 may be correctly classified by the following five fuzzy IF-THEN rules:

$\Re_1$: If $x_1$ is very small then Class 1,

$\Re_2$: If $x_1$ is very large then Class 1,

$\Re_3$: If $x_2$ is very small then Class 1,

$\Re_4$: If $x_2$ is very large then Class 1,

$\Re_5$: If $x_1$ is not very small and $x_1$ is not very large and $x_2$ is not very small and $x_2$ is not very large then Class 2.
Sun and Jang [160] propose an adaptive-network-based fuzzy classifier to solve fuzzy classification problems.
Figure 3.49a demonstrates this classifier architecture with two input variables $x_1$ and $x_2$. The training data are categorized by two classes $C_1$ and $C_2$. Each input is represented by two linguistic terms; thus we have four rules.

Figure 3.49a An adaptive-network-based fuzzy classifier.
Layer 1 The output of the node is the degree to which the given input satisfies the linguistic label associated with this node. Usually, we choose bell-shaped membership functions

$$A_i(u) = \exp\left[-\frac{1}{2}\left(\frac{u - a_{i1}}{b_{i1}}\right)^2\right], \qquad
B_i(v) = \exp\left[-\frac{1}{2}\left(\frac{v - a_{i2}}{b_{i2}}\right)^2\right],$$

to represent the linguistic terms, where

$$\{a_{i1}, a_{i2}, b_{i1}, b_{i2}\}$$

is the parameter set. As the values of these parameters change, the bell-shaped functions vary accordingly, thus exhibiting various forms of membership functions on linguistic labels $A_i$ and $B_i$. In fact, any continuous function, such as trapezoidal and triangular-shaped membership functions, is also a qualified candidate for the node functions in this layer. The initial values of the parameters are set in such a way that the membership functions along each axis satisfy ε-completeness, normality and convexity. The parameters are then tuned with a descent-type method.
Layer 2 Each node generates a signal corresponding to the conjunctive combination of individual degrees of match. The output signal is the firing strength of a fuzzy rule with respect to an object to be categorized.

In most pattern classification and query-retrieval systems, the conjunction operator plays an important role and its interpretation is context-dependent. Since there does not exist a single operator that is suitable for all applications, we can use parametrized t-norms to cope with this dynamic property of classifier design. For example, we can use Hamacher's t-norm with parameter $\gamma \ge 0$

$$HAND_\gamma(a, b) = \frac{ab}{\gamma + (1 - \gamma)(a + b - ab)},$$

or Yager's t-norm with parameter $p > 0$

$$YAND_p(a, b) = 1 - \min\{1, [(1 - a)^p + (1 - b)^p]^{1/p}\}.$$

All nodes in this layer are labeled by $T$, because we can choose any t-norm for modeling the logical and operator. The nodes of this layer are called rule nodes.

Features can be combined in a compensatory way. For instance, we can use the generalized p-mean proposed by Dyckhoff and Pedrycz:

$$\left(\frac{x^p + y^p}{2}\right)^{1/p}, \quad p \ge 1.$$

We take the linear combination of the firing strengths of the rules at Layer 3 and apply a sigmoidal function at Layer 4 to calculate the degree of belonging to a certain class.
If we are given the training set

$$\{(x^k, y^k), \ k = 1, \dots, K\}$$

where $x^k$ refers to the $k$-th input pattern and

$$y^k = \begin{cases} (1, 0)^T & \mbox{if } x^k \mbox{ belongs to Class 1} \\ (0, 1)^T & \mbox{if } x^k \mbox{ belongs to Class 2} \end{cases}$$

then the parameters of the hybrid neural net (which determine the shape of the membership functions of the premises) can be learned by descent-type methods. This architecture and learning procedure is called ANFIS (adaptive-network-based fuzzy inference system) by Jang [97].

The error function for pattern $k$ can be defined by

$$E_k = \frac{1}{2}\Big[(o^k_1 - y^k_1)^2 + (o^k_2 - y^k_2)^2\Big]$$

where $y^k$ is the desired output and $o^k$ is the computed output of the hybrid neural net.
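A Python sketch of the forward pass of this four-layer classifier (bell-shaped antecedents, product t-norm at the rule nodes, a linear combination and a sigmoid at the output; all parameter values are illustrative, not taken from [160] or [97]) could look as follows:

```python
import math

# Sketch: forward pass of a 4-layer neuro-fuzzy classifier with two inputs,
# two linguistic terms per input (A1, A2 and B1, B2) and four rule nodes.

def bell(u, a, b):
    return math.exp(-0.5 * ((u - a) / b) ** 2)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

A = [(0.25, 0.2), (0.75, 0.2)]     # (a_i1, b_i1) for A1, A2
B = [(0.25, 0.2), (0.75, 0.2)]     # (a_i2, b_i2) for B1, B2
W = [[1.0, -1.0, -1.0, 1.0],       # Layer 3 weights: rule strengths -> Class 1 / Class 2
     [-1.0, 1.0, 1.0, -1.0]]

def classify(x1, x2):
    # Layer 1: degrees of match; Layer 2: firing strengths (product t-norm)
    a = [bell(x1, *p) for p in A]
    b = [bell(x2, *p) for p in B]
    alphas = [a[i] * b[j] for i in range(2) for j in range(2)]
    # Layers 3-4: linear combination of firing strengths, then sigmoid per class
    return [sigmoid(sum(w * r for w, r in zip(row, alphas))) for row in W]

print(classify(0.2, 0.3))   # (o_1, o_2): degrees of belonging to Class 1 and Class 2
```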
3.7 FULLINS

Sugeno and Park [158] proposed a framework of learning based on indirect linguistic instruction, in which the performance of a system to be learned is improved by the evaluation of rules.

Figure 3.50 The framework of FULLINS.

FULLINS (Fuzzy learning based on linguistic instruction) is a mechanism for learning through interpreting the meaning of language and its concepts. It has the following components:

Task Performance Functional Component
Performs tasks achieving a given goal. Its basic knowledge for performing the tasks is modified through the Self-Regulating Component when a supervisor's linguistic instruction is given.
Dialogue Functional Component
An interface to interpret linguistic instructions through dialogue with a supervisor.

Explanation Functional Component
Explains to a supervisor the elements of a process in performing the tasks by the Task Performance Component.

Background Knowledge Functional Component
Interprets instructions in the Interpretation Component and modifies the basic performance knowledge in the Self-Regulating Component.

Interpretation Functional Component
Interprets instructions by the meaning elements using the Background Knowledge and the Dialogue Components. An instruction is assumed to have some meaning elements, and a meaning element is associated with values called trends.

Self-Regulating Functional Component
The basic performance knowledge is regulated using the evaluation rules constructed from the searched meaning elements and the background knowledge.
We now describe the method of linguistic instructions. A direct instruction is given in the form of an entire method or of individual IF-THEN rules. An individual IF-THEN rule is a rule (basic performance knowledge) to perform a given goal. An entire method is a set of rules that work together to satisfy a goal. It is difficult for a supervisor to give direct instructions, because the instructions must be based on the precise structure and components of the rules needed to reach a given goal. An indirect instruction is not a part of the basic performance knowledge prepared for the system. An indirect instruction is not given in any specific format, and the contents of the instruction have macroscopic properties. In FULLINS indirect instructions are interpreted by meaning elements and their trends. For example, an instructor in a driving school does not give minute instructions about the steering angle of the wheel, the degree of stepping on the accelerator, etc. when he teaches a student higher level driving techniques such as turning around a curve, safe driving, etc.

After explaining and demonstrating some driving techniques, the instructor gives the student a macroscopic indirect linguistic instruction based on his judgement and evaluation of the student's performance. E.g., for turning around a curve, the instructor judges the performance of the student's driving technique and then gives an instruction like: "If you approach near the turning point, turn slowly, turn smoothly," etc.
If an instruction is given, the student interprets it with his internal knowledge through dialogue with the instructor: "Turn around the curve slowly", "step on the brake slightly" or "step on the accelerator weakly".

Indirect instructions $L_i$ have three components:

$$L_i = [AP][LH_i][AW]$$

where LH stands for Linguistic Hedges, AW for Atomic Words, and AP for Auxiliary Phrases. An indirect instruction in a driving school is:

$$L_i = [\mbox{Turn around a curve (AP)}][\mbox{more (LH)}][\mbox{slowly (AW)}]$$

where LH and AP are prepared in the system. AW is interpreted as a combination of meaning elements and their trends. The meaning of an instruction is restricted by the attached linguistic hedges.

$$L_i = [\mbox{If you approach near the turning point, turn slowly, turn smoothly}]$$
Then the following dialogue can take place between the instructor and the student:

SYS: Do you want me to press the accelerator weaker than before?
SV: RIGHT
SYS: Do you want me to turn the steering wheel less than before?
SV: RIGHT
The supervisor's instruction is interpreted through two questions, because there exists a causal relation between brake and accelerator in the above assumption:

$L_i$ = [press the brake slightly ($m_1$)] and [press on the accelerator slowly ($m_2$)] and [turn the steering small ($m_3$)].

Instructions are entered by the supervisor's input-key in FULLINS and by the instructor's voice in the driving school.

Definition 3.7.1 Meaning elements are words or phrases to interpret indirect linguistic instructions.

In case of the driving school, the meaning elements are

[degree of strength pressing on brake]
[degree of strength pressing on accelerator]
[degree of steering].

The three trends of [degree of strength pressing on accelerator] are:

[Press on the accelerator strongly] (Positive trend (+)),
[Press on the accelerator weakly] (Negative trend (−)),
[Accelerator has nothing to do with the instruction] (No trend (0)).
Figure 3.51 Three trends of the meaning element $m_1$.
Trends of meaning elements can be defined for an element $m_1$ as follows:

$m_1(+)$: $m_1$ contributes to a meaning element of an instruction with trend (+)

$m_1(-)$: $m_1$ contributes to a meaning element of an instruction with trend (−)

$m_1(0)$: $m_1$ does not act on a meaning element of an instruction
A set of meaning elements consists of a set of dependent and independent meaning elements. If the meaning element and its trend [press on brake slightly] is selected, then [press on accelerator weakly] is also selected, without having any dialogue with the instructor. The causal net is

$$m_1(+) \to m_2(-).$$

For example, if we have

$$m_1(+) \to m_3(+) \to m_4(-), \qquad m_5(+) \to m_6(+)$$

then the corresponding causal net is shown in the Figure.
Figure 3.52 Causal net.
Meaning elements are searched through dialogue between the system and the supervisor, using the Dialogue Meaning Elements Set and the Linguistic Instruction Knowledge Base. The Linguistic Instruction Knowledge Base consists of two memory modules: the Atomic Words Memory and the Linguistic Hedges Memory. The Atomic Words Memory is a module in which some atomic words are memorized. Some linguistic hedges are memorized, each with a weight, in the Linguistic Hedges Memory:

[(non, 0), (slightly, 0.2), (rather, 0.4), (more, 0.6), (pretty, 0.8), (very, 1.0)].
The LH entered together with an AW is matched with each linguistic hedge prepared in the Linguistic Hedges Memory, and the weight allocated to that hedge is selected. The meaning of the instruction is restricted by LH: the consequent parts of the evaluation rule constructed by the searched meaning elements are modified by the weight of LH. The interpretation of the linguistic instruction

$$L_i = (LH_i)(m_1(+))$$

is the following:
$$L_i = [\mbox{Drive (AP}_1)][\mbox{more (LH}_1)][\mbox{fast (AW}_1)][\mbox{than before (AP}_2)]$$

$AW_1 = m_1(+)$: Press on the accelerator strongly
The amount of information about a learning object increases as the number of the supervisor's instructions increases. If a supervisor's instruction is given, the Atomic Words Memory is checked to see whether the $AW_i$ of the instruction exists in it. If the same one exists in the Atomic Words Memory, then $AW_i$ is interpreted by the matched meaning element without searching for the meaning element through dialogue.

The evaluation rule is constructed by a combination of the meaning element and its trend using Constructing Evaluation Rules. The meaning of the linguistic instruction is restricted by modifying the consequent part of the evaluation rule by

$$H = W_{LH_i} \times R$$

where $R$ is the maximum value for shifting the consequent parameter by $LH_i$. The figure shows an evaluation rule, where MOD is the modifying value of the parameter for the consequent part in the basic performance rule. The modifying value by the linguistic hedge [more] is

$$H = 0.6 \times R.$$
The membership functions of the consequent part [fast] are zo, nm and nb, and the ultimate membership functions of the consequent part are zo*, nm* and nb*.
Figure 3.53 Modifying the consequent parts of the rule.
Sugeno and Park illustrate the capabilities of FULLINS by controlling an unmanned helicopter. The control rules are constructed using the knowledge acquired from a pilot's experience and knowledge. The objective flight modes are the Objective Line Following Flight System and the Figure Eight Flight System.

Figure 3.54 Objective Line Following and Figure Eight Flight Systems.

The performance of the figure eight flight is improved by learning from the supervisor's instructions. The measure of performance is given by the following goals: following the objective line $P_1P_2$, adjusting the diameter of the right turning circle, and making both diameters small or large simultaneously.
3.8 Applications of fuzzy neural systems

The first applications of fuzzy neural networks to consumer products appeared on the (Japanese and Korean) market in 1991. Some examples include air conditioners, electric carpets, electric fans, electric thermo-pots, desk-type electric heaters, forced-flue kerosene fan heaters, kerosene fan heaters, microwave ovens, refrigerators, rice cookers, vacuum cleaners, washing machines, clothes driers, photocopying machines, and word processors.

Neural networks are used to design membership functions of fuzzy systems that are employed as decision-making systems for controlling equipment. Although fuzzy logic can encode expert knowledge directly using rules with linguistic labels, it usually takes a lot of time to design and tune the membership functions which quantitatively define these linguistic labels. Neural network learning techniques can automate this process and substantially reduce development time and cost while improving performance. The idea of using neural networks to design membership functions was proposed by Takagi and Hayashi [162]. This was followed by applications of the gradient descent method to the tuning of parameters that define the shape and position of membership functions. This system tuning is equivalent to learning in a feed-forward network. The method has been widely used to design triangular, Gaussian, sigmoidal, and bell-shaped membership functions. Simple function shapes, such as triangles, are used for most actual products. The centers and widths of the membership functions are tuned by a gradient method, reducing the error between the actual fuzzy system output and the desired output. Figure 3.55 shows an example of this type of neural network usage.
[Figure: the neural network, used as a development tool, tunes a fuzzy system that maps temperature, humidity, toner density, image density of solid black, image density of background and exposed image density to exposure lamp, grid voltage, bias voltage and toner density controls.]
Figure 3.55 Photocopier machine (Matsushita Electric).
Nikko Securities uses a neural network to improve the rating of convertible bonds [140]. The system learns from the reactions of an expert rating instructor, which can change according to the actual economic situation. It analyzes the results and then uses them to give advice. Their system consists of a seven-layer neural network. The neural network's internal connections and synaptic weights are initialized using the knowledge in the fuzzy logic system; it then learns by backpropagation and changes its symbolic representation accordingly. This representation is then returned to a fuzzy logic representation, and the system has acquired knowledge. The system can then give advice based on the knowledge it has acquired. Such a system is called a neural fuzzy system.
Figure 3.56 Neural fuzzy system.
1 Translate the expert's knowledge into a symbolic representation.
2 Initialize the neural net by the symbolic representation.
3 Decrease errors between the actual system and the neural net by learning.
4 Translate the distributed representation based upon the structure of the neural net.
5 Acquire knowledge from the modified symbolic representation.
This seven-layer system had a ratio of correct answers of 96%. A similar learning system with a conventional three-layered neural network had a ratio of correct answers of 84%, and it was difficult to understand its internal representation. The system also learned 40 times faster than the three-layer system. This comparison is evidence of the effectiveness of neural fuzzy systems.
Another way to combine fuzzy systems and neural networks is to connect them up serially. In the Sanyo electric fan [151], the fan must rotate toward the user, which requires calculating the direction of the remote controller. Three infrared sensors in the fan's body detect the strengths of the signal from a remote controller. First, the distance to the remote is calculated by a fuzzy system. Then, this distance value and the ratios of the sensor outputs are used by a neural network to compute the required direction. The latter calculation is done by a neural net because neither mathematical models nor fuzzy reasoning proved good at carrying out this function. The final product has an error of 4° as opposed to the 10° error of statistical regression methods [138].
Sanyo uses neural networks for adjusting auto-exposure in their photocopying machine. Moreover, the toner control of this machine is controlled by fuzzy reasoning. Ricoh Co. has applied two neural networks to control electrostatic latent image conditions at the necessary potential, as well as a neural network to figure the optimum developing bias voltage from image density, temperature, humidity, and copy volume [127].

To make the control smoother, more sensitive, and more accurate one has to incorporate more sensor data. This becomes more complicated as the input space increases in dimension. In this approach, a neural net handles the larger set of sensor inputs and corrects the output of a fuzzy system (which was designed earlier for the old set of inputs). A complete redesign of the fuzzy system is thus avoided. This leads to substantial savings in development time (and cost), since redesigning the membership functions, which becomes more difficult as the number of inputs increases, is obviated.
Figure 3.57 shows the schematic of a Hitachi washing machine. The fuzzy system shown in the upper part was part of the first model. Later, an improved model incorporated extra information by using a correcting neural net, as shown. The additional input (fed only to the net) is electrical conductivity, which is used to measure the opacity/transparency of the water. Toshiba produces washing machines which have a similar control system [136]. Sanyo uses a similar approach in its washing machine, although some of the inputs/outputs are different.
[Figure: the fuzzy system maps clothes mass and clothes quality to water flow speed, washing time, rinsing time and spinning time; the neural network uses clothes mass, clothes quality and electrical conductivity to produce a correcting value.]
Figure 3.57 Schematic of a Hitachi washing machine [77].
ANFIS (Adaptive Neural Fuzzy Inference System) [97] is a great example of an architecture for tuning fuzzy system parameters from input/output pairs of data. The fuzzy inference process is implemented as a generalized neural network, which is then tuned by gradient descent techniques. It is capable of tuning antecedent parameters as well as consequent parameters of fuzzy rules which use a softened trapezoidal membership function. It has been applied to a variety of problems, including chaotic time series prediction and the IRIS cluster learning problem.

This relationship, which is not analytically known, has been defined in terms of a fuzzy rule set extracted from the data points. The rule extraction was done using ANFIS, a fuzzy neural system that combines the advantages of fuzzy systems and neural networks. As a fuzzy system, it does not require a large data set and it provides transparency, smoothness, and representation of prior knowledge. As a neural system, it provides parametric adaptability.
Bibliography
[1] S. Abe and M.-S. Lan, A classier using fuzzy rules extracted di-
rectly from numerical data, in: Proceedings of IEEE Internat. Conf.
on Fuzzy Systems, San Francisco,1993 1191-1198.
[2] S. Abe and M.-S. Lan, Fuzzy rules extraction directly from numer-
ical data for function approximation, IEEE Trans. Syst., Man, and
Cybernetics, 25(1995) 119-129.
[3] S. Abe and M.-S. Lan, A method for fuzzy rule extraction directly
from numerical data and its application to pattern classication,
IEEE Transactions on Fuzzy Systems, 3(1995) 18-28.
[4] F. Aminzadeh and M. Jamshidi eds., Fuzzy Sets, Neural Networks,
and Distributed Articial Intelligence (Prentice-Hall, Englewood
Clis, 1994).
[5] P.E. An, S. Aslam-Mir, M. Brown, and C.J. Harris, A reinforcement
learning approach to on-line optimal control, in: Proc. of IEEE
International Conference on Neural Networks, Orlando, Fl, 1994
2465-2471.
[6] K. Asakawa and H. Takagi, Neural Networks in Japan Communi-
cations of ACM, 37(1994) 106-112.
[7] K.. Asai, M. Sugeno and T. Terano, Applied Fuzzy Systems (Aca-
demic Press, New York, 1994).
[8] A. Bastian, Handling the nonlinearity of a fuzzy logic controller
at the transition between rules, Fuzzy Sets and Systems, 71(1995)
369-387.
[9] H.R. Berenji, A reinforcement learning-based architecture for fuzzy
logic control, Int. Journal Approximate Reasoning, 6(1992) 267-
292.
[10] H.R. Berenji and P. Khedkar, Learning and tuning fuzzy logic con-
trollers through reinforcements, IEEE Transactions on Neural Net-
works, 3(1992) 724-740.
[11] H.R. Berenji, R.N. Lea, Y. Jani, P. Khedkar, A.Malkani and
J. Hoblit, Space shuttle attitude control by reinforcement learning
and fuzzy logic, in: Proc. IEEE Internat. Conf. on Fuzzy Systems,
San Francisco,1993 1396-1401.
[12] H.R. Berenji, Fuzzy systems that can learn, in: J.M. Zurada,
R.J. Marks and C.J. Robinson eds., Computational Intelligence:
Imitating Life (IEEE Press, New York, 1994) 23-30.
[13] J.C. Bezdek, E.C. Tsao and N.K. Pal, Fuzzy Kohonen clustering
networks, in: Proc. IEEE Int. Conference on Fuzzy Systems 1992,
San Diego, 1992 1035-1043.
[14] J.C. Bezdek and S.K. Pal eds., Fuzzy Models for Pattern Recogni-
tion (IEEE Press, New York, 1992).
[15] S.A. Billings, H.B. Jamaluddin, and S. Chen. Properties of neural
networks with application to modelling nonlinear systems, Int. J.
Control, 55(1992) 193-224.
[16] A. Blanco, M. Delgado and I. Requena, Improved fuzzy neural
networks for solving relational equations, Fuzzy Sets and Systems,
72(1995) 311-322.
[17] M. Brown and C.J. Harris, A nonlinear adaptive controller: A com-
parison between fuzzy logic control and neurocontrol. IMA J. Math.
Control and Info., 8(1991) 239265.
[18] M. Brown and C. Harris, Neurofuzzy Adaptive Modeling and Con-
trol (Prentice-Hall, Englewood Clis, 1994).
[19] J.J. Buckley, Theory of the fuzzy controller: An introduction, Fuzzy
Sets and Systems, 51(1992) 249-258.
[20] J.J. Buckley and Y. Hayashi, Fuzzy neural nets and applications,
Fuzzy Systems and AI, 1(1992) 11-41.
[21] J.J. Buckley, Approximations between nets, controllers, expert sys-
tems and processes, in: Proceedings of 2nd Internat. Conf. on Fuzzy
Logic and Neural Networks, Iizuka, Japan, 1992 89-90.
[22] J.J. Buckley, Y. Hayashi and E. Czogala, On the equivalence of neu-
ral nets and fuzzy expert systems, Fuzzy Sets and Systems, 53(1993)
129-134.
[23] J.J.Buckley, Sugeno type controllers are universal controllers, Fuzzy
Sets and Systems, 53(1993) 299-304.
[24] J.J. Buckley and Y. Hayashi, Numerical relationships between neu-
ral networks, continuous functions, and fuzzy systems, Fuzzy Sets
and Systems, 60(1993) 1-8.
[25] J.J. Buckley and Y. Hayashi, Hybrid neural nets can be fuzzy con-
trollers and fuzzy expert systems, Fuzzy Sets and Systems, 60(1993)
135-142.
[26] J.J. Buckley and E. Czogala, Fuzzy models, fuzzy controllers and
neural nets, Arch. Theoret. Appl. Comput. Sci., 5(1993) 149-165.
[27] J.J. Buckley and Y. Hayashi, Can fuzzy neural nets approximate
continuous fuzzy functions? Fuzzy Sets and Systems, 61(1993) 43-
51.
[28] J.J. Buckley and Y. Hayashi, Fuzzy neural networks, in:
L.A. Zadeh and R.R. Yager eds., Fuzzy Sets, Neural Networks and
Soft Computing (Van Nostrand Reinhold, New York, 1994) 233-249.
[29] J.J. Buckley and Y. Hayashi, Fuzzy neural networks: A survey,
Fuzzy Sets and Systems, 66(1994) 1-13.
[30] J.J. Buckley and Y. Hayashi, Neural nets for fuzzy systems, Fuzzy
Sets and Systems, 71(1995) 265-276.
[31] G.A. Carpenter et al., Fuzzy ARTMAP: A neural network architec-
ture for incremental supervised learning of analog multidimensional
maps, IEEE Transactions on Neural Networks, 3(1992) 698-713.
[32] S. Chen, S.A. Billings, and P.M. Grant, Recursive hybrid algorithm
for non-linear system identication using radial basis function net-
works, Int. J. Control, 55(1992) 1051-1070.
[33] F.C. Chen and M.H. Lin, On the learning and convergence of radial
basis networks, in: Proc. IEEE Int. Conf. Neural Networks, San
Francisco, 1993 983-988.
[34] E. Cox, Adaptive fuzzy systems, IEEE Spectrum, February 1993,
27-31.
[35] E. Cox, The Fuzzy system Handbook. A Practitioners Guide to
Building, Using, and Maintaining Fuzzy Systems (Academic Press,
New York, 1994).
[36] D. Dumitrescu, Fuzzy training procedures I, Fuzzy Sets and Sys-
tems, 56(1993) 155-169.
[37] P. Eklund, H. Virtanen and T. Riissanen, On the fuzzy logic nature
of neural nets, in: Proceedings of Neuro-Nimes, 1991 293-300.
[38] P. Eklund and F. Klawonn, A Formal Framework for Fuzzy Logic
Based Diagnosis, in: R.Lowen and M.Roubens eds., Proccedings of
the Fourth IFSA Congress, vol. Mathematics, Brussels, 1991, 58-61.
[39] P. Eklund, M. Fogström and J. Forsström, A Generic Neuro-Fuzzy
Tool for Developing Medical Decision Support, in: P. Eklund ed.,
Proceedings MEPP92, International Seminar on Fuzzy Control through
Neural Interpretations of Fuzzy Sets (Åbo Akademis tryckeri, Åbo, 1992) 1-27.
[40] P. Eklund, F. Klawonn, and D. Nauck, Distributing errors in neural
fuzzy control. in: Proc. 2nd Internat Conf. on Fuzzy Logic and
Neural Networks, Iizuka, Japan, 1992 1139-1142.
[41] P. Eklund and F. Klawonn, Neural fuzzy logic programming, IEEE
transactions on Neural Networks 3(1992) 815-818.
[42] P. Eklund, Neural Logic: A Basis for Second Generation Fuzzy
Controllers, in: U. Höhle and E.P. Klement eds., Proceedings of 14th
Linz Seminar on Fuzzy Set Theory, Johannes Kepler Universität,
1992 19-23.
[43] P. Eklund and R. Fuller, A neuro-fuzzy approach to medical di-
agnostics, in:Proceedings of EUFIT93 Conference, September 7-
10, 1993, Aachen, Germany (Verlag der Augustinus Buchhandlung,
Aachen, 1993) 810-813.
[44] P. Eklund, J. Forsström, A. Holm, M. Nyström, and G. Selen,
Rule generation as an alternative to knowledge acquisition: A sys-
tems architecture for medical informatics, Fuzzy Sets and Systems,
66(1994) 195-205.
[45] P. Eklund, Network size versus preprocessing, in: R.R. Yager and
L.A. Zadeh eds., Fuzzy Sets, Neural Networks and Soft Computing
(Van Nostrand, New York, 1994) 250-264.
[46] P. Eklund, A generic system for developing medical decision sup-
port, Fuzzy Systems A.I. Rep. Letters, 3(1994) 71-78.
[47] P. Eklund and J. Forsström, Computational intelligence for labo-
ratory information systems, Scand. J. Clin. Lab. Invest., 55 Suppl.
222 (1995) 75-82.
[48] A.O. Esogbue, A fuzzy adaptive controller using reinforcement
learning neural networks, in: Proc. IEEE Internat. Conf. on Fuzzy
Systems, San Francisco, 1993 178-183.
[49] J. Forsström, P. Eklund, H. Virtanen, J. Waxlax and J. Lähdevirta,
DiagaiD: A Connectionists Approach to Determine the Information
Value of Clinical Data, Articial Intelligence in Medicine, 3 (1991)
193-201.
[50] T. Fukuda and T. Shibata, Fuzzy-neuro-GA based intelligent
robotics, in: J.M. Zurada, R.J. Marks and C.J. Robinson eds.,
Computational Intelligence: Imitating Life (IEEE Press, New York,
1994) 352-363.
[51] M. Furukawa and T. Yamakawa, The design algorithms of member-
ship functions for a fuzzy neuron, Fuzzy Sets and Systems, 71(1995)
329-343.
[52] S. Gallant, Neural Network Learning and Expert Systems, MIT
Press, Cambridge, Mass., USA, 1993
[53] A. Geyer-Schulz, Fuzzy rule based Expert Systems and Genetic
Learning (Physica-Verlag, Berlin, 1995).
[54] S. Giove, M. Nordio and A. Zorat, An Adaptive Fuzzy Control for
Automatic Dialysis, in: E.P. Klement and W. Slany eds., Fuzzy
Logic in Articial Intelligence, (Springer-Verlag, Berlin 1993) 146-
156.
[55] P.Y. Glorennec, Learning algorithms for neuro-fuzzy networks, in:
A. Kandel and G. Langholz eds., Fuzzy Control Systems (CRC
Press, New York, 1994) 4-18.
[56] A. Gonzalez, R. Perez and J.L. Verdegay, Learning the structure of
a fuzzy rule: A genetic approach, Fuzzy Systems A.I.Rep. Letters,
3(1994) 57-70.
[57] S. Goonatilake and S. Khebbal eds., Intelligent Hybrid Systems,
John Wiley and Sons, New York 1995.
[58] M.M. Gupta and J. Qi, On fuzzy neuron models, in: Proceedings of
International Joint Conference on Neural Networks, Seattle, 1991
431-436.
[59] M.M. Gupta and J. Qi, On fuzzy neuron models, in: L.A. Zadeh
and J. Kacprzyk eds., Fuzzy Logic for the Management of Uncer-
tainty (J. Wiley, New York, 1992) 479-491.
[60] M.M. Gupta, Fuzzy logic and neural networks, Proc. 2nd Internat.
Conf. on Fuzzy logic and Neural Networks, Iizuka, Japan, 1992
157-160.
[61] M.M. Gupta and M.B. Gorzalczany, Fuzzy neuro-computation
technique and its application to modeling and control, in: Proc.
IEEE Internat. Conf on Fuzzy Systems, San Diego, 1992 1271-1274.
[62] M.M. Gupta and D.H. Rao, On the principles of fuzzy neural net-
works, Fuzzy Sets and Systems, 59(1993) 271-279.
[63] S.K. Halgamuge and M. Glesner, Neural networks in designing
fuzzy systems for real world applications, Fuzzy Sets and Systems,
65(1994) 1-12.
[64] C.J. Harris, C.G. Moore, and M. Brown, Intelligent control, aspects
of fuzzy logic and neural networks (World Scientic Press, 1993).
[65] C.J. Harris ed., Advances in Intelligent Control (Taylor and Francis,
London, 1994).
[66] Y. Hayashi, J.J. Buckley and E. Czogala, Systems engineering ap-
plications of fuzzy neural networks, Journal of Systems Engineer-
ing, 2(1992) 232-236.
[67] Y. Hayashi, J.J. Buckley and E. Czogala, Fuzzy neural controller,
in: Proc. IEEE Internat. Conf on Fuzzy Systems, San Diego, 1992
197-202.
[68] Y. Hayashi, H. Nomura, H. Yamasaki and N. Wakami, Construc-
tion of fuzzy inference rules by NFD and NDFL, International Jour-
nal of Approximate Reasoning, 6(1992) 241-266.
[69] Y. Hayashi, Neural expert system using fuzzy teaching input, in:
Proc. IEEE Internat. Conf on Fuzzy Systems, San Diego, 1992 485-
491.
[70] Y. Hayashi, J.J. Buckley and E. Czogala, Fuzzy neural network
with fuzzy signals and weight, International Journal of Intelligent
Systems, 8(1992) 527-537.
[71] Y. Hayashi, J.J. Buckley and E. Czogala, Direct fuzzication of
neural network and fuzzied delta rule, Proc. 2nd Internat. Conf.
on Fuzzy logic and Neural Networks, Iizuka, Japan, 1992 73-76.
[72] Y. Hayashi and J.J. Buckley, Direct fuzzication of neural net-
works, in: Proceedings of 1st Asian Fuzzy Systems Symposium,
Singapore, 1993 560-567.
[73] Y. Hayashi and J.J. Buckley, Approximations between fuzzy expert
systems and neural networks, International Journal of Approximate
Reasoning, 10(1994) 63-73.
[74] K. Hirota and W. Pedrycz, Knowledge-based networks in classi-
cation problems, Fuzzy Sets and Systems, 51(1992) 1-27.
[75] K. Hirota and W. Pedrycz, OR/AND neuron in modeling fuzzy set
connectives, IEEE Transactions on Fuzzy Systems, 2(1994) 151-161.
[76] K. Hirota and W. Pedrycz, Fuzzy modelling environment for de-
signing fuzzy controllers, Fuzzy Sets and Systems, 70(1995) 287-
301.
[77] Hitachi, Neuro and fuzzy logic automatic washing machine and
fuzzy logic drier, Hitachi News Rel., No. 91-024 (Feb. 26, 1991).
Hitachi, 1991 (in Japanese).
[78] S. Horikowa, T. Furuhashi and Y. Uchikawa, On fuzzy modeling
using fuzzy neural networks with the backpropagation algorithm,
IEEE Transactions on Neural Networks, 3(1992).
[79] S. Horikowa, T. Furuhashi and Y. Uchikawa, On identication of
structures in premises of a fuzzy model using a fuzzy neural net-
work, in: Proc. IEEE International Conference on Fuzzy Systems,
San Francisco, 1993 661-666.
[80] K.J. Hunt, D. Sbarbaro-Hofer, R. Zbikowski and P.J. Gawthrop,
Neural networks for control systems - a survey, Automatica,
28(1992) 1083-1112.
[81] H. Ichihashi, Iterative fuzzy modelling and a hierarchical network,
in: R.Lowen and M.Roubens eds., Proceedings of the Fourth IFSA
Congress, Vol. Engineering, Brussels, 1991 49-52.
[82] H. Ishibuchi, R. Fujioka and H. Tanaka, An architecture of neu-
ral networks for input vectors of fuzzy numbers, in: Proc. IEEE
Internat. Conf on Fuzzy Systems, San Diego, 1992 1293-1300.
[83] H. Ishibuchi, K. Nozaki and H. Tanaka, Distributed representation
of fuzzy rules and its application to pattern classication, Fuzzy
Sets and Systems, 52(1992) 21-32.
[84] H. Ishibuchi and H. Tanaka, Approximate pattern classication
using neural networks, in: R.Lowen and M.Roubens eds., Fuzzy
Logic: State of the Art (Kluwer, Dordrecht, 1993) 225-236.
[85] H. Ishibuchi, K. Nozaki and H. Tanaka, Ecient fuzzy partition of
pattern space for classication problems, Fuzzy Sets and Systems,
59(1993) 295-304.
[86] H. Ishibuchi, R. Fujioka and H. Tanaka, Neural networks that learn
from fuzzy IF-THEN rules, IEEE Transactions on Fuzzy Systems,
1(1993) 85-97.
[87] H. Ishibuchi, H. Okada and H. Tanaka, Fuzzy neural networks with
fuzzy weights and fuzzy biases, in: Proc. IEEE Internat. Confer-
ence on Neural Networks, San Francisco, 1993 447-452.
[88] H. Ishibuchi, K. Kwon and H. Tanaka, Implementation of fuzzy IF-
THEN rules by fuzzy neural networks with fuzzy weights, in: Pro-
ceedings of EUFIT93 Conference, September 7-10, 1993 Aachen,
Germany, Verlag der Augustinus Buchhandlung, Aachen, 1993 209-
215.
[89] H. Ishibuchi, K. Kwon and H. Tanaka, Learning of fuzzy neural
networks from fuzzy inputs and fuzzy targets, in: Proc. 5th IFSA
World Congress, Seoul, Korea, 1993 147-150.
[90] H. Ishibuchi, K. Nozaki and H. Tanaka, Empirical study on learning
in fuzzy systems, in: Proc. 2nd IEEE Internat. Conference on Fuzzy
Systems, San Francisco, 1993 606-611.
[91] H. Ishibuchi, K. Nozaki, N. Yamamato and H. Tanaka, Genetic op-
erations for rule selection in fuzzy classication systems, in: Proc.
5th IFSA World Congress, Seoul, Korea, 1993 15-18.
[92] H. Ishibuchi, K. Nozaki, N. Yamamato, Selecting fuzzy rules by
genetic algorithm for classication problems, in: Proc. 2nd IEEE
Internat. Conference on Fuzzy Systems, San Francisco, 1993 1119-
1124.
[93] H. Ishibuchi, H. Okada and H. Tanaka, Interpolation of fuzzy IF-
THEN rules by neural networks, International Journal of Approx-
imate Reasoning, 10(1994) 3-27.
[94] H. Ishibuchi, K. Nozaki, N. Yamamato and H. Tanaka, Construc-
tion of fuzzy classication systems with rectangular fuzzy rules us-
ing genetic algorithms, Fuzzy Sets and Systems, 65(1994) 237-253.
[95] H. Ishibuchi, K. Kwon and H. Tanaka, A learning algorithm of
fuzzy neural networks with triangular fuzzy weights, Fuzzy Sets
and Systems, 71(1995) 277-293.
[96] H. Ishigami, T. Fukuda, T. Shibita and F. Arai, Structure opti-
mization of fuzzy neural network by genetic algorithm, Fuzzy Sets
and Systems, 71(1995) 257-264.
[97] J.-S. Roger Jang, ANFIS: Adaptive-network-based fuzzy inference
system, IEEE Trans. Syst., Man, and Cybernetics, 23(1993) 665-
685.
[98] J.M. Keller and D. Hunt, Incorporating fuzzy membership func-
tions into the perceptron algorithm, IEEE Transactions on Pat-
tern. Anal. Mach. Intell., 7(1985) 693-699.
[99] J.M. Keller, R.R. Yager and H.Tahani, Neural network implemen-
tation of fuzzy logic, Fuzzy Sets and Systems, 45(1992) 1-12.
[100] J.M. Keller and H.Tahani, Backpropagation neural networks for
fuzzy logic, Information Sciences, 6(1992) 205-221.
[101] J.M. Keller and H.Tahani, Implementation of conjunctive and dis-
junctive fuzzy logic rules with neural networks, International Jour-
nal of Approximate Reasoning, 6(1992) 221-240.
[102] J.M. Keller, R. Krishnapuram, Z.H. Chen and O. Nasraoui, Fuzzy
additive hybrid operators for network-based decision making, In-
ternational Journal of Intelligent Systems 9(1994) 1001-1023.
[103] E. Khan and P. Venkatapuram, Neufuz: Neural network based
fuzzy logic design algorithms, in: Proceedings of IEEE Interna-
tional Conf. on Fuzzy Systems, San Francisco, 1993 647-654.
[104] P.S. Khedkar, Learning as adaptive interpolation in neural fuzzy
systems, in: J.M. Zurada, R.J. Marks and C.J. Robinson eds.,
Computational Intelligence: Imitating Life (IEEE Press, New York,
1994) 31-42.
[105] Y.S. Kim and S. Mitra, An adaptive integrated fuzzy clustering
model for pattern recognition, Fuzzy Sets and Systems, 65(1994)
297-310.
[106] S.G. Kong and B. Kosko, Adaptive fuzzy systems for backing up a
truck-and-trailer, IEEE Transactions on Neural Networks, 3(1992)
211-223.
[107] B. Kosko, Neural Networks and Fuzzy Systems (Prentice-Hall, En-
glewood Clis, 1992).
[108] R. Krishnapuram and J. Lee, Fuzzy-set-based hierarchical networks
for information fusion in computer vision, Neural Networks, 5(1992)
335-350.
[109] R. Kruse, J. Gebhardt and R. Palm eds., Fuzzy Systems in Com-
puter Science (Vieweg, Braunschweig, 1994).
[110] D.C. Kuncicky, A fuzzy interpretation of neural networks, in: Pro-
ceedings of 3rd IFSA Congress, 1989 113-116.
[111] H.K. Kwan and Y.Cai, A fuzzy neural network and its applica-
tion to pattern recognition, IEEE Transactions on Fuzzy Systems,
3(1994) 185-193.
[112] S.C. Lee and E.T. Lee, Fuzzy sets and neural networks, Journal of
Cybernetics 4(1974) 83-103.
[113] S.C. Lee and E.T. Lee, Fuzzy neural networks, Math. Biosci.
23(1975) 151-177.
[114] H.-M. Lee and W.-T. Wang, A neural network architecture for
classication of fuzzy inputs, Fuzzy Sets and Systems, 63(1994)
159-173.
[115] M. Lee, S.Y. Lee and C.H. Park, Neuro-fuzzy identiers and con-
trollers, J. of Intelligent Fuzzy Systems, 6(1994) 1-14.
[116] K.-M. Lee, D.-H. Kwang and H.L. Wang, A fuzzy neural network
model for fuzzy inference and rule tuning,International Journal
of Uncertainty, Fuzziness and Knowledge-Based Systems, 3(1994)
265-277.
[117] C.T. Lin and C.S.G. Lee, Neural-network-based fuzzy logic control
and decision system, IEEE Transactions on Computers, 40(1991)
1320-1336.
[118] Y. Lin and G.A. Cunningham III, A new approach to fuzzy-neural
system modeling, IEEE Transactions on Fuzzy systems, 3(1995)
190-198.
[119] C.T. Lin and Y.C. Lu, A neural fuzzy system with linguistic teach-
ing signals, IEEE Transactions on Fuzzy Systems, 3(1995) 169-189.
[120] R.J. Machado and A.F. Rocha, A hybrid architecture for fuzzy
connectionist expert systems, in: A. Kandel and G. Langholz eds.,
Hybrid Architectures for Intelligent Systems (CRC Press, Boca Ra-
ton, FL, 1992).
[121] R.A. Marques Pereira, L. Mich and L. Gaio, Curve reconstruction
with dynamical fuzzy grading and weakly continuous constraints,
in: Proceedings of the 2nd Workshop on Current Issues in Fuzzy
Technologies, Trento, June 1992, (Dipartimento di Informatica e
Studi Aziendali, Universit a di Trento 1993) 77-85.
[122] L. Medsker, Hybrid Neural Network and Expert Systems (Kluwer
Academic Publishers, Boston, 1994).
[123] S.Mitra and S.K.Pal, Neuro-fuzzy expert systems: overview with a
case study, in: S.Tzafestas and A.N. Venetsanopoulos eds., Fuzzy
Reasoning in Information, Decision and Control Systems ( Kluwer,
Dordrecht, 1994) 121-143.
[124] S.Mitra and S.K.Pal, Self-organizing neural network as a fuzzy clas-
sier, IEEE Trans. Syst., Man, and Cybernetics, 24(1994) 385-399.
[125] S.Mitra and S.K.Pal, Fuzzy multi-layer perceptron, inferencing and
rule generation, IEEE Transactions on Neural Networks, 6(1995)
51-63.
[126] S.Mitra, Fuzzy MLP based expert system for medical diagnosis,
Fuzzy sets and Systems, 65(1994) 285-296.
[127] T. Morita, M. Kanaya and T. Inagaki, Photo-copier image density
control using neural network and fuzzy theory. in: Proceedings of
the Second International Workshop on Industrial Fuzzy Control and
Intelligent Systems, 1992 10-16.
[128] D. Nauck, F. Klawonn and R. Kruse, Fuzzy sets, fuzzy controllers
and neural networks, Wissenschaftliche Zeitschrift der Humboldt-
Universität zu Berlin, Reihe Medizin, 41(1992) 99-120.
[129] D. Nauck and R. Kruse, A fuzzy neural network learning fuzzy con-
trol rules and membership functions by fuzzy error backpropaga-
tion, in: Proceedings of IEEE Int. Conference on Neural Networks,
San Francisco, 1993 1022-1027.
[130] D. Nauck, F. Klawonn and R. Kruse, Combining neural networks
and fuzzy controllers, in: E.P. Klement and W. Slany eds., Fuzzy
Logic in Articial Intelligence, (Springer-Verlag, Berlin, 1993) 35-
46.
[131] D. Nauck and R. Kruse, NEFCON-I: An X-Window based sim-
ulator for neural fuzzy controllers, in: Proceedings of IEEE Int.
Conference on Neural Networks, Orlando, 1994 1638-1643.
[132] D. Nauck, Fuzzy neuro systems: An overview, in: R. Kruse,
J. Gebhardt and R. Palm eds., Fuzzy systems in Computer Sci-
ence (Vieweg, Wiesbaden, 1994) 91-107.
[133] D. Nauck, Building neural fuzzy controllers with NEFCON-I. in:
R. Kruse, J. Gebhardt and R. Palm eds., Fuzzy systems in Com-
puter Science (Vieweg, Wiesbaden, 1994) 141-151.
[134] D. Nauck, F. Klawonn and R. Kruse, Neurale Netze und Fuzzy-
Systeme (Vieweg, Wiesbaden, 1994).
[135] D. Nauck and R. Kruse, NEFCLASS - A neuro-fuzzy approach
for the classication of data, in: K.M. George et al eds., Applied
Computing, Proceedings of the 1995 ACM Symposium on Applied
Computing, Nashville, February 26-28, 1995, ACM Press, 1995.
[136] R. Narita, H. Tatsumi and H. Kanou, Application of neural net-
works to household applications. Toshba Rev. 46, 12 (December
1991) 935-938. (in Japanese)
[137] J. Nie and D. Linkens, Fuzzy Neural Control - Principles, Algo-
rithms and Applications (Prentice-Hall, Englewood Clis, 1994).
[138] Nikkei Electronics, New trend in consumer electronics: Combining
neural networks and fuzzy logic, Nikkei Elec., 528(1991) 165-169
(In Japanese).
[139] H. Nomura, I. Hayashi and N. Wakami, A learning method of fuzzy
inference rules by descent method, in: Proceedings of the IEEE
International Conference on Fuzzy Systems, San Diego, 1992 203-
210.
[140] H. Okada, N. Watanabe, A. Kawamura and K. Asakawa, Initializ-
ing multilayer neural networks with fuzzy logic. in: Proceedings of
the International Joint Conference on Neural Networks, Baltimore,
1992 239-244.
[141] S.K.Pal and S.Mitra, Fuzzy versions of Kohonens net and MLP-
based classication: Performance evaluation for certain nonconvex
decision regions, Information Sciences, 76(1994) 297-337.
[142] W. Pedrycz and W.C. Card, Linguistic interpretation of self-
organizing maps, in: Proceedings of the IEEE International Con-
ference on Fuzzy Systems, San Diego, 1992 371-378.
[143] W. Pedrycz, Fuzzy Control and Fuzzy Systems (Wiley, New York,
1993).
[144] W. Pedrycz, Fuzzy Sets Engineering (CRC Press, Boca Raton,
1995).
[145] C. Posey, A.Kandel and G. Langholz, Fuzzy hybrid systems, in:
A. Kandel and G. Langholz eds., Hybrid architectures for Intelligent
Systems (CRC Press, Boca Raton, Florida, 1992) 174-196.
[146] G.V.S. Rajau and J Zhou, Adaptive hierarchical fuzzy controller,
IEEE Trans. Syst., Man, and Cybernetics, 23(1993) 973-980.
[147] A.L. Ralescu ed., Fuzzy Logic in Articial Intelligence, Proc. IJ-
CAI93 Workshop, Chambery, France, Lecture Note in articial
Intelligence, Vol. 847 (Springer, Berlin, 1994).
[148] J. Rasmussen, Diagnostic reasoning in action, IEEE Trans. Syst.,
Man, and Cybernetics, 23(1993) 981-992.
[149] I. Requena and M. Delgado, R-FN: A model of fuzzy neuron, in:
Proc. 2nd Int. Conf. on Fuzzy Logic & Neural Networks, Iizuka,
Japan, 1992 793-796.
[150] T. Riissanen, An Experiment with Clustering, in: Proceedings
MEPP92, International Seminar on Fuzzy Control through Neu-
ral Interpretations of Fuzzy Sets, Mariehamn, Åland, June 15-19,
1992 (Åbo Akademi tryckeri, Åbo, 1992) 57-65.
[151] Sanyo, Electric fan series in 1991, Sanyo News Rel., (March 14,
1991). Sanyo, 1991 (In Japanese).
[152] E. Sanchez, Fuzzy logic knowledge systems and articial neural net-
works in medicine and biology, in: R.R. Yager and L.A. Zadeh eds.,
An Introduction to Fuzzy Logic Applications in Intelligent Systems
(Kluwer, Boston, 1992) 235-251.
[153] J.D. Schaer, Combinations of genetic algorithms with neural
networks or fuzzy systems, in: J.M. Zurada, R.J. Marks and
C.J. Robinson eds., Computational Intelligence: Imitating Life
(IEEE Press, New York, 1994) 371-382.
[154] R. Serra and G. Zanarini, Complex Systems and Cognitive Pro-
cesses (Springer Verlag, Berlin, 1990).
[155] J.J. Shann and H.C. Fu, A fuzzy neural network for rule acquiring
on fuzzy control system, Fuzzy Sets and Systems, 71(1995) 345-357.
[156] P. Simpson, Fuzzy min-max neural networks: 1.Classication,
IEEE Transactions on Neural Networks, 3(1992) 776-786.
[157] P. Simpson, Fuzzy min-max neural networks: 2.Clustering, IEEE
Transactions on Fuzzy systems, 1(1993) 32-45.
[158] M. Sugeno and G.-K. Park, An approach to linguistic instruction
based learning, International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems, 1(1993) 19-56.
[159] S.M. Sulzberger, N.N. Tschichold-Gürman and S.J. Vestli, FUN:
Optimization of fuzzy rule based systems using neural networks,
in: Proc. IEEE Int. Conference on Neural Networks, San Francisco,
1993 312-316.
[160] C.-T. Sun and J.-S. Jang, A neuro-fuzzy classier and its appli-
cations, in: Proc. IEEE Int. Conference on Neural Networks, San
Francisco, 1993 94-98.
[161] H. Takagi, Fusion technology of fuzzy theory and neural networks
- survey and future directions, in: Proc. First Int. Conf. on Fuzzy
Logic & Neural Networks, 1990 13-26.
[162] H. Takagi and I. Hayashi, NN-driven fuzzy reasoning. International
Journal of Approximate Reasoning, 3(1991) 191-212.
[163] H. Takagi, N. Suzuki, T. Koda and Y. Kojima, neural networks
designed on approximate reasoning architecture and their applica-
tions, IEEE Transactions on Neural Networks, 3(1992) 752-760.
[164] I.B.Turksen, Fuzzy expert systems for IE/OR/MS, Fuzzy Sets and
Systems, 51(1992) 1-27.
[165] K. Uehara and M. Fujise, Learning of fuzzy inference criteria with
articial neural network, in: Proc. 1st Int. Conf. on Fuzzy Logic &
Neural Networks, Iizuka, Japan, 1990 193-198.
[166] M. Umano and Y, Ezawa, Execution of approximate reasoning by
neural network, Proceedings of FAN Symposium, 1991 267-273 (in
Japanese).
[167] H. Virtanen, Combining and incrementing fuzzy evidence - Heuris-
tic and formal approaches to fuzzy logic programming, in: R.Lowen
and M.Roubens eds., Proccedings of the fourth IFSA Congress, vol.
Mathematics, Brussels, 1991 200-203.
[168] L.-X. Wang and J.M. Mendel, Generating fuzzy rules by learning
from examples, IEEE Trans. Syst., Man, and Cybernetics, 22(1992)
1414-1427.
[169] H. Watanabe et al., Application of fuzzy discriminant analysis for
diagnosis of valvular heart disease, IEEE Transactions on Fuzzy
Systems, 2(1994) 267- 276.
[170] P.J. Werbos, Neurocontrol and fuzzy logic: connections and de-
signs, International Journal of Approximate Reasoning, 6(1992)
185-219.
[171] R.R. Yager, Using fuzzy logic to build neural networks, in: R.Lowen
and M.Roubens eds., Proceedings of the Fourth IFSA Congress,
Vol. Artical intelligence, Brussels, 1991 210-213.
[172] R.R. Yager, Implementing fuzzy logic controllers using a neural
network framework, Fuzzy Sets and Systems, 48(1992) 53-64.
[173] R.R. Yager and L.A. Zadeh eds., Fuzzy Sets, Neural Networks, and
Soft Computing (Van Nostrand Reinhold, New York, 1994).
[174] T. Yamakawa, A neo fuzzy neuron and its applications to system
identication and prediction of chaotic behaviour, in: J.M. Zurada,
R.J. Marks and C.J. Robinson eds., Computational Intelligence:
Imitating Life (IEEE Press, New York, 1994) 383-395.
[175] J. Yan, M. Ryan and J. Power, Using Fuzzy Logic - Towards Intel-
ligent Systems (Prentice-Hall, Englewood Clis, 1994).
[176] Y. Yam and K.S. Leung eds., Future Directions of Fuzzy Theory
and Systems (World Scientic, Singapore, 1994).
Chapter 4
Appendix
4.1 Case study: A portfolio problem
Suppose that our portfolio value depends on the currency fluctuations on the
global finance market. There are three rules in our knowledge base:

ℜ1: if x1 is L1 and x2 is H2 and x3 is L3 then y = 200x1 + 100x2 + 100x3

ℜ2: if x1 is M1 and x2 is M2 and x3 is M3 then y = 200x1 - 100x2 + 100x3

ℜ3: if x1 is H1 and x2 is H2 and x3 is H3 then y = 200x1 - 100x2 - 100x3
where y is the portfolio value, and the linguistic variables x1, x2 and x3 denote
the exchange rates between USD and DEM, USD and SEK, and USD and FIM, respectively.
The rules should be interpreted as:

ℜ1: If the US dollar is weak against the German mark and the US dollar is
strong against the Swedish crown and the US dollar is weak against
the Finnish mark then our portfolio value is positive.

ℜ2: If the US dollar is medium against the German mark and the US dollar
is medium against the Swedish crown and the US dollar is medium
against the Finnish mark then our portfolio value is about zero.

ℜ3: If the US dollar is strong against the German mark and the US dollar is
strong against the Swedish crown and the US dollar is strong against
the Finnish mark then our portfolio value is negative.
Choose triangular membership functions for the primary fuzzy sets {Li, Mi, Hi},
i = 1, 2, 3, take the actual daily exchange rates, a1, a2 and a3, from newspapers
and evaluate the daily portfolio value by Sugeno's reasoning mechanism, i.e.
The firing levels of the rules are computed by

α1 = L1(a1) ∧ H2(a2) ∧ L3(a3),
α2 = M1(a1) ∧ M2(a2) ∧ M3(a3),
α3 = H1(a1) ∧ H2(a2) ∧ H3(a3).
The individual rule outputs are derived from the relationships

y1 = 200a1 + 100a2 + 100a3,
y2 = 200a1 - 100a2 + 100a3,
y3 = 200a1 - 100a2 - 100a3.
The overall system output is expressed as

y0 = (α1 y1 + α2 y2 + α3 y3) / (α1 + α2 + α3).
Figure 4.1 Sugeno's reasoning mechanism with three inference rules.
The fuzzy set L3, describing that USD/FIM is low, can be given by the following
membership function

L3(t) =
  1               if t ≤ 3.5
  1 - 2(t - 3.5)  if 3.5 ≤ t ≤ 4
  0               if t ≥ 4

The fuzzy set M3, describing that USD/FIM is medium, can be given by the following
membership function

M3(t) =
  1 - 2|t - 4|    if 3.5 ≤ t ≤ 4.5
  0               otherwise

The fuzzy set H3, describing that USD/FIM is high, can be given by the following
membership function

H3(t) =
  1               if t ≥ 4.5
  1 - 2(4.5 - t)  if 4 ≤ t ≤ 4.5
  0               if t ≤ 4
Figure 4.2 Membership functions for "x3 is low", "x3 is medium" and "x3 is high".
The fuzzy set L2, describing that USD/SEK is low, can be given by the following
membership function

L2(t) =
  1               if t ≤ 6.5
  1 - 2(t - 6.5)  if 6.5 ≤ t ≤ 7
  0               if t ≥ 7

The fuzzy set M2, describing that USD/SEK is medium, can be given by the following
membership function

M2(t) =
  1 - 2|t - 7|    if 6.5 ≤ t ≤ 7.5
  0               otherwise

The fuzzy set H2, describing that USD/SEK is high, can be given by the following
membership function

H2(t) =
  1               if t ≥ 7.5
  1 - 2(7.5 - t)  if 7 ≤ t ≤ 7.5
  0               if t ≤ 7
Figure 4.3 Membership functions for "x2 is low", "x2 is medium" and "x2 is high".
The fuzzy set L1, describing that USD/DEM is low, can be given by the following
membership function

L1(t) =
  1             if t ≤ 1
  1 - 2(t - 1)  if 1 ≤ t ≤ 1.5
  0             if t ≥ 1.5

The fuzzy set M1, describing that USD/DEM is medium, can be given by the following
membership function

M1(t) =
  1 - 2|t - 1.5|  if 1 ≤ t ≤ 2
  0               otherwise

The fuzzy set H1, describing that USD/DEM is high, can be given by the following
membership function

H1(t) =
  1             if t ≥ 2
  1 - 2(2 - t)  if 1.5 ≤ t ≤ 2
  0             if t ≤ 1.5
Figure 4.4 Membership functions for "x1 is low", "x1 is medium" and "x1 is high".
Table 4.1 shows some mean exchange rates from 1995, and the portfolio values
derived from the fuzzy rule base ℜ = {ℜ1, ℜ2, ℜ3} with the initial membership
functions {Li, Mi, Hi}, i = 1, 2, 3, for the primary fuzzy sets.

Date               USD/DEM   USD/SEK   USD/FIM   Computed PV
January 11, 1995    1.534     7.530     4.779      -923.2
May 19, 1995        1.445     7.393     4.398       -10.5
August 11, 1995     1.429     7.146     4.229        -5.9
August 28, 1995     1.471     7.325     4.369        -1.4

Table 4.1 Inferred portfolio values.
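A short Python sketch of the Sugeno reasoning used in this case study is given below; the membership functions and rule consequents are exactly the ones defined above, so the script reproduces the computed portfolio values of Table 4.1 up to rounding.

def low(t, a, b):      # shoulder on the left: 1 below a, decreasing to 0 at b
    return 1.0 if t <= a else 0.0 if t >= b else (b - t)/(b - a)

def high(t, a, b):     # shoulder on the right: 0 below a, increasing to 1 at b
    return 0.0 if t <= a else 1.0 if t >= b else (t - a)/(b - a)

def medium(t, c, w):   # triangular around c with half-width w
    return max(0.0, 1.0 - abs(t - c)/w)

def portfolio_value(a1, a2, a3):
    # firing levels of the three rules (min models the "and" connective)
    alpha1 = min(low(a1, 1.0, 1.5), high(a2, 7.0, 7.5), low(a3, 3.5, 4.0))
    alpha2 = min(medium(a1, 1.5, 0.5), medium(a2, 7.0, 0.5), medium(a3, 4.0, 0.5))
    alpha3 = min(high(a1, 1.5, 2.0), high(a2, 7.0, 7.5), high(a3, 4.0, 4.5))
    # individual rule outputs
    y1 = 200*a1 + 100*a2 + 100*a3
    y2 = 200*a1 - 100*a2 + 100*a3
    y3 = 200*a1 - 100*a2 - 100*a3
    # Sugeno-style weighted average (assumes at least one rule fires)
    return (alpha1*y1 + alpha2*y2 + alpha3*y3)/(alpha1 + alpha2 + alpha3)

print(round(portfolio_value(1.445, 7.393, 4.398), 1))   # -10.5 (May 19, 1995)
print(round(portfolio_value(1.429, 7.146, 4.229), 1))   # -5.9 (August 11, 1995)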
4.2 Exercises
Exercise 4.1 Interpret the following fuzzy set.
Figure 4.5 Fuzzy set.
Solution 4.1 The fuzzy set from the Figure 4.5 can be interpreted as:
x is close to -2 or x is close to 0 or x is close to 2
Exercise 4.2 Suppose we have a fuzzy partition of the universe of discourse
[-1000, 1000] with three fuzzy terms {N, ZE, P}, where

N(t) =
  1                    if t ≤ -1000
  1 - (t + 1000)/500   if -1000 ≤ t ≤ -500
  0                    if t ≥ -500

P(t) =
  1                    if t ≥ 1000
  1 - (1000 - t)/500   if 500 ≤ t ≤ 1000
  0                    if t ≤ 500

ZE(t) =
  0                    if t ≤ -1000
  1 + (t + 500)/500    if -1000 ≤ t ≤ -500
  1                    if -500 ≤ t ≤ 500
  1 - (t - 500)/500    if 500 ≤ t ≤ 1000
  0                    if t ≥ 1000

Find the biggest ε for which this fuzzy partition satisfies the property of
ε-completeness.
Figure 4.6 Membership functions of {N, ZE, P}.
Solution 4.2 ε = 0.5.
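The value ε = 0.5 can be confirmed numerically by scanning the domain for the point where the best-covered membership degree is smallest; the integer grid below is only an illustration.

def N(t):  return 1.0 if t <= -1000 else 0.0 if t >= -500 else 1 - (t + 1000)/500
def P(t):  return 1.0 if t >= 1000 else 0.0 if t <= 500 else 1 - (1000 - t)/500
def ZE(t):
    if -500 <= t <= 500: return 1.0
    if t <= -1000 or t >= 1000: return 0.0
    return 1 + (t + 500)/500 if t < -500 else 1 - (t - 500)/500

print(min(max(N(t), ZE(t), P(t)) for t in range(-1000, 1001)))   # 0.5, attained at t = -750 and t = 750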
Exercise 4.3 Show that if γ ≤ γ' then the relationship

HANDγ(a, b) ≥ HANDγ'(a, b)

holds for all a, b ∈ [0, 1], i.e. the family of parametrized Hamacher t-norms,
{HANDγ}, is monotone decreasing.
Solution 4.3 Let 0 ≤ γ ≤ γ'. Then from the relationship

γ ab + (1 - γ) ab (a + b - ab) ≤ γ' ab + (1 - γ') ab (a + b - ab)

it follows that

HANDγ(a, b) = ab / [γ + (1 - γ)(a + b - ab)] ≥ ab / [γ' + (1 - γ')(a + b - ab)] = HANDγ'(a, b).

Which ends the proof.
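A numerical spot-check of the monotonicity (the sample arguments are arbitrary):

def hand(a, b, gamma):
    # Hamacher t-norm with parameter gamma >= 0
    return 0.0 if a == b == 0 else a*b/(gamma + (1 - gamma)*(a + b - a*b))

a, b = 0.6, 0.3
for g in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(g, round(hand(a, b, g), 4))   # the values decrease as gamma grows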
Exercise 4.4 Consider two fuzzy relations R and G, where R is interpreted
linguistically as "x is approximately equal to y" and the linguistic interpretation
of G is "y is very close to z". Assume R and G have the following membership functions

R:        y1    y2    y3
    x1    1     0.1   0.1
    x2    0     1     0
    x3    0.9   1     1

G:        z1    z2    z3
    y1    0.4   0.9   0.3
    y2    0     0.4   0
    y3    0.9   0.5   0.8

What is the membership function of their composition R ∘ G? What can be the
linguistic interpretation of R ∘ G?

Solution 4.4 Using the sup-min composition we obtain

R ∘ G:    z1    z2    z3
    x1    0.4   0.9   0.3
    x2    0     0.4   0
    x3    0.9   0.9   0.8

R ∘ G can be interpreted as "x is very close to z".
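A small sketch computing the sup-min (max-min) composition used above:

R = [[1.0, 0.1, 0.1],
     [0.0, 1.0, 0.0],
     [0.9, 1.0, 1.0]]          # R(x_i, y_j)
G = [[0.4, 0.9, 0.3],
     [0.0, 0.4, 0.0],
     [0.9, 0.5, 0.8]]          # G(y_j, z_k)

RG = [[max(min(R[i][j], G[j][k]) for j in range(3)) for k in range(3)] for i in range(3)]
for row in RG:
    print(row)   # [0.4, 0.9, 0.3], [0.0, 0.4, 0.0], [0.9, 0.9, 0.8]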
Exercise 4.5 Assume the membership function of the fuzzy set A, "big pressure", is

A(u) =
  1              if u ≥ 5
  1 - (5 - u)/4  if 1 ≤ u ≤ 5
  0              otherwise

Assume the membership function of the fuzzy set B, "small volume", is

B(v) =
  1              if v ≤ 1
  1 - (v - 1)/4  if 1 ≤ v ≤ 5
  0              otherwise

What is the truth value of the proposition

"4 is big pressure" → "3 is small volume"

where → is the Łukasiewicz implication?
Solution 4.5 Using the definition of the Łukasiewicz implication we get

("4 is big pressure" → "3 is small volume") = A(4) → B(3) = min{1 - A(4) + B(3), 1},

and from A(4) = 0.75 and B(3) = 0.5 we get

("4 is big pressure" → "3 is small volume") = min{1 - 0.75 + 0.5, 1} = 0.75.
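The same computation in a couple of lines of Python:

def lukasiewicz(x, y):
    # Lukasiewicz implication of two truth values
    return min(1.0, 1.0 - x + y)

A4 = 1 - (5 - 4)/4     # A(4) = 0.75
B3 = 1 - (3 - 1)/4     # B(3) = 0.5
print(lukasiewicz(A4, B3))   # 0.75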
Exercise 4.6 Let A, A', B ∈ F. Show that the Generalized Modus Ponens
inference rule with the Gödel implication satisfies

Basic property: A ∘ (A → B) = B.
Total indeterminance: ¬A ∘ (A → B) = 1, where 1(t) = 1 for t ∈ R.
Subset property: if A' ⊂ A then A' ∘ (A → B) = B.
Superset property: A' ∘ (A → B) = B' ⊃ B holds for any A' ∈ F.
Solution 4.6 The Generalized Modus Ponens inference rule says

premise:       if x is A then y is B
fact:          x is A'
consequence:   y is B'

where the consequence B' is determined as a composition of the fact and the
fuzzy implication operator

B' = A' ∘ (A → B),

that is,

B'(v) = sup_{u ∈ U} min{A'(u), (A → B)(u, v)},  v ∈ V.

Let us choose the Gödel implication operator

A(x) → B(y) = 1 if A(x) ≤ B(y); B(y) otherwise.

Proof.
Basic property. Let A' = A and let x, y ∈ R be arbitrarily fixed. On the one hand,
from the definition of the Gödel implication operator we obtain

min{A(x), A(x) → B(y)} = A(x) if A(x) ≤ B(y); B(y) if A(x) > B(y).

That is,

B'(y) = sup_x min{A(x), A(x) → B(y)} ≤ B(y).

On the other hand, from the continuity and normality of A it follows that
there exists an x' ∈ R such that A(x') = B(y). So

B'(y) = sup_x min{A(x), A(x) → B(y)} ≥ min{A(x'), A(x') → B(y)} = B(y).
Total indeterminance. Let x' ∉ supp(A) be arbitrarily chosen. Then from
A(x') = 0 it follows that

B'(y) = sup_x min{1 - A(x), A(x) → B(y)} ≥ min{1 - A(x'), A(x') → B(y)} = 1

for any y ∈ R.
Subset. Let A'(x) ≤ A(x) for all x ∈ R. Then

B'(y) = sup_x min{A'(x), A(x) → B(y)} ≤ sup_x min{A(x), A(x) → B(y)} = B(y).
Superset. From A' ∈ F it follows that there exists an x' ∈ R such that
A'(x') = 1. Then

B'(y) = sup_x min{A'(x), A(x) → B(y)} ≥ min{A'(x'), A(x') → B(y)} = A(x') → B(y) ≥ B(y).

Which ends the proof.
Figure 4.7 GMP with Gödel implication.
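The compositional rule of inference with the Gödel implication can also be checked numerically on a grid; the triangular fuzzy sets and the discretization below are purely illustrative.

def godel(a, b):
    return 1.0 if a <= b else b

def gmp(A_obs, A, B, xs, ys):
    # B'(y) = sup_x min{ A_obs(x), A(x) -> B(y) } evaluated on the grid xs
    return [max(min(A_obs(x), godel(A(x), B(y))) for x in xs) for y in ys]

A  = lambda x: max(0.0, 1 - abs(x - 2))      # "x is about 2"
Ap = lambda x: max(0.0, 1 - abs(x - 2.5))    # observed fact, shifted
B  = lambda y: max(0.0, 1 - abs(y - 5))      # "y is about 5"

xs = [i/100 for i in range(0, 401)]
ys = [4.0, 4.5, 5.0]
print([round(v, 2) for v in gmp(A, A, B, xs, ys)])    # [0.0, 0.5, 1.0] = B (basic property)
print([round(v, 2) for v in gmp(Ap, A, B, xs, ys)])   # [0.5, 1.0, 1.0] >= B (superset property)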
Exercise 4.7 Construct a single-neuron network which computes the material
implication function. The training set is

       x1   x2   o(x1, x2)
  1.    1    1       1
  2.    1    0       0
  3.    0    1       1
  4.    0    0       1
Solution 4.7 A solution to the material implication function is given by the
single-neuron network of Figure 4.8.

Figure 4.8 A single-neuron network for the material implication.
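One admissible weight setting (an illustrative assumption, not necessarily the one drawn in Figure 4.8) is w1 = -1, w2 = 1 with threshold -0.5: the neuron fires unless x1 = 1 and x2 = 0.

def implication_neuron(x1, x2, w1=-1.0, w2=1.0, theta=-0.5):
    # output 1 if the weighted sum reaches the threshold, otherwise 0
    return 1 if w1*x1 + w2*x2 >= theta else 0

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, implication_neuron(x1, x2))   # reproduces the training table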
Exercise 4.8 Suppose we have the following fuzzy rule base:

if x is SMALL and y is BIG then z = x - y
if x is BIG and y is SMALL then z = x + y
if x is BIG and y is BIG then z = x + 2y

where the membership functions SMALL and BIG are defined by

SMALL(v) =
  1              if v ≤ 1
  1 - (v - 1)/4  if 1 ≤ v ≤ 5
  0              otherwise

BIG(u) =
  1              if u ≥ 5
  1 - (5 - u)/4  if 1 ≤ u ≤ 5
  0              otherwise

Suppose we have the inputs x0 = 3 and y0 = 3. What is the output of the
system, z0, if we use Sugeno's inference mechanism?
Solution 4.8 The firing level of the first rule is

α1 = min{SMALL(3), BIG(3)} = min{0.5, 0.5} = 0.5,

and the individual output of the first rule is

z1 = x0 - y0 = 3 - 3 = 0.

The firing level of the second rule is

α2 = min{BIG(3), SMALL(3)} = min{0.5, 0.5} = 0.5,

and the individual output of the second rule is

z2 = x0 + y0 = 3 + 3 = 6.

The firing level of the third rule is

α3 = min{BIG(3), BIG(3)} = min{0.5, 0.5} = 0.5,

and the individual output of the third rule is

z3 = x0 + 2y0 = 3 + 6 = 9.

The system output, z0, is computed from the equation

z0 = (0 × 0.5 + 6 × 0.5 + 9 × 0.5)/(0.5 + 0.5 + 0.5) = 5.0.
Exercise 4.9 Why do we use differentiable transfer functions in multi-layer
feedforward neural networks?

Solution 4.9 We use differentiable transfer functions in multi-layer networks
because the derivative of the error function is used in the generalized
delta learning rule.
Exercise 4.10 What is the meaning of the error correction learning proce-
dure?
Solution 4.10 The error correction learning procedure is simple enough in
conception. The procedure is as follows: During training an input is put into
the network and flows through the network, generating a set of values on the
output units. Then, the actual output is compared with the desired target,
and a match is computed. If the output and target match, no change is made
to the net. However, if the output differs from the target, a change must be
made to some of the connections.
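A minimal sketch of this procedure for a single threshold unit (a perceptron-style error-correction rule; the learning rate and the data set, here the material implication of Exercise 4.7, are illustrative choices):

def step(s):
    return 1 if s >= 0 else 0

def train(samples, eta=0.5, epochs=20):
    w = [0.0, 0.0, 0.0]                        # two weights plus a bias term
    for _ in range(epochs):
        for x1, x2, target in samples:
            out = step(w[0]*x1 + w[1]*x2 + w[2])
            err = target - out                 # zero if output and target match
            w[0] += eta*err*x1                 # change connections only on error
            w[1] += eta*err*x2
            w[2] += eta*err
    return w

print(train([(1, 1, 1), (1, 0, 0), (0, 1, 1), (0, 0, 1)]))   # weights realizing the implication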
Exercise 4.11 Let A = (a, α, β) be a triangular fuzzy number. Calculate [A]^γ
as a function of a, α and β.

Solution 4.11 The γ-cut of the triangular fuzzy number A = (a, α, β) is

[A]^γ = [a - (1 - γ)α, a + (1 - γ)β],  γ ∈ [0, 1].

Especially, [A]^1 = {a} and [A]^0 = [a - α, a + β].

Figure 4.8 γ-cut of a triangular fuzzy number.
Exercise 4.12 Consider an alternative with the following scores on five criteria:

  Criteria:     C1   C2   C3   C4   C5
  Importance:   VH   VH   M    L    VL
  Score:        M    L    OU   VH   OU

Calculate the unit score of this alternative.

Solution 4.12 In this case we have

U = min{Neg(VH) ∨ M, Neg(VH) ∨ L, Neg(M) ∨ OU, Neg(L) ∨ VH, Neg(VL) ∨ OU}
  = min{VL ∨ M, VL ∨ L, M ∨ OU, H ∨ VH, VH ∨ OU}
  = min{M, L, OU, VH, OU} = L.
Exercise 4.13 Let A = (a, α) and B = (b, β) be fuzzy numbers of symmetric
triangular form. Calculate their Hausdorff distance, D(A, B), as a function of
a, b, α and β.

Solution 4.13 The γ-cuts of A and B can be written in the form

[A]^γ = [a1(γ), a2(γ)] = [a - (1 - γ)α, a + (1 - γ)α],  γ ∈ [0, 1],
[B]^γ = [b1(γ), b2(γ)] = [b - (1 - γ)β, b + (1 - γ)β],  γ ∈ [0, 1],

and from the definition of the Hausdorff distance

D(A, B) = sup_{γ ∈ [0,1]} max{|a1(γ) - b1(γ)|, |a2(γ) - b2(γ)|}

we get

D(A, B) = sup_{γ ∈ [0,1]} max{|a - b + (1 - γ)(β - α)|, |a - b + (1 - γ)(α - β)|}.

That is,

D(A, B) = max{|a - b + β - α|, |a - b + α - β|}.
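A numerical check of the closed form (the sample parameters are arbitrary):

def hausdorff(a, alpha, b, beta, steps=1000):
    d = 0.0
    for i in range(steps + 1):
        g = i/steps
        lo = abs((a - (1 - g)*alpha) - (b - (1 - g)*beta))   # left endpoints of the gamma-cuts
        hi = abs((a + (1 - g)*alpha) - (b + (1 - g)*beta))   # right endpoints
        d = max(d, lo, hi)
    return d

a, alpha, b, beta = 2.0, 1.0, 3.0, 0.5
print(hausdorff(a, alpha, b, beta))                                   # 1.5
print(max(abs(a - b + alpha - beta), abs(a - b + beta - alpha)))      # 1.5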
Exercise 4.14 The error function to be minimized is given by

E(w1, w2) = 1/2 [(w2 - w1)² + (1 - w1)²].

Find analytically the gradient vector

E'(w) = (∂E(w)/∂w1, ∂E(w)/∂w2)^T.

Find analytically the weight vector w* that minimizes the error function, i.e.
such that E'(w*) = 0. Derive the steepest descent method for the minimization of E.
Solution 4.14 The gradient vector of E is

E'(w) = ((w1 - w2) + (w1 - 1), w2 - w1)^T = (2w1 - w2 - 1, w2 - w1)^T,

and w* = (1, 1)^T is the unique solution to the equations

2w1 - w2 - 1 = 0,  w2 - w1 = 0.

The steepest descent method for the minimization of E reads

w1(t + 1) = w1(t) - η(2w1(t) - w2(t) - 1),
w2(t + 1) = w2(t) - η(w2(t) - w1(t)),

where η > 0 is the learning constant and t indexes the number of iterations.
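A direct implementation of these update rules (the learning rate is an illustrative choice):

w1, w2, eta = 0.0, 0.0, 0.1
for t in range(500):
    g1 = 2*w1 - w2 - 1          # dE/dw1
    g2 = w2 - w1                # dE/dw2
    w1, w2 = w1 - eta*g1, w2 - eta*g2
print(round(w1, 3), round(w2, 3))   # converges to the minimizer w* = (1, 1)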
Exercise 4.15 Let f be a bipolar sigmoidal activation function of the form

f(t) = 2 / (1 + exp(-t)) - 1.

Show that f satisfies the following differential equality

f'(t) = 1/2 (1 - f²(t)).
Solution 4.15 By using the chain rule for derivatives of composed functions we get

f'(t) = 2 exp(-t) / [1 + exp(-t)]².

From the identity

1/2 [1 - ((1 - exp(-t)) / (1 + exp(-t)))²] = 2 exp(-t) / [1 + exp(-t)]²

we get

2 exp(-t) / [1 + exp(-t)]² = 1/2 (1 - f²(t)).

Which completes the proof.
Exercise 4.16 Let f be a unipolar sigmoidal activation function of the form

f(t) = 1 / (1 + exp(-t)).

Show that f satisfies the following differential equality

f'(t) = f(t)(1 - f(t)).

Solution 4.16 By using the chain rule for derivatives of composed functions we get

f'(t) = exp(-t) / [1 + exp(-t)]²,

and the identity

exp(-t) / [1 + exp(-t)]² = (exp(-t) / (1 + exp(-t))) (1 - exp(-t) / (1 + exp(-t)))

verifies the statement of the exercise.
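Both differential identities can be spot-checked numerically (the test point is arbitrary):

from math import exp

def f_bi(t):  return 2/(1 + exp(-t)) - 1    # bipolar sigmoid
def f_uni(t): return 1/(1 + exp(-t))        # unipolar sigmoid

def num_deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h))/(2*h)

t = 0.7
print(round(num_deriv(f_bi, t), 6),  round(0.5*(1 - f_bi(t)**2), 6))     # the two values agree
print(round(num_deriv(f_uni, t), 6), round(f_uni(t)*(1 - f_uni(t)), 6))  # the two values agree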
Exercise 4.17 Construct a hybrid neural net implementing Tsukamoto's reasoning
mechanism with two input variables, two linguistic values for each input variable
and two fuzzy IF-THEN rules.
Solution 4.17 Consider the fuzzy rule base

ℜ1: if x is A1 and y is B1 then z is C1
ℜ2: if x is A2 and y is B2 then z is C2

where all linguistic terms are supposed to have monotone membership functions.

The firing levels of the rules are computed by

α1 = T(A1(x0), B1(y0)),  α2 = T(A2(x0), B2(y0)),

where the logical "and" can be modelled by any continuous t-norm T, e.g.

α1 = A1(x0) ∧ B1(y0),  α2 = A2(x0) ∧ B2(y0).
In this mode of reasoning the individual crisp control actions z1 and z2 are
computed as

z1 = C1⁻¹(α1) and z2 = C2⁻¹(α2),

and the overall control action is expressed as

z0 = (α1 z1 + α2 z2) / (α1 + α2) = β1 z1 + β2 z2,

where β1 and β2 are the normalized values of α1 and α2 with respect to the
sum (α1 + α2), i.e.

β1 = α1 / (α1 + α2),  β2 = α2 / (α1 + α2).
Figure 4.10 Tsukamoto's inference mechanism.

A hybrid neural net computationally identical to this type of reasoning is
shown in Figure 4.11.

Figure 4.11 A hybrid neural net (ANFIS architecture) which is computationally
equivalent to Tsukamoto's reasoning method.
Layer 1: The output of a node is the degree to which the given input
satisfies the linguistic label associated with this node.

Layer 2: Each node computes the firing strength of the associated rule.
The output of the top neuron is

α1 = T(A1(x0), B1(y0)) = A1(x0) ∧ B1(y0),

and the output of the bottom neuron is

α2 = T(A2(x0), B2(y0)) = A2(x0) ∧ B2(y0).

Both nodes in this layer are labeled by T, because we can choose other
t-norms for modeling the logical "and" operator. The nodes of this layer
are called rule nodes.

Layer 3: Every node in this layer is labeled by N to indicate the
normalization of the firing levels. The output of the top neuron is the
normalized (with respect to the sum of firing levels) firing level of the
first rule,

β1 = α1 / (α1 + α2),

and the output of the bottom neuron is the normalized firing level of
the second rule,

β2 = α2 / (α1 + α2).

Layer 4: The output of the top neuron is the product of the normalized
firing level and the individual rule output of the first rule,

β1 z1 = β1 C1⁻¹(α1).

The output of the bottom neuron is the product of the normalized firing level
and the individual rule output of the second rule,

β2 z2 = β2 C2⁻¹(α2).

Layer 5: The single node in this layer computes the overall system
output as the sum of all incoming signals, i.e.

z0 = β1 z1 + β2 z2.
Exercise 4.18 Show that fuzzy inference systems with simplified fuzzy IF-THEN
rules are universal approximators.

Solution 4.18 Consider a fuzzy inference system with two simplified fuzzy
IF-THEN rules

ℜ1: if x1 is A11 and x2 is A12 then y = z1
ℜ2: if x1 is A21 and x2 is A22 then y = z2
Suppose that the output of the system ℜ = {ℜ1, ℜ2} for a given input is
computed by

z = (α1 z1 + α2 z2) / (α1 + α2),     (4.1)

where α1 and α2 denote the firing strengths of the rules with respect to the
given input vector. Let z' be the output of a second system of the same form,
with individual rule outputs z1', z2' and firing strengths α1', α2'.
We recall the Stone-Weierstrass theorem:

Theorem 4.2.1 Let domain K be a compact space of n dimensions, and let
G be a set of continuous real-valued functions on K satisfying the following
criteria:

1. The constant function f(x) = 1 is in G.

2. For any two points x1 ≠ x2 in K, there is an f in G such that f(x1) ≠ f(x2).

3. If f1 and f2 are two functions in G, then f1 f2 and a1 f1 + a2 f2 are in G
for any two real numbers a1 and a2.

Then G is dense in C(K), the set of continuous real-valued functions on K.
In other words, for any ε > 0 and any function g in C(K), there exists a
function f in G such that

||f - g||∞ = sup_{x ∈ K} |f(x) - g(x)| ≤ ε.
Proof. We show that az + bz', a, b ∈ R, and zz' can be written in the
form (4.1), which means that fuzzy inference systems with simplified fuzzy
IF-THEN rules satisfy the conditions of the Stone-Weierstrass theorem, i.e. they
can approximate all continuous functions on a compact domain.
For az + bz' we get

az + bz' = a (α1 z1 + α2 z2)/(α1 + α2) + b (α1' z1' + α2' z2')/(α1' + α2')
         = [a (α1 z1 + α2 z2)(α1' + α2') + b (α1' z1' + α2' z2')(α1 + α2)] / [(α1 + α2)(α1' + α2')]
         = [α1 α1'(a z1 + b z1') + α1 α2'(a z1 + b z2') + α2 α1'(a z2 + b z1') + α2 α2'(a z2 + b z2')]
           / (α1 α1' + α1 α2' + α2 α1' + α2 α2').

So az + bz' is the output of a fuzzy inference system with four simplified fuzzy
IF-THEN rules, where the individual rule outputs are a z1 + b z1', a z1 + b z2',
a z2 + b z1' and a z2 + b z2', and the firing strengths of the associated rules
are α1 α1', α1 α2', α2 α1' and α2 α2', respectively.
Finally, for zz' we obtain

zz' = (α1 z1 + α2 z2)/(α1 + α2) × (α1' z1' + α2' z2')/(α1' + α2')
    = [α1 α1' z1 z1' + α1 α2' z1 z2' + α2 α1' z2 z1' + α2 α2' z2 z2']
      / (α1 α1' + α1 α2' + α2 α1' + α2 α2').

So zz' is the output of a fuzzy inference system with four simplified fuzzy
IF-THEN rules, where the individual rule outputs are z1 z1', z1 z2', z2 z1' and
z2 z2', and the firing strengths of the associated rules are α1 α1', α1 α2',
α2 α1' and α2 α2', respectively.

Which completes the proof.
Exercise 4.19 Let A1 = (a1, α) and A2 = (a2, α) be fuzzy numbers of symmetric
triangular form. Compute analytically the membership function of their
product-sum, A1 ⊕ A2, defined by

(A1 ⊕ A2)(y) = sup_{x1 + x2 = y} PAND(A1(x1), A2(x2)) = sup_{x1 + x2 = y} A1(x1) A2(x2).
Solution 4.19 The membership functions of A1 = (a1, α) and A2 = (a2, α) are
defined by

A1(t) =
  1 - |a1 - t|/α   if |a1 - t| ≤ α
  0                otherwise

A2(t) =
  1 - |a2 - t|/α   if |a2 - t| ≤ α
  0                otherwise
First we show that the support of the product-sum, A1 ⊕ A2, is equal to the
sum of the supports of A1 and A2, i.e.

supp(A1 ⊕ A2) = supp(A1) + supp(A2) = (a1 - α, a1 + α) + (a2 - α, a2 + α)
             = (a1 + a2 - 2α, a1 + a2 + 2α).

Indeed, the product A1(x1)A2(x2) is positive if and only if A1(x1) > 0 and
A2(x2) > 0, i.e. x1 ∈ (a1 - α, a1 + α) and x2 ∈ (a2 - α, a2 + α). This means
that (A1 ⊕ A2)(y) is positive if and only if y can be represented as the sum of
some x1 from supp(A1) and some x2 from supp(A2).
From the definition of the product-sum it follows that (A1 ⊕ A2)(y), for
y ∈ [a1 + a2 - 2α, a1 + a2], is equal to the optimal value of the following
mathematical programming problem

(1 - (a1 - x)/α)(1 - (a2 - y + x)/α)     (4.2)

subject to a1 - α ≤ x ≤ a1,  a2 - α ≤ y - x ≤ a2.

Using Lagrange's multipliers method for the solution of (4.2) we get that its
optimal value is

(1 - (a1 + a2 - y)/(2α))²

and its unique solution is x = (a1 - a2 + y)/2 (where the derivative of the
objective function vanishes).
In order to determine (A1 ⊕ A2)(y), for y ∈ [a1 + a2, a1 + a2 + 2α], we need to
solve the following mathematical programming problem

(1 - (x - a1)/α)(1 - (y - x - a2)/α)     (4.3)

subject to a1 ≤ x ≤ a1 + α,  a2 ≤ y - x ≤ a2 + α.

Using Lagrange's multipliers method for the solution of (4.3) we get that its
optimal value is

(1 - (y - (a1 + a2))/(2α))².
Summarizing these findings we obtain that

(A1 ⊕ A2)(y) =
  (1 - |a1 + a2 - y|/(2α))²   if |a1 + a2 - y| ≤ 2α
  0                           otherwise     (4.4)
Figure 4.12 Product-sum of fuzzy numbers (1, 3/2) and (2, 3/2).
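Formula (4.4) can be compared with a brute-force computation of the sup; the grid spacing and the evaluation point below are illustrative.

def A(t, a, alpha):
    return max(0.0, 1 - abs(a - t)/alpha)

a1, a2, alpha = 1.0, 2.0, 1.5      # the fuzzy numbers (1, 3/2) and (2, 3/2) of Figure 4.12
y = 3.8
sup = max(A(x/1000, a1, alpha)*A(y - x/1000, a2, alpha) for x in range(-2000, 6001))
closed = max(0.0, 1 - abs(a1 + a2 - y)/(2*alpha))**2
print(round(sup, 4), round(closed, 4))   # both equal 0.5378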
Exercise 4.20 Let Ai = (ai, α), i ∈ N, be fuzzy numbers of symmetric triangular
form. Suppose that

a := Σ_{i=1}^∞ ai

exists and is finite. Find the limit distribution of the product-sum

A1 ⊕ ... ⊕ An

when n → ∞.
Solution 4.20 Let us denote by Bn the product-sum of Ai, i = 1, ..., n, i.e.

Bn = A1 ⊕ ... ⊕ An.

Making an induction argument on n, we show that

Bn(y) =
  (1 - |a1 + ... + an - y|/(nα))^n   if |a1 + ... + an - y| ≤ nα
  0                                  otherwise     (4.5)
From (4.4) it follows that (4.5) holds for n = 2. Let us assume that it holds
for some n ∈ N. Then using the definition of the product-sum we obtain

B_{n+1}(y) = (Bn ⊕ A_{n+1})(y) = sup_{x1 + x2 = y} Bn(x1) A_{n+1}(x2)
           = sup_{x1 + x2 = y} (1 - |a1 + ... + an - x1|/(nα))^n (1 - |a_{n+1} - x2|/α)
           = (1 - |a1 + ... + a_{n+1} - y|/((n + 1)α))^{n+1}.
This ends the proof. From (4.5) we obtain the limit distribution of the Bn's as

lim_{n→∞} Bn(y) = lim_{n→∞} (1 - |a1 + ... + an - y|/(nα))^n = exp(-|a - y|/α).
Figure 4.13 The limit distribution of the product-sum of Ai, i ∈ N.
Exercise 4.21 Suppose the unknown nonlinear mapping to be realized by fuzzy
systems can be represented as

y^k = f(x^k) = f(x^k_1, ..., x^k_n)     (4.6)

for k = 1, ..., K, i.e. we have the following training set

{(x^1, y^1), ..., (x^K, y^K)}.

For modeling the unknown mapping in (4.6), we employ three simplified fuzzy
IF-THEN rules of the following type

if x is small then y = z1
if x is medium then y = z2
if x is big then y = z3

where the linguistic terms A1 = "small", A2 = "medium" and A3 = "big" are of
triangular form with membership functions (see Figure 4.14)

A1(v) =
  1                    if v ≤ c1
  (c2 - v)/(c2 - c1)   if c1 ≤ v ≤ c2
  0                    otherwise
A2(u) =
  (u - c1)/(c2 - c1)   if c1 ≤ u ≤ c2
  (c3 - u)/(c3 - c2)   if c2 ≤ u ≤ c3
  0                    otherwise

A3(u) =
  1                    if u ≥ c3
  (u - c2)/(c3 - c2)   if c2 ≤ u ≤ c3
  0                    otherwise

Derive the steepest descent method for tuning the premise parameters
{c1, c2, c3} and the consequent parameters {z1, z2, z3}.
Figure 4.14 Initial fuzzy partition with three linguistic terms.
Solution 4.21 Let x be the input to the fuzzy system. The firing levels of
the rules are computed by

α1 = A1(x),  α2 = A2(x),  α3 = A3(x),

and the output of the system is computed by

o = (α1 z1 + α2 z2 + α3 z3) / (α1 + α2 + α3)
  = (A1(x) z1 + A2(x) z2 + A3(x) z3) / (A1(x) + A2(x) + A3(x))
  = A1(x) z1 + A2(x) z2 + A3(x) z3,

where we have used the identity A1(x) + A2(x) + A3(x) = 1 for all x ∈ [0, 1].
We define the measure of error for the k-th training pattern as usual,

E_k = E_k(c1, c2, c3, z1, z2, z3) = (1/2)(o_k(c1, c2, c3, z1, z2, z3) - y^k)²,

where o_k is the computed output from the fuzzy system corresponding to the
input pattern x^k, and y^k is the desired output, k = 1, ..., K.
The steepest descent method is used to learn z_i in the consequent part of the
i-th fuzzy rule. That is,

z1(t + 1) = z1(t) - η ∂E_k/∂z1 = z1(t) - η(o_k - y^k) A1(x^k),
z2(t + 1) = z2(t) - η ∂E_k/∂z2 = z2(t) - η(o_k - y^k) A2(x^k),
z3(t + 1) = z3(t) - η ∂E_k/∂z3 = z3(t) - η(o_k - y^k) A3(x^k),

where x^k is the input to the system, η > 0 is the learning constant and t
indexes the number of the adjustments of z_i.
In a similar manner we can tune the centers of A1, A2 and A3:
c1(t + 1) = c1(t) - η ∂E_k/∂c1,  c2(t + 1) = c2(t) - η ∂E_k/∂c2,  c3(t + 1) = c3(t) - η ∂E_k/∂c3,

where η > 0 is the learning constant and t indexes the number of the
adjustments of the parameters.
The partial derivative of the error function E_k with respect to c1 can be
written as

∂E_k/∂c1 = (o_k - y^k) ∂o_k/∂c1 = (o_k - y^k) (c2 - x^k)/(c2 - c1)² (z1 - z2)

if c1 ≤ x^k ≤ c2, and zero otherwise.
It should be noted that the adjustment of a center cannot be done independently
of the other centers, because the inequality

0 ≤ c1(t + 1) < c2(t + 1) < c3(t + 1) ≤ 1

must hold for all t.
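A compact sketch of the whole tuning loop under these update rules; the target mapping, the initial parameters and the learning rate are illustrative, only the c1 update is written out explicitly (c2 and c3 are tuned analogously), and the centers are kept ordered as required.

import math

def memberships(x, c):
    c1, c2, c3 = c
    a1 = 1.0 if x <= c1 else (c2 - x)/(c2 - c1) if x <= c2 else 0.0
    a3 = 1.0 if x >= c3 else (x - c2)/(c3 - c2) if x >= c2 else 0.0
    return a1, 1.0 - a1 - a3, a3                # uses the partition-of-unity identity

def tune(data, c, z, eta=0.05, epochs=2000):
    for _ in range(epochs):
        for xk, yk in data:
            a = memberships(xk, c)
            err = a[0]*z[0] + a[1]*z[1] + a[2]*z[2] - yk    # o_k - y^k
            for i in range(3):                               # consequent updates
                z[i] -= eta*err*a[i]
            if c[0] <= xk <= c[1]:                           # premise update for c1
                c[0] -= eta*err*(c[1] - xk)/(c[1] - c[0])**2*(z[0] - z[1])
            c[0] = max(0.0, min(c[0], c[1] - 0.05))          # keep 0 <= c1 < c2 < c3 <= 1
    return c, z

data = [(x/10, math.sin(math.pi*x/10)) for x in range(11)]   # samples of an "unknown" mapping
print(tune(data, [0.0, 0.5, 1.0], [0.0, 0.0, 0.0]))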
Index
C∞ distance, 55
R-implication, 61
S-implication, 61
α-cut, 14, 259
ε-completeness, 106, 233
Łukasiewicz t-conorm, 27
Łukasiewicz t-norm, 26
activation function, 159, 212
activation hyperbox , 275
AND fuzzy neuron, 213
andlike OWA operator, 126
ANFIS architecture, 233
ARIC architecture, 209
arithmetic mean, 122
articial neuron, 158
averaging operator, 121
bag, 119
ball and beam system, 113
basic property of GMP, 71
binary fuzzy relation, 33
bipolar activation function, 178
Cartesian product, 36
center-of-gravity defuzzication, 95
centroid defuzzication, 115
clause checking layer, 237
complement of a fuzzy set, 24
composition of fuzzy relations, 37
compositional rule of inference, 69
conjunction rule, 68
convex fuzzy set, 15
crisp control action, 99
crisp relation, 31
cumulative error, 166, 180, 263
data base strategy, 106
defuzzication, 95
degree of membership, 12
delta learning rule, 176
descent method, 172
discrete Hamming distance, 56
discretization, 107
disjunction rule, 68
downhill direction, 172
empty fuzzy set, 21
entailment rule, 68
equality of fuzzy sets, 20
error correction learning, 163
error function, 233, 260, 266, 269,
287
error signal, 175
Euclidean norm, 162
extended addition, 46
extended division, 47
extended multiplication, 47
extended subtraction, 46
extension principle, 42
feature space, 280
firing strength, 99, 229, 267, 334
FNN of Type 1, 218
FNN of Type 2, 254
FNN of Type 5, 218
FNN of Type 6, 218
FNN of Type 3, 258
Funahashi's theorem, 188
fuzzication operator, 91
fuzzy classication, 280
fuzzy control rule, 88
fuzzy implication, 61
fuzzy logic controller, 88
fuzzy mapping, 48
fuzzy max, 52
fuzzy min, 52
fuzzy neuron, 213
fuzzy number, 15
fuzzy number of type LR, 18
fuzzy partition, 64, 108, 280, 323
fuzzy point, 21
fuzzy quantity, 12
fuzzy relation, 33, 324
fuzzy rule extraction, 275
fuzzy screening system, 133
fuzzy set, 12
fuzzy subsethood, 20
fuzzy training set, 254
Gaines implication, 62
Gaussian membership function, 115
generalized p-mean, 122, 286
generalized delta rule, 186
Generalized Modus Ponens, 69
Generalized Modus Tollens, 70
genetic algorithms, 265
geometric mean, 122
gradient vector, 173
Gödel implication, 62
Hamacher's t-conorm, 27
Hamacher's t-norm, 26, 324
Hamming distance, 56
harmonic mean, 122
Hausdor distance, 55
height defuzzication, 97
hidden layer, 184
hybrid fuzzy neural network, 221
hybrid neural net, 212, 335
identity quantier, 131
implication-OR fuzzy neuron, 215
individual rule output, 99, 117, 230,
318
inference mechanism, 99
inhibition hyperbox, 276
intersection of fuzzy sets, 23
Kleene-Dienes implication, 62
Kleene-Dienes-Łukasiewicz, 62
Kohonen's learning algorithm, 193
Kwan and Cai's fuzzy neuron, 215
Larsen implication, 62
Larsen's inference mechanism, 104
learning of membership functions,
265
learning rate, 165, 269
linear activation function, 174
linear combination of fuzzy num-
bers, 49
linear threshold unit, 160
linguistic modiers, 64
linguistic quantiers, 128
linguistic variable, 63
Mamdani implication, 62
Mamdani's inference mechanism, 99
Mamdani-type FLC, 89
maximum t-conorm, 27
measure of andness, 125
measure of dispersion, 127
measure of orness, 125
MICA operator, 120
middle-of-maxima method, 96
minimum t-norm, 26
Modus Ponens, 68
Modus Tollens, 70
negation rule, 68
Nguyens theorem, 51
normal fuzzy set, 14
OR fuzzy neuron, 214
orlike OWA operator, 126
overall system output, 99, 117, 318
OWA operator, 123
parity function, 169
partial derivative, 173
perceptron learning, 165
portfolio value, 317
probabilistic t-conorm, 27
probabilistic t-norm, 26
projection of a fuzzy relation, 35
projection rule, 68
quasi-arithmetic mean, 122
regular fuzzy neural net, 218
regular neural net, 212
removal of the threshold, 161
scalar product, 162
simplied fuzzy rules, 266
single-layer feedforward net, 164
singleton fuzzier, 115
slope, 269
steepest descent method, 266, 268
Stone-Weierstrass theorem, 189
subset property of GMP, 72
Sugeno's inference mechanism, 102
sup-T composition, 38
sup-T compositional rule, 69
superset property of GMP, 72
supervised learning, 184
support, 14
t-conorm-based union, 28
t-norm implication, 62
t-norm-based intersection, 28
threshold-level, 160
total indeterminance, 71
trade-offs, 121
training set, 163, 268
trapezoidal fuzzy number, 18
triangular conorm, 26
triangular fuzzy number, 17
triangular norm, 25
Tsukamoto's inference mechanism, 100
Uehara and Fujise's method, 249
Umano and Ezawa's method, 245
union of fuzzy sets, 23
unipolar activation function, 178
unit score, 136
universal approximators, 115
universal fuzzy set, 21
unsupervised learning, 191
weak t-norm, 26
weight vector, 159
window type OWA operator, 125
XOR problem, 169
Yager's t-conorm, 27
Yager's t-norm, 26, 286