
Fuzzy Sets and Systems 112 (2000) 27-39
www.elsevier.com/locate/fss

A fuzzy backpropagation algorithm


Stefka Stoeva^a, Alexander Nikov^b,*

^a Bulgarian National Library, V. Levski 88, BG-1504 Sofia, Bulgaria
^b Technical University of Sofia, P.O. Box 41, BG-1612 Sofia, Bulgaria

Received November 1994; revised March 1998

* Corresponding author. E-mail address: nikov@vmei.acad.bg (A. Nikov)

Abstract

This paper presents an extension of the standard backpropagation algorithm (SBP). The proposed learning algorithm is based on the fuzzy integral of Sugeno and is thus called the fuzzy backpropagation (FBP) algorithm. Necessary and sufficient conditions for convergence of FBP algorithm for single-output networks in the cases of single and multiple training patterns are proved. A computer simulation illustrates and confirms the theoretical results. FBP algorithm shows a considerably greater convergence rate compared to SBP algorithm. Other advantages of FBP algorithm are that it approaches the target value without oscillations and requires no assumptions about the probability distribution and independence of the input data. The convergence conditions enable training by automation of the weight-tuning process (quasi-unsupervised learning), pointing out the interval where the target value belongs. This supports the acquisition of implicit knowledge and ensures wide application, e.g. for the creation of adaptable user interfaces, assessment of products, intelligent data analysis, etc. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Neural networks; Learning algorithm; Fuzzy logic; Multicriteria analysis

1. Introduction

Recently, the interest in neural networks has grown dramatically: it is expected that neural network models will be useful both as models of real brain functions and as computational devices. One of the most popular neural networks is the layered feedforward neural network with a backpropagation (BP) least-mean-square learning algorithm [17]. Its topology is shown in Fig. 1. The network edges connect the processing units, called neurons. With each neuron input there is associated a weight representing its relative

importance in the set of the neuron's inputs. The input values to each neuron are accumulated through the net function to yield the net value: the net value is a weighted linear combination of the neuron's input values.

For the purpose of multicriteria analysis [13,20,21], a hierarchy of criteria is used to determine an overall pattern evaluation. The hierarchy can be encoded into a hierarchical neural network where each neuron corresponds to a criterion. The input neurons of the network correspond to single criteria; the hidden and output neurons correspond to complex criteria. The net function of the neurons can be used as the evaluation function. However, the criteria can be combined linearly only when it is assumed that they are independent.



Fig. 1. Example neural network with single-output neuron and a hidden layer.

But in practice the criteria are correlated to some degree, and a linear evaluation function is unable to capture the relationships between the criteria. In order to overcome this drawback of the standard backpropagation (SBP) algorithm, we propose a fuzzy extension called the fuzzy backpropagation (FBP) algorithm. It determines the net value through the fuzzy integral of Sugeno [3] and thus does not assume independence between the criteria. Another advantage of FBP algorithm is that it always approaches the target value without oscillations, and there is no possibility of falling into a local minimum. Necessary and sufficient conditions for convergence of FBP algorithm for single-output networks in the cases of single and multiple training patterns are proved. The results of a computer simulation are reported and analysed: FBP algorithm shows a considerably greater convergence rate compared to SBP algorithm.

2. Description of the algorithm

2.1. Standard backpropagation algorithm (SBP algorithm)

First we describe the standard backpropagation algorithm [17,18], because the FBP algorithm proposed by us can be viewed as a fuzzy-logic-based extension of the standard one. SBP algorithm is an iterative gradient algorithm designed to minimise the mean-squared error between the actual output and the desired output by modifying the network weights. In the following we use for simplicity a network with one hidden layer, but the results are also valid for networks with more than one hidden layer.

Let us consider a layered feedforward neural network with n_0 inputs, n_1 hidden neurons and a single-output neuron (cf. Fig. 1). The input-output relation of each neuron of the neural network is defined as follows:

Hidden neurons:

net_i^{(1)} = \sum_{j=1}^{n_0} w_{ij}^{(1)} x_j, \quad i = 1, 2, \ldots, n_1,

a_i^{(1)} = f(net_i^{(1)}).

Output neuron:

net^{(2)} = \sum_{i=1}^{n_1} w_{1i}^{(2)} a_i^{(1)},

a^{(2)} = f(net^{(2)}),

where X = (x_1, x_2, \ldots, x_{n_0}) is the vector of the pattern's inputs.


The activation functions f(net) can be linear ones or Fermi functions of the type

f(net) = \frac{1}{1 + e^{-4(net - \theta)}}.

Analogous to the writing and reading phases, there are also two phases in the supervised-learning BP network. There is a learning (training) phase, when a training data set is used to determine the weights that define the neural model. The task of the BP algorithm is thus to find the optimal weights that minimise the error between the target value and the actual response. The trained neural model is then used in the retrieving phase to process and evaluate real patterns.

Let the pattern's output corresponding to the vector of the pattern's inputs X be called the target value t. The learning of the neural network for a training pattern (X, t) is performed in order to minimise the squared error between the target and the actual response:

E = \frac{1}{2} (t - a^{(2)})^2.

When there is more than one training pattern (X_m, t^m), m = 1, 2, \ldots, M, the sum-squared error is defined as

E = \frac{1}{2} \sum_{m=1}^{M} (t^m - a_m^{(2)})^2.

The weights are changed according to the following formula:

w_{ij}^{new} = w_{ij}^{old} + \Delta w_{ij}, \qquad \Delta w_{ij}^{(l)} = \eta \, \delta_i^{(l)} a_j,

where η ∈ [0, 1] denotes the learning rate, δ_i^{(l)} is the error signal relevant to the ith neuron, and a_j is the signal at the jth neuron input. The error signal is obtained recursively by backpropagation.

Output layer:

\delta^{(2)} = f'(net^{(2)}) (t - a^{(2)}),   (1)

where the derivative of the activation function is f'(net) = 4 f(net)(1 - f(net)).

Hidden layer:

\delta_i^{(1)} = f'(net_i^{(1)}) \, \delta^{(2)} w_{1i}^{(2)},

where δ^{(2)} is determined by formula (1).
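A sketch of one SBP training step, built on forward_sbp above. It assumes the Fermi activation with θ = 0.5, so that the derivative can be expressed through the output as f'(net) = 4 f(net)(1 - f(net)); the names are again our own.

```python
import math

def fermi(net, theta=0.5):
    # Fermi (logistic) activation with slope 4 and threshold theta.
    return 1.0 / (1.0 + math.exp(-4.0 * (net - theta)))

def sbp_step(x, t, w1, w2, eta=0.5):
    """One delta-rule update for the single-output network."""
    a1, a2 = forward_sbp(x, w1, w2, f=fermi)
    # Output error signal, formula (1): delta2 = f'(net)(t - a2),
    # written through the output value as 4*a2*(1 - a2).
    delta2 = 4.0 * a2 * (1.0 - a2) * (t - a2)
    # Hidden error signals: delta1_i = f'(net_i) * delta2 * w2[i].
    delta1 = [4.0 * a1[i] * (1.0 - a1[i]) * delta2 * w2[i]
              for i in range(len(a1))]
    # Delta rule: w_new = w_old + eta * delta * input signal.
    for i in range(len(w2)):
        w2[i] += eta * delta2 * a1[i]
        for j in range(len(x)):
            w1[i][j] += eta * delta1[i] * x[j]
    return a2
```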

2.2. Fuzzy backpropagation algorithm (FBP algorithm)

Recently, many neuro-fuzzy models have been introduced [2,6,8,10,11]. The following extension of the standard BP algorithm to a fuzzy BP algorithm is proposed. To yield the net value net_i, the input values of the ith neuron are aggregated. The mapping is mathematically described by the fuzzy integral of Sugeno, which relies on a psychological background.

Let x_j ∈ [0, 1] ⊂ R, j = 1, ..., n_0, and let all network weights w_{ij}^{(l)} ∈ [0, 1] ⊂ R, where l = 1, 2. Let P(J) be the power set of the set J of the network inputs (indices), where J_{ij} is the jth input of the ith neuron. Further on, for simplicity, network inputs are identified with their indices. Let g : P(J) → [0, 1] be a function defined as follows:

g(\emptyset) = 0,
\vdots
g(\{J_{ij}\}) = w_{ij}^{(1)}, \quad 1 \leq j \leq n_0,
\vdots
g(\{J_{ij_1}, J_{ij_2}, \ldots, J_{ij_r}\}) = w_{ij_1}^{(1)} \vee w_{ij_2}^{(1)} \vee \cdots \vee w_{ij_r}^{(1)}, \quad where \{j_1, j_2, \ldots, j_r\} \subseteq \{1, 2, \ldots, n_0\},
\vdots
g(\{J_{i1}, J_{i2}, \ldots, J_{ij}, \ldots, J_{in_0}\}) = 1,

where the notations \vee and \wedge stand for the operations MAX and MIN, respectively, in the unit real interval [0, 1] ⊂ R. It is proved [20] that the function g is a fuzzy measure on P(J). Therefore, the functional assumes the form of the Sugeno integral over the finite reference set J. Let h : P(I) → [0, 1] be a function defined in a similar way.
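As a sketch, the measure g can be coded directly from this definition; the function name and the w_row argument are our own conventions, not the paper's.

```python
def fuzzy_measure(subset, full_set, w_row):
    """g from Section 2.2 for one neuron i. subset: frozenset G of
    input indices; full_set: frozenset J of all indices; w_row:
    mapping j -> w_ij^(1) in [0, 1]."""
    if not subset:
        return 0.0            # g(empty set) = 0
    if subset == full_set:
        return 1.0            # g(J) = 1 by definition
    return max(w_row[j] for j in subset)   # MAX of the weights in G
```

Monotonicity (G ⊆ G' implies g(G) ≤ g(G')) holds because the maximum can only grow when indices are added, which, together with the boundary values above, is what makes g a fuzzy measure.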


Hidden neurons:

net_i^{(1)} = \bigvee_{G \in P(J)} \Big[ \bigwedge_{p \in G} x_p \wedge g(G) \Big], \quad i = 1, 2, \ldots, n_1,

a_i^{(1)} = f(net_i^{(1)}).

Output neuron:

net^{(2)} = \bigvee_{G \in P(I)} \Big[ \bigwedge_{p \in G} a_p^{(1)} \wedge h(G) \Big],

a^{(2)} = f(net^{(2)}).

The activation functions f(net) are linear ones or equal to

f(net) = \frac{1}{1 + e^{-4(net - 0.5)}},

in order to obtain f(net) ∈ [0, 1]. Therefore, the activation values a_i^{(l)} ∈ [0, 1], where l = 1, 2. The error signal is obtained by backpropagation.

Output layer:

\delta^{(2)} = |t - a^{(2)}|.   (2)

Hidden layer:

\delta_i^{(1)} = |t - a_i^{(1)}|, \quad i = 1, \ldots, n_1.   (3)

In order to obtain w_{ij}^{new} ∈ [0, 1], the weights are handled as follows:

Case t = a^{(2)}: no adjustment of the weights is necessary.

Case t > a^{(2)}:
If w_{ij}^{old} < t then w_{ij}^{new} = 1 \wedge (w_{ij}^{old} + \Delta w_{ij}).
If w_{ij}^{old} \geq t then w_{ij}^{new} = w_{ij}^{old}.

Case t < a^{(2)}:
If w_{ij}^{old} > t then w_{ij}^{new} = 0 \vee (w_{ij}^{old} - \Delta w_{ij}).
If w_{ij}^{old} \leq t then w_{ij}^{new} = w_{ij}^{old},

where \Delta w_{ij} = \eta \, \delta_i^{(l)} and δ_i^{(l)} is determined by formula (2) resp. (3).

In the following, let us denote the iteration steps of the proposed algorithm by s, the corresponding weights by w_{ij}^{(l)}(s) and the activation values by a_i^{(l)}(s).
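The following sketch puts the pieces together: the Sugeno (max-min) net value over all nonempty subsets, the absolute-error signals (2) and (3), and the clipped weight-update cases above. It reuses fuzzy_measure from the previous sketch, assumes a linear activation, and enumerates all subsets (exponential in the number of inputs), so it only mirrors the formulas and is not an efficient implementation.

```python
from itertools import combinations

def subsets(indices):
    """All nonempty subsets G of a collection of indices."""
    idx = list(indices)
    for r in range(1, len(idx) + 1):
        for c in combinations(idx, r):
            yield frozenset(c)

def sugeno_net(values, weights, n):
    """net = MAX over G of [ MIN_{p in G} values[p]  MIN  measure(G) ]."""
    full = frozenset(range(n))
    return max(min(min(values[p] for p in G),
                   fuzzy_measure(G, full, weights))
               for G in subsets(range(n)))

def fbp_step(x, t, w1, w2, eta=1.0):
    """One FBP step for the single-output network (linear activation).
    Returns the output a2 computed before the weight adjustment."""
    n0, n1 = len(x), len(w1)
    a1 = [sugeno_net(x, w1[i], n0) for i in range(n1)]
    a2 = sugeno_net(a1, w2, n1)
    if t == a2:
        return a2                          # case t = a2: no adjustment
    def update(w, delta):
        if t > a2 and w < t:               # case t > a2
            return min(1.0, w + eta * delta)
        if t < a2 and w > t:               # case t < a2
            return max(0.0, w - eta * delta)
        return w
    for i in range(n1):
        w2[i] = update(w2[i], abs(t - a2))               # delta^(2), (2)
        for j in range(n0):
            w1[i][j] = update(w1[i][j], abs(t - a1[i]))  # delta_i^(1), (3)
    return a2
```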

2.3. Necessary and sufficient conditions for convergence of FBP algorithm for single-output networks

The following nontrivial necessary and sufficient conditions for convergence of FBP algorithm in the case of single-output neural networks will be formulated and proved: the interval between the minimum and maximum of the pattern's inputs shall include the pattern's output, i.e. the target value.

2.3.1. Convergence condition in case of a single training pattern

Definition 1. FBP algorithm is convergent to the target value t if there exists a number s_0 ∈ N^+ such that the following equalities hold:

t = a^{(2)}(s_0) = a^{(2)}(s_0 + 1) = a^{(2)}(s_0 + 2) = \cdots,

w_{ij}^{(l)}(s_0) = w_{ij}^{(l)}(s_0 + 1) = w_{ij}^{(l)}(s_0 + 2) = \cdots, \quad l = 1, 2.

The following lemma concerns the outputs of the neurons at the hidden layer.

Lemma 1. (i) If a_i^{(1)}(s) > t then a_i^{(1)}(s+1) \geq t.
(ii) If a_i^{(1)}(s) < t then a_i^{(1)}(s+1) \leq t.

The proof of this lemma is similar to that of the lemma in [19]. The following lemma concerns the output of the output neuron.

Lemma 2. (i) If a^{(2)}(s) > t then a^{(2)}(s+1) \geq t.
(ii) If a^{(2)}(s) < t then a^{(2)}(s+1) \leq t.

Proof. In the sequel, for each set G ∈ P(I), let h_s(G) denote the value of the fuzzy measure h at step s,

h_s(G) = \bigvee_{p \in G} w_{1p}^{(2)}(s) \ (G \neq I), \qquad h_s(I) = 1,

and let

v_s(G) = \bigwedge_{p \in G} a_p^{(1)}(s),

so that

a^{(2)}(s) = \bigvee_{G \in P(I)} [v_s(G) \wedge h_s(G)].

(i) Since a^{(2)}(s) > t and P(I) is a finite set, there exists a set G_s ∈ P(I) such that

v_s(G_s) \wedge h_s(G_s) = a^{(2)}(s) > t,

i.e. v_s(G_s) \geq a^{(2)}(s) > t and h_s(G_s) \geq a^{(2)}(s) > t.

By Lemma 1 it follows that v_{s+1}(G_s) \geq t. Since G_s is a finite set, there exists an input p_s ∈ G_s such that w_{1p_s}^{(2)}(s) = h_s(G_s). Therefore, the weight w_{1p_s}^{(2)}(s+1) is produced through the formula

w_{1p_s}^{(2)}(s+1) = 0 \vee (w_{1p_s}^{(2)}(s) - \eta |t - a^{(2)}(s)|)
\geq 0 \vee (w_{1p_s}^{(2)}(s) - |t - a^{(2)}(s)|)
= 0 \vee (w_{1p_s}^{(2)}(s) - a^{(2)}(s) + t)
\geq 0 \vee (a^{(2)}(s) - a^{(2)}(s) + t) = 0 \vee t = t.

So, v_{s+1}(G_s) \wedge w_{1p_s}^{(2)}(s+1) \geq t. Let

a^{(2)}(s+1) = v_{s+1}(G_{s+1}) \wedge h_{s+1}(G_{s+1})

for some set G_{s+1} ∈ P(I). Then

a^{(2)}(s+1) = v_{s+1}(G_{s+1}) \wedge h_{s+1}(G_{s+1}) \geq v_{s+1}(G_s) \wedge w_{1p_s}^{(2)}(s+1) \geq t.

Thus, a^{(2)}(s+1) \geq t.

(ii) Since a^{(2)}(s) < t and P(I) is a finite set, there exists a set G_s ∈ P(I) such that

v_s(G_s) \wedge h_s(G_s) = a^{(2)}(s) < t.

Let

a^{(2)}(s+1) = v_{s+1}(G_{s+1}) \wedge h_{s+1}(G_{s+1})

for some set G_{s+1} ∈ P(I). Since G_{s+1} is a finite set, there exists an input p_{s+1} ∈ G_{s+1} such that w_{1p_{s+1}}^{(2)}(s+1) = h_{s+1}(G_{s+1}). Then it holds that

t > a^{(2)}(s) = v_s(G_s) \wedge h_s(G_s) \geq v_s(G_{s+1}) \wedge w_{1p_{s+1}}^{(2)}(s).

If v_s(G_{s+1}) \leq a^{(2)}(s) < t, then by Lemma 1 it follows that v_{s+1}(G_{s+1}) \leq t, and thus a^{(2)}(s+1) \leq t.

If w_{1p_{s+1}}^{(2)}(s) \leq a^{(2)}(s) < t, the weight w_{1p_{s+1}}^{(2)}(s+1) is produced through the formula

w_{1p_{s+1}}^{(2)}(s+1) = 1 \wedge (w_{1p_{s+1}}^{(2)}(s) + \eta |t - a^{(2)}(s)|)
\leq 1 \wedge (w_{1p_{s+1}}^{(2)}(s) + |t - a^{(2)}(s)|)
= 1 \wedge (w_{1p_{s+1}}^{(2)}(s) + t - a^{(2)}(s))
\leq 1 \wedge (a^{(2)}(s) + t - a^{(2)}(s)) = 1 \wedge t = t.

Hence a^{(2)}(s+1) \leq t.

Theorem 1. FBP algorithm is convergent to the target value t iff for the neurons at the input layer the following condition holds: there exist inputs j and \bar{j} such that

a_j^{(0)} \geq t \quad \& \quad a_{\bar{j}}^{(0)} \leq t.   (4)

Proof. The "if" part of Theorem 1 is proved by assuming that condition (4) is not satisfied. Suppose there is no input j ∈ J such that a_j^{(0)} ≥ t. Then for each input j ∈ J, 1 ≤ j ≤ n_0, it holds that a_j^{(0)} < t. Hence, for each set G ∈ P(J) it holds that

v(G) = \bigwedge_{p \in G} a_p^{(0)} < t.

So, at each iteration step s it holds that

a_i^{(1)}(s) = \bigvee_{G \in P(J)} [v(G) \wedge g_{is}(G)] < t \wedge \bigvee_{G \in P(J)} g_{is}(G) = t \wedge 1 = t

for each i, 1 ≤ i ≤ n_1, where g_{is}(G) = \bigvee_{p \in G} w_{ip}^{(1)}(s) for G ≠ J and g_{is}(J) = 1 is the fuzzy measure g at step s. Therefore, at each iteration step s it holds that


a^{(2)}(s) = \bigvee_{G \in P(I)} [v_s(G) \wedge h_s(G)] = \bigvee_{G \in P(I)} \Big[ \bigwedge_{p \in G} a_p^{(1)}(s) \wedge h_s(G) \Big] < \bigvee_{G \in P(I)} [t \wedge h_s(G)] = t \wedge \bigvee_{G \in P(I)} h_s(G) = t \wedge 1 = t.

Thus, in such a case FBP algorithm is not convergent to the target value t. The assumption that there is no input j ∈ J such that a_j^{(0)} ≤ t implies in a similar way that at each iteration step s, a^{(2)}(s) > t holds. Thus, in this case FBP algorithm is not convergent to the target value t, either.

The "only if" part of Theorem 1 is proved by assuming that FBP algorithm is not convergent to the target value t. Hence, there exists no iteration step s_0 ∈ N^+ such that a^{(2)}(s_0) = t, i.e. at each iteration step s either a^{(2)}(s) < t or a^{(2)}(s) > t holds. If a^{(2)}(1) > t is fulfilled, then at each iteration step s, a^{(2)}(s) > t is fulfilled, too. The proof is performed through mathematical induction. Let a^{(2)}(1) > t be fulfilled indeed. Since FBP algorithm is not convergent, by Lemma 2 it follows that a^{(2)}(2) > t. Let now at the sth iteration step a^{(2)}(s) > t be fulfilled. Since FBP algorithm is not convergent, by Lemma 2 it follows that a^{(2)}(s+1) > t. According to the mathematical induction axiom, at each iteration step s, a^{(2)}(s) > t is satisfied.

Since I is a finite set, at each iteration step s there exists a set G_s ∈ P(I) such that

v_s(G_s) \wedge h_s(G_s) = a^{(2)}(s) > t,

i.e. v_s(G_s) > t and h_s(G_s) > t. Let s_0 ∈ N^+ be the iteration step at which the process of adjusting all the weights of the network comes to an end. Then the above inequalities can be fulfilled only if h_{s_0}(G_{s_0}) = 1, i.e. if the set G_{s_0} is equal to I, and for each neuron i, 1 ≤ i ≤ n_1, a_i^{(1)}(s_0) > t holds.

Since J is a finite set, for each neuron i, 1 ≤ i ≤ n_1, at the hidden layer there exists a set G_{is_0} ∈ P(J) such that

v(G_{is_0}) \wedge g_{is_0}(G_{is_0}) = a_i^{(1)}(s_0) > t,

i.e. v(G_{is_0}) > t and g_{is_0}(G_{is_0}) > t. Since the process of adjusting all the weights has already been stopped, the above inequalities can be fulfilled only if g_{is_0}(G_{is_0}) = 1, i.e. if the set G_{is_0} is equal to J, and for each input j, 1 ≤ j ≤ n_0, a_j^{(0)} > t holds. Therefore, condition (4) is not satisfied.

The assumption that a^{(2)}(s) < t holds implies in a similar way that condition (4) is not satisfied in that case, either.

2.3.2. Convergence condition in case of multiple training patterns

Definition 2. In the case of multiple training patterns (X_m, t), m = 1, ..., M, FBP algorithm is convergent to the target value t if there exists a number s_0 ∈ N^+ such that the following equalities hold:

t = a_m^{(2)}(s_0) = a_m^{(2)}(s_0 + 1) = a_m^{(2)}(s_0 + 2) = \cdots, \quad m = 1, \ldots, M,

w_{ij}^{(l)}(s_0) = w_{ij}^{(l)}(s_0 + 1) = w_{ij}^{(l)}(s_0 + 2) = \cdots, \quad l = 1, 2.

Theorem 2. Let more than one training pattern (X_m, t), m = 1, ..., M, be given. Let a_{mj}^{(0)}, a_{m\bar{j}}^{(0)} be inputs of the mth pattern such that Theorem 1 holds, i.e. a_{mj}^{(0)} ≥ t and a_{m\bar{j}}^{(0)} ≤ t. Let

\underline{a}_j^{(0)} = \bigwedge_{m=1}^{M} a_{mj}^{(0)} \quad and \quad \overline{a}_{\bar{j}}^{(0)} = \bigvee_{m=1}^{M} a_{m\bar{j}}^{(0)}.

Then FBP algorithm is convergent to the target value t iff the following condition holds:

\underline{a}_j^{(0)} \geq t \quad \& \quad \overline{a}_{\bar{j}}^{(0)} \leq t.   (5)

Proof. Let condition (5) be satisfied. Since the inequalities t ≤ \underline{a}_j^{(0)} ≤ a_{mj}^{(0)} and t ≥ \overline{a}_{\bar{j}}^{(0)} ≥ a_{m\bar{j}}^{(0)} hold for each m, m = 1, ..., M, according to Theorem 1 FBP algorithm is convergent to the target value t for each m, m = 1, ..., M.

Let now FBP algorithm be convergent to the target value t for each m, m = 1, ..., M. Then according to Theorem 1 the inequalities a_{mj}^{(0)} ≥ t and a_{m\bar{j}}^{(0)} ≤ t hold for each m = 1, ..., M. Therefore, condition (5) is satisfied, too.


Fig. 2. Example neural network with single-output neuron in the case of single- and multiple-training patterns.

3. Simulation results

With the help of a computer simulation we demonstrate the proposed FBP algorithm using two neural networks with a single-output neuron, in the case of a single training pattern (cf. Fig. 2) and in the case of multiple training patterns (cf. Figs. 2 and 7). The FBP algorithm is compared with SBP algorithm (cf. Fig. 8).

3.1. Single training pattern

In the case of a single training pattern, FBP algorithm was iterated with a linear activation function, training accuracy 0.001 (error goal), learning rate η = 1.0 and target value t = 0.8 (cf. Fig. 2). According to Theorem 1, FBP algorithm is convergent to a target value t belonging to the interval between the minimal and maximal input values [a_{\bar{j}}^{(0)}, a_j^{(0)}] = [0.1, 0.9]. Fig. 3 illustrates that for target values t = 0.2, 0.5 and 0.8, belonging to the interval [0.1, 0.9], the network was trained in only 2 or 3 steps, respectively. If the target value (t = 0.0 or 1.0) is outside the interval [0.1, 0.9], the network output cannot reach it and the training process is not convergent. This confirms the theoretical results. Here the final output values are 0.1 and 0.9 for t = 0.0 and t = 1.0, respectively.
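A hypothetical re-run of this experiment with the sketches above: the exact inputs of Fig. 2 are not reproduced in the text, so we only match their stated minimum 0.1 and maximum 0.9, and the initial weights and hidden-layer size are arbitrary choices of ours.

```python
x = [0.1, 0.3, 0.5, 0.7, 0.9]            # assumed inputs, min 0.1, max 0.9
w1 = [[0.5] * len(x) for _ in range(3)]  # 3 hidden neurons, our choice
w2 = [0.5, 0.5, 0.5]

t = 0.8                                  # inside [0.1, 0.9]: Theorem 1 applies
for step in range(1, 20):
    a2 = fbp_step(x, t, w1, w2, eta=1.0)
    if abs(t - a2) < 0.001:              # error goal from the text
        print(f"target {t} reached at step {step}")
        break
```

With t = 1.0 the same loop never fires the break, mirroring the non-convergence outside [0.1, 0.9].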


Fig. 3. Output values in case of single-training pattern for different target values.

Fig. 4. Output values in case of multiple-training patterns for different target values.


Fig. 5. Sum-squared error in case of multiple-training patterns for different target values.

3.2. Multiple training patterns

FBP algorithm was iterated with five training patterns (cf. Fig. 2), a linear activation function, error goal 0.001 and learning rate η = 1.0. The interval between the maximum of the minimal input values and the minimum of the maximal input values of the training patterns is [\overline{a}_{\bar{j}}^{(0)}, \underline{a}_j^{(0)}] = [0.4, 0.8]. Thus, according to Theorem 2, for the target values t = 0.4, 0.6 and 0.8, belonging to the interval [0.4, 0.8], the convergence of FBP algorithm is guaranteed. As illustrated in Fig. 4, FBP algorithm trains the network in only 10 steps. If the target value (t = 0.0 or 1.0) is outside the interval [0.4, 0.8], the output value oscillates; there is no convergence outside this interval. This confirms the theoretical result.

The sum-squared error for the targets t = 0.4, 0.6 and 0.8 reaches the error goal 0.001 in only two epochs (cf. Fig. 5), where during one epoch all five training patterns are learned. For target values outside the interval [0.4, 0.8] the network cannot be trained, and the final sum-squared error remains constant, equal to 0.59 for t = 0.0 and 0.26 for t = 1.0.

From the definition of FBP algorithm (cf. Section 2.2) it follows that its convergence speed is maximal for the learning rate η = 1.0. Fig. 6 illustrates this theoretical result. In the cases of single and multiple training patterns, only 3 and 10 steps, respectively, are needed for training the network at learning rate η = 1.0. If the learning rate decreases, the number of training steps needed increases. For example, for η = 0.01, 589 training steps in the case of a single training pattern and 870 training steps in the case of multiple training patterns are needed.
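The multiple-pattern case is a plain epoch loop over fbp_step. The five patterns of Fig. 2 are not listed in the text, so the three patterns below are hypothetical ones chosen only to reproduce the interval [0.4, 0.8].

```python
patterns = [[0.4, 0.6, 0.8],             # hypothetical training inputs:
            [0.4, 0.7, 0.8],             # max of the minima = 0.4,
            [0.4, 0.5, 0.9]]             # min of the maxima = 0.8
w1 = [[0.5] * 3 for _ in range(2)]       # 2 hidden neurons, our choice
w2 = [0.5, 0.5]

t = 0.6                                  # inside [0.4, 0.8]: Theorem 2 applies
for epoch in range(1, 50):
    sse = sum(0.5 * (t - fbp_step(p, t, w1, w2)) ** 2 for p in patterns)
    if sse < 0.001:                      # error goal from the text
        print(f"error goal reached after {epoch} epochs")
        break
```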


Fig. 6. Dependence between learning rate and training steps needed for single-training pattern (STP) and multiple-training patterns (MTP).

Fig. 7. Example neural network with single-output neuron and four hidden layers.


Fig. 8. Comparison of FBP and SBP algorithms in terms of steps needed for convergence in case of multiple-training patterns for two learning rates (LR).

3.3. Comparison of FBP algorithm with SBP algorithm

The convergence speed of FBP algorithm was compared with that of SBP algorithm (cf. Fig. 8) for a network with four hidden layers (cf. Fig. 7), 100 randomly generated training patterns, randomly generated initial network weights, error goal 0.02 and target value t = 0.8 belonging to the interval [\overline{a}_{\bar{j}}^{(0)}, \underline{a}_j^{(0)}] = [0.4, 0.8]. FBP algorithm is significantly faster (seven epochs) than SBP algorithm (5500 epochs) for learning rate η = 0.01. SBP algorithm is not convergent for learning rates greater than 0.01; in Fig. 8 this is shown for η = 1.0. For the same learning rate, FBP algorithm converges in only two epochs. Numerous other experiments with different neural networks, learning rates and training patterns confirmed the greater convergence speed of FBP algorithm over SBP algorithm. Of course, this speed also depends on the initial network weights.

4. Conclusions

In this paper we propose a fuzzy extension of the backpropagation algorithm: the fuzzy backpropagation algorithm. This learning algorithm uses as net function the fuzzy integral of Sugeno. Necessary and sufficient conditions for its convergence for single-output networks, in the cases of single and multiple training patterns, are defined and proved. A computer simulation confirmed these theoretical results.

FBP algorithm has a number of advantages compared to SBP algorithm:
(1) greater convergence speed, implying a significant reduction in computation time, which is important in the case of large-sized neural networks;
(2) it always approaches the target value without oscillations, and there is no possibility of falling into a local minimum;
(3) it requires no assumptions about probability distributions and independence of the input data (pattern's inputs);
(4) it does not require the initial weights to be different from each other;
(5) it enables the automation of the weight-tuning process (quasi-unsupervised learning) by pointing out the interval where the target value belongs [14,15]. The target values are determined: automatically by computer, in the interval specified by the input data; or semi-automatically by experts that define the direction and degree of target value changes. Thus the actual target value is determined by




fuzzy sets relevant to predefined linguistic expressions.

Another advantage of FBP algorithm is that it presents a generalisation of the fuzzy perceptron algorithm [19]. FBP algorithm can support knowledge acquisition and presentation, especially in cases of uncertain implicit knowledge, e.g. for the construction of Common KADS models in the creative design process [5], for intelligent data analysis [1], for creating adaptive and adaptable interfaces [4], for ergonomic assessment of products [9], etc. It can be implemented as a module of a CAKE tool like CoMo-Kit [12] or as a stand-alone tool for knowledge acquisition in systems for analysis and design like ERGON EXPERT [7] and the ISO 9241 evaluator [16]. The ability of FBP algorithm for quasi-unsupervised learning was successfully applied for the creation of the adaptable user interface of the information system of the Bulgarian parliament [15] and for usability evaluation of hypermedia user interfaces [14].

Finally, we should say that this approach has not yet been applied to many complex problems, and the results have not been compared with different modifications of SBP algorithm. However, the current results are encouraging for the continuation of our work. We will have to carry out many more experiments and practical implementations of FBP algorithm.

Appendix 1: Notation list

x_j                 the jth pattern's input
net_i^{(l)}         net value of the ith neuron at the lth layer
a_i^{(l)}           activation value of the ith neuron at the lth layer
w_{ij}^{(l)}        weight of the jth input of the ith neuron at the lth layer
f(net)              activation function
t                   the pattern's output, i.e. target value
δ_i^{(l)}           error signal relevant to the ith neuron at the lth layer
s                   the sth iteration step
w_{ij}^{(l)}(s)     weight w_{ij}^{(l)} at the sth step
a_i^{(l)}(s)        activation value a_i^{(l)} at the sth step
Δw_{ij}             weight change
w_{ij}^{old}        weight w_{ij}^{(l)}(s)
w_{ij}^{new}        weight w_{ij}^{(l)}(s + 1)
∅                   the empty set
R                   the set of real numbers
N^+                 the set of positive integers
J                   the set of (indices of) network inputs
I                   the set of (indices of) neurons at the hidden layer
P(I)                the power set of the set I
∨                   MAX operation in the unit interval [0, 1]
∧                   MIN operation in the unit interval [0, 1]
a_m^{(l)}           activation value a^{(l)} relevant to the mth pattern
t_m                 the pattern's output of the mth pattern
g                   fuzzy measure on the set P(J)
h                   fuzzy measure on the set P(I)

References

[1] J. Angstenberger, M. Fochem, R. Weber, Intelligent data analysis methods and applications, in: H.-J. Zimmermann (Ed.), Proc. 1st Eur. Congr. on Fuzzy and Intelligent Technologies EUFIT'93, Verlag Augustinus Buchhandlung, Aachen, 1993, pp. 1027-1030.
[2] M. Brown, C. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice-Hall, New York, 1994.
[3] D. Dubois, H. Prade, Fuzzy Sets and Systems: Theory and Applications, Academic Press, New York, 1980.
[4] G. Grunst, R. Oppermann, C.G. Thomas, Adaptive and adaptable systems, in: P. Hoschka (Ed.), A New Generation of Support Systems, Lawrence Erlbaum Associates, Hillsdale, 1996, pp. 29-46.
[5] R. de Hoog et al., The Common KADS model set, ESPRIT Project P5248 KADS-II, University of Amsterdam, Amsterdam, 1993.
[6] E. Ikonen, K. Najim, Fuzzy neural networks and application to the FBC process, IEE Proc. Control Theory Appl. 143 (3) (1996) 259-269.
[7] M. Jager, K. Hecktor, W. Laurig, Ergon Expert: a knowledge-based system for the evaluation and design of manual materials handling, in: Y. Quéinnec, F. Daniellou (Eds.), Proc. 11th Congr. of IEA: Designing for Everyone, Taylor & Francis, London, 1991, pp. 411-413.
[8] M. Keller, H. Tahani, Backpropagation neural network for fuzzy logic, Inform. Sci. 62 (1992) 205-221.
[9] J.-H. Kirchner, Ergonomic assessment of products: some general considerations, in: Proc. 13th Triennial Congr. International Ergonomics Association, vol. 2, Tampere, Finland, 1997, pp. 56-58.
[10] C.-H. Lin, C.-T. Lin, An ART-based fuzzy adaptive learning control network, IEEE Trans. Fuzzy Systems 5 (4) (1997) 477-496.

[11] D. Nauck, F. Klawonn, R. Kruse, Foundations of Neuro-Fuzzy Systems, Wiley, Chichester, 1997.
[12] S. Neubert, F. Maurer, A tool for model-based knowledge engineering, in: Proc. 3rd KADS Meeting, Munich, 1993, pp. 97-112.
[13] A. Nikov, G. Matarazzo, A methodology for human factors analysis of office automation systems, Technol. Forecasting Social Change 44 (2) (1993) 187-197.
[14] A. Nikov et al., ISSUE: an intelligent software system for usability evaluation of hypermedia user interfaces, in: A.F. Ozok, G. Salvendy (Eds.), Advances in Applied Ergonomics, USA Publishing Co., West Lafayette, 1996, pp. 1038-1041.
[15] A. Nikov, S. Delichev, S. Stoeva, A neuro-fuzzy approach for user interface adaptation implemented in the information system of the Bulgarian parliament, in: Proc. 13th Int. Ergonomics Congr. IEA'97, vol. 5, Tampere, Finland, 1997, pp. 100-112.
[16] R. Oppermann, H. Reiterer, Software evaluation using the 9241 evaluator, Behaviour Inform. Technol. 16 (4/5) (1997) 232-245.


[17] R. Rojas, Neural Networks: A Systematic Introduction, Springer, Berlin, 1996.
[18] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, in: D.E. Rumelhart, J.L. McClelland, PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, MIT Press, Cambridge, MA, 1986, pp. 318-362.
[19] S. Stoeva, A weight-learning algorithm for fuzzy production systems with weighting coefficients, Fuzzy Sets and Systems 48 (1) (1992) 87-97.
[20] S. Stoeva, A. Nikov, A fuzzy knowledge-based mechanism for computer-aided ergonomic evaluation of industrial products, Simulation Control C 25 (3) (1991) 17-30.
[21] S. Wang, N.P. Archer, A neural network technique in modelling multiple criteria multiple person decision making, Comput. Oper. Res. 21 (2) (1994) 127-142.
