    p(C_1|X) = \frac{p(X|C_1)P(C_1)}{p(X|C_1)P(C_1) + p(X|C_2)P(C_2)}    (1)

The P(C_k) are a priori class probabilities. Thus p(C_1|X) = \sigma(y(X)), where the function y(X) is:

    y(X) = \ln \frac{p(X|C_1)P(C_1)}{p(X|C_2)P(C_2)} = W \cdot X - \theta    (2)

where

    W = (\bar{X}_1 - \bar{X}_2)^T \Sigma^{-1}    (3)

and \theta = \theta(\bar{X}_1, \bar{X}_2, \Sigma, P(C_1), P(C_2)). The posterior probability is thus given by a specific logistic output function. For more than two classes normalized exponential functions (also called softmax functions) are obtained by the same reasoning:

    p(C_k|X) = \frac{\exp(y_k(X))}{\sum_i \exp(y_i(X))}    (4)

These normalized exponential functions may be interpreted as probabilities. They are provided in a natural way by multilayer perceptron networks (MLPs). If one of the probabilities is close to 1 or to 0 the situation is clear. Otherwise X belongs to the border area and a unique classification may not be possible. The domain expert should decide whether it makes sense to introduce a new, mixed class, or to acknowledge that insufficient information is available for accurate classification.

2.1 Measures of classifier performance

Measures of classifier performance based on the accuracy of confusion matrices F(C_i, C_j) do not allow one to evaluate their usefulness. The introduction of risk matrices or the use of receiver operating characteristic (ROC) curves [5] does not solve the problem either.

If the standard approach fails to provide sufficiently accurate results for some classes, one should either attempt to create new classes or to minimize the number of errors by forming a temporary new class composed of two or more distinct classes. This requires a modification of the standard cost function. Let C(X; M) be the class predicted by the model M and p(C_i|X; M) the probability of class C_i. In most cases the cost function minimizes the classification error and has either a quadratic form:

    E_2(\{X\}, R; M) = \sum_X \sum_i \left( p(C_i|X) - \delta(C_i, C(X)) \right)^2    (5)

or – since continuous output values are provided by some models – a form that minimizes the risk of the overall classification:

    E(\{X\}, R; M) = \sum_i \sum_X R(C_i, C(X)) \, H\!\left( p(C_i|X; M) - \delta(C_i, C(X)) \right)    (6)

where i runs over all different classes and X over all training vectors, C(X) is the true class of the vector X, and the function H(·) should be monotonic and positive; quite often a quadratic or entropy-based function is used.

The elements of the risk matrix R(C_i, C_j) are proportional to the risk of assigning the class C_i when the true class is C_j (in the simplest case R(C_i, C_j) = 1 - \delta_{ij}), and M specifies all adaptive parameters and variable procedures of the classification model that may affect the cost function. Regularization terms aimed at minimizing the complexity of the classification model are frequently added to the cost function, helping to avoid overfitting problems. To minimize the leave-one-out error the sum runs over all training examples X_p, and the model used to specify the classifier should not contain the vector X_p in the reference set while p(C_i|X_p) is computed.

A simplified version of the cost function is also useful:

    E(\{X\}; M) = \sum_p K\!\left( C(X_p) - C_j(X_p) \right), \quad C_j(X_p) \leftarrow j = \arg\max_i p(C_i|X_p; M)    (7)

where C_j(X_p) corresponds to the best recommendation of the classifier and the kernel function K(·, ·) measures the similarity of the classes. A general expression is:

    E(\{X\}; M) = \sum_i K\!\left( d(X^{(i)}, R), \mathrm{Err}(X^{(i)}) \right)    (8)

For example, in local regression based on minimal distance approaches [6] the error function is:

    E(\{X\}; M) = \sum_p K\!\left( D(X_p, X_{ref}) \right) \left( F(X_p; M) - y_p \right)^2    (9)

where y_p are the desired values for X_p and F(X_p; M) are the values predicted by the model M. Here the kernel function K(d) measures the influence of the reference vectors on the total error. For example, if K(d) has a sharp high peak around d = 0, the function F(X; M) will fit the values corresponding to the reference input vectors almost exactly and will make large errors for other values. In classification problems the kernel function will determine the size of the neighborhood around the known cases in which accurate classification is required.
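As an illustration, the risk-weighted cost of Eq. (6) with a quadratic choice of H(·) can be sketched in a few lines of NumPy. The function name `risk_cost` and the toy numbers below are our own illustrative assumptions, not from the paper:

```python
import numpy as np

def risk_cost(probs, true_idx, R):
    """Risk-weighted cost, a sketch of Eq. (6) with H(x) = x^2.

    probs    : (n_samples, n_classes) model probabilities p(C_i|X; M)
    true_idx : (n_samples,) index of the true class C(X)
    R        : (n_classes, n_classes) risk matrix R(C_i, C_j)
    """
    n, c = probs.shape
    delta = np.eye(c)[true_idx]        # delta(C_i, C(X)) as one-hot rows
    h = (probs - delta) ** 2           # quadratic choice of H(.)
    weights = R[:, true_idx].T         # R(C_i, C(X)) per sample, shape (n, c)
    return float(np.sum(weights * h))

# With R(C_i, C_j) = 1 - delta_ij the correctly-predicted terms drop out:
R = 1.0 - np.eye(3)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.5, 0.3]])
true_idx = np.array([0, 2])
print(risk_cost(probs, true_idx, R))   # sums only the off-diagonal errors
```

With the simplest risk matrix this reduces to a quadratic penalty on the probability mass assigned to wrong classes; a non-trivial R would penalize some confusions more than others.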
Suppose that both off-diagonal elements F_12 and F_21 of the confusion matrix are large, i.e. the two classes are frequently mixed. In this case we can try to separate these two classes from all the others using an independent classifier. The joint class is designated C_{1,2} and the model is trained with the following cost function:

    E_d(\{X\}; M) = \sum_X H\!\left( p(C_{1,2}|X; M) - \delta(C_{1,2}, C(X)) \right)
                  + \sum_{k>2} \sum_X H\!\left( p(C_k|X; M) - \delta(C_k, C(X)) \right)    (10)

Training with such an error function provides new, possibly simpler, decision borders. In practice one should use the classifier first and only if classification is not sufficiently reliable (several probabilities are almost equal) try to eliminate subsets of classes. If joining pairs of classes is not sufficient, triples and higher combinations may be considered.

3 Calculation of probabilities

Some classifiers do not provide probabilities, therefore it is not clear how to optimize them for elimination of classes instead of selection of the most probable class. A universal solution independent of any classifier system is described below.

Real input values X are obtained by measurements that are carried out with finite precision. The brain uses not only large receptive fields for categorization, but also small receptive fields to extract feature values. Instead of a crisp number X a Gaussian distribution G_X = G(Y; X, S_X) centered around X with dispersion S_X should be used. Probabilities p(C_i|X; M) may be computed for any classification model M by performing Monte Carlo sampling from the joint Gaussian distribution for all continuous features, G_X = G(Y; X, S_X). Dispersions S_X = (s(X_1), s(X_2), ..., s(X_N)) define the volume of the input space around X that has an influence on the computed probabilities. One way to "explore the neighborhood" of X and see the probabilities of alternative classes is to increase the fuzziness S_X by defining s(X_i) = (X_{i,max} - X_{i,min}) \rho, where the parameter \rho defines a percentage of fuzziness relative to the range of X_i values.

With increasing \rho values the probabilities p(C_i|X; \rho, M) change. Even if a crisp rule-based classifier is used, non-zero probabilities of classes alternative to the winning class will gradually appear. The way in which these probabilities change shows how reliable the classification is and which alternatives are worth remembering. If the probability p(C_i|X; \rho, M) changes rapidly around some value \rho_0, the case X is near the classification border and an analysis of p(C_i|X; \rho_0, s_i, M) as a function of each s_i = s(X_i), i = 1...N is needed to see which features have a strong influence on the classification. Displaying such probabilities allows for a detailed evaluation of the new data also in cases where analysis of rules is too complicated.

A more detailed analysis of these probabilities based on confidence intervals and probabilistic confidence intervals has recently been presented by Jankowski [7]. Confidence intervals are calculated individually for a given input vector while logical rules are extracted for the whole training set. Confidence intervals measure the maximal deviation from the given feature value X_i (assuming that the other features of the vector X are fixed) that does not change the most probable classification of the vector X. If this vector lies near the class border the confidence intervals are narrow, while for vectors that are typical for their class the confidence intervals should be wide. These intervals facilitate precise interpretation and allow one to analyze the stability of sets of rules.

For some classification models the probabilities p(C_i|X; \rho, M) may be calculated analytically. For crisp rule classifiers [8] a rule R_{[a,b]}(X), which is true if X \in [a, b] and false otherwise, is fulfilled by a Gaussian number G_X with probability:

    p(R_{[a,b]}(G_X) = T) \approx \sigma(\beta(X - a)) - \sigma(\beta(X - b))    (11)

where the logistic function \sigma(\beta X) = 1/(1 + \exp(-\beta X)) has the slope \beta = 2.4 / (\sqrt{2}\, s(X)). For large uncertainty s(X) this probability is significantly different from zero well outside the interval [a, b]. Thus crisp logical rules for data with a Gaussian distribution of errors are equivalent to fuzzy rules with "soft trapezoid" membership functions, defined by the difference of two sigmoids, used with crisp input values. The slope of these membership functions, determined by the parameter \beta, is inversely proportional to the uncertainty of the inputs. In the C-MLP2LN neural model [9] such membership functions are computed by the network "linguistic units" L(X; a, b) = \sigma(\beta(X - a)) - \sigma(\beta(X - b)). Relating the slope \beta to the input uncertainty allows one to calculate probabilities in agreement with the Monte Carlo sampling. Another way of calculating probabilities, based on the softmax neural outputs p(C_j|X; M) = O_j(X) / \sum_i O_i(X), has been presented in [7].

After the uncertainty of inputs has been taken into account, the probabilities p(C_i|X; M) depend in a continuous way on the intervals defining linguistic variables. The error function:

    E(M, S) = \frac{1}{2} \sum_X \sum_i \left( p(C_i|X; M) - \delta(C(X), C_i) \right)^2    (12)

depends also on the uncertainties of inputs S. Several variants of such models may be considered, with Gaussian or conical (triangular-shaped) assumptions for input distributions, or neural models with bicentral transfer functions in
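The Monte Carlo recipe and the analytic soft-trapezoid approximation of Eq. (11) can be compared directly on a single crisp rule. This is a minimal sketch: the interval [a, b] = [0, 1], the uncertainty s, and the function names are our illustrative assumptions:

```python
import math
import random

def rule(x, a=0.0, b=1.0):
    """Hypothetical crisp rule R_[a,b](X): true iff X lies in [a, b]."""
    return a <= x <= b

def mc_rule_probability(x, s, n=200_000, a=0.0, b=1.0, seed=0):
    """Monte Carlo estimate of p(R_[a,b](G_X) = T): sample the Gaussian
    number G_X = G(Y; x, s) and count how often the crisp rule fires."""
    rng = random.Random(seed)
    hits = sum(rule(rng.gauss(x, s), a, b) for _ in range(n))
    return hits / n

def soft_trapezoid(x, s, a=0.0, b=1.0):
    """Analytic approximation of Eq. (11): sigma(beta(x-a)) - sigma(beta(x-b)),
    with slope beta = 2.4 / (sqrt(2) * s), inversely proportional to s."""
    beta = 2.4 / (math.sqrt(2.0) * s)
    sigma = lambda t: 1.0 / (1.0 + math.exp(-t))
    return sigma(beta * (x - a)) - sigma(beta * (x - b))

# Near the rule border the two estimates should roughly agree:
x, s = 0.9, 0.2
print(mc_rule_probability(x, s), soft_trapezoid(x, s))
```

The logistic slope 2.4/(√2 s) corresponds to the standard logistic approximation of the Gaussian cumulative distribution, which is why the sampled and analytic probabilities track each other as s grows.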
the first hidden layer. A confusion matrix computed using probabilities instead of the number of yes/no errors allows for optimization of the error function using gradient-based methods. This minimization may be performed directly or may be presented as a neural network problem with a special network architecture. Uncertainties s_i of the values of features may be treated as additional adaptive parameters for optimization. To avoid too many new adaptive parameters, the optimization of all s_i uncertainties, or perhaps of a few groups of them, is replaced by common \rho factors defining the percentage of assumed uncertainty for each group.

This approach leads to the following important improvements for any rule-based system:

• Crisp logical rules provide a basic description of the data, giving maximal comprehensibility.

• Instead of 0/1 decisions, probabilities of classes p(C_i|X; M) are obtained.

• Inexpensive gradient methods are used, allowing for optimization of very large sets of rules.

• Uncertainties of inputs s_i provide additional adaptive parameters.

• Rules with wider classification margins are obtained, overcoming the brittleness problem of some rule-based systems.

Wide classification margins are desirable to optimize the placement of decision borders, improving generalization of the system. If the vector X of an unknown class is quite typical of one of the classes C_k, increasing the uncertainties s_i of the X_i inputs to a reasonable value (several times the real uncertainty, estimated for the given data) should not decrease the probability p(C_k|X; M) significantly. If this is not the case, X may be close to the class border and an analysis of p(C_i|X; \rho, s_i, M) as a function of each s_i is needed. These probabilities allow one to evaluate the influence of different features on classification. If simple rules are available such an explanation may be satisfactory. Otherwise, to gain an understanding of the whole data, a similarity-based approach to classification and explanation is worth trying. Prototype vectors R_i are constructed using a clusterization, dendrogram or decision tree algorithm, and a similarity measure D(X, R) is introduced. Positions of the prototype vectors R_i, parameters of the similarity measures D(·) and other adaptive parameters of the system are then optimized using a general framework for similarity-based methods [10]. This approach includes radial basis function networks, clusterization procedures, vector quantization methods and generalized nearest neighbor methods as special examples. An explanation in this case is given by pointing out the similarity of the new case X to one or more of the prototype cases R_i.

A similar result is obtained if linear discrimination analysis (LDA) is used: instead of a sharp decision border in the direction perpendicular to the LDA hyperplane, a soft logistic function is used, corresponding to a neural network with a single neuron. The weights and bias are fixed by the LDA solution; only the slope of the function is optimized.

4 Real-life example

Hepatobiliary disorders data, used previously in several studies [17, 11, 12, 16], contains medical records of 536 patients admitted to a university-affiliated Tokyo-based hospital, with four types of hepatobiliary disorders: alcoholic liver damage (AL), primary hepatoma (PH), liver cirrhosis (LC) and cholelithiasis (CH). The records include the results of 9 biochemical tests and the sex of the patient. The same 163 cases as in [17] were used as the test data.

In the previous work three fuzzy sets per input were assigned using the recommendations of medical experts. A fuzzy neural network was constructed and trained until 100% correct answers were obtained on the training set. The accuracy on the test set varied from less than 60% to a peak of 75.5%. Although we quote this result in Table 1 below, it seems impossible to find good criteria that will predict when the training on the test set should be stopped. Fuzzy rules equivalent to the fuzzy network were derived, but their accuracy on the test set was not given.

This data has also been analyzed by Mitra et al. [18, 16] using a knowledge-based fuzzy MLP system, with results on the test set in the range from 33% to 66.3%, depending on the actual fuzzy model used.

For this dataset classification using crisp rules was not too successful. The initial 49 rules obtained by the C-MLP2LN procedure gave 83.5% on the training and 63.2% on the test set. Optimization did not improve these results significantly. On the other hand, fuzzy rules derived using the FSM network, with Gaussian as well as with triangular functions, gave similar accuracy of 75.6-75.8%. The fuzzy neural network used over 100 neurons to achieve 75.5% accuracy, indicating that good decision borders in this case are quite complex and many logical rules will be required. Various results for this dataset are summarized in Table 1. FSM gives about 60 Gaussian or triangular membership functions, achieving an accuracy of 75.5-75.8%. Rotation of these functions (i.e. introducing linear combinations of inputs to the rules) does not improve this accuracy. We have also made 10-fold crossvalidation tests on the mixed data (training plus test data), achieving similar results. Many methods give rather poor results on this dataset, including various variants of instance-based learning (IB2-IB4, except for IB1c, which is specifically designed to work with continuous input data), statistical methods
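The single-neuron view of LDA described above can be sketched as follows. This is a minimal illustration under stated assumptions: two Gaussian classes with shared covariance and equal priors, synthetic data, and a crude grid search over the slope, all of which are ours rather than the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes with a shared covariance (the LDA assumption).
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
X2 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(500, 2))

# LDA solution, cf. Eq. (3): W = Sigma^-1 (mean1 - mean2); theta from the
# midpoint between the class means (equal priors assumed).
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sigma = np.cov(np.vstack([X1 - m1, X2 - m2]).T)
W = np.linalg.solve(Sigma, m1 - m2)
theta = W @ (m1 + m2) / 2.0

def posterior(X, alpha):
    """Soft logistic output on the fixed LDA projection; only the slope
    alpha is free, as in the single-neuron view of LDA."""
    y = X @ W - theta
    return 1.0 / (1.0 + np.exp(-alpha * y))

# Weights and bias stay fixed; optimize the slope alone by a crude
# 1-D grid search on the squared error against the 0/1 targets.
X = np.vstack([X1, X2])
t = np.hstack([np.ones(len(X1)), np.zeros(len(X2))])
alphas = np.linspace(0.1, 3.0, 30)
errors = [np.mean((posterior(X, a) - t) ** 2) for a in alphas]
best = alphas[int(np.argmin(errors))]
print("best slope:", best)
```

Since the LDA projection y(X) is already the log-odds for Gaussian classes with shared covariance, the optimal slope should come out close to 1; the free slope matters when the Gaussian assumptions are only approximately satisfied.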
Table 1: Results for the hepatobiliary disorders. Accuracy on the training and test sets, in %. Top results are achieved eliminating classes or predicting pairs of classes. All calculations are ours except where noted.

    Method                          Training set   Test set
    FSM-50, 2 most prob. classes        96.0         92.0
    FSM-50, class 2+3 combined          96.0         87.7
    FSM-50, class 1+2 combined          95.4         86.5
    Neurorule [11]                      85.8         85.6
    Neurolinear [11]                    86.8         84.6
    1-NN, weighted (ASA)                83.4         82.8
    FSM, 50 networks                    94.1         81.0
    1-NN, 4 features                    76.9         80.4
    K* method                            –           78.5
    kNN, k=1, Manhattan                 79.1         77.9
    FSM, Gaussian functions             93           75.6

          AL    PH    LC    CH
    AL    70     6     3     3
    PH     3   121     3     1    (13)
    LC     1     8    77     2
    CH     0     0     0    72

Looking at the confusion matrix one may notice that the main problem comes from predicting AL or LC when the true class is PH. The number of vectors that are classified incorrectly with high confidence (probability over 0.9) in the training data is 10 and in the test data 7 (only 4.3%). Rejection of these cases increases confidence in classification, as shown in Fig. 1.

[Figure 1: plot not recoverable from the text extraction; only axis values 97.5-99 survived.]
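The reading of the confusion matrix (13) follows from simple arithmetic. The sketch below assumes rows correspond to predicted classes and columns to true classes; that orientation is our interpretation of the matrix:

```python
import numpy as np

classes = ["AL", "PH", "LC", "CH"]
# Confusion matrix (13); assumed orientation: rows = predicted, columns = true.
F = np.array([[70,   6,  3,  3],
              [ 3, 121,  3,  1],
              [ 1,   8, 77,  2],
              [ 0,   0,  0, 72]])

accuracy = np.trace(F) / F.sum()
# Off-diagonal sum per column = errors committed on each true class.
errors_per_true = F.sum(axis=0) - np.diag(F)
worst = classes[int(np.argmax(errors_per_true))]

print(f"accuracy = {accuracy:.3f}")
print("errors per true class:", dict(zip(classes, errors_per_true.tolist())))
print("most problematic true class:", worst)
```

Under this orientation the PH column carries 14 of the 30 errors (6 predicted AL, 8 predicted LC), which matches the observation that PH cases are most often confused with AL or LC.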
[4] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.

[5] J.A. Swets, "Measuring the accuracy of diagnostic systems," Science, Vol. 240, pp. 1285-1293, 1988.

[6] C.G. Atkeson, A.W. Moore and S. Schaal, "Locally weighted learning," Artificial Intelligence Review, Vol. 11, pp. 75-113, 1997.

[7] W. Duch, N. Jankowski, R. Adamczak, K. Grąbczewski, "Optimization and Interpretation of Rule-Based Classifiers," Intelligent Information Systems IX, Springer Verlag, 2000 (in print).

[17] Y. Hayashi, A. Imura, K. Yoshida, "Fuzzy neural expert system and its application to medical diagnosis," in: 8th International Congress on Cybernetics and Systems, New York City, 1990, pp. 54-61.

[18] S. Mitra, R. De, S. Pal, "Knowledge based fuzzy MLP for classification and rule generation," IEEE Transactions on Neural Networks, Vol. 8, pp. 1338-1350, 1997.