
European Journal of Operational Research 148 (2003) 514–524

www.elsevier.com/locate/dsw

Decision Aiding

Analyzing AHP-matrices by regression


Pertti Laininen *, Raimo P. Hämäläinen
Systems Analysis Laboratory, Helsinki University of Technology, P.O. Box 1100, FIN-02015 HUT Helsinki, Finland
Received 26 October 2000; accepted 15 April 2002

Abstract

In the analytic hierarchy process (AHP) the decision maker makes comparisons between pairs of attributes or alternatives. In real applications the comparisons are subject to judgmental errors. Many AHP-matrices reported in the literature are found to be such that the logarithm of the comparison ratio can be sufficiently well modeled by a normal distribution with a constant variance. On the basis of this model we present the formulae for the evaluation of the standard deviations of the estimates of the AHP-weights obtained by regression analysis. In order to eliminate the effect of an outlier in the comparison ratios a robust regression technique is elaborated, and compared with the eigenvector method and the logarithmic least squares regression. A dissimilarity matrix approach is presented for the statistical simultaneous comparisons of the AHP-weights. The results are illustrated by simulation experiments.
© 2002 Elsevier Science B.V. All rights reserved.

Keywords: Analytical hierarchy process; Eigenvector method; Regression; Robust regression; Multiple comparisons; Simultaneous comparisons

* Corresponding author. Tel.: +358-9-451-3061; fax: +358-9-451-3096. E-mail address: pertti.laininen@hut.fi (P. Laininen).

0377-2217/03/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
doi:10.1016/S0377-2217(02)00430-7

1. Introduction

The standard method to calculate the values for the weights from an analytic hierarchy process (AHP) matrix is to take the eigenvector corresponding to the largest eigenvalue of the matrix, and then to normalize the sum of the components to one (Saaty, 1977, 1980). A drawback of this method is that there is no practical statistical theory behind it. A statistical approach is needed if one thinks that the ratios of the relative importance of entities contain random fluctuations. For example, an inconsistency can also be seen as random variation. There are many studies showing that inconsistencies can be expected in human decision making. This is widely acknowledged both in the AHP literature and elsewhere (for instance in Fischoff et al., 1980). Genest and Rivest (1994) also discuss the inconsistency of comparisons made by experts or decision makers. The comparison scale has an effect on the inconsistencies as well (Pöyhönen et al., 1997; Salo and Hämäläinen, 1997). In this paper we do not consider the effects of the comparison scale.

The regression approach in the form of logarithmic least squares has been proposed by many authors (for instance De Jong, 1984; Saaty and Vargas, 1984; Crawford and Williams, 1985; Fichtner, 1986; Alho and Kangas, 1996). De Jong (1984) gives a

profound statistical treatment of the use of regression analysis for analyzing AHP-matrices. In this approach the ratio r_ij, the relative value of attribute i compared to attribute j given by the decision maker, is considered a value of a random variable. The natural logarithms log(r_ij) are regressed on the dummy variables corresponding to the attributes. De Jong suggests the use of the coefficient of determination of the regression equation as a measure of consistency in the decision maker's responses. This quantity is closely related to the residual mean square of the regression, which Crawford and Williams (1985) propose as their measure of consistency. This method often gives weights very similar to those of the eigenvector method. Genest and Rivest (1994) prove analytically that the difference between the estimates of the weights given by the eigenvector method and the regression analysis is of order O(σ²), where σ² is the variance of log(r_ij). The use of linear models and experimental design in AHP is discussed in Alho and Kangas (1996). They present a variance components model and the least squares estimation of preferences. In Alho and Kangas (1997) a Bayesian extension of the regression technique is presented.

The regression approach makes it possible to estimate the weights and their standard errors. It also enables the use of robust regression (Montgomery and Peck, 1992), discussed by Laininen and Hämäläinen (1999). The robust regression is robust against exceptional values of r_ij. As in regression analysis, an exceptional value is called an outlier. In general, statistical regression software is equipped with tools for detecting outliers; for instance, one can inspect the standardized deleted residuals. Robust regression is a technique that automatically gives a solution that is not affected by outliers.

The regression approach also enables statistical simultaneous comparisons of the AHP-weights, which gives a deeper insight into the differences in the weights.

1.1. Statistical model

In the AHP attributes are compared in a comparison matrix, and the decision alternatives are compared in a comparison matrix with respect to each attribute. These comparisons give local weights of the attributes and alternatives. The global weights of the decision alternatives are calculated as product sums of the local attribute weights and alternative weights. In this process the accuracy of the estimates of the local weights is of vital importance. In this paper we discuss the analysis of the AHP-matrix that gives the local weights.

Let us take an AHP-matrix of size m × m with the entries r_ij, i, j = 1, ..., m. Here r_ij is the relative value of attribute i compared to attribute j as perceived by the decision maker. The entries r_ij make up the pairwise comparisons data with the reciprocal relation r_ji = 1/r_ij for i, j = 1, ..., m.

Let w_1, ..., w_m (w_1 > 0, ..., w_m > 0) be the true weights of the attributes with the condition w_1 + ··· + w_m = 1, and let v_1, ..., v_m (v_1 > 0, ..., v_m > 0) be values of the attributes such that the normalized weights w_1, ..., w_m can be calculated from v_1, ..., v_m by normalizing their sum to one. Now, the observation r_ij is an observation from the ratio v_i/v_j, and the logarithm log(r_ij) is an observation from log(v_i/v_j) = log(v_i) − log(v_j). The analysis begins by modeling the observation error.

Vargas (1982) assumes that the random variables r_ij follow a gamma distribution; then the joint distribution of the estimates of the weights w_1, ..., w_m is a Dirichlet distribution. The marginal distributions of the Dirichlet distribution are beta distributions, which suggests modeling the individual distribution of each estimated weight as a beta distribution. Differently from Vargas, we assume that r_ij is a log-normally distributed random variable. The decision maker uses integers from one to nine in her/his comparisons, and it is clear that log(r_ij) is not exactly normally distributed. But one can find many AHP-matrices reported in the literature on AHP applications (for instance Kangas et al., 1992; Leskinen, 2000) where the residuals of the logarithmic least squares behave like the residuals of a regression with normally distributed errors. We will model the analysis of AHP-matrices by regression analysis on the basis of the model

    log(r_ij) ~ N(log(v_i) − log(v_j), σ²),  i = 1, ..., m−1; j = 2, ..., m,   (1)
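To make model (1) concrete, here is a minimal simulation sketch (our NumPy illustration, not the authors' code; the function and variable names are ours). It draws one comparison matrix under the model, rounding each ratio to the nearest fraction of integers from one to nine as in the simulation experiments of Section 1.2, and enforcing the reciprocal relation.

```python
import numpy as np

def simulate_ahp_matrix(v, sigma=0.4, rng=None):
    """Draw one m x m comparison matrix under model (1).

    Illustrative sketch: each log-ratio is drawn from
    N(log(v_i) - log(v_j), sigma^2), the exponential is rounded to
    the nearest fraction p/q of integers from one to nine, and the
    reciprocal relation r_ji = 1/r_ij is enforced.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(v, dtype=float)
    m = len(v)
    # the admissible answers: all ratios p/q with p, q in 1..9
    scale = np.array(sorted({p / q for p in range(1, 10) for q in range(1, 10)}))
    R = np.ones((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            draw = np.exp(np.log(v[i] / v[j]) + rng.normal(0.0, sigma))
            r = scale[np.argmin(np.abs(scale - draw))]  # nearest admissible fraction
            R[i, j], R[j, i] = r, 1.0 / r
    return R
```

Rounding to the nearest admissible value is one reading of the rounding rule of Section 1.2; the paper does not spell out whether the rounding is done on the ratio or on its logarithm.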

where σ² is a constant variance, and the random variables log(r_ij) can be supposed to be uncorrelated. We will introduce this approach in Section 2. The approach is identical with the well-known model r_ij = exp(log(v_i) − log(v_j) + ε_ij), where the random variables ε_ij follow a normal distribution.

The standard errors of the weights given by the regression approach can be calculated from the solution of the regression analysis by linearizing the formula of the weights. We will present a way to calculate the standard errors based on model (1).

Model (1) is in accordance with the geometric measurement scale r_ij = exp(s·d_ij), where s > 0 is a measurement scale parameter and d_ij, i, j = 1, ..., m are values which depend on the decision maker (Lootsma, 1993).

Robust regression is a regression technique that makes it possible to estimate the weights so that an outlier does not affect the solution. We will introduce the idea of robust regression and give illustrations of its use in Section 3.

After the regression analysis, statistical comparisons of the weights can be done. Let r_ij be a comparison ratio given by the decision maker. One can calculate the corresponding value r̂_ij given by the regression model for all i ≠ j. A large difference between r_ij and r̂_ij tells about problems in the comparison of the attributes i and j. In order to statistically test the hypothesis H0: w_1 = ··· = w_m we establish a dissimilarity matrix that includes a test statistic for every difference log(v_i) − log(v_j). Observe that log(v_i) − log(v_j) = 0 ⇒ w_i − w_j = 0, and w_i − w_j = 0 ⇒ log(v_i) − log(v_j) = 0. The significances of the test statistics are evaluated simultaneously. This will be presented in Section 4.

1.2. Simulation experiments

We have illustrated the questions by simulation experiments. In the simulation experiments AHP-matrices are generated by using the model (1). An entry r_ij is obtained by adding to the expectation a normally distributed random error with standard deviation σ, and then rounding its exponential to the nearest fraction formed by integers from one to nine. In general, we present simulation results calculated by using the value σ = 0.4, which is assumed to be typical of experimental AHP-matrices. For instance, in the data of Leskinen (2000) the value of σ varies from 0.4 to 0.8. The results given by σ = 0.4 remain similar for the value σ = 0.8, too.

Efron and Tibshirani (1993) present studies on the number of simulations needed in different situations. In the estimation of a standard error 200 simulations are generally enough. Here we have used 2000 simulations. In the estimation of the quantile of a small tail probability we have made 40 000 simulations.

2. Standard errors of the estimates of the weights

Let us take an AHP-matrix of size m × m with the entries r_ij, i, j = 1, ..., m comprising the pairwise comparisons data. We first convert the data into a linear regression form by a logarithmic transformation. We write

    log(r_ij) = y_ij,  log(v_i) − log(v_j) = β_i − β_j,  i, j = 1, ..., m.   (2)

According to the model (1), for the total of m(m−1)/2 comparisons the following equations can be written:

    y_ij = β_i − β_j + ε_ij,  i = 1, ..., m−1; j = 2, ..., m,   (3)

where the residuals ε_ij are uncorrelated random variables which are normally distributed with expectation E[ε_ij] = 0 and constant variance Var[ε_ij] = σ². Let y be the m(m−1)/2 × 1 vector of the y_ij, β the m × 1 vector of the β_i, ε the m(m−1)/2 × 1 vector of the ε_ij, and X the m(m−1)/2 × m matrix of the dummy variables given by the system of Eq. (3). The equations can be written

    y = Xβ + ε.   (4)

The relationship between the weight w_i and the parameters β_j is

    w_i = exp(β_i) / (exp(β_1) + ··· + exp(β_m)),  i = 1, ..., m.   (5)
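The estimation scheme of Eqs. (2)–(5) can be sketched in code as follows (our NumPy illustration, not the authors' implementation; names are ours). The m(m−1)/2 upper-triangle log-ratios give y = Xβ + ε, the constraint β_m = 0 described in Section 2 drops the last column of X, and the least squares solution is mapped to weights by Eq. (5).

```python
import numpy as np
from itertools import combinations

def lls_weights(R):
    """Logarithmic least squares weights of an m x m AHP-matrix R.

    Illustrative sketch of Eqs. (2)-(5): build the dummy-variable
    design matrix, fix beta_m = 0, solve by least squares, and
    normalize the exponentials of the estimates.
    """
    m = R.shape[0]
    pairs = list(combinations(range(m), 2))
    y = np.array([np.log(R[i, j]) for i, j in pairs])
    X = np.zeros((len(pairs), m))
    for row, (i, j) in enumerate(pairs):
        X[row, i], X[row, j] = 1.0, -1.0   # dummy variables of Eq. (3)
    X = X[:, :-1]                          # constraint beta_m = 0
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = np.append(beta, 0.0)               # reattach beta_m = 0
    return np.exp(b) / np.exp(b).sum()     # Eq. (5)
```

For a fully consistent matrix the fit is exact, and the weights coincide with those of the eigenvector method.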

The estimates for the parameters β_i are then calculated by minimizing some function d of the residuals ε_ij, Σ d(ε_ij). If d(ε_ij) = ε_ij², the least squares regression is obtained. The solution needs an additional constraint for the parameters because the model (4) is over-parametrized. In fact, only the differences β_i − β_j can be estimated from the comparison matrix. These differences are sufficient to determine the weights w_i. Take for instance β_i − β_j = d_ij, or β_i = β_j + d_ij. Then

    w_i = exp(β_j + d_ij) / (exp(β_j + d_1j) + ··· + exp(β_j + d_mj))
        = exp(d_ij) / (exp(d_1j) + ··· + exp(d_mj)).   (6)

Thus, the value for β_j can be chosen arbitrarily. A practical constraint is β_m = 0 (or v_m = 1). Then the last column of the matrix X can be dropped, and the new matrix X is of size m(m−1)/2 × (m−1), and the vector β is of size (m−1) × 1, β = (β_1, ..., β_{m−1})^T. The method gives a unique solution β̂_1, ..., β̂_{m−1} with β̂_m = 0. In the least squares regression the estimates β̂_1, ..., β̂_{m−1} are unbiased and normally distributed, and the covariance matrix Cov[β̂_i, β̂_j] is a standard result of the regression analysis.

The regression analysis gives the estimates β̂_i, with β̂_m = 0, and the estimates of the weights w_i are obtained by inserting them into formula (5).

2.1. Standard errors of the estimates

The usual way to evaluate the variances of the estimates ŵ_i is to linearize formula (5) in terms of the parameters β_i, and then evaluate the variance of the linear expression (see for instance Alho and Kangas, 1996). But formula (5) is strongly nonlinear with respect to the β_i, especially for positive values of β_i, and the linearization may give very incorrect values for the variances. Better estimates for the variances can be obtained by calculating the variances and covariances of the functions exp(β̂_i), i = 1, ..., m theoretically, which is possible under the model (1). Then we need to linearize formula (5) in terms of exp(β_i) only. The estimate β̂_i is unbiased and normally distributed with variance s_i² that is estimated by the regression analysis. Then exp(β̂_i) for i = 1, ..., m−1 follows a log-normal distribution with expectation

    E[exp(β̂_i)] = exp(β_i + s_i²/2),   (7)

and variance

    Var[exp(β̂_i)] = exp(2β_i + s_i²)(exp(s_i²) − 1).   (8)

See for instance Ghahramani (1996). Because the sum β̂_i + β̂_j is normally distributed (see Rao, 1968), the expectation of the product exp(β̂_i)exp(β̂_j) can be calculated according to Eq. (7), and the variance–covariance matrix Cov[exp(β̂_i), exp(β̂_j)] will have the entries

    Cov[exp(β̂_i), exp(β̂_j)] = exp(β_i + β_j + (s_i² + s_j²)/2)·(exp(Cov[β̂_i, β̂_j]) − 1)   (9)

for i ≠ j. Since β̂_m = 0 is a constant, E[exp(β̂_m)] = 1, Var[exp(β̂_m)] = 0, and Cov[exp(β̂_i), exp(β̂_m)] = 0 for i = 1, ..., m−1.

The estimates for the covariances are calculated by inserting the estimates β̂_i, ŝ_i², and Ĉov[β̂_i, β̂_j] into formula (9). The estimate ŝ_i² of the variance is equal to Ĉov[β̂_i, β̂_i]. The inserted estimates are results of the regression analysis.

By forming the derivatives of w_i with respect to exp(β_i) in formula (5), the covariance matrix Cov[ŵ_i, ŵ_j] can be written as

    Cov[ŵ_i, ŵ_j] = (∂w_i/∂e^{β_j})_{m×(m−1)} · Cov[exp(β̂_i), exp(β̂_j)]_{(m−1)×(m−1)} · (∂w_i/∂e^{β_j})^T_{(m−1)×m}.   (10)

Here we have

    (∂w_i/∂e^{β_j})_{m×(m−1)} = −w_m · [ w_1 − 1    w_1        ···   w_1
                                         w_2        w_2 − 1    ···   w_2
                                         ···        ···        ···   ···
                                         w_{m−1}    w_{m−1}    ···   w_{m−1} − 1
                                         w_m        w_m        ···   w_m ].   (11)

The estimates of the covariances are calculated by inserting the estimates of the weights into

(11) and the estimate of the covariance matrix Cov[exp(β̂_i), exp(β̂_j)] into (10).

The standard error Se[ŵ_i] is then calculated as the square root of the diagonal element (i, i) of the matrix (10).

2.2. Simulation of the standard error

By simulation experiments we can see that the standard errors Se[ŵ_i] given by (10) and (11) are practically unbiased. We generated 2000 5 × 5 AHP-matrices from the matrix with equal weights by using σ = 0.4, and calculated the standard deviations of the 2000 values given by the eigenvector method and by the logarithmic least squares regression. The values of the standard deviations were 0.032 for each of the five weights and for both estimation methods. In each simulation the standard errors of the weight estimates, Se[ŵ_i], were calculated both using formulae (10) and (11) and by linearizing the formula of w_i with respect to the β_j parameters. The mean of the standard errors Se[ŵ_i] was 0.032 for each weight w_i. The mean values are equal to the values of the corresponding standard deviations. Alho and Kangas (1997) note that the values of the standard errors calculated by linearizing with respect to the β_j parameters are often too small. In our simulations the values were in all cases less than the values given by (10) and (11), with a maximum deviation of 40%. Simulations based on matrices with unequal weights give results which are consistent with the above results.

When using the standard errors of the weights in practice one should notice that the sampling distributions of the estimates of weights having values near zero or near one are heavily skewed. The sampling distributions given by our simulations resembled the beta distribution proposed by Vargas (1982).

3. Robust regression analysis and AHP

The theory of robust regression can be found in standard statistical texts (see for instance Montgomery and Peck, 1992).

We take a regression model in the general form (4), y = Xβ + ε, where y = (y_1, ..., y_n)^T is the observation vector of the response Y in n trials, X is the design matrix of size n × p of the p regressors X_1, ..., X_p, β = (β_1, ..., β_p)^T is the parameter vector, and ε = (ε_1, ..., ε_n)^T is a random vector of independent random errors ε_i, residuals with expectation zero. Let x_i = (x_{i1}, ..., x_{ip}) be the ith row of the matrix X.

The general approach to the estimation of the parameters β is to minimize a function d of the residuals,

    min_β Σ_{i=1}^{n} d(ε_i) = min_β Σ_{i=1}^{n} d(y_i − x_i β).   (12)

An estimate of this type is often called an M-estimate, where M stands for maximum likelihood. There are many possibilities for the form of the function d, which is usually treated as a function of the standardized residual z. If we take d(z) = z²/2, the regression is the usual least squares regression. The function d(z) = |z| gives the so-called absolute deviation regression.

The estimates of the parameters are calculated by differentiating Eq. (12) with respect to each β_i and setting the derivatives equal to zero. It is known that in the case of d(z) = z²/2 the resulting equation system is linear: the least squares normal equations. If the function d(z) is of some other type, then the equation system is not linear. However, the equations can still be approximated by the equations of a weighted linear regression, and the values for the estimates of β_i can be solved iteratively by carrying out weighted linear regressions. The weights are functions of the residuals, and their functional form is

    g(z) = d′(z)/z.   (13)

For the least squares regression the weight function is g(z) = 1. The idea in robust regression is to give a weight equal to one if the absolute value of the residual z is small. In the case of a large standardized residual, for instance an absolute value greater than two, the value of the weight function should be smaller than one. If the residual is large enough, an outlier, the weight

should be zero. In analyzing AHP-matrices we have tried to find a weight function such that the robust regression works like the logarithmic least squares regression if there is only normal random variation, but if there is an outlier, the robust regression gives a solution which is not strongly affected by it. To avoid confusion, note that the regression weight functions are not the same as the AHP-weights. We suggest the following weight function for the robust regression:

    g(z) = 1                   if |z| ≤ b,
    g(z) = exp(−a(|z| − b))    if |z| > b.   (14)

Here a and b are positive constants. Given the values of the residuals z_i, the corresponding values of the weights g(z_i), i = 1, ..., n can be calculated.

Statistical computer programs in general contain a module for robust regression. However, one should note that the programs do not necessarily use exactly the same algorithms as we have used. In the following we give all the formulae for the calculation of our results.

Let G be an n × n matrix with the entries G(i, i) = g(z_i), i = 1, ..., n, and G(i, j) = 0 for all i ≠ j = 1, ..., n. Then the estimates b for the parameters β are

    b = (X^T G X)^{−1} X^T G y.   (15)

Let MSE be the mean square

    MSE = (y − Xb)^T G (y − Xb)/(n − p),   (16)

and H the hat matrix

    H = X(X^T G X)^{−1} X^T G.   (17)

The standardization of the residual y_i − x_i b is done by dividing it by the standard error

    Se[ε_i] = √(MSE(1 − H(i, i))),   (18)

where H(i, i) is the entry (i, i) of H. The variance–covariance matrix of the estimates b = β̂ is

    Cov[b] = (X^T G X)^{−1} MSE.   (19)

The results in (15)–(19) are standard results from regression analysis (see for instance Montgomery and Peck, 1992). In our AHP application we have m AHP-weights and p = m − 1 regressors in the regression analysis.

The estimates for the β_i are iterated by taking at the first step G(i, i) = 1, i = 1, ..., n, and calculating the first estimates. At the second step the weights G(i, i) are calculated by using the weight function with the known residuals from the first step. At the third step the residuals given by the second step are used, and so on. After some iterations the values of the estimates no longer change, and the final results are taken from the last step. This method is known as iteratively reweighted least squares regression (Montgomery and Peck, 1992).

There are some differences between the distributions of the estimates given by the usual weighted regression and this iteratively reweighted regression. The estimates (15) of the regression coefficients are normally distributed only asymptotically, and the variance–covariance matrix (19) is an approximation. One can also raise the question of how to normalize the residuals. We have used the standard error (18) at each step of the iteration, but there are also other kinds of proposals (see Montgomery and Peck, 1992). This is one reason for differences between statistical computer programs.

3.1. Choosing the values for a and b

If |z| ≤ b for a residual, it has the same weight as in the least squares regression. If b = 2, then about 95% of the residuals have the weight of the least squares regression. For residuals |z| > b the weight is determined by the parameter a. The greater the value a > 0 is, the more rapidly the weight of the residual diminishes to zero.

We have studied the robust regression by simulation experiments. The idea was to seek a weight function such that the robust regression gives results which are consistent with the results given by the logarithmic least squares regression if there is usual random variation only, and such that an outlier has a minor effect on the weights given by the robust regression. As we can see, the weight function (14) with a = 1 and b = 2 works well for this

purpose. At the same time we compare the results with the eigenvector method, too.

Let us take an AHP-matrix A of size 5 × 5:

    A = [ 1      2      5      5      9
          1/2    1      5/2    5/2    9/2
          1/5    2/5    1      1      9/5
          1/5    2/5    1      1      9/5
          1/9    2/9    5/9    5/9    1 ].   (20)

The matrix A is fully consistent, so that the consistency ratio (Saaty, 1980) takes the value CR = 0. The eigenvector method, the logarithmic least squares regression, and the robust regression all give the same weights: w_1 = 0.4972, w_2 = 0.2486, w_3 = 0.0994, w_4 = 0.0994, w_5 = 0.0552. We call these the correct weights.

In order to compare the eigenvector method, the logarithmic least squares regression, and the robust regression, we generated 2000 random matrices from A by adding to each entry log(A(i, j)) a normally distributed random error with expectation zero and standard deviation 0.4, and rounding its exponential to the nearest fraction formed by the integers from one to nine. The value 0.4 for the standard deviation represents a typical one in the experimental data of Leskinen (2000). For each matrix the weights given by the eigenvector method, the least squares regression, and the robust regression were calculated. The means and the standard deviations are shown in the following table.

One can see from Table 1 that the values of the means and of the standard deviations given by the logarithmic least squares regression and the robust regression are practically identical, and the means are near the correct weights. The means given by the eigenvector method indicate a small bias.

Our simulations using the value σ = 0.8 give results which are consistent with the results in Table 1. The effect of the values of a and b is such that if we choose b < 2 or a > 1, the standard deviations given by the robust regression become larger.

3.2. The effect of an outlier

In the following comparisons we demonstrate how the eigenvector method, the ordinary least squares regression, and the robust regression work in the presence of an outlier. We have changed the value of an entry of the consistent matrix A used before. The values of the weights can be seen in the last column of Table 1.

The correct value of the entry A(1, 4) is 5, and we changed it to 7 together with A(4, 1) = 1/7. The resulting consistency ratio is CR = 0.3. This modification does not have an effect on the weights given by the robust regression, but the weights given by the eigenvector method change: w_1 = 0.5183, w_2 = 0.2411, w_3 = 0.0964, w_4 = 0.0905, w_5 = 0.0536. Also the corresponding weights given by the logarithmic least squares regression change: w_1 = 0.5173, w_2 = 0.2418, w_3 = 0.0967, w_4 = 0.0904, w_5 = 0.0537. If we modify the value of the entry A(1, 4) a little more, so that A(1, 4) = 9 and A(4, 1) = 1/9, then CR = 0.9. The weights given by the robust regression are exactly the same as before the modification, but the weights given by the two other methods change radically. The eigenvector method gives the weights w_1 = 0.5356, w_2 = 0.2343, w_3 = 0.0937, w_4 = 0.0843, and w_5 = 0.0521.

Table 1
The means and the standard deviations of the weights from 2000 simulations

Weight   Mean eigenv.   StDev eigenv.   Mean log.ls.   StDev log.ls.   Mean rob.reg.   StDev rob.reg.   Correct weight
w_1      0.482          0.048           0.493          0.052           0.493           0.054            0.4972
w_2      0.254          0.040           0.250          0.041           0.250           0.042            0.2486
w_3      0.103          0.019           0.101          0.019           0.101           0.020            0.0994
w_4      0.102          0.020           0.100          0.020           0.100           0.020            0.0994
w_5      0.059          0.010           0.056          0.011           0.056           0.011            0.0552

Eigenv. = weights calculated by the eigenvector method. Log.ls. = weights calculated by the logarithmic least squares regression. Rob.reg. = weights calculated by the robust regression with a = 1 and b = 2.
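The iteratively reweighted least squares scheme of Section 3 with the weight function (14) can be sketched in code as follows. This is our own NumPy illustration, not the authors' implementation: the iteration count is fixed, and a small floor on the standard error is added to keep the standardization numerically safe; the paper's program may differ in such details. Applied to the perturbed matrix of Section 3.2 with A(1, 4) = 9, the iteration drives the outlier's regression weight toward zero and recovers the correct weights of matrix (20).

```python
import numpy as np
from itertools import combinations

def robust_lls_weights(R, a=1.0, b=2.0, n_iter=20):
    """Robust AHP weights by iteratively reweighted least squares.

    Sketch of Eqs. (14)-(18): residuals are standardized by Eq. (18),
    regression weights follow the function (14), and the weighted
    fit (15) is repeated until it stabilizes (here: a fixed number
    of iterations).
    """
    m = R.shape[0]
    pairs = list(combinations(range(m), 2))
    y = np.array([np.log(R[i, j]) for i, j in pairs])
    X = np.zeros((len(pairs), m - 1))       # beta_m is fixed at zero
    for row, (i, j) in enumerate(pairs):
        X[row, i] = 1.0                     # i < j <= m-1, so always a free parameter
        if j < m - 1:
            X[row, j] = -1.0
    n, p = X.shape
    g = np.ones(n)                          # first step: G = I
    for _ in range(n_iter):
        G = np.diag(g)
        XtGX = X.T @ G @ X
        beta = np.linalg.solve(XtGX, X.T @ G @ y)       # Eq. (15)
        resid = y - X @ beta
        mse = resid @ G @ resid / (n - p)               # Eq. (16)
        H = X @ np.linalg.solve(XtGX, X.T @ G)          # Eq. (17), hat matrix
        se = np.sqrt(mse * (1.0 - np.diag(H)))          # Eq. (18)
        z = resid / np.maximum(se, 1e-12)   # floor guards against a zero standard error
        g = np.where(np.abs(z) <= b, 1.0, np.exp(-a * (np.abs(z) - b)))  # Eq. (14)
    bfull = np.append(beta, 0.0)
    return np.exp(bfull) / np.exp(bfull).sum()          # Eq. (5)
```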

The logarithmic least squares regression gives the values w_1 = 0.5321, w_2 = 0.2366, w_3 = 0.0946, w_4 = 0.0841, and w_5 = 0.0526.

As these examples show, the eigenvector method and the logarithmic least squares regression are quite sensitive to outliers. The robust regression automatically reduces the effect of an outlier.

4. Statistical simultaneous comparison of the weights

As a result of the regression analysis one obtains the estimates of the AHP-weights and the standard errors of the estimates, too. But there are other features in the regression analysis which help the analyst interpret the results.

The estimated matrix Â contains the values of the entries given by the regression model:

    Â(i, j) = exp(β̂_i − β̂_j),  i, j = 1, ..., m.   (21)

A comparison between the entries A(i, j) given by the decision maker and the estimated entries Â(i, j) gives an insight into the inconsistency of the statements given by the decision maker. A large difference between the observed and the estimated values tells about the inconsistency of the decision maker's statements.

By using the results of the regression analysis a multiple comparison of the weights can be done. This means testing the hypothesis H0: w_1 = ··· = w_m at a risk level α. The easy way to make the test is to test the hypothesis H0: β_1 = ··· = β_m. Namely, if w_i = w_j then β_i = β_j, and vice versa. For testing the hypothesis concerning the β parameters we have the theory of linear models at our disposal (see for instance Miller, 1981).

The simultaneous hypothesis H0: β_1 = ··· = β_m can be tested by testing all of the hypotheses H0: β_i = β_j, i ≠ j = 1, ..., m. If H0 is rejected at least once in the paired tests, then H0 of the composite hypothesis is rejected. The paired test H0: β_i = β_j can be carried out by using the test statistic

    t_ij = (β̂_i − β̂_j) / √(Ĉov[β̂_i, β̂_i] + Ĉov[β̂_j, β̂_j] − 2Ĉov[β̂_i, β̂_j]),  i < j = 1, ..., m.   (22)

The denominator in (22) is the standard error of the difference β̂_i − β̂_j. For i = j we put t_ii = 0. From a consistent comparison matrix the estimates ŵ_i are obtained so that the standard errors of the estimates are zero; in this case t_ij takes the value infinity for i ≠ j if w_i − w_j ≠ 0. All values t_ij, i, j = 1, ..., m establish an m × m matrix T that we call the dissimilarity matrix. If the model (1) is valid and β_i − β_j = 0 (or w_i − w_j = 0), then according to standard regression theory the standardized difference t_ij follows the t-distribution with df = n − m + 1 degrees of freedom. So matrix T provides a simple way to carry out the statistical simultaneous comparison of the AHP-weights in order to decide which of them differ statistically significantly. The critical value t_c(α) for the statistic t_ij is the value that max(|t_ij|) exceeds with probability α. In the theory of linear models the critical value is usually given under the assumption that the estimates β̂_i and β̂_j are independent, and the critical value is then obtained from the Studentized range distribution. But here the estimates are correlated, and so the critical values must be simulated. In Table 2 we give simulated critical values for m = 3, ..., 10 and α = 0.05. If some value of |t_ij| in the matrix T exceeds the corresponding critical value in Table 2, then the hypothesis H0: w_1 = ··· = w_m is rejected at the risk level 0.05.

A fine feature of the test statistic (22) is that it is independent of the scale parameter of Lootsma's exponential scale (Leskinen, 2000).
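A sketch of computing the dissimilarity matrix T of Eq. (22) from the logarithmic least squares fit (our NumPy illustration; the names are ours). Note that for a fully consistent matrix the residual mean square is zero and t_ij is infinite, a case the sketch does not guard against.

```python
import numpy as np
from itertools import combinations

def dissimilarity_matrix(R):
    """Dissimilarity matrix T of Eq. (22) for an m x m AHP-matrix R.

    Sketch: fit the logarithmic least squares model with beta_m = 0,
    estimate Cov[beta_hat] by MSE * (X'X)^{-1}, and standardize every
    difference beta_i - beta_j by its standard error. Covariance
    terms involving the fixed beta_m are zero.
    """
    m = R.shape[0]
    pairs = list(combinations(range(m), 2))
    y = np.array([np.log(R[i, j]) for i, j in pairs])
    X = np.zeros((len(pairs), m - 1))
    for row, (i, j) in enumerate(pairs):
        X[row, i] = 1.0
        if j < m - 1:
            X[row, j] = -1.0
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    mse = resid @ resid / (n - p)
    C = np.zeros((m, m))
    C[:p, :p] = mse * np.linalg.inv(X.T @ X)   # estimated Cov[beta_hat]
    bfull = np.append(beta, 0.0)
    T = np.zeros((m, m))
    for i, j in pairs:
        se = np.sqrt(C[i, i] + C[j, j] - 2.0 * C[i, j])
        T[i, j] = (bfull[i] - bfull[j]) / se   # Eq. (22)
        T[j, i] = -T[i, j]                     # antisymmetry
    return T
```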

Table 2
Critical values t_c(0.05) of t_ij at the risk level α = 0.05

m           3       4      5      6      7      8      9      10
t_c(0.05)   18.94   4.79   3.76   3.46   3.39   3.34   3.36   3.37

The values represent 95% quantiles of 40 000 simulated values for the distribution of |t_ij| with equal correct weights.
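The simulation behind Table 2 can be approximated by a Monte Carlo sketch of the following kind. This is our rough illustration: it draws the log-ratios directly from the normal model with equal weights, omits the rounding to the one-to-nine fraction scale, and uses far fewer draws than the 40 000 of Table 2, so it only approximately reproduces the tabulated values.

```python
import numpy as np
from itertools import combinations

def simulate_tc(m, alpha=0.05, sigma=0.4, n_sim=2000, rng=None):
    """Rough Monte Carlo estimate of the critical value t_c(alpha).

    With equal correct weights every log-ratio is pure N(0, sigma^2)
    noise; sigma cancels in t_ij, so its value is immaterial here.
    """
    rng = np.random.default_rng() if rng is None else rng
    pairs = list(combinations(range(m), 2))
    n, p = len(pairs), m - 1
    X = np.zeros((n, p))
    for row, (i, j) in enumerate(pairs):
        X[row, i] = 1.0
        if j < m - 1:
            X[row, j] = -1.0
    XtXinv = np.linalg.inv(X.T @ X)
    maxima = np.empty(n_sim)
    for k in range(n_sim):
        y = rng.normal(0.0, sigma, n)
        beta = XtXinv @ X.T @ y
        resid = y - X @ beta
        mse = resid @ resid / (n - p)
        C = np.zeros((m, m))
        C[:p, :p] = mse * XtXinv
        b = np.append(beta, 0.0)
        maxima[k] = max(                      # max |t_ij| over all pairs, Eq. (22)
            abs(b[i] - b[j]) / np.sqrt(C[i, i] + C[j, j] - 2.0 * C[i, j])
            for i, j in pairs
        )
    return np.quantile(maxima, 1.0 - alpha)
```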

4.1. Grouping of the weights

If the hypothesis H0: w1 = … = wm is rejected, the question arises how the weights can be grouped into homogeneous subgroups. The weights wi and wj belong to the same subgroup if they do not differ significantly from each other, but in the case of a significant difference they are grouped into different subgroups. There are several grouping methods in the theory of linear models (see for instance Miller, 1981). Here our problem is not really a statistical testing problem. Therefore, the grouping method does not need to handle the error rates as exactly as possible. The method presented by Fisher (see Miller, 1981) is suitable. By this method the weights wi and wj are grouped into the same subgroup if |tij| does not exceed the usual critical value in the t-test. This critical value can be found from the table of the t-distribution.

If the decision maker has a clear opinion of the priorities of the attributes under comparison, then the discovered subgroups are disjoint. In a more uncertain case some of the weights may belong to two subgroups.

Example: In the following we analyze an actual AHP-matrix given by Saaty (2000, p. 113). In his example Saaty asked his son to evaluate three high schools according to six criteria with the characteristics: learning, friends, school life, vocational training, college preparation, and music classes. He also weighted the relative importance of these criteria with regard to the satisfaction with the school. The comparison ratios of the criteria in the above mentioned order are given in the following matrix:

        | 1    4    3    1    3    4   |
        | 1/4  1    7    3    1/5  1   |
    A = | 1/3  1/7  1    1/5  1/5  1/6 |
        | 1    1/3  5    1    1    1/3 |
        | 1/3  5    5    1    1    3   |
        | 1/4  1    6    3    1/3  1   |

The logarithmic least squares regression gives the following estimates and standard errors for the weights w1, …, w6 of the characteristics learning, …, music classes:

    Weight    w1     w2     w3     w4     w5     w6
    ŵi        0.316  0.139  0.036  0.125  0.236  0.148
    Se[ŵi]    0.110  0.061  0.018  0.056  0.093  0.062

The values of the standard error depend on the values of the weights: a large weight has a large standard error, and a small weight has a small standard error. Robust regression gives here the same values for the weights as the logarithmic least squares regression: ŵ1 = 0.316, ŵ2 = 0.139, ŵ3 = 0.036, ŵ4 = 0.125, ŵ5 = 0.236, ŵ6 = 0.148. The estimates calculated by the eigenvector method differ slightly from these values: ŵ1 = 0.321, ŵ2 = 0.140, ŵ3 = 0.035, ŵ4 = 0.128, ŵ5 = 0.237, ŵ6 = 0.139. Because there is no difference between the logarithmic least squares and the robust regression, there is no problem with outliers.

The dissimilarity matrix T is the following:

        | 0     1.60  4.23  1.80  0.57  1.48 |
        | 1.60  0     2.63  0.21  1.03  0.12 |
    T = | 4.23  2.63  0     2.42  3.66  2.75 |
        | 1.80  0.21  2.42  0     1.24  0.32 |
        | 0.57  1.03  3.66  1.24  0     0.91 |
        | 1.48  0.12  2.75  0.32  0.91  0    |

In the test of H0: w1 = … = w6 the critical value for tij is tc(0.05) = 3.46 in Table 2. As we can see from the matrix T, there are two values greater than 3.46. Thus the hypothesis H0 is rejected at the risk level 0.05. Here the decision maker, indeed, has the opinion that not all the characteristics are equally important. After this result, grouping into subgroups is allowed.

The test statistic tij has 10 degrees of freedom, and the corresponding t-value is t(0.05) = 2.228 (see the table of the t-distribution). From the dissimilarity matrix T we can see that all values |t3j|, j = 1, 2, 4, 5, 6, are greater than 2.228, and these are the only values greater than 2.228. The weights, or characteristics, can thus be grouped into two disjoint groups: {w3} and {w1, w2, w4, w5, w6}.

The statistical conclusion is the following: for this decision maker the criterion school life is the only one whose weight differs from those of the other criteria. It is smaller than the other weights, among which he finds no significant differences.
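The weight estimates of this example can be reproduced numerically. For a complete comparison matrix the logarithmic least squares solution reduces to normalized row geometric means (Crawford and Williams, 1985), and the eigenvector weights come from the principal eigenvector of A. A minimal sketch (our own illustration, not the authors' code):

```python
import numpy as np

# Saaty's criteria comparison matrix (learning, friends, school life,
# vocational training, college preparation, music classes).
A = np.array([
    [1,   4,   3,   1,   3,   4  ],
    [1/4, 1,   7,   3,   1/5, 1  ],
    [1/3, 1/7, 1,   1/5, 1/5, 1/6],
    [1,   1/3, 5,   1,   1,   1/3],
    [1/3, 5,   5,   1,   1,   3  ],
    [1/4, 1,   6,   3,   1/3, 1  ],
])

# Logarithmic least squares weights: normalized row geometric means.
g = np.exp(np.log(A).mean(axis=1))
w_lls = g / g.sum()
print(np.round(w_lls, 3))   # [0.316 0.139 0.036 0.125 0.236 0.148]

# Eigenvector method: principal right eigenvector, normalized to sum 1.
vals, vecs = np.linalg.eig(A)
v = np.real(vecs[:, np.argmax(np.real(vals))])
w_ev = v / v.sum()
# Saaty (2000) reports 0.321, 0.140, 0.035, 0.128, 0.237, 0.139.
print(np.round(w_ev, 3))
```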
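The grouping step can likewise be sketched in code. Here we treat "not significantly different" (|tij| below the critical value) as an adjacency relation on the weights and report its connected components; this merges any overlapping subgroups, which in this example coincides with the pairwise rule of Fisher's method. The implementation is our own illustration.

```python
import numpy as np

# Dissimilarity matrix of |t_ij| values from the example above.
T = np.array([
    [0,    1.60, 4.23, 1.80, 0.57, 1.48],
    [1.60, 0,    2.63, 0.21, 1.03, 0.12],
    [4.23, 2.63, 0,    2.42, 3.66, 2.75],
    [1.80, 0.21, 2.42, 0,    1.24, 0.32],
    [0.57, 1.03, 3.66, 1.24, 0,    0.91],
    [1.48, 0.12, 2.75, 0.32, 0.91, 0],
])

t_crit = 2.228          # two-sided 0.05 t-quantile with 10 df

m = T.shape[0]
adj = T < t_crit        # True where weights do not differ significantly
np.fill_diagonal(adj, True)
groups, seen = [], set()
for i in range(m):
    if i in seen:
        continue
    # depth-first search over the "not significantly different" graph
    comp, stack = set(), [i]
    while stack:
        k = stack.pop()
        if k in comp:
            continue
        comp.add(k)
        stack.extend(j for j in range(m) if adj[k, j] and j not in comp)
    seen |= comp
    groups.append(sorted(w + 1 for w in comp))
print(groups)   # [[1, 2, 4, 5, 6], [3]]
```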

5. Conclusions

In AHP the ratio statements about the relative importance of the weights given by a decision maker can be thought of to include random variation. We have presented a new analysis method for AHP-matrices which is based on the hypothesis that the logarithm of the comparison ratio follows a normal distribution with a constant variance. The method includes the use of robust regression together with the eigenvector method or the logarithmic least squares regression, and the statistical multiple comparison of the weights by testing the hypothesis H0: w1 = … = wm and grouping the weights.

In the robust regression we have developed a weighting function of the residuals such that the robust regression and the logarithmic least squares regression give similar values for the weight estimates and their standard errors if there is normal random variation only. The presence of an outlier does not have a significant effect on the estimates given by the robust regression. In practice, if the weight estimates calculated by the eigenvector method or by the logarithmic least squares regression are near the values given by the robust regression, then there are no outlier problems. But if the results given by the robust regression differ considerably from the results of the standard methods, then the comparison matrix should be reassessed. If there is an actual error in the comparison ratios, the error must be corrected.

A rough comparison of the weight estimates calculated by different methods can be done by studying the differences of the estimates with respect to the standard errors of the weights given by the robust regression. We have presented how the standard errors of the weight estimates can be calculated in the case that the comparison ratios follow a lognormal distribution.

In the simultaneous comparison of the weights we test the hypothesis H0: w1 = … = wm by testing H0: β1 = … = βm, where βi = log(vi) and vi is the value of the attribute or alternative i. This reformulation of the test allows the use of the theory of linear models. The test is done by studying the dissimilarity matrix formed by the t-statistics for the paired tests H0: βi = βj, i ≠ j. The critical values for the dissimilarity matrix at the risk level 0.05 are calculated by simulation, and tabulated. If the hypothesis H0 is rejected, the weights are grouped into homogeneous subgroups using Fisher's approach. The purpose of the simultaneous comparison is to evaluate to what extent the decision maker is able to find differences in the importances of the attributes or alternatives, and to what extent his/her comparisons are random entries only. If there are significant differences between the weights, and if the grouping leads to two or more disjoint subgroups, then the decision maker has succeeded in differentiating the priorities.

Of course, in some special cases the decision maker may have the opinion that all the attributes or alternatives are equally important.

References

Alho, J.M., Kangas, J., 1997. Analyzing uncertainties in experts' opinions of forest plan performance. Forest Science 43, 521–528.
Alho, J.M., Kangas, J., Kolehmainen, O., 1996. Uncertainty in expert predictions of the ecological consequences of forest plans. Applied Statistics 45, 1–14.
Crawford, G., Williams, C., 1985. A note on the analysis of subjective judgement matrices. Journal of Mathematical Psychology 29, 387–405.
De Jong, P., 1984. A statistical approach to Saaty's scaling method for priorities. Journal of Mathematical Psychology 28, 467–478.
Efron, B., Tibshirani, R.J., 1993. An Introduction to the Bootstrap. Chapman & Hall, New York.
Fichtner, J., 1986. On deriving priority vectors from matrices of pairwise comparisons. Socio-Economic Planning Sciences 20, 399–405.
Fischoff, B., Slovic, P., Lichtenstein, S., 1980. Knowing what you want: measuring labile values. In: Wallsten, T.S. (Ed.), Cognitive Processes in Choice and Decision Behavior. Lawrence Erlbaum Associates, Hillsdale, p. 285.
Genest, C., Rivest, L.-P., 1994. A statistical look at Saaty's method of estimating pairwise preferences expressed on a ratio scale. Journal of Mathematical Psychology 38, 477–496.
Ghahramani, S., 1996. Fundamentals of Probability. Prentice-Hall, New Jersey.
Kangas, J., Matero, J., Pukkala, T., 1992. Analyyttisen hierarkiaprosessin käyttö metsien monikäytön suunnittelussa – tapaustutkimus (Using the analytic hierarchy process in planning of multiple-use forestry – a case study). Finnish Forest Research Institute Research Notes 412, 48 p. In Finnish.

Laininen, P., Hämäläinen, R.P., 1999. Analyzing AHP-matrices by robust regression. In: Proceedings of the Fifth International Symposium on the Analytic Hierarchy Process (ISAHP'99), Kobe, Japan, August 12–14.
Leskinen, P., 2000. Measurement scales and scale independence in the analytic hierarchy process. Journal of Multi-criteria Decision Analysis 9, 163–174.
Lootsma, F.A., 1993. Scale sensitivity in the multiplicative AHP and SMART. Journal of Multi-criteria Decision Analysis 2, 87–110.
Miller Jr., R.G., 1981. Simultaneous Statistical Inference, second ed. Springer Verlag, New York.
Montgomery, D.C., Peck, E.A., 1992. Introduction to Linear Regression Analysis. John Wiley, New York.
Pöyhönen, M.A., Hämäläinen, R.P., Salo, A.A., 1997. An experiment on the numerical modelling of verbal ratio statements. Journal of Multi-criteria Decision Analysis 6, 1–10.
Rao, C.R., 1968. Linear Statistical Inference and Its Applications. John Wiley, New York.
Saaty, T.L., 1977. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology 15, 234–281.
Saaty, T.L., 1980. The Analytic Hierarchy Process. McGraw-Hill, New York.
Saaty, T.L., 2000. Fundamentals of Decision Making and Priority Theory with The Analytic Hierarchy Process. RWS Publications, Pittsburgh.
Saaty, T.L., Vargas, L.G., 1984. Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios. Mathematical Modelling 5, 309–324.
Salo, A.A., Hämäläinen, R.P., 1997. On the measurement of preferences in the analytic hierarchy process. Journal of Multi-criteria Decision Analysis 6, 309–319.
Vargas, L.G., 1982. Reciprocal matrices with random coefficients. Mathematical Modelling 3, 69–81.
