Decision Aiding
Abstract

In the analytic hierarchy process (AHP) the decision maker makes comparisons between pairs of attributes or alternatives. In real applications the comparisons are subject to judgmental errors. Many AHP-matrices reported in the literature are found to be such that the logarithm of the comparison ratio can be sufficiently well modeled by a normal distribution with a constant variance. On the basis of this model we present the formulae for the evaluation of the standard deviations of the estimates of the AHP-weights obtained by regression analysis. In order to eliminate the effect of an outlier in the comparison ratios a robust regression technique is elaborated, and compared with the eigenvector method and the logarithmic least squares regression. A dissimilarity matrix approach is presented for the statistical simultaneous comparisons of the AHP-weights. The results are illustrated by simulation experiments.
© 2002 Elsevier Science B.V. All rights reserved.

Keywords: Analytic hierarchy process; Eigenvector method; Regression; Robust regression; Multiple comparisons; Simultaneous comparisons
0377-2217/03/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
doi:10.1016/S0377-2217(02)00430-7
P. Laininen, R.P. Hämäläinen / European Journal of Operational Research 148 (2003) 514–524
profound statistical treatment of the use of regression analysis for analyzing AHP-matrices. In this approach the ratio r_ij, the relative value of attribute i compared to attribute j, given by the decision maker, is considered a value of a random variable. The natural logarithms log(r_ij) are regressed on the dummy variables corresponding to the attributes. De Jong suggests the use of the coefficient of determination of the regression equation as a measure of consistency in the decision maker's responses. This quantity has a close relationship with the residual mean square of the regression, which Crawford and Williams (1985) propose as their measure of consistency. This method often gives weights very similar to those of the eigenvector method. Genest and Rivest (1994) prove analytically that the difference between the estimates of the weights given by the eigenvector method and by the regression analysis is of order O(σ²), where σ² is the variance of log(r_ij). The use of linear models and experimental design in AHP is discussed in Alho and Kangas (1996). They present a variance components model and the least squares estimation of preferences. In Alho and Kangas (1997) a Bayesian extension of the regression technique is presented.

The regression approach makes it possible to estimate the weights and their standard errors. It also enables the use of robust regression (Montgomery and Peck, 1992), discussed by Laininen and Hämäläinen (1999). Robust regression is robust against exceptional values of r_ij. As in regression analysis, an exceptional value is called an outlier. In general, statistical regression software is equipped with facilities for detecting outliers; for instance, one can inspect the standardized deleted residuals. Robust regression is a technique that automatically gives a solution that is not affected by outliers.

The regression approach also enables the statistical simultaneous comparison of AHP-weights, which gives a deeper insight into the differences in the weights.

1.1. Statistical model

In the AHP attributes are compared in a comparison matrix, and the decision alternatives are compared in a comparison matrix with respect to each attribute. These comparisons give local weights of the attributes and alternatives. The global weights of the decision alternatives are calculated by product sums of the local attribute weights and alternative weights. In this process the accuracy of the estimates of the local weights is a matter of vital importance. In this paper we discuss the analysis of the AHP-matrix that gives the local weights.

Let us take an AHP-matrix of size m × m with the entries r_ij, i, j = 1, ..., m, where r_ij is the relative value of attribute i compared to attribute j as perceived by the decision maker. The entries r_ij make up the pairwise comparisons data with the reciprocal relation r_ji = 1/r_ij for i, j = 1, ..., m.

Let w_1, ..., w_m (w_1 > 0, ..., w_m > 0) be the true weights of the attributes with the condition w_1 + ··· + w_m = 1, and let v_1, ..., v_m (v_1 > 0, ..., v_m > 0) be values of the attributes such that the normalized weights w_1, ..., w_m can be calculated from v_1, ..., v_m by normalizing their sum to one. Now, the observation r_ij is an observation from the ratio v_i/v_j, and the logarithm log(r_ij) is an observation from log(v_i/v_j) = log(v_i) − log(v_j). The analysis begins by modeling the observation error.

Vargas (1982) assumes that the random variables r_ij follow a gamma distribution, and then the joint distribution of the estimates of the weights w_1, ..., w_m is a Dirichlet distribution. The marginal distributions of the Dirichlet distribution are beta distributions, a fact that suggests modeling the individual distribution of each estimated weight as a beta distribution. Differently from Vargas, we assume that r_ij is a lognormally distributed random variable. The decision maker uses integers from one to nine in her/his comparisons, and it is clear that log(r_ij) is not exactly normally distributed. But one can find many AHP-matrices reported in the literature on AHP applications (for instance Kangas et al., 1992; Leskinen, 2000) where the residuals of the logarithmic least squares fit behave like the residuals of a regression with normally distributed errors. We will model the analysis of AHP-matrices by regression analysis on the basis of the model

log(r_ij) ∼ N(log(v_i) − log(v_j), σ²), i = 1, ..., m − 1; j = 2, ..., m,  (1)
where σ² is a constant variance, and the random variables log(r_ij) can be supposed to be uncorrelated. We will introduce this approach in Section 2. This approach is identical with the well-known model r_ij = exp(log(v_i) − log(v_j) + ε_ij), where the random variables ε_ij follow a normal distribution. The standard errors of the weights given by the regression approach can be calculated from the solution of the regression analysis by linearizing the formula of the weights. We will present a way to calculate the standard errors based on model (1).

Model (1) is in accordance with the geometric measurement scale r_ij = exp(s·d_ij), where s > 0 is a measurement scale parameter, and d_ij, i, j = 1, ..., m are values which depend on the decision maker (Lootsma, 1993).

Robust regression is a regression technique that makes it possible to estimate the weights so that an outlier will not have an effect on the solution. We will introduce the idea of robust regression, and give illustrations of its use, in Section 3.

After the regression analysis, statistical comparisons of the weights can be done. Let r_ij be a comparison ratio given by the decision maker. One can calculate the corresponding value r̂_ij given by the regression model for all i ≠ j. A large difference between r_ij and r̂_ij indicates problems in the comparison of the attributes i and j. In order to statistically test the hypothesis H_0: w_1 = ··· = w_m, i.e. w_i = w_j for all i ≠ j, we establish a dissimilarity matrix that includes a test statistic for every difference log(v_i) − log(v_j). Observe that log(v_i) − log(v_j) = 0 ⇒ w_i − w_j = 0, and w_i − w_j = 0 ⇒ log(v_i) − log(v_j) = 0. The significances of the test statistics are evaluated simultaneously. This will be presented in Section 4.

1.2. Simulation experiments

We illustrate these questions by simulation experiments. In the simulation experiments AHP-matrices are generated by using the model (1). An entry r_ij is obtained by adding to the expectation a normally distributed random error with standard deviation σ, and then rounding its exponential to the nearest fraction formed by integers from one to nine. In general, we present simulation results calculated by using the value σ = 0.4, which is assumed to be typical in experimental AHP-matrices. For instance, in the data of Leskinen (2000) the value of σ varies from 0.4 to 0.8. The results given by σ = 0.4 remain similar for the value σ = 0.8, too.

Efron and Tibshirani (1993) present studies on the number of simulations needed in different situations. In the estimation of a standard error 200 simulations are generally enough. Here we have used 2000 simulations. In the estimation of the quantile of a small tail probability we have made 40 000 simulations.

2. Standard errors of the estimates of the weights

Let us take an AHP-matrix of size m × m with the entries r_ij, i, j = 1, ..., m comprising the pairwise comparisons data. We first convert the data into a linear regression form by a logarithmic transformation.

We write

log(r_ij) = y_ij, log(v_i) − log(v_j) = β_i − β_j, i, j = 1, ..., m.  (2)

According to the model (1), for the total of m(m − 1)/2 comparisons the following equations can be written:

y_ij = β_i − β_j + ε_ij, i = 1, ..., m − 1; j = 2, ..., m,  (3)

where the residuals ε_ij are uncorrelated random variables which are normally distributed with expectation E[ε_ij] = 0 and constant variance Var[ε_ij] = σ². Let y be the m(m − 1)/2 × 1 vector of the y_ij, β the m × 1 vector of the β_i, ε the m(m − 1)/2 × 1 vector of the ε_ij, and X the m(m − 1)/2 × m matrix of the dummy variables given by the system of Eq. (3). The equations can be written

y = Xβ + ε.  (4)

The relationship between the weight w_i and the parameters β_j is

w_i = exp(β_i) / (exp(β_1) + ··· + exp(β_m)), i = 1, ..., m.  (5)
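To make the setup of Eqs. (2)–(5) concrete, here is a minimal Python sketch (the helper names are ours, not from the paper) that builds the dummy-variable design matrix of Eq. (4) and maps a parameter vector to normalized weights via formula (5):

```python
import math

def design_matrix(m):
    """Dummy-variable matrix X of Eq. (4): one row per comparison
    (i, j) with i < j, holding +1 in column i and -1 in column j.
    Dropping the last column imposes the constraint beta_m = 0."""
    rows = []
    for i in range(m - 1):
        for j in range(i + 1, m):
            row = [0] * m
            row[i], row[j] = 1, -1
            rows.append(row)
    return rows

def weights_from_beta(beta):
    """Formula (5): weights normalized to sum to one."""
    e = [math.exp(b) for b in beta]
    s = sum(e)
    return [ei / s for ei in e]

X = design_matrix(4)              # m(m-1)/2 = 6 comparison rows
X_c = [row[:-1] for row in X]     # constrained matrix, size 6 x 3
# With beta = (log 4, log 2, 0) the weights are (4/7, 2/7, 1/7).
w = weights_from_beta([math.log(4), math.log(2), 0.0])
```

Fitting the model then amounts to an ordinary least squares regression of the log-ratios y on the columns of the constrained matrix.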
The estimates for the parameters β_i are then calculated by minimizing some function d of the residuals ε_ij, Σ d(ε_ij). If d(ε_ij) = ε_ij², the least squares regression is obtained. The solution needs an additional constraint for the parameters, because the model (4) is over-parametrized. In fact, only the differences β_i − β_j can be estimated from the comparison matrix. These differences are sufficient to determine the weights w_i. Take for instance β_i − β_j = d_ij, or β_i = β_j + d_ij. Then

w_i = exp(β_j + d_ij) / (exp(β_j + d_1j) + ··· + exp(β_j + d_mj)) = exp(d_ij) / (exp(d_1j) + ··· + exp(d_mj)).  (6)

Thus the value for β_j can be chosen arbitrarily. A practical constraint is β_m = 0 (or v_m = 1). Then the last column of the matrix X can be rejected, and the new matrix X is of size m(m − 1)/2 × (m − 1), and the vector β is of size (m − 1) × 1, β = (β_1, ..., β_{m−1})^T. The method gives a unique solution β̂_1, ..., β̂_{m−1} with β̂_m = 0. In the least squares regression the estimates β̂_1, ..., β̂_{m−1} are unbiased and normally distributed, and the covariance matrix Cov[β̂_i, β̂_j] is a standard result of the regression analysis.

The regression analysis gives the estimates β̂_i, with β̂_m = 0, and the estimates of the weights w_i are obtained by inserting these into formula (5).

2.1. Standard errors of the estimates

The usual way to evaluate the variances of the estimates ŵ_i is to linearize formula (5) in terms of the parameters β_i, and then evaluate the variance of the linear expression (see for instance Alho and Kangas, 1996). But formula (5) is strongly nonlinear with respect to the β_i, especially for positive values of β_i, and the linearization may give very incorrect values for the variances. Better estimates for the variances can be obtained by calculating the variances and covariances of the functions exp(β̂_i), i = 1, ..., m theoretically, which is possible under the model (1). Then we need to linearize formula (5) in terms of exp(β_i) only. The estimate β̂_i is unbiased and normally distributed with variance s_i² that is estimated by the regression analysis. Then the exp(β̂_i) for i = 1, ..., m − 1 follow a lognormal distribution with expectation

E[exp(β̂_i)] = exp(β_i + s_i²/2),  (7)

and variance

Var[exp(β̂_i)] = exp(2β_i + s_i²)(exp(s_i²) − 1).  (8)

See for instance Ghahramani (1996). Because the sum β̂_i + β̂_j is normally distributed (see Rao, 1968), the expectation of the product exp(β̂_i)exp(β̂_j) can be calculated according to Eq. (7), and the variance–covariance matrix Cov[exp(β̂_i), exp(β̂_j)] will have the entries

Cov[exp(β̂_i), exp(β̂_j)] = exp(β_i + β_j + (s_i² + s_j²)/2)(exp(Cov[β̂_i, β̂_j]) − 1)  (9)

for i ≠ j. Since β̂_m = 0 is a constant, E[exp(β̂_m)] = 1, Var[exp(β̂_m)] = 0, and Cov[exp(β̂_i), exp(β̂_m)] = 0 for i = 1, ..., m − 1.

The estimates for the covariances are calculated by inserting the estimates β̂_i, ŝ_i², and Côv[β̂_i, β̂_j] into formula (9). The estimate ŝ_i² of the variance is equal to Côv[β̂_i, β̂_i]. The inserted estimates are results of the regression analysis.

By forming the derivatives of w_i in terms of exp(β_i) in formula (5), the covariance matrix Cov[ŵ_i, ŵ_j] can be written as

Cov[ŵ_i, ŵ_j] = [∂w_i/∂e^{β_j}]_{m×(m−1)} [Cov[exp(β̂_i), exp(β̂_j)]]_{(m−1)×(m−1)} [∂w_i/∂e^{β_j}]^T_{(m−1)×m}.  (10)

Here we have

[∂w_i/∂e^{β_j}]_{m×(m−1)} = −w_m ×
  ⎡ w_1 − 1   w_1       ···  w_1          ⎤
  ⎢ w_2       w_2 − 1   ···  w_2          ⎥
  ⎢   ⋮                        ⋮          ⎥
  ⎢ w_{m−1}   w_{m−1}   ···  w_{m−1} − 1  ⎥
  ⎣ w_m       w_m       ···  w_m          ⎦.  (11)

The estimates of the covariances are calculated by inserting the estimates of the weights into
(11) and the estimate of the covariance matrix Cov[exp(β̂_i), exp(β̂_j)] into (10).

The standard error Se[ŵ_i] is then calculated as the square root of the diagonal element (i, i) of the matrix (10).

2.2. Simulation of the standard error

By simulation experiments we can see that the standard errors Se[ŵ_i] given by (10) and (11) are practically unbiased. We generated 2000 5 × 5 AHP-matrices from the matrix with equal weights by using σ = 0.4, and calculated the standard deviations of the 2000 values given by the eigenvector method and by the logarithmic least squares regression. The values for the standard deviations were 0.032 for each of the five weights and for both estimation methods. In each simulation the standard errors of the weight estimates, Se[ŵ_i], were calculated both using formulae (10) and (11) and using the method of linearizing the formula of w_i with respect to the β_j parameters. The mean of the standard errors Se[ŵ_i] was 0.032 for each weight w_i. The mean values are equal to the values of the corresponding standard deviations. Alho and Kangas (1997) note that the values of the standard errors calculated by linearizing with respect to the β_j parameters are often too small. In our simulations these values were in all cases less than the values given by (10) and (11), deviating by at most 40%. Simulations based on matrices with unequal weights give results which are consistent with the above results.

When using the standard errors of the weights in practice, one should notice that the sampling distributions of the estimates of weights having values near zero or near one are heavily skewed. The sampling distributions given by our simulations resembled the beta distribution proposed by Vargas (1982).

3. Robust regression analysis and AHP

The theory of robust regression can be found in standard statistical texts (see for instance Montgomery and Peck, 1992).

We take a regression model in the general form (4), y = Xβ + ε, where y = (y_1, ..., y_n)^T is the observation vector of the response Y in n trials, X is the design matrix of size n × p of the p regressors X_1, ..., X_p, β = (β_1, ..., β_p)^T is the parameter vector, and ε = (ε_1, ..., ε_n)^T is a random vector of independent random errors ε_i (residuals) with expectation zero. Let x_i = (x_i1, ..., x_ip) be the ith row of the matrix X.

The general approach to the estimation of the parameters β is to minimize a function d of the residuals,

min_β Σ_{i=1}^{n} d(ε_i) = min_β Σ_{i=1}^{n} d(y_i − x_i β).  (12)

An estimate of this type is often called an M-estimate, where M stands for maximum likelihood. There are many possibilities for the form of the function d, which is usually treated as a function of the standardized residual z. If we take d(z) = z²/2, the regression is the usual least squares regression. The function d(z) = |z| gives the so-called absolute deviation regression.

The estimates of the parameters are calculated by differentiating Eq. (12) with respect to each β_i and setting the derivatives equal to zero. It is known that in the case of d(z) = z²/2 the equation system resulting from the differentiation is a linear one, the least squares normal equations. If the function d(z) is of some other type, then the equation system is not linear. However, the equations can still be approximated by the equations of a weighted linear regression, and the values for the estimates of β_i can be solved iteratively by carrying out weighted linear regressions. The weights are functions of the residuals, and their functional form is

g(z) = d′(z)/z.  (13)

For the least squares regression the weight function is g(z) = 1. The idea in the robust regression is to give a weight equal to one if the absolute value of the standardized residual z is small. In the case of a large standardized residual, for instance an absolute value greater than two, the value of the weight function should be smaller than one. If the residual is large enough, an outlier, the weight
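The iteratively reweighted least squares idea of Eqs. (12) and (13) can be sketched in Python on the simplest possible regression, the estimation of a location parameter. This is our own toy illustration with the standard Huber weight function and cutoff c = 2 (cf. the paper's constant b = 2 in Table 1); the paper's own weight function, defined in a part of the text not reproduced here, differs in detail. The residual scale is fixed at one for simplicity.

```python
def huber_weight(z, c=2.0):
    """IRLS weight g(z) = d'(z)/z for the Huber function: full weight
    for small standardized residuals, downweighted beyond the cutoff c."""
    az = abs(z)
    return 1.0 if az <= c else c / az

def robust_location(y, c=2.0, iters=50):
    """Toy IRLS: a robust location estimate, the simplest instance of
    minimizing (12) by repeated weighted least squares with weights (13).
    Residuals are standardized by a fixed scale of 1 for simplicity."""
    mu = sum(y) / len(y)          # start from the plain mean
    for _ in range(iters):
        w = [huber_weight(yi - mu, c) for yi in y]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return mu

data = [0.1, -0.2, 0.0, 0.2, 9.0]     # the last value is an outlier
mu_plain = sum(data) / len(data)      # pulled to about 1.82
mu_robust = robust_location(data)     # stays near the bulk of the data
```

The plain mean is dragged toward the outlier, while the downweighting in each reweighting step keeps the robust estimate close to the four well-behaved observations.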
purpose. At the same time we compare the results with the eigenvector method, too.

Let us take an AHP-matrix A of size 5 × 5:

A =
  ⎡ 1    2    5    5    9   ⎤
  ⎢ 1/2  1    5/2  5/2  9/2 ⎥
  ⎢ 1/5  2/5  1    1    9/5 ⎥
  ⎢ 1/5  2/5  1    1    9/5 ⎥
  ⎣ 1/9  2/9  5/9  5/9  1   ⎦.  (20)

The matrix A is fully consistent, so that the consistency ratio (Saaty, 1980) takes the value CR = 0. The eigenvector method, the logarithmic least squares regression, and the robust regression all give the same weights: w_1 = 0.4972, w_2 = 0.2486, w_3 = 0.0994, w_4 = 0.0994, w_5 = 0.0552. We call these the correct weights.

In order to compare the eigenvector method, the logarithmic least squares regression, and the robust regression, we generated 2000 random matrices from A by adding to each entry log(A(i, j)) a normally distributed random error with expectation zero and standard deviation 0.4, and rounding its exponential to the nearest fraction formed by the integers from one to nine. The value 0.4 for the standard deviation represents a typical one in the experimental data of Leskinen (2000). For each matrix the weights given by the eigenvector method, the least squares regression, and the robust regression were calculated. The means and the standard deviations are shown in Table 1.

One can see from Table 1 that the values of the means and of the standard deviations given by the logarithmic least squares regression and the robust regression are practically identical, and the means are near to the correct weights. The means given by the eigenvector method indicate a small bias.

Our simulations using the value σ = 0.8 give results which are consistent with the results in Table 1. The effect of the values of a and b is such that, if we choose b < 2 or a > 1, the standard deviations given by the robust regression become larger.

3.2. The effect of an outlier

In the following comparisons we demonstrate how the eigenvector method, the ordinary least squares regression, and the robust regression work in the presence of an outlier. We changed the value of an entry of the consistent matrix A used before. The values of the correct weights can be seen in the last column of Table 1.

The correct value of the entry A(1, 4) is 5, and we changed it to 7 together with A(4, 1) = 1/7. The resulting consistency ratio is CR = 0.3. This modification does not have an effect on the weights given by the robust regression, but the weights given by the eigenvector method change: w_1 = 0.5183, w_2 = 0.2411, w_3 = 0.0964, w_4 = 0.0905, w_5 = 0.0536. Also the corresponding weights given by the logarithmic least squares regression change: w_1 = 0.5173, w_2 = 0.2418, w_3 = 0.0967, w_4 = 0.0904, w_5 = 0.0537. If we modify the value of the entry A(1, 4) a little more, so that A(1, 4) = 9 and A(4, 1) = 1/9, then CR = 0.9. The weights given by the robust regression are exactly the same as before the modification, but the weights given by the two other methods change radically. The eigenvector method gives the weights w_1 = 0.5356, w_2 = 0.2343, w_3 = 0.0937, w_4 = 0.0843, and w_5 = 0.0521.
Table 1
The means and the standard deviations of the weights from 2000 simulations
Weight Mean eigenv. StDev eigenv. Mean log.ls. StDev log.ls. Mean rob.reg. StDev rob.reg. Correct weight
w1 0.482 0.048 0.493 0.052 0.493 0.054 0.4972
w2 0.254 0.040 0.250 0.041 0.250 0.042 0.2486
w3 0.103 0.019 0.101 0.019 0.101 0.020 0.0994
w4 0.102 0.020 0.100 0.020 0.100 0.020 0.0994
w5 0.059 0.010 0.056 0.011 0.056 0.011 0.0552
Eigenv. = weights calculated by the eigenvector method. Log.ls. = weights calculated by the logarithmic least squares regression.
Rob.reg. = weights calculated by the robust regression with a = 1 and b = 2.
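As a cross-check on the correct weights in the last column of Table 1, the eigenvector method can be sketched in Python with plain power iteration (our own helper, not code from the paper); for the fully consistent matrix A of Eq. (20) it returns the weights 0.4972, 0.2486, 0.0994, 0.0994, 0.0552:

```python
def eigen_weights(A, iters=100):
    """Principal-eigenvector weights (Saaty's eigenvector method),
    computed by power iteration and normalized to sum to one."""
    m = len(A)
    w = [1.0 / m] * m
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(m)) for i in range(m)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

# The consistent matrix A of Eq. (20).
A = [[1, 2, 5, 5, 9],
     [1/2, 1, 5/2, 5/2, 9/2],
     [1/5, 2/5, 1, 1, 9/5],
     [1/5, 2/5, 1, 1, 9/5],
     [1/9, 2/9, 5/9, 5/9, 1]]
w = eigen_weights(A)
```

For a consistent matrix a single power step already gives the exact result, since every column of A is proportional to the weight vector.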
The logarithmic least squares regression gives the values w_1 = 0.5321, w_2 = 0.2366, w_3 = 0.0946, w_4 = 0.0841, and w_5 = 0.0526.

As these examples show, the eigenvector method and the logarithmic least squares regression are quite sensitive to outliers. The robust regression automatically reduces the effect of an outlier.

4. Statistical simultaneous comparison of the weights

As a result of the regression analysis one obtains the estimates for the AHP-weights and the standard errors of the estimates, too. But there are other features in the regression analysis which help the analyst interpret the results.

The estimated matrix Â contains the values of the entries given by the regression model:

Â(i, j) = exp(β̂_i − β̂_j), i, j = 1, ..., m.  (21)

A comparison between the entries A(i, j) given by the decision maker and the estimated entries Â(i, j) gives an insight into the inconsistency of the statements given by the decision maker. A large difference between the observed and the estimated values indicates inconsistency in the decision maker's statements.

By using the results of the regression analysis a multiple comparison of the weights can be done. This means testing the hypothesis H_0: w_1 = ··· = w_m at a risk level α. The easy way to make the test is to test the hypothesis H_0: β_1 = ··· = β_m. Namely, if w_i = w_j then β_i = β_j, and vice versa. For testing the hypothesis concerning the β parameters we have the theory of linear models at our disposal (see for instance Miller, 1981).

The simultaneous hypothesis H_0: β_1 = ··· = β_m can be tested by testing all of the hypotheses H_0: β_i = β_j, i ≠ j = 1, ..., m. If H_0 is rejected at least once in the paired tests, then H_0 of the composite hypothesis is rejected. The paired test H_0: β_i = β_j can be carried out by using the test statistic

t_ij = (β̂_i − β̂_j) / √(Côv[β̂_i, β̂_i] + Côv[β̂_j, β̂_j] − 2Côv[β̂_i, β̂_j]), i < j = 1, ..., m.  (22)

The denominator in (22) is the standard error of the difference β̂_i − β̂_j. For i = j we put t_ii = 0. From a consistent comparison matrix the estimates ŵ_i are obtained so that the standard errors of the estimates are zero. In this case t_ij takes the value infinity for i ≠ j if w_i − w_j ≠ 0. All the values t_ij, i, j = 1, ..., m establish an m × m matrix T that we call the dissimilarity matrix. If the model (1) is valid and β_i − β_j = 0 (or w_i − w_j = 0), then according to standard regression theory the standardized difference t_ij follows the t-distribution with df = n − m + 1 degrees of freedom. So matrix T provides a simple way to the statistical simultaneous comparison of the AHP-weights, in order to decide which of them differ statistically significantly. The critical value t_c(α) for the statistic t_ij is the value that max(|t_ij|) exceeds with probability α. In the theory of linear models the critical value is usually given under the assumption that the estimates β̂_i and β̂_j are independent, and the critical value is then obtained from the Studentized range distribution. But here the estimates are correlated, and so the critical values must be simulated. In Table 2 we give simulated critical values for m = 3, ..., 10 and α = 0.05. If some value of |t_ij| in the matrix T exceeds the corresponding critical value in Table 2, then the hypothesis H_0: w_1 = ··· = w_m is rejected at the risk level 0.05.

A fine feature of the test statistic (22) is that it is independent of the scale parameter of Lootsma's exponential scale (Leskinen, 2000).
Table 2
Critical values t_c(0.05) of t_ij at the risk level α = 0.05

m           3      4     5     6     7     8     9     10
t_c(0.05)   18.94  4.79  3.76  3.46  3.39  3.34  3.36  3.37

The values represent 95% quantiles of 40 000 simulated values for the distribution of |t_ij| with equal correct weights.
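Computing the dissimilarity matrix T of the test statistics (22) is straightforward once the estimates and their covariance matrix are available. The sketch below uses hypothetical numbers of our own for m = 3 (the function name and the illustrative values of b_hat and cov are not from the paper); the constraint β̂_m = 0 contributes zero variance and zero covariances:

```python
import math

def dissimilarity_matrix(b_hat, cov):
    """Matrix T of the test statistics (22): standardized differences
    of the parameter estimates, with t_ii = 0. b_hat and cov refer to
    the free parameters b_1, ..., b_{m-1}; b_m = 0 is the constraint,
    so its variance and covariances are zero."""
    b = list(b_hat) + [0.0]
    m = len(b)

    def c(i, j):
        return cov[i][j] if i < m - 1 and j < m - 1 else 0.0

    T = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if i != j:
                se = math.sqrt(c(i, i) + c(j, j) - 2 * c(i, j))
                T[i][j] = (b[i] - b[j]) / se if se > 0 else float("inf")
    return T

# Hypothetical regression output for m = 3 (illustrative numbers only).
b_hat = [0.9, 0.4]
cov = [[0.10, 0.05],
       [0.05, 0.10]]
T = dissimilarity_matrix(b_hat, cov)
```

The hypothesis H_0: w_1 = w_2 = w_3 would then be rejected at the 0.05 level only if max |t_ij| exceeded the simulated critical value 18.94 from Table 2.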
Laininen, P., Hämäläinen, R.P., 1999. Analyzing AHP-matrices by robust regression. In: Proceedings of the Fifth International Symposium on the Analytic Hierarchy Process (ISAHP'99), Kobe, Japan, August 12–14.
Leskinen, P., 2000. Measurement scales and scale independence in the analytic hierarchy process. Journal of Multi-criteria Decision Analysis 9, 163–174.
Lootsma, F.A., 1993. Scale sensitivity in the multiplicative AHP and SMART. Journal of Multi-criteria Decision Analysis 2, 87–110.
Miller Jr., R.G., 1981. Simultaneous Statistical Inference, second ed. Springer Verlag, New York.
Montgomery, D.C., Peck, E.A., 1992. Introduction to Linear Regression Analysis. John Wiley, New York.
Pöyhönen, M.A., Hämäläinen, R.P., Salo, A.A., 1997. An experiment on the numerical modelling of verbal ratio statements. Journal of Multi-criteria Decision Analysis 6, 1–10.
Rao, C.R., 1968. Linear Statistical Inference and Its Applications. John Wiley, New York.
Saaty, T.L., 1977. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology 15, 234–281.
Saaty, T.L., 1980. The Analytic Hierarchy Process. McGraw-Hill, New York.
Saaty, T.L., 2000. Fundamentals of Decision Making and Priority Theory with The Analytic Hierarchy Process. RWS Publications, Pittsburgh.
Saaty, T.L., Vargas, F., 1984. Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios. Mathematical Modelling 5, 309–324.
Salo, A.A., Hämäläinen, R.P., 1997. On the measurement of preferences in the analytic hierarchy process. Journal of Multi-criteria Decision Analysis 6/6, 309–319.
Vargas, L.G., 1982. Reciprocal matrices with random coefficients. Mathematical Modelling 3, 69–81.