
Annex 4D

Multi-Criteria Analysis

This annex presents a tool to allow systematic modelling of decision makers' preferences.
The annex has been prepared by ERM.

1 MULTI-CRITERIA TECHNIQUES

1.1 BACKGROUND
Multi-criteria techniques are tools developed in the field of decision theory to aid
problem-solving. They entail the systematic modelling of a decision maker's (1)
preferences to resolve, in an explicit manner, a choice between options involving a
number of, often conflicting, objectives. Through the aggregation of disparate
information onto a common index of utility or value, they aim to provide a
rational basis for classifying choices.
The field of decision theory is wide, and has received contributions from, inter
alia, the disciplines of engineering, mathematics, psychology, management
science and economics. The multi-criteria approach examines how all the
relevant aspects of a problem are assessed and traded off by decision-makers.
Essentially it is a top-down exercise, based on a decision-maker's perception of
how a decision can be decomposed into trade-offs between objectives. Elicitation
techniques may be used to reveal such outcome preferences. The multi-criteria
technique employs data on the performance of competing options against the
decision-maker's stated objectives and develops a composite utility function for
each option. Detailed source material is widely available, for example in Keeney
(1992), Keeney and Raiffa (1976), von Winterfeldt and Edwards (1986) and
Watson and Buede (1987).

1.2 INTRODUCTION
Multi-criteria techniques encompass a large family of methods of which 40 or
more different approaches are distinguishable in the literature, from the highly
sophisticated through to simple rating systems (Nijkamp, 1986; Nijkamp et al,
1990) (2).
The common rationale of these methods is to establish a broad framework for
assessing the impact of making a choice, simplifying the decision into its
constituent elements. In most cases the method requires developing a complete
set of alternative solutions to a problem (the options), assessing all relevant
performance information for criteria which judge the value or utility of the
options, and trading-off the relative significance of the criteria to resolve the
problem. Subjective and implicit decision making can thus be made objective and
transparent in a simple evaluative model (Bouyssou, 1990; Chung and Poon, 1996).

(1) The singular decision maker should be regarded as interchangeable with plural decision makers throughout this note.
(2) Many techniques have clearly defined methodologies, including multi-attribute value theory (MAVT), multi-attribute utility theory (MAUT) and the analytical hierarchy process (AHP).
The techniques provide for the inclusion of intangibles in policy analysis, allow
the consideration of both qualitative and quantitative data in the same model,
and assist the structuring and trading-off of disparate criteria which are in basic
conflict in complex decision making (for example, efficiency and equity; Miller,
1985).

1.3 MEETING OBJECTIVES
In making a choice, a decision maker should consider all the relevant costs and
benefits of the options (in the widest possible sense) to ensure they make a sound
decision that adequately addresses all concerns. The relative preference for
alternative options can be judged by quantifying their performance against a set
of relevant objectives, attributes or dimensions, which in total describe the
options' value to the decision maker (Miller, 1985). The preferred option should
be that which, on balance, comes closest to meeting the decision maker's
objectives, which may often conflict.
In practice, it is unlikely that any one option will perform best against all
objectives and can be clearly preferred; each will demonstrate different
advantages and disadvantages. Describing the balance between objectives, and
identifying the preferred option is a complex problem.

1.4 MAKING TRADE-OFFS
This complexity may be resolved by making trade-offs between objectives, which
entails determining a trade-off or substitution ratio: how much of one objective
can be surrendered in order to achieve another. In practice, this is usually done
intuitively when a choice is made, recognising, for example, that option A's
superior performance against objective X clearly outweighs option B being
preferred for objective Y. However, comparisons are rarely as simple as this.
Even where the comparison is straightforward, there are considerable advantages
in making trade-offs explicitly, within an explicit decision-aiding framework.
The process is structured and comprehensive, ensuring that all concerns are
identified and addressed. The approach should retain sufficient flexibility for the
robustness of trade-off decisions to be thoroughly explored, and it should be
sufficiently transparent to ensure that the reasons behind a particular choice are
made clear. The advantages of a structured approach are particularly apparent
where there are many alternatives and/or numerous conflicting objectives, where
such a transparent logic can aid communication, debate and the route towards
consensus.

2 A STEP-BY-STEP APPROACH

A multi-criteria framework helps tackle complex problems by breaking them
down into smaller and more manageable components, viz.:

• identification of the overall goal in making a decision, subsidiary objectives
  and the various indices or criteria against which option performance may be
  measured;
• identification of all the alternative options;
• assessment of option performance against criteria;
• valuation of performance;
• weighting of objectives or criteria;
• evaluation and ranking of options; and
• sensitivity analysis (3).
The following sections describe the steps involved in this approach.

(3) An examination of how sensitive the ranking is to variation in the assumptions underlying the framework, and specifically the values used for key parameters and weights.
2.1 IDENTIFYING AIMS OR OBJECTIVES


Requirements of the objectives list
The list of decision objectives should be comprehensive, consistent and without
overlap. This avoids double-counting and the problem of preference
dependence, where in order to establish whether option A or B is preferred with
respect to objective Y, one needs to know how they performed against objective
Z. Independence allows one to look at each criterion without reference to others
(Vinke, 1992; Bouyssou, 1990).
Nevertheless, only those objectives which affect the choice between options need
be taken further. If an objective, albeit inherently important, does not vary in
performance between options, it will have no impact on the decision. A simple
scoping study will allow such objectives, and those with other minor impacts, to
be excluded from further analysis.
Creating an objectives list
The list of objectives can be developed top-down by dividing an overall goal into
subsidiary objectives, and these, in turn, into a hierarchy of criteria through
further decomposition (Bouyssou, 1990). Alternatively a bottom-up approach
can be adopted, identifying all relevant objectives individually and aggregating
them appropriately. Both approaches deliver a decision tree representing the
relationships between objectives used in the evaluation and selection of a
preferred strategy. A decision tree avoids both the need to consider too many
criteria simultaneously, and the problem of considering very different low-level
objectives at the same time. The limit of a human decision maker's capacity to
handle criteria is regarded as between 7 and 12 (Nijkamp, 1986; Nijkamp et al,
1990; Bouyssou, 1990).
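By way of illustration, such a hierarchy can be captured in a very simple data structure. The following Python sketch shows a hypothetical goal decomposed into subsidiary objectives and measurable criteria; the labels are invented for illustration and do not come from any particular study.

```python
# A hypothetical objectives hierarchy: an overall goal decomposed into subsidiary
# objectives, each holding a list of measurable criteria (the leaves of the tree).
objectives_tree = {
    "Select preferred waste strategy": {
        "Minimise economic impact": ["Internal costs", "Transport costs"],
        "Minimise environmental impact": ["Global air pollution", "Land used"],
        "Maximise social acceptability": ["Social equity", "Disamenity"],
    }
}

def leaf_criteria(tree):
    """Collect the measurable criteria (leaves) from a nested objectives tree."""
    leaves = []
    for value in tree.values():
        if isinstance(value, dict):
            leaves.extend(leaf_criteria(value))
        else:
            leaves.extend(value)
    return leaves

print(leaf_criteria(objectives_tree))
# ['Internal costs', 'Transport costs', 'Global air pollution', 'Land used',
#  'Social equity', 'Disamenity']
```

Keeping to the 7-12 criteria noted above then becomes a question of how finely the tree is decomposed at each level.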
Where there is a group of decision makers or stakeholders, this process of
developing a decision tree is likely to require discussion and consensus, and is
best achieved through a facilitated group meeting or meetings. The objectives list
will be site-specific, reflecting the concerns and preferences of the decision maker
and any stakeholders he/she might wish to involve in the decision-making
process. It is important to emphasise that there can be no prescriptive and
generic listing of objectives. A brainstorming workshop for decision
stakeholders (4) is an effective way to develop a set of objectives.
Converting objectives to criteria
In many cases it will be necessary to convert objectives into measurable criteria to
facilitate further analysis. Where it is difficult to encapsulate an objective within
a single criterion, a simple proxy is often adequate. Although this may be
simplistic, the uncertainties which are introduced into the analysis are more than
counterbalanced by the benefits of a convenient and easily defined parameter for
which data are readily available.
Constraints
Although an option may perform poorly against one or more criteria it may still
be judged as a favoured alternative if its performance in other areas is strong
enough. It is clear, however, that there are constraints outside which no
alternative would be considered acceptable. These may be associated with any
aspect of the option, and are often, although not exclusively, related to the
objectives set above.
2.2 DEFINING ALTERNATIVES
The next step in the analysis is the identification of the set of alternative options.
Note that these should be compiled after the objectives and constraints have been
set. The option set should be comprehensive to ensure that no viable option is
omitted. However, it may be advantageous to start with a limited, but diverse,
set. This avoids analysing what may be a large number of options which are
closely related, and allows poorly performing options to be weeded out quickly.
(4) It is important that those responsible for approving a choice actively participate in the process and therefore own the
decision. It is their beliefs and values which should drive the analysis. Supporting analysts may, of course, develop background
material at each step, but in order for the framework to reflect and assist the decision-making process the choices made should be
those of the key players. The decision maker(s) may choose to include external stakeholders in this process if they wish.

In particular, the performance of options can be compared with any constraints
which have been identified, and non-compliant alternatives eliminated. Further
analysis can concentrate on variations on the theme of those which are initially
preferred, fine-tuning the favoured alternatives.
Stakeholder workshops
A stakeholder workshop, as mentioned above, can also be used to identify a set of
options. The workshop may need to be held over two or more sessions to allow
more detailed options to be developed and constraints to be checked.
2.3 PREDICTING PERFORMANCE
In order to estimate the performance of each option the decision maker may need
to consult a number of sources. These will clearly vary with the decision under
analysis, but are likely to include site-specific and/or generic data concerning
each of the decision criteria. Developing a performance matrix may involve
applying data from the literature, using bespoke and proprietary models, and
canvassing opinion from the public and from experts, for example through Delphi
panel techniques.
Wherever appropriate, measures of risk or uncertainty can be included in the
performance data, using central estimates, ranges and distributions, and can be
used in the evaluation and sensitivity analysis.
Qualitative assessment
Where the performance of an option cannot reasonably be quantified against an
objective, a qualitative assessment can still provide a valuable input into the
analysis. Such an assessment may take the form of a ranking using a Likert Scale,
a subjective evaluation of performance or even simple descriptions or numbering
of impacts. In each case a basis for distinguishing between the options for that
objective is established, and it can be included in the process rather than
overlooked. There is no need to exclude information simply because it is soft or
fuzzy, which may be the only way of describing performance against some
intangible criteria (Nijkamp, 1986).

2.4 USING THE PERFORMANCE MATRIX


The amount of data required in order to consider the performance of the
strategies against all the relevant objectives may be considerable. Nevertheless
even a simple matrix of information can be a powerful tool in supporting an
evaluation of the alternatives. It will reveal where each option performs well and
where its shortcomings lie with respect to the competition, leading to an
understanding which might not previously have been possible. The performance
information should be checked against any constraints identified earlier in the
analysis which would render an option impracticable or otherwise unacceptable.
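As a minimal sketch of this screening step, the following Python fragment checks an invented performance matrix against a single hypothetical constraint; the options, criteria, figures and threshold are all assumptions made for illustration only.

```python
# Hypothetical performance matrix: options (rows) scored against criteria (columns).
# All figures and units are invented for illustration.
performance = {
    "Option A": {"cost_gbp_m": 12.0, "co2_kt_per_yr": 40.0, "land_take_ha": 5.0},
    "Option B": {"cost_gbp_m": 9.5,  "co2_kt_per_yr": 55.0, "land_take_ha": 2.0},
    "Option C": {"cost_gbp_m": 15.0, "co2_kt_per_yr": 30.0, "land_take_ha": 12.0},
}

# A constraint identified earlier in the analysis: an upper limit on land take.
constraints = {"land_take_ha": 10.0}

def feasible(scores, constraints):
    """An option is retained only if it satisfies every constraint (upper limits here)."""
    return all(scores[criterion] <= limit for criterion, limit in constraints.items())

screened = {name: scores for name, scores in performance.items()
            if feasible(scores, constraints)}
print(sorted(screened))  # ['Option A', 'Option B'] -- Option C breaches the land-take limit
```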

Simplifying the matrix


Some degree of simplification of a matrix can be achieved by aggregating
performance information where possible. There are parallels with the valuation
phase of life cycle assessment for environmental impacts, where individual
greenhouse gases are grouped in terms of greenhouse warming potential as tonnes of
carbon equivalent, and monetary costs can also be combined. Clearly the
aggregation of financial information is also possible. Nevertheless, aggregating
criteria will obscure uncertainties over the magnitude of impacts and their spatial
and temporal expression, and introduces further uncertainties through the need
for suitable weighting factors.
2.5 VALUING PERFORMANCE
Even in a simple matrix, it is unlikely that one option will out-perform the others
against all the objectives identified by the decision maker. Making a choice will
be difficult, but can be simplified by making trade-offs between the objectives or
criteria. The decision maker must consider how much of one objective they are
prepared to surrender in order to achieve another. This can be achieved by first
establishing how valued, or desirable, are the performances of the alternative
strategies with respect to the objectives, and then weighting these derived
functions and combining them into an overall measure of performance.
To do this, performance measures should be converted to value by normalising
scores to a common range, say 0-100, where the most favoured scores 100, and
the least 0. However, the scale between best and worst performance may not
always be linear. The value associated with a unit on the objective/criteria scale
may change according to the point on the scale at which that unit lies. These
changes in preference along a scale, or value functions, are more likely to be
important where there are dramatic ranges of performance, or performance
thresholds, and can be elicited by skilled facilitators or through the use of
appropriate decision support software (5).
Where uncertainty or risk associated with performance is evaluated, the function
can take account of preference changes with respect to the probability of various
outcomes and is generally called a utility function.
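A minimal sketch of these two ideas, assuming a simple linear rescaling to 0-100 and an illustrative piecewise-linear value function to represent a performance threshold, is given below; neither is a prescribed method and the figures are invented.

```python
def linear_value(score, worst, best):
    """Rescale a raw performance score linearly to a 0-100 value, where the best
    performance maps to 100 and the worst to 0 (works whichever figure is larger)."""
    if best == worst:
        return 100.0  # the criterion does not discriminate between options
    return 100.0 * (score - worst) / (best - worst)

def piecewise_value(score, breakpoints):
    """A simple non-linear value function defined by (score, value) breakpoints and
    interpolated linearly between them, e.g. to represent a performance threshold."""
    points = sorted(breakpoints)
    if score <= points[0][0]:
        return points[0][1]
    for (x0, v0), (x1, v1) in zip(points, points[1:]):
        if score <= x1:
            return v0 + (v1 - v0) * (score - x0) / (x1 - x0)
    return points[-1][1]

# Cost criterion: 9.5 is the best (cheapest) performance observed, 15.0 the worst.
print(round(linear_value(12.0, worst=15.0, best=9.5), 1))      # 54.5
# Threshold effect: value falls steeply once emissions exceed 45 units.
print(piecewise_value(50.0, [(30, 100), (45, 60), (60, 0)]))   # 40.0
```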

(5) There are many software tools on the market developed to service the growing use of decision tools in fields as diverse as
mathematics, psychology, economics and management science. Many of these originate in the USA and can be traced through
software listings under decision support or multi-attribute analysis. Tools developed in the UK include Hiview and Equity,
developed by Enterprise LSE (Houghton Street, London, WC2A 2AE, tel: 0171 955 7128, fax: 0171 955 7427).

2.6 WEIGHTING OBJECTIVES
Introduction
In order to establish a composite measure of performance across all the objectives
selected, or a combined value or utility function, thus providing a basis for
identifying preferred options, the objectives/criteria must be weighted according
to how important each is regarded in relation to the others.
Weighting requires the decision-maker to articulate how much of one criterion they
are willing to forego in order to achieve higher levels of performance against
another criterion (Miller, 1985). The trade-off or substitution ratio so derived
allows preferences for different criteria to be expressed on the same scale (Vinke, 1992).
Weighting Techniques
A range of techniques can be used for estimating and modelling preferences,
depending on the time available, the difficulty of the task and the required
precision of the outcome (Nijkamp et al, 1990). These are described in detail in
the literature, with many authors having their own particular preferences.
Techniques for developing weights include ex post analysis, analysis of
documentation where weights are implicit, use of hypothetical priorities and
interactive methods based on interviews, questionnaires and elicitation
techniques. Some approaches have developed prescriptive weighting sets to
ensure consistent analysis with regard to a specific problem (Environment
Agency, 1999).
Interactive methods include ranking techniques, verbal statements on weights,
distribution of points, scenario formulation and pairwise comparison or swing
weighting (Nijkamp, 1986; Reid and Christensen, 1994; Keeney, 1992). These
offer the opportunity to engage decision makers further in structuring and
resolving the problem, and for exploring the consequences of their expressed
preferences.
Weights represent a particular value and preference set, and clearly they will
change with the views of the decision maker, with corresponding alteration to the
preferred outcomes. The effect may be explored with different weighting sets for
different stakeholder groups, for example, business groups, policy makers, green
groups, professional groups and academics (Macdonald, 1997; Sobral et al, 1981;
Chung and Poon, 1996) or by using a weighted sum of group-specific weights
(Miller, 1985).
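The last of these points can be illustrated with a short sketch combining group-specific weights through a weighted sum (after Miller, 1985); the stakeholder groups, their criterion weights and the influence assigned to each group are hypothetical.

```python
# Hypothetical criterion weights elicited separately from three stakeholder groups,
# together with an assumed relative influence for each group.
group_weights = {
    "business":     {"cost": 0.6, "environment": 0.2, "equity": 0.2},
    "green_groups": {"cost": 0.1, "environment": 0.7, "equity": 0.2},
    "policy":       {"cost": 0.3, "environment": 0.4, "equity": 0.3},
}
group_influence = {"business": 0.4, "green_groups": 0.3, "policy": 0.3}

def combined_weights(group_weights, group_influence):
    """Combine group-specific criterion weights via a weighted sum, then renormalise
    so that the combined weights add to one."""
    criteria = next(iter(group_weights.values())).keys()
    combined = {c: sum(group_influence[g] * w[c] for g, w in group_weights.items())
                for c in criteria}
    total = sum(combined.values())
    return {c: value / total for c, value in combined.items()}

print(combined_weights(group_weights, group_influence))
# approximately {'cost': 0.36, 'environment': 0.41, 'equity': 0.23}
```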
Decision Conferencing
Where an interactive method is used to determine weights with a group of
decision makers or stakeholders, the process is best conducted in a facilitated
decision conference where the decision makers have been prepared with suitable
background material on the evaluation of performance and associated
uncertainties. This material should place the performance of each option firmly
within an appropriate context to assist the statement of preferences. The meeting
should discuss, and where necessary, revise, the framework for the analysis and
the background information supplied, and should debate preferences for the
ranges of performance (6) offered by the alternatives against all criteria, establishing
a consensus on the weighting factors to be used.
2.7 RESULTS
Once an appropriate set of weights has been derived these can be applied to the
normalised performance scores and preferred options identified. The results of
the analysis can be surprising, and should not be taken as inviolate; the approach
is flexible and open-ended, not deterministic. A thorough examination of the
sensitivity of the overall conclusion to the assumptions made in the analysis, to
uncertainties, and to weighting factors stemming from plurality of opinion is
necessary to explore the decision envelope around the preferred options, and
examine the robustness of the indications (Stirling, 1996).
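The sketch below illustrates the aggregation and a simple one-at-a-time sensitivity test on the weights; the normalised values and weights are hypothetical and carry over from the illustrative examples above.

```python
# Hypothetical normalised (0-100) values for each option against each criterion,
# and a single set of criterion weights summing to one.
values = {
    "Option A": {"cost": 55, "environment": 60, "equity": 80},
    "Option B": {"cost": 100, "environment": 20, "equity": 65},
    "Option C": {"cost": 0, "environment": 100, "equity": 40},
}
weights = {"cost": 0.36, "environment": 0.41, "equity": 0.23}

def overall_score(option_values, weights):
    """Composite measure of performance: the weighted sum of normalised values."""
    return sum(weights[c] * v for c, v in option_values.items())

def rank(values, weights):
    """Rank options from best to worst by overall weighted score."""
    return sorted(values, key=lambda option: overall_score(values[option], weights),
                  reverse=True)

print(rank(values, weights))  # baseline ranking of the three options

# One-at-a-time sensitivity test: does the preferred option change when a single
# weight is perturbed and the weight set renormalised?
for criterion in weights:
    for delta in (-0.1, 0.1):
        perturbed = dict(weights)
        perturbed[criterion] = max(0.0, perturbed[criterion] + delta)
        total = sum(perturbed.values())
        perturbed = {c: w / total for c, w in perturbed.items()}
        print(criterion, delta, rank(values, perturbed)[0])
```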
The weighted, normalised data should be examined to discover the factors that
are most significant in determining the overall ranking. The decision maker(s)
should discuss whether they are happy with these, their implications and the
related assumptions, or whether they indicate an error or misunderstanding in
performance evaluation or the weighting process. In many cases iteration will be
necessary to refine the alternatives, carry out more precise modelling or debate
further the weights which should be used.

2.8 THE ROLE OF THE DECISION MAKER


Multi-criteria techniques are decision aiding tools and do not replace the role of
the decision-maker, nor their responsibility for the decision. Identifying
preferred options requires trade-offs to be made between the benefits and
disbenefits of all alternatives. Either implicitly or explicitly the responsible
decision maker must identify the relevant objectives and value the performance
of the competing options. The advantage of performing the analysis explicitly,
using a formal technique, is that the process is structured and transparent;
important factors should not be overlooked, and all the assumptions that result in
an option achieving a particular rank are open for examination and criticism.
Moreover, by indicating the key factors in the analysis, debate and resources can
be focused on these areas to clarify the identification of a preferred solution.

(6) The requirement is to weight the range of performance against objective X against the range of performance against
objective Y, rather than weight the inherent importance of the objectives out of context. Thus the group should derive a trade-off
between objectives, i.e. how much of the range of performance against objective X they are prepared to substitute for a change in
performance from best option to worst option for objective Y.

2.9 APPLICATION TO WASTE MANAGEMENT


There have been a number of studies which have applied multi-criteria decision
support techniques to waste management (Sobral et al, 1981; Maimone, 1984;
Perlack and Willis, 1985; Yhdego et al, 1992; Chung and Poon, 1996; Macdonald,
1996; Chang and Lu, 1997; Beltramo and Broglino, 1999).
Many of these studies begin with the observation that making justifiable
decisions about waste management requires consideration of a wide range of
impacts, including, for example, socio-economic, environmental, economic, land
use and resource use. These impacts are often overlooked in economic analyses
because they cannot be measured in terms of monetary units (Sobral et al, 1981).
Clearly, considering all factors and options for waste management is a planning
problem of formidable proportions (Maimone, 1985). Criteria used in a study by
Chung and Poon (1996) are shown in Table 2.1.
Generally in these cases the multi-criteria framework is used to structure the
problem by analysts rather than by the decision makers themselves. Theoretical
decision maker stances are used to explore the decision and examine the
response of the outcome to artificially extreme points of view. Often the model
adopted is very simplistic (Yhdego et al, 1992).
Nevertheless, the approach is used to consider both qualitative and quantitative
information in real decision situations (Sobral et al, 1981), and sophisticated
techniques such as fuzzy set theory are used to accommodate data uncertainty
(Chang and Lu, 1997). Pairwise comparison is the interactive technique most
frequently used to establish trade-off relationships between criteria.
Beltramo and Broglino (1999), for example, developed weights from enquiries,
interviews and discussions with five distinct interest groups.
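As an indication of how pairwise comparison judgements can be converted into weights, the sketch below uses the geometric-mean approximation familiar from AHP-style methods; the three-by-three judgement matrix is invented and this is only one of several approaches described in the literature.

```python
from math import prod

# Hypothetical pairwise comparison matrix for three criteria (cost, environment, equity):
# entry [i][j] records how many times more important criterion i is judged than criterion j.
comparisons = [
    [1.0,   1 / 2, 3.0],   # cost compared with (cost, environment, equity)
    [2.0,   1.0,   4.0],   # environment
    [1 / 3, 1 / 4, 1.0],   # equity
]

def weights_from_pairwise(matrix):
    """Approximate priority weights as the normalised geometric mean of each row
    (a common shortcut for the principal-eigenvector calculation in AHP-style methods)."""
    n = len(matrix)
    geometric_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geometric_means)
    return [g / total for g in geometric_means]

print([round(w, 2) for w in weights_from_pairwise(comparisons)])
# approximately [0.32, 0.56, 0.12]
```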

Table 2.1  Waste Management Evaluation Criteria (Chung and Poon, 1996)

Economic Impact
• Internal costs
• Transport costs
• Marketability of recovered materials and energy

Socio-political impact
• Social equity
• Ease of administration / implementation
• Compatibility with the public administration principles

Environmental impact
• Land used
• Material recovered
• Waste coverage
• Waste elimination
• Net energy recovered
• Local air pollution
• Transportation
• Global air pollution
• Potential for water pollution
• Land contamination and future restriction
• Disamenity
• Other health risk
• Noise
