VERY DRAFT NO OFFICIAL STATUS

Monitoring Humanitarian Interventions

Motives (Why Monitor?)

Accountability/Performance Monitoring

While there is room for accountability/performance monitoring in humanitarian
interventions, it is limited. Provided that an attempt at
humanitarian assistance is sincere, it is incongruous to consider it ‘unsatisfactory’. To
judge one attempt to help as inferior to another is to introduce very significant value
judgements into any analysis. In such emergencies, all assistance may be welcomed.
The fact that such assistance is constrained by varying forms and degrees of financial,
ideological or other factors must simply be accepted, as no organisation is totally free
from such constraints.

Accountability/performance monitoring is therefore never an objective exercise.
Results will vary widely, both in relation to which agency is being monitored and
which agency is doing the monitoring. Hence, monitoring for accountability purposes
in humanitarian interventions is limited to investigating:

1. whether the various attempts at assistance are sincere;
2. how well the policies and preferences of the agency being monitored align
with the policies and preferences of the agency doing the monitoring; and
3. how well the shared components of policy and preferences are put into
practice.

Methodologies for achieving such investigations will therefore necessarily be case-
specific, with each monitoring agency needing to consider the second of the above
factors in designing monitoring instruments. This paper therefore does not encompass
such methodologies.

It should also be noted that in assessing the third factor, there are no real absolute
standards to be met in complex humanitarian emergencies. While indicators such as
the SPHERE guidelines offer a very important benchmark, researchers should never
mistake their task as simply measuring whether an agency has adhered to such
guidelines. This is a meaningless assessment in most complex emergencies. The true
task of researchers is to measure how well an agency strove to meet or better the
achievement of the benchmarks, given the very real constraints faced in such
circumstances. Note that this approach in no way condones the lowering of the
desired standards; it merely recognises that the ability of agencies to meet them will
vary greatly with the circumstances of the emergency.

Continuous Improvement

No agency or individual may be held responsible for what was effectively an
unpredictable outcome. Such outcomes may be the result of unforeseen constraints, or
flow from the complex interactions of other, foreseeable factors. To attempt to judge
the ‘performance’ of an agency by the unpredictability of their operating environment
is to choose a very misleading proxy measure. An agency’s responses to
unpredictable constraints or opportunities are more relevant, but again, given the
complexity of circumstances being dealt with, will more often than not be limited to
trial and error approaches. Hence, while it is important that an agency recognise and
respond to unforeseen influences, the effectiveness of the approaches used is also not
necessarily a good proxy measure of an agency’s ‘performance’.

What is of much more value is simply to learn from the collective
experience of humanitarian interventions and to disseminate and implement the
findings for continuous improvement purposes. Such an approach is equally useful at
the overall intervention, activity or agency level. Furthermore, approaches to
monitoring each level need not be mutually exclusive. If a consistent and appropriate
methodology for information gathering is employed at the overall intervention level,
disaggregation of the data to activity or agency level is easily achievable.
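
To illustrate (purely as a sketch, with hypothetical field names rather than any
prescribed format), if each record gathered at the overall intervention level also
carries agency and activity identifiers, the same data set can later be regrouped at
whichever level is of interest:

    from collections import defaultdict

    # Hypothetical intervention-level monitoring records; the field names and
    # content are illustrative only.
    records = [
        {"agency": "Agency A", "activity": "Water supply",
         "issue": "Pump spare parts unavailable"},
        {"agency": "Agency B", "activity": "Shelter",
         "issue": "Land access disputed"},
        {"agency": "Agency A", "activity": "Shelter",
         "issue": "Tarpaulin stocks exhausted"},
    ]

    def disaggregate(records, level):
        """Regroup intervention-level records by 'agency' or 'activity'."""
        groups = defaultdict(list)
        for record in records:
            groups[record[level]].append(record)
        return dict(groups)

    by_agency = disaggregate(records, "agency")      # agency-level view
    by_activity = disaggregate(records, "activity")  # activity-level view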

As pointed out by the DAC (OECD, 1999) and others, assessments of complex
humanitarian interventions often necessarily rely on historical narratives regarding
‘what happened’. This is very understandable, given the highly contextual nature of
the available information. To try to ‘model’ humanitarian emergencies on a limited
number of fixed variables, and then to base monitoring on them, is to invite
oversimplification and meaningless results. Subjectivity and qualitative information
are a necessary component of any assessment of humanitarian interventions. The
challenge is therefore to choose an information gathering methodology which respects
the limitations of the available base data, but concurrently makes the best possible use
of them.

Methodology for Continuous Improvement Monitoring

Obvious measures such as beneficiary numbers, mortality rates, etc., are critical to
assessing the circumstances facing an intervention at any one time. However,
attempts to investigate the reasons why circumstances eventuated or were resolved
are often not so straightforward, and frequently involve considering a range of
varying opinions [1].

Given that much of the available information is unavoidably subjective, it is
obviously important to collect as many perspectives as feasible. This, of course,
should always include not just implementing agencies, but a range of beneficiaries
and the host government/faction(s). Not all opinions will be scientifically or
professionally based, so any methodology used must allow for collection of some
purely qualitative information. A key consideration whenever collecting qualitative
information is how to structure it to provide the most useful outcomes of analyses.

In the interest of brevity, this paper does not present a comparative review of different
approaches to structuring qualitative data in the context of humanitarian
interventions [2]. Rather, it presents only the most useful methodology so far identified.
In the context of humanitarian interventions, this appears to be a modified SWOT
approach. The SWOT approach tracks (over time) Strengths, Weaknesses,
Opportunities and Threats. In this regard it is very similar to the ‘Most Significant
Changes’ approach (ref).
[1] Some of which may be more scientifically defensible than others, which also needs to be taken into account.
[2] Although the author would strongly suggest that use of rating scales be avoided!

By asking respondents to provide what they consider to be the key ‘issues’ at any
point in time, codifying these into SWOT factors and also linking them to narrative
explanations, a very functional means of structuring data is achieved.
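
Purely as an illustrative sketch (the field names, categories and example content
are hypothetical, not a prescribed format), a single coded response might be held as
follows:

    from dataclasses import dataclass
    from enum import Enum

    class SwotFactor(Enum):
        STRENGTH = "strength"
        WEAKNESS = "weakness"
        OPPORTUNITY = "opportunity"
        THREAT = "threat"

    @dataclass
    class IssueRecord:
        respondent: str        # who raised the issue
        period: str            # monitoring round, e.g. "Round 2"
        factor: SwotFactor     # codified SWOT classification
        category: str          # like-category assigned during codification
        issue: str             # the issue in the respondent's own words
        narrative: str         # linked narrative explanation
        intended_action: str   # what the respondent intends to do about it
        progress: str          # progress made since last monitored

    example = IssueRecord(
        respondent="Water/sanitation provider",
        period="Round 2",
        factor=SwotFactor.WEAKNESS,
        category="Logistics",
        issue="Chlorine stocks repeatedly delayed at the border",
        narrative="Customs clearance takes two to three weeks per consignment.",
        intended_action="Pre-position a one-month buffer stock in-country.",
        progress="Buffer stock agreed with the donor; first consignment pending.",
    )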

To illustrate this, consider the information collected regarding key weaknesses. Once
collection is complete, the recorded weaknesses may be compiled and codified under
like-categories. The most commonly perceived weaknesses may then be determined. If
narrative responses linked to these ‘issues’ include ‘intended actions’ and ‘progress
made since last monitored’, an iterative time series of monitoring will allow such
problems to be tracked, along with what has been tried to fix them and what does or
does not appear to work.
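
A rough sketch of this compilation and tracking step follows. It assumes each coded
response has been reduced to a simple dictionary carrying hypothetical fields of the
kind sketched above; neither the categories nor the data are real.

    from collections import Counter

    # Hypothetical coded responses: 'factor' and 'category' come from the
    # codification step, the remaining fields from the linked narratives.
    records = [
        {"factor": "weakness", "category": "Logistics", "period": "Round 1",
         "intended_action": "Pre-position buffer stock", "progress": "Not started"},
        {"factor": "weakness", "category": "Logistics", "period": "Round 2",
         "intended_action": "Pre-position buffer stock", "progress": "Stock in place"},
        {"factor": "weakness", "category": "Staffing", "period": "Round 2",
         "intended_action": "Recruit locally", "progress": "Two engineers hired"},
    ]

    def most_common_weaknesses(records, top_n=5):
        """Tally codified weakness categories and return the most frequent."""
        return Counter(r["category"] for r in records
                       if r["factor"] == "weakness").most_common(top_n)

    def issue_history(records, category):
        """Time series for one weakness: what was tried, and what progress followed."""
        relevant = [r for r in records
                    if r["factor"] == "weakness" and r["category"] == category]
        return [(r["period"], r["intended_action"], r["progress"])
                for r in sorted(relevant, key=lambda r: r["period"])]

    print(most_common_weaknesses(records))    # [('Logistics', 2), ('Staffing', 1)]
    print(issue_history(records, "Logistics"))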

While provision of pre-codified structure can tend to lead respondents, the trade-off in
this case may be worthwhile. Hence the pre-emptive derivation of categories in
which SWOT factors may lie is a potential refinement of this approach, as is then
arranging these categories under pertinent headings. Note that, while useful, this pre-
codification should never be confused with ‘modelling’ of the factors associated with
a heading. The categories and headings chosen must be clearly recognised as being a
compilation of convenience, with a virtually infinite number of equally valid
structures possible. This recognition is important in that it will help prevent less
aware users from inferring any form of ‘formulaic’ functionality of the structure
used [3].

[3] A common trap associated with the use of rating scales.

Attachment 1 illustrates a more developed application of this methodology (please read carefully).

Outcomes of Analyses

The approach used in Attachment 1 simply attempts to capture, as they occur, the key
points which should make up any ex-post, historical narrative pertaining to a
humanitarian intervention. Collecting such information in real time is clearly
preferable to attempting to collect it on an ex-post basis, as researchers will be free
to draw their own conclusions, untainted by poor recollection and the retrospective
construction of events into perception-specific ‘stories’.

By marrying a simple ‘Issue/Action Planned/Progress made’ iterative monitoring
format and a categorised SWOT structure, qualitative data is collected in a manner
which allows its meaningful aggregation and disaggregation across a wide range of
parameters. For example, a researcher could investigate the most common
weaknesses (i.e. problems) encountered across a complex, multi-player intervention.
They might then use the header data as markers to narrow consideration to the most
common weaknesses associated with (all or one) water/sanitation providers.
Conversely, a researcher may be aware of a major problem faced by a water/sanitation
provider, and could ask the database to list the previous incidences of similar
problems in the current (or earlier) intervention(s). Given that this data collection
format tracks problems from their initial appearance until resolution, and records all
successive actions planned and progress made, a researcher could examine what
solutions seem to have worked (or not worked) in the past. Again, by using the
relevant header data as markers, researchers could narrow such searches to be highly
context specific. Because a ‘lesson’ of this type is never extracted until a contextual
need is identified, this approach avoids the corruptive generalisation and abstraction
usually associated with attempts at producing ‘lessons learned databases’. AusAID
already uses this approach on data from Activity Monitoring Briefs (AMBs) in its
Experience Learnt Feedback (ELF) tool.
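
A hedged sketch of such a query is given below. It assumes dictionary records of the
kind sketched earlier, extended with header fields such as 'sector' and
'intervention' (both hypothetical names) which act as the markers for narrowing a
search; it is not a description of the ELF tool itself.

    def similar_problems(records, category, sector=None, intervention=None):
        """List earlier incidences of a weakness, optionally narrowed by header data."""
        hits = []
        for r in records:
            if r["factor"] != "weakness" or r["category"] != category:
                continue
            if sector is not None and r.get("sector") != sector:
                continue
            if intervention is not None and r.get("intervention") != intervention:
                continue
            hits.append((r["period"], r["issue"],
                         r["intended_action"], r["progress"]))
        return hits

    # e.g. earlier 'Logistics' weaknesses among water/sanitation providers:
    # similar_problems(records, "Logistics", sector="water/sanitation")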

The inclusion of a sheet collecting simple satisfactory/unsatisfactory perceptions in
relation to each category heading allows researchers to also investigate the relative
importance of weaknesses. Note that the fact that a problem is common does not
necessarily mean that it seriously affects an intervention. It is really only if a
weakness is regularly associated with unsatisfactory outcomes that it may warrant
mitigation. While unavoidably based on respondent opinions, this approach allows
such associations to be determined in the context of the current stage of
the particular intervention being monitored. The relative importance of a problem
may vary with many factors, so any attempt to pre-emptively decide (or weight) their
relative importance should be treated with extreme caution. It is much better to let
those best informed provide their immediate perceptions and let this data indicate
relative importance. Otherwise an appropriate prioritisation of mitigative actions may
be misled by a researcher’s own assumptions.

An example of being able to draw such associations is provided by AusAID’s
existing AMB data. In 97% of cases in which the control [4] category ‘ongoing
implementers ability to meet recurrent costs’ was listed by respondents as a weakness
(in bilateral development activities), the perception of ‘Activity Sustainability’ (as a
heading) was deemed unsatisfactory. Hence, this form of analysis provides not just a
list of the most commonly recorded weaknesses, but also a prioritisation of the
currently perceived significance of each weakness.
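
A minimal sketch of that association check follows; the field names are hypothetical
and the toy data bears no relation to the AMB figures quoted above.

    def association_rate(records, weakness_category, heading):
        """Share of records listing the weakness whose heading was rated unsatisfactory."""
        with_weakness = [r for r in records
                         if weakness_category in r["weaknesses"]]
        if not with_weakness:
            return None
        unsatisfactory = sum(
            1 for r in with_weakness
            if r["heading_ratings"].get(heading) == "unsatisfactory")
        return unsatisfactory / len(with_weakness)

    # Toy data only: each record pairs the weaknesses a respondent listed with
    # their perception of each category heading.
    records = [
        {"weaknesses": ["Ability to meet recurrent costs"],
         "heading_ratings": {"Activity Sustainability": "unsatisfactory"}},
        {"weaknesses": ["Staffing"],
         "heading_ratings": {"Activity Sustainability": "satisfactory"}},
    ]

    print(association_rate(records, "Ability to meet recurrent costs",
                           "Activity Sustainability"))   # 1.0 for this toy data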

Note that the same form of analysis is equally achievable in relation to Strengths,
Opportunities and Threats.

[4] Inserted to see if the analysis would provide sensible results.
