
Draft

Self-Monitoring Agents


This note looks at some of the broad issues that arise when we evaluate the role of self-monitoring agents in international development.

1  Who are Self-Monitoring Agents (SMAs)?

1.1  There does not appear to be a consensus in the academic literature on the exact definition of the phrase "self-monitoring agent". Broadly, SMAs seem to refer to (i) organisations which assess their own effectiveness against the broad goals set by a funding body (such as the Clinton Global Initiative, for example), or (ii) specialised organisations that seek to measure the effectiveness of various actors in the field of development (primarily NGOs).

2  Why are SMAs relevant?

2.1  At a theoretical level, SMAs seem to occupy an important role in the assessment of developmental projects and organisations. With increased emphasis on the importance of participation in development, there has been a growing recognition that the monitoring and evaluation of development and other community-based initiatives should be participatory. Along with this, there has been greater interest (and concern) in monitoring and evaluation by donors, governments, NGOs and others. This is referred to in some academic literature as participatory monitoring and evaluation (PME).

2.2  This participatory form of development is driven by several factors: a move towards performance-based accountability and management by results; an increasingly challenging financial environment (especially in the last few years); a demand for demonstrated achievement; greater decentralisation, which in turn requires new methods of oversight; and the growing capacity of NGOs and community-based organisations as actors in the field of development. PME has been part of the policy-making domain of larger donor organisations since the 1980s.

2.3  In short, development organisations need to know how effective their efforts have been. But who should make these judgements, and on what basis? Varying approaches can be taken, which involve local people, development agencies and policy makers deciding together how progress should be measured, and how results should be acted upon.

3  What is PME and why is it important?

3.1  There appears to be a great deal of divergence between the ways in which organisations, field practitioners and academic researchers understand the meaning and practice of PME. Broadly speaking, however, it refers to the processes by which an organisation measures its results (or is measured, by an impartial specialist body) against an independent benchmark established by another organisation for the same sector.

4  Uses of PME

4.1  PME is usually understood as performing the following functions:

(i)  impact assessment;
(ii)  project management and planning;
(iii)  organisational strengthening or institutional learning;
(iv)  understanding stakeholder perspectives; and
(v)  public accountability.

5  The Top-down or Conventional Approach

5.1  The conventional approach to monitoring an organisation's performance under the heads set out in paragraph 4 above has focussed on quantitative measurement which strives for objectivity, and is orientated towards the needs of programme funders to make judgements about the efficacy of such an organisation. This can safely be called the Top-down Approach to PME.

5.2  This approach is geared towards enhancing cost efficiency and accountability, and is usually conducted by outsiders with a view to providing an objective evaluation. It is frequently criticised as being costly and ineffective, because it fails to actively involve project beneficiaries, making project evaluation an increasingly specialised field and activity removed from the ongoing planning and implementation of development initiatives. Focussing on quantitative information also forecloses a fuller assessment of project outcomes, processes and changes.

5.3  I would argue that many organisations, such as Givewell (http://www.givewell.org/) and Philanthropy Capital (http://www.philanthropycapital.org), adopt a top-down approach to monitoring an organisation's performance.

5.4  Givewell as an example of top-down evaluation
5.4.1  In its 2011 international aid process review,[1] Givewell states: "Our focus is on finding outstanding charities rather than completing an in-depth investigation for each organization we consider. For that reason, we rely on heuristics, or meaningful shortcuts, to distinguish between organizations and identify ones that we think will ultimately qualify for our recommendations. In general, we believe that charities should bear the burden of proof when soliciting donations from 'casual' donors - donors that do not have the time or resources to conduct significant in-depth investigations of charities on their own. We therefore only recommend charities that can make a strong case that they are significantly improving lives in a cost-effective way and can use additional donations to expand their proven program(s) (see our criteria). Charities that we do not recommend may be effective, but we have not identified a strong case for a casual donor to be confident in such charities."

5.4.2  Some of the "meaningful shortcuts" that it has evolved are:

(i)  high-quality monitoring and evaluation reports published on the charity's website. This measures the impact that a charity has had: for an education-based charity such as Camfed, for example, impact would be measured by, inter alia, attendance rates and test scores;[2]
(ii)  a focus on priority programmes as identified by Givewell, predominantly related to disease prevention and/or malnutrition;[3]
(iii)  creation of an outsized impact;
(iv)  extreme cost-effectiveness; and
(v)  promising causes.

[1]  Available at http://www.givewell.org/international/process/2011.
[2]  For a full list of criteria, please see http://givewell.org/international/technical/criteria/impact.
[3]  A full list of priority programs is available at http://www.givewell.org/international/program-reviews#Priorityprograms.

5.5  New Philanthropy Capital (NPC) follows a similar approach, publishing a "little blue book" intended to act as a guide to analysing charities, for both charities and funders. NPC is impact-driven to the extent that it even publishes a document called "Principles of Good Impact Reporting". The various documents produced by NPC are attached to the email accompanying this note.

6  The Bottom-up Approach

6.1  In response to the problems identified with conventional approaches to project monitoring and evaluation, the emphasis has shifted towards the recognition of locally relevant processes of gathering data, and more specifically towards self-monitoring by the organisations themselves. The main arguments for this approach are:

(i)  enhanced participation of beneficiaries;
(ii)  increased authenticity of findings and improved sustainability of project activities; and
(iii)  more efficient allocation of resources.

6.2  The World Bank has said, for example, in a project evaluation report on a poverty reduction project in Andhra Pradesh: "Given the limited budgets allocated to supervision activities, supervising World Bank projects has long been a serious challenge particularly in sprawling rural areas with small poor communities spread out over vast distances. Innovative ways of sharing monitoring and supervision roles and responsibilities with local partners, therefore, carry important practical implications for project managers, whose own capacity to monitor developments and identify challenges is limited by distance and time constraints."

7  How do SMAs fit into this?

7.1  Donor identification: There is, of course, a distinction between governmental or international donors and smaller donors. However, many of the independent organisations examining the impact of charities and developmental organisations appear to be targeting the latter.

7.2  Charities/developmental organisations being assessed: The organisations being assessed attempt to comply with the guidelines established by the assessing organisation.

7.2.1  Camfed is an excellent example of this. On its website it publishes an impact overview broken down into geographical regions, setting out the number of scholarships given, the number of students supported, the number of small businesses that grew out of the CAMA network, and so on.

7.2.2  Camfed's interaction with the Clinton Global Initiative (CGI) is also interesting. CGI requires that each member makes a Commitment to Action: "A Commitment to Action is a concrete plan to address a global challenge. Commitments can be small or large, global or local. A multinational corporation might pledge to reduce its packaging, saving money while reducing waste. A nonprofit might seek to expand an effective program into new geographies. No matter their size or scope, commitments help CGI members translate practical goals into meaningful and measurable results. CGI works with each member to develop an achievable plan, and members report back on the progress they make over time."
7.2.3  Each such Commitment needs to be new, specific and measurable, either qualitatively or quantitatively. Camfed's commitment states that over the next five years, "CAMFED commits to providing at least 800,000 additional years of education to girls and vulnerable boys from extremely poor families in rural areas of seven sub-Saharan African countries; develop local capacity to increase accountability in educational delivery in 4,000 communities; expand government partnerships to deliver best practice in girls' education in 10 countries; support the creation of 5,500 new businesses by providing grants or micro-loans to young women entrepreneurs and training in business and leadership skills; and measure the long-term effects of girls' education through 10 research partnerships."[4]

7.2.4  These are specific quantitative goals that Camfed can show it has achieved, and, to a large extent, an articulation by the charity of what it hopes to achieve in a specific timeframe.

8  The provocative counterpoint

8.1  The primary conflict in the way that self-monitoring agents and charity/developmental organisation monitoring agencies operate follows from the overarching theme of the seminar. Is there a way of ensuring that there is greater stakeholder feedback in the process of charity evaluation? To what extent are stakeholders, or the community which is the recipient of aid, able to articulate their interests in the targets that a developmental organisation needs to achieve to satisfy its donors? As things stand, monitoring organisations formulate their own standards (albeit with feedback), but the charities themselves have to fit into established objective boxes to demonstrate that they are effective.

[4]  Available at http://www.clintonglobalinitiative.org/commitments/commitments_search.asp?id=265639.
