Chapter 18 Monitoring and evaluation

Outputs or impact?

Photo: DFID/International Development Research Centre/Thomas Omondi

M&E manuals often speak of ‘impact’ and ‘process’ indicators. Impact indicators, which can be both quantitative and qualitative, measure changes that occur as a result of project activities. Conventional M&E methods usually focus on positive impacts. Few initiatives are without some negative impacts, although most projects are reluctant to review them. All partners in a project should be open about the importance of identifying negative impacts and groups that have been overlooked or excluded. This requires a high degree of trust between those involved in the project, which may be difficult to achieve owing to the unequal relationship between poor communities and external organisations bringing in resources.

Process indicators measure the implementation of project activities, and are usually quantitative. They often act as proxy indicators of impact for DRR interventions, especially where the hazards concerned are infrequent. Actions during a project can be used as indicators of potential effectiveness. In a community disaster preparedness project, for example, process indicators might include recruiting, training and establishing a community disaster management team, organising public meetings to identify threats and the most vulnerable households, building relevant structures and holding regular evacuation drills.

In practice, agencies are more comfortable with output indicators than with impact indicators (especially quantitative ones), and evaluations commonly produce output indicators that merely quantify the measures a project has taken (e.g. the number of volunteers trained or public education leaflets distributed). Evaluations tend to be short-term studies, usually carried out at the end of a project, when it is too soon to assess its longer-term consequences. Post-project impact assessments are rarer, and there is a shortage of genuinely long-term studies. Published case studies of well-regarded initiatives usually appear at a relatively early stage in the project or are based on short-term evidence. The exceptions tend to be drought/food security initiatives: these demonstrate that a project’s impact can be judged only over a period of several years; they also reveal the extent of rethinking and modification that takes place even in successful projects.

DRR can be difficult to evaluate because of what might be called its ‘reverse logic’: an initiative succeeds when something (the disaster, or the loss associated with it) does not happen. Nevertheless, evidence from subsequent hazard events and the response to them is a good indicator of the impact of some types of DRR intervention, such as the effectiveness of early warning and response systems, and the resilience of houses and infrastructure.

Structural/physical mitigation measures are relatively easy to assess. The quantity and quality of, for example, embankments, flood shelters, earthquake-resistant houses and soil and water conservation structures can be assessed visually, as can the extent to which alternative technologies or techniques are adopted. Judgements about the quality of such technical innovations serve as proxy indicators of their impact, i.e. their likely resilience to actual hazard events. Non-structural measures involving changes in attitudes, skills, organisation or awareness are much more challenging. Proxy indicators of impact can be identified, but they are less certain than those for physical change. For example, interviews or discussion groups can reveal how interventions have changed a community’s attitudes towards risk, but they only allow us to estimate how that community will actually behave when confronted with a disaster.

Given these challenges, the need for triangulation and cross-checking of different types of evidence is clear. This is particularly important for qualitative data, where evidence may be more subjective. Triangulation of interview or focus group data can also identify differences in partners’ aims and expectations. Good impact evaluations should be wide-ranging in their search for relevant signs of increased resilience to risk, as well as objective about the quality of the evidence collected. Case Study 18.3 (Evaluating the impact of rainwater harvesting) is an example of this. In the field, direct observation is a useful way of identifying discrepancies between what people say and what they do, although evaluators do not always have enough time to do this.

Case Study 18.3 Evaluating the impact of rainwater harvesting

In 1997 the NGO Intermediate Technology (now Practical Action) commissioned an independent evaluation of a rainwater harvesting initiative in Kenya that had begun more than ten years before. The evaluation was based on project documentation (including local partners’ monitoring records), interviews with project and partner staff, five group discussions with beneficiaries (104 people in total), individual interviews and field observation. The discussion groups and individual interviews were based on participatory rural appraisal (PRA) techniques. The evaluation covered a range of issues, including impacts on sorghum production, diets and household wealth, gender, land tenure and the environment.

Much of the evidence was qualitative. To obtain relative data on sorghum yields and constraints on sorghum production, the evaluators used ranking and proportional piling, in which individuals were asked to place stones in separate piles to indicate amounts: the share of stones in each pile indicates the relative size or importance of that item (for example, 25 of 100 stones in one pile suggests roughly a quarter). Data on crop yields was gathered from various sources, including project records, discussions with project staff and the assessments of interviewees. This was compared with data from previous project reviews and workshops.
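
For readers unfamiliar with the method, the short sketch below illustrates how proportional-piling counts translate into percentage estimates, and how results from different groups can be compared as a rough consistency check. It is an illustration only: the categories, stone counts and the pile_proportions helper are invented, not taken from the Turkana evaluation.

```python
# Illustrative sketch only: converting proportional-piling stone counts
# into percentage estimates. Categories and counts are hypothetical.

def pile_proportions(piles):
    """Convert stone counts per category into fractional shares."""
    total = sum(piles.values())
    return {category: count / total for category, count in piles.items()}

# Hypothetical responses from two discussion groups asked to rank
# constraints on sorghum production by piling stones.
group_a = {"drought": 14, "pests": 4, "seed supply": 2}
group_b = {"drought": 11, "pests": 6, "seed supply": 3}

for name, piles in (("Group A", group_a), ("Group B", group_b)):
    shares = pile_proportions(piles)
    summary = ", ".join(f"{item}: {share:.0%}" for item, share in shares.items())
    print(f"{name} -> {summary}")
# Broad agreement between groups (e.g. drought dominant in both) is a
# simple form of the triangulation described earlier in this chapter.
```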

C. Watson and B. Ndung’u, ‘Rainwater Harvesting in Turkana: An Evaluation of Impact and Sustainability’, mimeo (Nairobi: ITDG, 1997).