
Chapter 18 Monitoring and evaluation

Identifying cause–effect links

Photo: DFID/International Development Research Centre/Thomas Omondi

Analysis of the relationship between process (activity and output) indicators and outcome or impact indicators helps to understand cause–effect links, often referred to as ‘attribution’ in M&E guidance.
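
To make this kind of analysis concrete, the sketch below (in Python, using invented figures and indicator names purely for illustration) relates a process indicator to an outcome indicator across project sites. A strong association is suggestive of a cause–effect link, but on its own it cannot demonstrate attribution.

```python
# Hypothetical illustration: relating a process indicator to an outcome
# indicator across project sites. All figures are invented; a real
# analysis would control for confounding factors before claiming attribution.
from statistics import correlation  # Pearson's r, Python 3.10+

# Process indicator: number of training sessions delivered per site
training_sessions = [2, 5, 8, 3, 10, 6]
# Outcome indicator: % of households reporting safer building practices
safer_practices_pct = [18, 32, 55, 25, 61, 40]

r = correlation(training_sessions, safer_practices_pct)
print(f"Pearson r = {r:.2f}")  # association only, not proof of attribution
```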

Many factors combine to make people vulnerable and create situations of risk. No project intervention can address all of these factors, and all projects will be influenced by them. This influence must be understood in order to assess a project’s achievements. To what extent are particular changes due to the project itself, or to local actors, external agencies and other factors? It can be difficult to make a judgement here, particularly when evaluating long-term impact.

Good risk reduction work should comprise a range of activities: organisational, educational, structural and socio-economic. These activities are meant to be mutually reinforcing. For example, training in safe building techniques should be complemented by regulation of land use and the setting and enforcement of building standards, as well as by measures to address the economic and social pressures that force poor people to live in flimsy housing in hazardous locations. Where risk reduction adopts such a broad approach, with numerous interlocking elements, how can one assess the results arising from one particular type of intervention against another? It may be impossible to identify specific links between cause and effect. Consequently, how can one set priorities for intervention?

Some project evaluations or assessments have used control groups for comparative purposes, although DRR and particularly humanitarian response agencies are sometimes uneasy about studying at-risk groups that the organisation is not attempting to protect. The approach also presents methodological challenges: no two communities are exactly alike, which makes comparison difficult. The method is better at demonstrating the basic point that DRR interventions can bring benefits (by comparing communities that have been assisted against those that have not) than at identifying which types of intervention are most effective. Nevertheless, it can be useful. Some evaluations seek the views of community members not involved in projects, usually to identify reasons for non-participation. Talking to groups that have dropped out of a project can also provide valuable information about how the project was implemented.
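
As a minimal sketch of the comparative logic, assuming invented before-and-after figures for an assisted community and an unassisted control, the code below applies a simple difference-in-differences calculation. It illustrates why the approach can show that an intervention brought benefits without revealing which component of the intervention mattered most.

```python
# Hypothetical before/after comparison of assisted vs. control communities.
# Figures are invented for illustration; no two communities are identical,
# so a real evaluation would treat this as indicative, not conclusive.

# Mean flood losses (e.g. % of household assets lost), before and after
assisted = {"before": 40.0, "after": 22.0}
control = {"before": 38.0, "after": 35.0}

change_assisted = assisted["after"] - assisted["before"]  # -18.0
change_control = control["after"] - control["before"]     # -3.0

# Difference-in-differences: the change loosely attributable to the project
did = change_assisted - change_control
print(f"Estimated project effect: {did:+.1f} percentage points")
```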

Some agencies specifically investigate external influences when assessing projects: this at least puts evaluation findings into context, even if it often cannot demonstrate specific cause-and-effect links. Triangulation of different data sets and sources is also helpful in isolating particular factors affecting success or failure. In most cases the sources and types of information will vary. In particular, there will be a mixture of quantitative and qualitative information. Using different stakeholders or assessors to review the same issue can reveal similarities and differences; here it is very important to consider the views of differently vulnerable groups.
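
The cross-checking step can be sketched in code. The example below, with hypothetical sources and figures, compares the same indicator as reported by different sources and flags large divergences for qualitative follow-up, which is the essence of triangulation.

```python
# Hypothetical triangulation: the same indicator as reported by different
# sources. Large divergence signals a point needing qualitative follow-up.
reports = {
    "household survey": 62.0,      # % of households with an evacuation plan
    "field staff estimate": 70.0,
    "community focus group": 45.0,
}

mean = sum(reports.values()) / len(reports)
for source, value in reports.items():
    gap = value - mean
    flag = "  <-- investigate" if abs(gap) > 10 else ""
    print(f"{source:25s} {value:5.1f}% (gap {gap:+.1f}){flag}")
```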

The problem is reduced wherever evaluators can focus on specifics. Assessment of disaster preparedness and response measures tends to be simpler: for example, warning and evacuation procedures can be tested through practice drills as well as by events (there are examples of evaluation teams observing such drills). It is also relatively easy to isolate for analysis different elements in the preparedness-response system. Responses to early warnings have been studied on many occasions, throwing light on community attitudes and the effectiveness of early warning systems. Such knowledge has supported the development of sophisticated methods for evaluating the condition of early warning systems.
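
As one illustration of why drills lend themselves to measurement, the sketch below checks observed times for each stage of a hypothetical warning-and-evacuation drill against target times; the stage names and thresholds are invented.

```python
# Hypothetical drill evaluation: observed vs. target times (minutes) for
# each stage of a warning-and-evacuation exercise.
targets = {"warning issued": 10, "warning received": 25, "evacuation complete": 60}
observed = {"warning issued": 8, "warning received": 40, "evacuation complete": 75}

for stage, target in targets.items():
    actual = observed[stage]
    status = "OK" if actual <= target else "OVER TARGET"
    print(f"{stage:20s} target {target:3d} min, observed {actual:3d} min: {status}")
```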

Projects that have clear objectives and targets can develop a hierarchy of indicators that link process to impact and thereby make M&E more coherent. Results-based frameworks, such as logical frameworks, which are used in project design, should already provide a hierarchy, helping evaluators to form judgements at all levels (activity, output, outcome, impact). However, M&E systems also need to be sensitive to changes and impacts that are due to a project, directly or indirectly, but which are unexpected and unplanned for. This means looking beyond formal, linear planning frameworks.
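
To show how such a hierarchy might be represented, the sketch below sets out a simplified, hypothetical logical-framework chain linking indicators at activity, output, outcome and impact levels, so that judgements at higher levels can be traced back to the processes beneath them.

```python
# Hypothetical simplified logframe: a nested structure linking indicators
# across levels, so impact-level judgements can be traced back to process.
logframe = {
    "activity": ["no. of mason training sessions held"],
    "output":   ["no. of masons certified in safe building techniques"],
    "outcome":  ["% of new houses built to hazard-resistant standards"],
    "impact":   ["reduction in housing damage in subsequent events"],
}

# Walk the hierarchy from process to impact
for level in ("activity", "output", "outcome", "impact"):
    for indicator in logframe[level]:
        print(f"{level:8s} -> {indicator}")
```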

The Outcome Mapping and Most Significant Change methods move away from a focus on project results to explore how interventions contribute to change in wider, more complex and uncertain contexts. Outcome Mapping looks at changes in the behaviour, relationships and actions of the groups and individuals that a project works with (which may or may not be direct consequences of the project) and considers how the project and other factors contribute to that change process. Most Significant Change is a form of participatory M&E that works without predefined indicators, in which community members and field staff collect, discuss and analyse changes. These methods are good at capturing unforeseen changes and building up a more complete picture of change overall. (See S. Earl, F. Carden and T. Smutylo, Outcome Mapping: Building Learning and Reflection into Development Projects (Ottawa: International Development Research Centre, 2001), http://www.outcomemapping.ca/resource/om-manual; the Outcome Mapping website: http://www.outcomemapping.ca/resource; R. Davies and J. Dart, The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use, 2005, http://www.mande.co.uk/docs/MSCGuide.pdf; and the Most Significant Change web page: http://mande.co.uk/special-issues/most-significant-change-msc.)