
Chapter 18 Monitoring and evaluation

Indicators

Photo: DFID/International Development Research Centre/Thomas Omondi

Evaluators normally look for a range of indicators that will give a balanced view of a project’s achievements and contribution towards its objectives. These should be easy for communities as well as implementing organisations to understand. Indicators can be qualitative, quantitative or a mixture of the two, but in general they should be both SMART (specific, measurable, attainable, relevant and time-bound) and SPICED (subjective, participatory, interpreted, cross-checked, empowering and diverse): see Table 18.1. Remember that the indicators that are easiest to measure are not necessarily the most useful for analysis.

Table 18.1 SMART and SPICED indicators

SMART
  • Specific: Indicators should reflect those things the project intends to change, avoiding measures that are largely subject to external influences.
  • Measurable: Indicators must be defined precisely so that their measurement and interpretation are unambiguous. They should give objective data, independent of who is collecting the data. They should be comparable across groups and projects, allowing change to be compared and aggregated.
  • Attainable: Indicators should be achievable by the project and therefore sensitive to the changes the project wishes to make.
  • Relevant: It must be feasible to collect data on the chosen indicators within a reasonable time and at a reasonable cost. Indicators should be relevant to the project in question.
  • Time-bound: Indicators should describe by when a certain change is expected.

SPICED
  • Subjective: Informants have a special position or experience that gives them unique insights which may yield a very high return on the investigators’ time. In this sense, what may be seen by others as anecdotal becomes critical data because of the source’s value.
  • Participatory: Indicators should be developed together with those best placed to assess them. This means involving a project’s ultimate beneficiaries, but it can also mean involving local staff and other stakeholders.
  • Interpreted and communicable: Locally defined indicators may not mean much to other stakeholders, so they often need to be explained.
  • Cross-checked and compared: The validity of assessment needs to be cross-checked, by comparing different indicators and progress, and by using different informants, methods and researchers.
  • Empowering: The process of setting and assessing indicators should be empowering in itself and allow groups and individuals to reflect critically on their changing situation.
  • Diverse and aggregated: There should be a deliberate effort to seek out different indicators from a range of groups, especially men and women. This information needs to be recorded in such a way that these differences can be assessed over time.
C. Roche, Impact Assessment for Development Agencies: Learning to Value Change (Oxford: Oxfam/Novib, 1999), pp. 48–49.

This sounds simple on paper, but in practice it is more complicated. Questions to be asked regarding the practicality of indicators include:

  • Measurability. Is the indicator measurable? Is it sufficiently sensitive to an improvement or deterioration in conditions?
  • Ease and cost of collection. How easy is it to obtain the information required? How costly will this be? Can the community participate? Are relevant data already collected?
  • Credibility and validity. Are the indicators easy to understand, or will people argue over what they mean? Do they measure something that is important to communities as well as implementing organisations?
  • Balance. Do the selected indicators provide a comprehensive view of the key issues?
  • Potential for influencing change. Will the evidence collected be useful for communities, implementers and decision-makers?

L. Noson, ‘Hazard Mapping and Risk Assessment’, in ADPC (ed.), Proceedings, Regional Workshop on Best Practices in Disaster Mitigation, 24–26 September 2002, Bali, Indonesia (Bangkok: Asian Disaster Preparedness Center, 2002), pp. 83–84, http://www.adpc.net/audmp/rllw/default.html.

Even with this guidance in mind, it is very rare to find all the evidence one wants. Indicators are indicators: they are not necessarily final proof. In some cases it will not be possible to measure change directly, with clear and unambiguous indicators. It is often necessary to identify relative or approximate changes instead, using indirect or ‘proxy’ indicators.

Part of the process of collecting baseline information should be to identify those indicators that will be most valid for M&E. However, experience as the work progresses may highlight other issues and require changes to the project. Some indicators may have to be modified, or new ones may emerge, so it is important to remain flexible. Monitoring methods should be designed to pick up these issues so that decisions can be made. Where baseline data are lacking (which is often the case), or previously identified indicators prove difficult to assess or simply irrelevant, baselines may have to be reconstructed (e.g. from project documents, interviews with key informants and data from other organisations) or new indicators developed. In practice this happens quite often, but the process must be managed carefully to avoid confusing or misleading stakeholders; an open, participatory approach is needed, and the aim should be to achieve the highest possible level of consensus.

Evaluations usually combine qualitative and quantitative data. Both types are valuable, in different ways. Quantitative indicators are often used to assess progress towards stated targets (e.g. the number of hazard-resistant structures built or community disaster preparedness committees established). Numbers alone cannot measure quality or effectiveness, although they can be proxy indicators for this. Qualitative data are often used in DRR evaluations. Typically they are collected from stakeholders through workshops, focus groups or semi-structured interviews. They can provide good measures of achievement and impact, and reveal insights into processes and attitudes. Participatory approaches tend to produce a good deal of qualitative information. Some examples of data collection methods and their application are shown in Table 18.2.

Table 18.2 Examples of data collection methods and their application

Formal surveys of beneficiaries and other stakeholders (can also be generated by interviews and group discussions)
  • Survey of builders and occupants of hazard-resistant housing to ascertain application of skills and increased security
  • Household survey on food production, availability, consumption and marketing to identify patterns and shifts in vulnerability
Structured and semi-structured interviews with staff, partners, beneficiaries and others
  • Individual interviews building up a picture of the level of understanding of the project, agency–community working relationships, effectiveness of coordination mechanisms and outcomes of DRR interventions
Group discussions, especially with beneficiary communities (e.g. participatory workshops, focus groups)
  • Beneficiary workshop to identify and assess benefits of particular DRR interventions and unforeseen impacts
  • Expert workshop to assess potential effectiveness of new DRR methods or approaches
  • Feedback workshop with beneficiaries and other stakeholders to test/confirm evaluation findings
Rapid assessments
  • Post-disaster telephone or field survey to indicate effectiveness of warning and response mechanisms and factors affecting them
Direct observation and visual surveys
  • Visual surveying of structural mitigation measures to determine quality of design and workmanship, take-up of technologies or techniques (disaster resilience inferred from this or assessed through post-disaster surveys)
  • Observation of coping strategies and other risk-reducing behaviour – before, during and after disasters
Case studies
  • Personal or group accounts of use of skills, materials and organisational capacity acquired from disaster management training courses, during subsequent events
Simulations
  • Group simulation or exercises (table-top or field) of disaster management activities or responses to disaster events, to test plans, skills, equipment, etc.
Documentary evidence
  • Content analysis of educational material on risk reduction and management produced by project
  • Quantitative and qualitative data about project delivery, effectiveness, impact and costs, from project documentation
  • Secondary data collection to complement or validate information collected by the evaluators in the field
C. Benson and J. Twigg with T. Rossetto, Tools for Mainstreaming Disaster Risk Reduction: Guidance Notes for Development Organisations (Geneva: ProVention Consortium, 2007), http://www.preventionweb.net/files/1066_toolsformainstreamingDRR.pdf.