Evaluators normally look for a range of indicators that give a balanced view of a project’s achievements and its contribution towards its objectives. These should be easy for communities as well as implementing organisations to understand. Indicators can be qualitative, quantitative or a mixture of the two, but in general they should be both SMART (specific, measurable, attainable, relevant and time-bound) and SPICED (subjective, participatory, interpreted, cross-checked, empowering and diverse): see Table 18.1. Remember that the indicators that are easiest to measure are not necessarily the most useful for analysis.
| SMART | SPICED |
|---|---|
| Specific: Indicators should reflect those things the project intends to change, avoiding measures that are largely subject to external influences. | Subjective: Informants have a special position or experience that gives them unique insights which may yield a very high return on the investigators’ time. In this sense, what may be seen by others as anecdotal becomes critical data because of the source’s value. |
| Measurable: Indicators must be defined precisely so that their measurement and interpretation are unambiguous. They should give objective data, independent of who is collecting the data. They should be comparable across groups and projects, allowing change to be compared and aggregated. | Participatory: Indicators should be developed together with those best placed to assess them. This means involving a project’s ultimate beneficiaries, but it can also mean involving local staff and other stakeholders. |
| Attainable: Indicators should be achievable by the project and therefore sensitive to the changes the project wishes to make. | Interpreted and communicable: Locally defined indicators may not mean much to other stakeholders, so they often need to be explained. |
| Relevant: It must be feasible to collect data on the chosen indicators within a reasonable time and at a reasonable cost. Indicators should be relevant to the project in question. | Cross-checked and compared: The validity of assessment needs to be cross-checked, by comparing different indicators and progress, and by using different informants, methods and researchers. |
| Time-bound: Indicators should describe by when a certain change is expected. | Empowering: The process of setting and assessing indicators should be empowering in itself and allow groups and individuals to reflect critically on their changing situation. |
| | Diverse and aggregated: There should be a deliberate effort to seek out different indicators from a range of groups, especially men and women. This information needs to be recorded in such a way that these differences can be assessed over time. |
C. Roche, Impact Assessment for Development Agencies: Learning to Value Change (Oxford: Oxfam/Novib, 1999), pp. 48–49.
This sounds simple on paper, but in practice it is more complicated. Questions to be asked regarding the practicality of indicators include:
Even with this guidance in mind, it is very rare to find all the evidence one wants. Indicators are only indicators: they are not necessarily final proof. In some cases it will not be possible to measure change directly through clear and unambiguous indicators; instead, it is often necessary to identify relative or approximate changes, using indirect or ‘proxy’ indicators.
Part of the process of collecting baseline information should be to identify those indicators that will be most valid for M&E. However, experience as the work progresses may highlight other issues and require changes to the project. Some indicators may have to be modified and new ones may emerge, which makes it important to be flexible. Monitoring methods should be designed to pick up these issues so that decisions can be made. Where baseline data are lacking (which is often the case), or previously identified indicators prove difficult to assess or simply irrelevant, the baselines may have to be reconstructed (e.g. from project documents, interviews with key informants and data from other organisations) or new indicators developed. In practice this happens quite often, but the process must be managed carefully to avoid confusing or misleading stakeholders; an open, participatory approach is needed, and the aim should be to achieve the highest possible level of consensus.
Evaluations usually combine qualitative and quantitative data; both types are valuable in different ways. Quantitative indicators are often used to assess progress towards stated targets (e.g. the number of hazard-resistant structures built or community disaster preparedness committees established). Numbers alone cannot measure quality or effectiveness, although they can serve as proxy indicators for them. Qualitative data are often used in DRR evaluations, typically collected from stakeholders through workshops, focus groups or semi-structured interviews. They can provide good measures of achievement and impact, and reveal insights into processes and attitudes. Participatory approaches tend to produce a good deal of qualitative information. Some examples of data collection methods and their application are shown in Table 18.2.
| Method | Examples of application |
|---|---|
| Formal surveys of beneficiaries and other stakeholders (can also be generated by interviews and group discussions) | |
| Structured and semi-structured interviews with staff, partners, beneficiaries and others | |
| Group discussions, especially with beneficiary communities (e.g. participatory workshops, focus groups) | |
| Rapid assessments | |
| Direct observation and visual surveys | |
| Case studies | |
| Simulations | |
| Documentary evidence | |
C. Benson and J. Twigg with T. Rossetto, Tools for Mainstreaming Disaster Risk Reduction: Guidance Notes for Development Organisations (Geneva: ProVention Consortium, 2007), http://www.preventionweb.net/files/1066_toolsformainstreamingDRR.pdf.