
Chapter 18.1 Monitoring and evaluation

Introduction

Photo: DFID/International Development Research Centre/Thomas Omondi

Monitoring and evaluation (M&E) are important because they:

  1. Make operational agencies more accountable to those they seek to help, as well as those who support them.
  2. Demonstrate to donors, policymakers and practitioners that risk reduction works, thereby making a case for greater effort in this area.
  3. Improve understanding of how disaster risk reduction (DRR) works in practice – including identifying problems and mistakes.

This chapter contains a short account of approaches to M&E of DRR projects and programmes, focusing on evaluation. Project monitoring in general is covered in standard manuals and should be part of all agencies’ systems and training. Evaluation is one of the principal methods by which agencies seek to learn lessons and incorporate them into their work to improve future policy and programming. It also provides accountability to partners, beneficiaries and donors.

The range of M&E approaches and methods in development and relief has grown considerably over the years, as has the level of interest in the subject. This has partly been driven by criticism and donor pressure, but also by the desire to demonstrate success and improve performance. A growing body of work is providing agencies with better-informed guidance on M&E methods for development, DRR and emergencies. This is supported by initiatives such as the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) (www.alnap.org) and the electronic MandE information forum for development workers (www.mande.co.uk).

Assessment of a project or programme can focus on several different aspects:

  • Inputs. These are the human, financial and technical resources deployed. Their effectiveness, cost-effectiveness and appropriateness can be assessed.
  • Activities and processes. This covers the performance of tasks and factors affecting this.
  • Outputs. These are the immediate results the project achieves (sometimes called ‘deliverables’).
  • Impact (or outcomes). These are significant or lasting changes brought about by a specific action or series of actions (C. Roche, Impact Assessment for Development Agencies: Learning to Value Change (Oxford: Oxfam/Novib, 1999)).

Against these aspects, the main distinctions between monitoring and evaluation can be drawn:

  • Monitoring usually addresses inputs, activities and outputs. Most monitoring systems are designed to meet the ongoing information needs of project managers and provide information for progress reports to donors. Evaluations focus on outputs and especially impact, and are intended for a wider audience within and outside the organisation.
  • Monitoring is mainly descriptive. Evaluation is more analytical. Impact assessment is mainly analytical and concerned with longer-term outcomes.
  • Monitoring should be regular and frequent throughout the project. Evaluation is infrequent and can take place at any point in the project cycle (and after the project has ended).

Other terms used in this context are:

  • Review. Reviews fall somewhere between monitoring and evaluation. They supplement regular monitoring, taking place less frequently and providing an opportunity to identify key issues in programming. They usually form part of internal management systems, but reviews involving external stakeholders are not uncommon.
  • Audit. Audits assess project and programme compliance with established regulations, procedures or mandates.