M&E is of little value unless it leads to improvements in agencies’ work to reduce risk. M&E reports are potentially very useful documents. They enable practical lessons to be learned and applied within and across programmes and regions. They feed into strategic planning by providing a basis for discussion about better practice and policy change. They also contribute to institutional memory, which is important in organisations that suffer from rapid staff turnover. Good-quality presentation is essential here: no matter how strong the evidence and analysis they contain, reports will not inform or influence practice if they are poorly written and presented.
Evaluation should be embedded within an organisation’s systems and regular practice to ensure that learning takes place. In reality, many agencies are poor at absorbing the lessons from evaluations, with the result that the same problems recur. Too often, the review or evaluation report is filed away to be acted upon later, but then forgotten amidst competing demands. Many organisations have poor information storage and retrieval systems, making it very difficult to find documents, and feedback mechanisms are weak. Few staff have sufficient time to reflect on the lessons from individual projects, and fewer still are able to consider what can be learnt across several projects and countries. Overwork, which is common among staff in DRR agencies, prevents clear thinking and innovation. Knowledge management and learning systems need to be given higher priority and more resources in most organisations. Plans for sharing and using results and findings, in the field and across the organisation, should be built into the evaluation process from the start, and should be based on consultations with potential users of the evaluations.
Transparency in M&E is a key element in making operational agencies more accountable. Evaluation processes should be as open as possible, and their results should be made widely available, particularly to project stakeholders (who should also be consulted for clarification and confirmation before reports are finalised). However, there is still much to be done here. The widespread failure to share and publish DRR evaluations means that practitioners cannot learn from each other and so frequently reinvent the wheel. It also runs counter to the principle of accountability that agencies claim to follow. There is a particular reluctance to document mistakes and share their lessons. In some cases, joint reviews by agencies could be carried out to encourage mutual learning, knowledge sharing and transparency. Participatory M&E creates a sense of ‘ownership’ of the final product among stakeholders, which greatly increases the likelihood that lessons will be noted and acted upon.