In nearly all humanitarian aid programmes, the tasks of ongoing monitoring, programme evaluation and learning are vital to success. Monitoring and evaluation activities are particularly important in complex urban settings, where initial programme design is likely to require modifications and adaptations in response to a rapidly changing operating environment.
The subject of monitoring and evaluation in humanitarian action is vast, and a wealth of tools and approaches is available (several are given in this section). This section,+This section benefited in particular from inputs from Paul Knox Clarke, Amelie Sundberg, Neil Dillon and Leah Campbell of the ALNAP Secretariat. like others, does not intend to review the entirety of monitoring and evaluation, but aims instead to highlight key principles, challenges and opportunities relating to urban areas. The section introduces some of the challenges in urban monitoring and evaluation, identifies emerging lessons and approaches, and discusses remote monitoring in conflict situations. As with some other sections of this GPR, this is an area of emerging rather than established good practice.
Monitoring and evaluation ties in closely with a number of other sections of this GPR, particularly those within the project management cycle, such as assessments (Section 3.6) and design and management (Section 3.9).
The complexities of designing and enacting urban programmes are discussed throughout this GPR. The process of monitoring and evaluating programme activities and outcomes needs to deal with this complexity and the ways in which humanitarians respond to it (including by adapting to rapid changes in the environment, engaging with a multitude of actors and in some instances undertaking multi-sectoral programming).
Two reports from ALNAP+See A. T. Warner, What is Monitoring in Humanitarian Action? Describing Practice and Identifying Challenges (London: ALNAP/ODI, 2017) (https://www.alnap.org/help-library/what-is-monitoring-in-humanitarian-action-describing-practice-and-identifying); and ALNAP, Evaluation of Humanitarian Action Guide (London: ALNAP/ODI, 2016) (www.alnap.org/system/files/content/resource/files/main/alnap-evaluation-humanitarian-action-2016.pdf). describe some general challenges to monitoring and evaluation in humanitarian interventions, which are particularly relevant for urban emergencies:
Several other challenges to evaluation are intensified in urban environments:
Lessons and approaches to urban monitoring and evaluation include:
Build monitoring and evaluation into the programme from the beginning
In urban programmes, it is especially important to include monitoring and evaluation as part of the original programme design. Urban programmes tend to be particularly ‘information heavy’, as a result of the number and diversity of people and elements involved, and the need to capture changes in the context. For programmes to be successful, resources for information collection and – particularly – analysis need to be made available at the planning stage. Many urban programmes are also fairly small compared to the level of needs across a city. To be effective, they often rely on scaling up. This depends on good information on what worked, what didn’t and why (see Section 3.2 on area-based approaches for a further discussion of scaling up). Several of the programme design approaches outlined in Section 3.9 combine iterative programming with ongoing data collection and analysis. These approaches cannot be used successfully without a clear monitoring and evaluation plan.
ALNAP’s Evaluation of Humanitarian Action guide+ALNAP, Evaluation of Humanitarian Action Guide. outlines key issues that should be considered, from the outset, in the design of evaluation systems. These are also broadly relevant for thinking about the design of monitoring systems. They include:
Focus on specific information needs
Complex, diverse and rapidly changing urban environments create specific information needs. Monitoring and evaluation systems need to ensure that these needs are met, and that the specificities of urban response are reflected in the design of monitoring and evaluation systems.
Ideally, monitoring should consider ‘need to know versus want to know’ – in such a dense environment, where monitoring has to be timely and relevant with scarce resources, untangling information needs is critical. Monitoring needs to consider the context – are there changes in the situation? Have these led – or are they likely to lead – to changes in need? Monitoring also needs to consider the outcomes of activities – what effects (both intended and, where these can be identified, unintended) are activities having? What does this mean for the programme?
Evaluations need to focus on a number of issues:
For further information, see:
I. Christoplos and N. Dillon with F. Bonino, Evaluation of Protection in Humanitarian Action (London: ALNAP/ODI, 2018) (www.alnap.org/system/files/content/resource/files/main/EPHA%20Guide%20online%20interactive.pdf).
S. Jabeen, ‘Unintended Outcomes Evaluation Approach: A Plausible Way to Evaluate Unintended Outcomes of Social Development Programmes’, Evaluation and Program Planning 68, June 2018 (www.ncbi.nlm.nih.gov/pubmed/28965770).
J. Puri et al., What Methods May Be Used in Impact Evaluations of Humanitarian Assistance?, Working Paper 22 (New Delhi: International Initiative for Impact Evaluation (3ie), 2015) (www.alnap.org/system/files/content/resource/files/main/wp-22-humanitarian-methods-working-paper-top.pdf).
Engage in complexity and systems thinking
Overall, urban monitoring and evaluation means engaging in complexity and systems thinking (see Section 1.1, on ways of seeing the city). Guides that outline steps for undertaking evaluations in complex settings or using a systems-based approach to evaluation are:
M. Bamberger, J. L. Vaessen and E. R. Raimondo (eds), Dealing with Complexity in Development Evaluation (Thousand Oaks, CA: Sage, 2015).
M. B. Hargreaves, Evaluating System Change: A Planning Guide (Princeton, NJ: Mathematica Policy Research, 2010) (www.mathematica-mpr.com/our-publications-and-findings/publications/evaluating-system-change-a-planning-guide).
B. Williams and R. Hummelbrunner, Systems Concepts in Action: A Practitioner’s Toolkit (Stanford, CA: Stanford University Press, 2010) (www.sup.org/books/title/?id=18331).
P. J. Rogers, ‘Using Program Theory to Evaluate Complicated and Complex Aspects of Interventions’, Evaluation 14, 2008 (https://journals.sagepub.com/doi/pdf/10.1177/1356389007084674).
A USAID discussion note considers issues specific to monitoring in situations of complexity: USAID, Discussion Note: Complexity-Aware Monitoring, July 2018 (https://usaidlearninglab.org/sites/default/files/resource/files/cleared_dn_complexity-aware_monitoring.pdf).
In Barrio Mio, an area-based disaster risk reduction and response project in Guatemala City, Project Concern International (PCI) implemented a multi-sectoral programme with activities ranging from women’s savings groups to the installation of retaining walls and water tanks and building the GIS capacities of municipal actors. Initially, the project relied on a set of indicators from its funding proposal. These were separated by sector, and covered issues such as the number of shelters incorporating hazard mitigation measures and the number of people demonstrating good handwashing practices.
A case study on the project found that, while these indicators helped to demonstrate achieved deliverables, they failed to ‘capture the richness of the Barrio Mio project and what it’s been able to achieve – which is far beyond the level of ambition that these indicators suggest’. The Barrio Mio team developed a number of complementary indicators in addition to the list from the donor, though tensions remained ‘between what some describe as a “myopic” focus on the list of indicators and the overall impact the project has had’.
Source: Adapted from L. Campbell, Barrio Mio and Katye: PCI’s Neighbourhood Approach in Cities (London: ALNAP, 2019) (https://www.alnap.org/help-library/barrio-mio-and-katye-pcis-neighbourhood-approach-in-cities).
Be aware of the multiple information sources available in the city
Cities are information-rich. Local government, service providers, chambers of commerce, journalists and many others may collect the information that monitoring or evaluation systems require. It may also be possible to make use of geospatial approaches (see Section 3.4 on mapping and geospatial analysis) to identify, or triangulate, changes in context or the effects of programmes. When using secondary data, however, it is important to ensure that the data adequately represents the populations of greatest concern. Official data often ignores certain parts of cities (such as informal settlements), or is aggregated at a high level, and so effectively hides the reality of life for the poorest or most marginalised groups.
When investigating the degree to which outcomes are achieved, or the constraints to achieving them, qualitative data is also important. One literature review on urban crises advises that ‘qualitative data may be required to capture impacts and outcomes that are more difficult to quantify (e.g. impacts on local power structures and urban socio-economic realities)’.+D. Brown et al., Urban Crises and Humanitarian Responses: A Literature Review (London: UCL, 2015) (www.urban-response.org/system/files/content/resource/files/main/bartlett.pdf). Qualitative approaches are critical to explaining how and why something occurs.+M. Skovdal and F. Cornish, Qualitative Research for Development: A Guide for Practitioners (Rugby: Practical Action, 2015) (http://eprints.lse.ac.uk/64207/). A 2016 learning workshop of urban disaster risk reduction practitioners emphasised the need for monitoring and evaluation tools to recognise the complex social dynamics in urban neighbourhoods, and the use of qualitative indicators to understand the context.+J. P. Sarmiento et al. (eds), Urban Disaster Risk: Systematization of Neighborhood Practices (Miami, FL: Florida International University Extreme Events Institute, 2016).
For further information, see:
M. Skovdal and F. Cornish, Qualitative Research for Development: A Guide for Practitioners (Rugby: Practical Action, 2015) (http://eprints.lse.ac.uk/64207/).
M. Quinn Patton and M. Cochran, ‘A Guide to Using Qualitative Research Methodology’, MSF, 2012 (https://evaluation.msf.org/sites/evaluation/files/a_guide_to_using_qualitative_research_methodology.pdf).
ICRC, Acquiring and Analysing Data in Support of Evidence-based Decision Making (Geneva: ICRC, 2017) (www.icrc.org/en/publication/acquiring-and-analysing-data-support-evidence-based-decisions-guide-humanitarian-work).
M. Bamberger, J. Rugh and L. Mabry, ‘Qualitative Evaluation Approaches’ in RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints (Thousand Oaks, CA: Sage Publications, 2012).
M. Q. Patton, Qualitative Evaluation and Research Methods (Thousand Oaks, CA: Sage Publications, 2002).
Use people-centred approaches
As discussed throughout this Good Practice Review (and introduced in Section 1.1 on ways of seeing the city), people-centred approaches are key. One guide presents an approach which asks households retrospectively to describe their livelihoods before the disaster, immediately after the disaster and after humanitarian interventions; to identify changes; and to describe the contribution interventions have made to these changes. It also lays out good practice in working with affected communities. See R. Few et al., Contribution to Change: An Approach to Evaluating the Role of Intervention in Disaster Recovery (Rugby: Practical Action Publishing, 2013) (https://reliefweb.int/report/world/contribution-change-approach-evaluating-role-intervention-disaster-recovery).
Similarly, the Good Enough Guide: Impact Measurement and Accountability in Emergencies provides a number of simple and effective tools and principles for understanding the impact of humanitarian activities from the perspective of the people who are meant to benefit from them: see www.alnap.org/help-library/good-enough-guide-impact-measurement-and-accountability-in-emergencies.
The ‘most significant change’ (MSC) approach relies on collecting significant change stories coming out of a programme, and the systematic selection of the most significant stories by panels of designated stakeholders or staff. It is particularly good at identifying the more unusual or extreme effects of interventions, and for creating a shared understanding between stakeholder groups involved in a response. See R. Davies and J. Dart, The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use, 2005 (www.mande.co.uk/wp-content/uploads/2005/MSCGuide.pdf).
In addition, a number of toolkits and guidance notes are available to support participatory approaches to information collection and analysis. They include:
Consider working ‘backwards’, from outcomes to interventions
An important element of many people-centred approaches to evaluation is that they invert the ‘normal’ sequence of evaluation. Rather than starting with an intervention and working forwards to try to identify its results, they begin with the changes that people have seen and work backwards to see how these changes link to the intervention. As a result, they are generally better adapted to evaluating urban programmes, where there are often long and complicated causal chains between the humanitarian response and the effects on people’s lives, and where multiple interventions may have contributed to the final outcome. That said, they may be less well suited to meeting donor requirements to show how a single, specific intervention worked. Some of these approaches may also be too demanding for certain types of humanitarian crises, such as rapid-onset emergencies.
The Good Enough Guide, MSC and Contribution to Change discussed above all take this approach, as does the increasingly popular ‘outcome harvesting’ method, which uses a six-step process to identify positive and negative, intended and unintended outcomes, and then articulates verifiable connections between these outcomes and initiatives of interest: see www.outcomemapping.ca/download/wilsongrau_en_Outome%20Harvesting%20Brief_revised%20Nov%202013.pdf.
Traditional approaches to evaluating urban interventions can still be relevant and appropriate. The ALNAP evaluation guide presents different evaluative options, such as project evaluation, process evaluation and impact evaluation, and accompanying methods, including case studies, process reviews, outcome reviews, before-and-after comparisons, interrupted time series and comparison groups.+ALNAP, Evaluation of Humanitarian Action Guide, pp. 193–214.
Evaluation is not the only way to promote organisational learning, nor is it necessarily the most cost-effective. Formal evaluation of humanitarian action sits alongside a range of additional learning and accountability tools, from beneficiary tracking to monitoring systems and After-Action Reviews. Other learning processes to consider in humanitarian action are also presented in the ALNAP guide.
Consider using iterative approaches
As noted throughout this GPR, cities are dynamic environments, where needs often change quickly. For this reason, many approaches to monitoring and evaluation emphasise an ongoing, iterative process that relies less on establishing whether pre-defined indicators are being achieved and more on understanding what is changing, and how (and whether) the humanitarian response is achieving these changes.
Examples of this type of iterative approach include:
The Good Enough Guide and MSC approaches outlined above are both intended to be used iteratively.
It is worth noting that simply having an iterative approach is not enough: effective monitoring and evaluation of urban humanitarian action requires not only that monitoring and evaluation systems are in place, but also that they are linked to the relevant decision-making procedures and systems. Early engagement with key users (see above) can go some way to addressing this problem. For more suggestions on ensuring that evaluative (and, by extension, monitoring) information is used, see A. Hallam and F. Bonino, Using Evaluation for a Change: Insights from Humanitarian Practitioners (London: ALNAP/ODI, 2013) (www.alnap.org/system/files/content/resource/files/main/alnap-study-using-evaluation-for-a-change.pdf).
Following the 2010 earthquake in Haiti, Groupe URD+See https://www.urd.org/en/. conducted a number of iterative and real-time evaluations. Iterative evaluation aims to ‘analyse how a programme is being implemented in relation to changes in context and needs, and to ensure that the programme remains relevant and that there is effective coordination between the actors involved’. Between 2012 and 2015, Groupe URD maintained a ‘Haiti observatory’ to conduct iterative monitoring, promoting learning and good practice.
More information about the work of the Haiti observatory can be found at https://www.urd.org/en/research-page/?zone_geo=crise-central-america-caribbean;haiti
Work collaboratively with other stakeholders
Another common feature of many monitoring and evaluation approaches suited to urban environments is that they are intended to be used by multiple stakeholders, including humanitarian agencies, local government and civil society. In fact, many aim to simultaneously produce information and build shared understanding of the context and a shared commitment to response activities.
Shared or joint approaches have important advantages in urban contexts, where, in the wake of a humanitarian crisis, many organisations will be working on response activities that will influence (and hopefully support) each other. Joint monitoring and evaluation could facilitate an understanding broader than any one project, and ‘may provide better opportunities to document challenges, shortcomings, failures and successes’, as well as potentially revealing systemic issues, rather than individual cases.+Brown et al., Urban Crises and Humanitarian Responses.
It should be noted that collaboration can often be difficult, particularly where the agencies involved are competing for funding or where they have very different organisational structures and cultures. See Section 2.1 on coordination for further discussion on this. See also T. Beck and M. Buchanan-Smith, Joint Evaluations Coming of Age? The Quality and Future Scope of Joint Evaluations (London: ALNAP/ODI, 2008) (www.alnap.org/help-library/joint-evaluations-coming-of-age-the-quality-and-future-scope-of-joint-evaluations). The UN Evaluation Group’s Resource Pack on Joint Evaluations (2014) may also be helpful: www.unevaluation.org/document/detail/1620.
Monitoring in conflict situations is inevitably governed by access and security concerns. In cases where access by international humanitarian actors is restricted, remote monitoring may take place. In contexts where access is severely limited, remote monitoring, as a wider part of remote management, may be the only option. Remote management can be defined as:
A reactive stance in response to insecurity that involves some delegation of authority and decision-making responsibility to national implementers. There is commonly a moderate investment in capacity building for nationals and procedures in place that enable better communication, monitoring, and quality. Assumes that decision-making and authority will revert back to international [staff] following the restoration of security.+S. Choudhri, K. Cordes and N. Miller, Humanitarian Programming and Monitoring in Inaccessible Conflict Situations: A Literature Review, Health Cluster, 2017 (www.who.int/health-cluster/resources/publications/remote-lit-review.pdf).
A 2017 literature review of humanitarian programming and monitoring in inaccessible conflict settings+Ibid. noted that ‘remote operations require increased monitoring and reporting requirements than traditional programming due to the lack of field presence and direct oversight by international organizations, but often have fewer resources to meet these increased demands’. Challenges include ‘limited opportunities for data collection, poor quality data and inaccurate information, and lack of monitoring skills and capacity of local staff, among others’.
An important issue here concerns risk transfer to local staff and other local actors. On this point, the same literature review concluded that ‘Remote operations involve the transfer of risk from international to local actors, who are assumed to be at lower risk for targeting and therefore safer when implementing. This is often a false assumption as they face unique threats that are often not acknowledged in security assessments. Additionally, local actors are infrequently present at trainings on security, and are often left with minimal security-related equipment when expatriates evacuate’.+Ibid., p. 8.
Where access is impossible, such as in conflict situations, existing data and resources are sometimes used instead. For example, the Food and Agriculture Organization (FAO) and the World Food Programme (WFP) used government data for their 2018 monitoring report on food security in 16 conflict-affected countries.+FAO and WFP, Monitoring Food Security in Countries with Conflict Situations: A Joint FAO/WFP Update for the United Nations Security Council, January 2018, Issue 3 (www.fao.org/3/I8386EN/i8386en.pdf), p. iii.
The Joint Market Monitoring Initiative (JMMI) was established in 2017 by REACH and the Libya Cash and Markets Working Group (CWG) to monitor market dynamics in order to improve cash programming. REACH describes the methodology for data collection as follows: ‘The methodology for the JMMI is based on purposive sampling. In each assessed market, at least four prices per item need to be collected from different shops to ensure the quality and consistency of collected data.
‘Partner field teams, in coordination with the CWG, identify shops to assess based on the following criteria: 1. Shops need to be large enough to sell all or most assessed items. 2. Prices in these shops need to be good indicators of the general price levels in the assessed area. 3. Shops should be located in different areas within the assessed city or baladiya.
‘In locations where it is not possible to identify four large markets that fulfil criterion (1), smaller shops, such as grocery shops, vegetable vendors, butchers and bakeries, are added to the shop list, as long as they fit criteria (2) and (3), in order to guarantee at least four prices per item of interest. Each month, price data is collected from the same shops whenever possible to ensure comparability across months.
‘The CWG primarily targets urban areas throughout Libya, aiming to ensure coverage of markets that serve as commercial hubs for surrounding regions. Data is collected via the KoBo mobile data collection application. The CWG maintains a joint KoBo account for the JMMI. The data collection tool is published alongside the dataset every month and disseminated to the humanitarian community’.
Monitoring of a ‘minimum expenditure basket’, comprising both food and non-food items, takes place monthly (an illustrative calculation of basket costs is sketched after this box). REACH reports that ‘By following the price developments of products such as bread, beans, soap and fuel, REACH and the CWG have been able to provide humanitarian actors [with] information on the financial burdens faced by households dependent on market priced goods in their respective localities’. The information indicates variations between cities and regions, wherein ‘the assessment noted clear spatial patterns both in May and June with basket costs generally lowest in coastal port cities and highest in southern Libya’.
Source: REACH, Libya: What Does It Take to Make Ends Meet: Understanding Financial Burdens with the Aid of the Minimum Expenditure Basket (Geneva: REACH, 2018) (https://reliefweb.int/report/libya/libya-joint-market-monitoring-initiative-jmmi-1-10-october-2018).
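To make the JMMI approach more concrete, the sketch below shows one way that price observations of this kind could be aggregated into a minimum expenditure basket (MEB) cost per city. It is a minimal illustration under stated assumptions, not REACH’s actual tooling: the field names, basket items, quantities and sample prices are invented for the example, and the use of the median is simply one reasonable way to limit the influence of a single unusually priced shop. The only elements drawn from the case study are the rule that at least four prices per item are required and the idea of a monthly basket cost.

```python
# Illustrative sketch only: aggregating JMMI-style price observations into a
# minimum expenditure basket (MEB) cost per city. Field names, basket items,
# quantities and prices are hypothetical and not drawn from the real JMMI dataset.
from collections import defaultdict
from statistics import median

MIN_PRICES_PER_ITEM = 4  # JMMI collects at least four prices per item in each assessed market

# Hypothetical basket: item -> assumed monthly household quantity
BASKET = {"wheat_flour_kg": 25, "beans_kg": 5, "soap_bar": 10, "fuel_litre": 60}

def meb_cost_per_city(observations):
    """observations: list of dicts with 'city', 'item' and 'price' keys."""
    prices = defaultdict(list)
    for obs in observations:
        prices[(obs["city"], obs["item"])].append(obs["price"])

    costs = {}
    for city in {obs["city"] for obs in observations}:
        total, complete = 0.0, True
        for item, quantity in BASKET.items():
            item_prices = prices.get((city, item), [])
            if len(item_prices) < MIN_PRICES_PER_ITEM:
                complete = False  # flag cities where the four-prices-per-item criterion is not met
                continue
            # Median price limits the effect of a single outlier shop (an assumption,
            # not necessarily the aggregation method used by REACH and the CWG).
            total += median(item_prices) * quantity
        costs[city] = {"meb_cost": round(total, 2), "complete": complete}
    return costs

# Example usage with made-up prices (per unit, in local currency)
sample = (
    [{"city": "Tripoli", "item": "wheat_flour_kg", "price": p} for p in (1.2, 1.3, 1.25, 1.4)]
    + [{"city": "Tripoli", "item": "beans_kg", "price": p} for p in (4.0, 4.5, 4.2, 4.1)]
)
print(meb_cost_per_city(sample))  # Tripoli is flagged incomplete: soap and fuel prices are missing
```

A monitoring team could extend a sketch like this with checks on shop selection criteria and month-on-month comparability; any real implementation should follow the methodology and data collection tool published alongside the JMMI dataset.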
For more discussion on remote monitoring, see:
Workshop Summary: Remote Monitoring, Evaluation and Accountability in the Syria Response, ALNAP and DEC, 27 June 2014 (www.alnap.org/system/files/content/resource/files/main/alnap-dec-syria-workshop-summary-final.pdf).
E. Sagmeister and J. Steets, The Use of Third-party Monitoring in Insecure Contexts: Lessons from Afghanistan, Somalia and Syria, SAVE Resource Paper, October 2016 (https://www.gppi.net/media/SAVE__2016__The_use_of_third-party_monitoring_in_insecure_contexts.pdf).
B. Norman, Monitoring and Accountability Practices for Remotely Managed Projects Implemented in Volatile Operating Environments, Tearfund, 2012 (https://www.elrha.org/wp-content/uploads/2015/01/Remote20Monitoring20and20Accountability20Practice20_web2028229.pdf).