Transcript: Overseas Development Institute
Re-thinking Humanitarian Impact Assessment: theory and practice
OCHA Joint Review of Inter-Agency Evaluations, Geneva, 12th June 2009.
Main Aim of Study
To provide an overview of current experiences and thinking in order to develop and test a conceptual framework to be used for understanding, planning and implementing impact assessment
Main parts of the report
- Five-part conceptual framework
- Four case studies on impact assessment
- Conclusions and recommendations
Methodology
Research conducted September 2008 – April 2009:
- Literature review
- Key informant interviews
- Discussions at the 24th ALNAP biannual meeting
- Case studies
- Impact assessment survey (ALNAP full and observer members)
Five-part framework
1. Understanding and balancing stakeholder interests
2. Understanding and defining impact
3. Methodological approaches and challenges
4. Engaging local actors and affected populations
5. Capacities and incentives for improved impact assessment
1. Understanding and Balancing Stakeholder Interests
- Impact assessments are more likely to be used if they meet the interests of stakeholders
- Decisions about purpose and scope are political
- There is a difference, and a tension, between 'proving impact' (accountability) and 'improving practice' (learning)
- Allow enough time to negotiate and to ensure adequate participation
2. Understanding and defining humanitarian impact and theories of change
A widely recognised definition: "Lasting or significant changes – positive or negative, intended or not – in people's lives brought about by a given action or series of actions" (Roche, 2000)
The theory of change must be clear, realistic and understood by all stakeholders.
3. Methodological approaches and challenges
Methodological appropriateness could be considered the "gold standard" for impact evaluation (NONIE, SG2, 2008)
Key issues include:
- Indicators: moving beyond outputs
- Overcoming the attribution problem with appropriate approaches and methods
- Baselines, monitoring and data collection
- Timing and amount of time
4. Engaging local actors and affected populations throughout
- Participation by affected populations is not a key feature of impact assessments. Attempts to improve this include the Listening Project, the ECB 'Good Enough Guide' and Feinstein International Center Participatory Impact Assessments (PIA)
- Affected populations and national and local actors should be involved at all stages
- 'Learning partnerships' between donors, implementing partners, communities, national actors and other stakeholders are needed
5. Capacities and incentives
- Lack of individual and organisational capacity to do good impact assessments
- Institutional incentives can override humanitarian ones; there are too few incentives to conduct good impact assessments; and results-based approaches can create perverse incentives
- A number of cultural barriers and biases hinder good-quality humanitarian impact assessment
- Is there scope for a sector-wide initiative to strengthen capacity and address disincentives?
Case studies
1. Impact evaluation of Community-Driven Reconstruction, Northern Liberia (IRC)
2. Participatory impact assessment in pastoral communities, Niger (LWR/Tufts)
3. Impact study of FAO's emergency programme in DRC
4. Tsunami Recovery Impact Assessment and Monitoring System (TRIAMS)
Overview of Case Studies
Agency | Countries | Scope | Timing | Cost (US$)
1. IRC, Columbia, Stanford | Liberia | Programme level; 42 communities in 2 districts | Prospective; integrated into the project from the start | 200,000
2. LWR, FIC | Niger | Programme level; 10 communities | Retrospective; carried out 18 months after the project began | 5,000
3. FAO | DRC | Country-wide | Retrospective; began 1 year after the programme ended | 100,000
4. IFRC, WHO, UNDP, national governments of affected countries | Indonesia, Maldives, Sri Lanka, Thailand | Regional, four countries | Ongoing; began 1 year after recovery began | 542,000
Balancing stakeholder interests
- Accountability vs. learning: needs to be clear at the outset. Negotiate, don't avoid the debate!
- Understanding effects vs. field-testing methods
- Collective international action vs. national buy-in and ownership
Definitions of Impact
1. The net difference that IRC's work makes in people's lives
2. Those benefits and changes to people's livelihoods, as defined by the project participants, and brought about as a direct result of the project
3. Positive and/or negative changes induced (more or less directly) by FAO emergency interventions on target groups, their households, organisations, communities or on the environment in which they live
4. Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended
Methodologies – quantitative and qualitative combined
Case study 1: IRC impact evaluation of CDR, Liberia
- Qualitative methods: semi-structured interviews, behavioural goods measurement
- Quantitative methods: randomised evaluation, 'before and after' household survey
Case study 2: PIA of support to pastoralists, Niger
- Qualitative methods: semi-structured household interviews, focus group discussions; 'before and after' ranking and scoring
- Quantitative methods: (not applicable)
Case study 3: Impact study of FAO emergency and rehabilitation work in DRC
- Qualitative methods: 'before and after' recall methods, life histories, focus-group discussions at village level, qualitative survey of NGOs
- Quantitative methods: household survey with 'control' group
Case study 4: TRIAMS in Indonesia, Maldives, Sri Lanka and Thailand
- Qualitative methods: encouraged qualitative approaches, including beneficiary perception surveys, to triangulate with quantitative routine and survey data
- Quantitative methods: routine data collection, household surveys
Methodology – indicators need to be flexible and robust
- Adapting indicators to changing contexts is key (PIA example)
- 'Proxy' indicators of impact may be useful (PIA example) or inadequate (FAO example)
Attribution – strengths
IRC:
- Eliminated selection bias and some 'confounding' factors
- Perceived as more transparent by communities
- Mixture of quantitative and qualitative methods enabled triangulation of data
PIA:
- Participatory
- Can be done ex-post
- No 'control' group required
- Enables statistical analysis of qualitative data
- Low cost (approx. US$5,000)
FAO:
- Both 'before and after' and 'with and without' comparisons
- Can be done ex-post
- Mixed methods
- Rich data derived from story-telling
TRIAMS:
- Tracked recovery outputs and outcomes over time
Attribution – weaknesses
IRC:
- Not all confounding factors eliminated, e.g. 'treatment' communities more rural than urban
- Large sample size required
- Logistical challenges
- 'Before and after' survey results reflect changes in survey responses rather than changes in behaviour
- Costly (approx. US$200,000)
PIA:
- Did not eliminate confounding factors, e.g. contextual factors posed a challenge to attribution
- Qualitative data only
- Reliant on memory of the situation months or years previously
- Measures relative rather than absolute change
- Prone to selection bias
- Qualitative data less rich
FAO:
- Reliant on memory of what happened several months/years previously
- 'Control' group prone to leakage
- Selection bias
- Relatively costly (approx. US$100,000)
TRIAMS:
- No attribution
- To date, focus on monitoring performance rather than impact
Analysis and Use of data
XXXX
Capacities and incentives
XXXX
Conclusions
- Humanitarian impact assessment is not only desirable but possible
- Each case study has specific strengths and weaknesses
- Together, the five key areas form a conceptual framework which could be used as a starting point for developing and improving impact assessment in the humanitarian sector
- Work is already underway with the Save the Children Alliance: the framework is being used to shape a tsunami impact assessment and to advise OCHA's work
Recommendations (1)
The humanitarian sector should develop and institutionalise sustainable approaches to impact assessment:
- Identify appropriate stakeholder analysis tools for use in discussions of impact assessment, which help to make interests explicit and identify common ground
- Initiate a cross-agency discussion on the feasibility and desirability of a clear definition of humanitarian impacts and outcomes
- Develop relationships with academic partners and other experts in the field to design and deliver a toolkit outlining the key methods of impact assessment for use in the humanitarian sector. This could include practical examples of mixed-method approaches
- Develop a shared database of impact indicators that could potentially be used in humanitarian evaluations
- Undertake further research on the mix of impact-assessment methods most appropriate in the different emergency phases of relief, recovery and reconstruction
Recommendations (2)
The humanitarian sector should develop and institutionalise sustainable approaches to impact assessment:
- Ensure the views of affected people are centre-stage to ensure credibility
- Promote the use of livelihoods approaches as a framework for analysis
- Invest in and build long-term, national and international partnerships for impact assessment between affected populations, academics, donors, governments, civil society and the private sector
- Review existing programming and funding approaches across the sector in terms of how they currently enable or inhibit effective and timely impact assessments
- Work towards improved project and programme, organisational and sector-wide performance frameworks which explicitly define impact and embed impact orientation in all stages of the project cycle
- Consider how donors, agencies and the sector as a whole can better reward individuals and organisations for doing effective impact assessments
Pointers for the present discussion
Who are the stakeholders for inter-agency impact assessments, and what are their interests?
Do you have a sufficiently clear understanding of impact and theories of change?
What methodologies would be most appropriate given your stakeholder needs and understanding of impacts?
How will you engage affected people and national / local stakeholders?
What capacities do you have to undertake and use an impact assessment, and what incentives are in place?
3.1 Indicators: Moving Beyond Outputs
Identifying impact indicators involves value judgements about what kinds of changes are significant for whom (Roche, 2000)
3.2 The attribution problem
- Comparative vs. theory-based approaches
- Quantitative vs. qualitative methods
- A mixed approach can provide the best information
3.3 Baseline and other data
“Reports were so consistent in their criticism of agency monitoring and evaluation practices that a standard sentence could almost be inserted into all reports along the lines of: ‘It was not possible to assess the impact of this intervention because of the lack of adequate indicators, clear objectives, baseline data and monitoring.’” (ALNAP, 2003)
Key issues include:
- Weak or non-existent baselines
- Data is often unavailable or unreliable
- Data collected is mainly quantitative
- Monitoring systems focus on process and outputs
- Lack of collective and coordinated approaches to data collection
3.4 Time and Timing
- Impact assessment should be carried out when impacts are likely to be visible and measurable (this depends on the specific goals and indicators)
- Insufficient time can result in inadequate monitoring and data collection