Transcript Document

IDEV 624 – Monitoring and Evaluation
Evaluating Program
Outcomes
Elke de Buhr, PhD
Payson Center for International Development
Tulane University
Process vs. Outcome/Impact Monitoring
[Figure: process monitoring contrasted with outcome/impact monitoring and evaluation, situated within the logical framework (LFM) and the USAID Results Framework]
A Public Health Questions Approach to HIV/AIDS M&E
[Pyramid figure, read from the base upward:]
• Problem Identification: What is the problem? (Situation Analysis & Surveillance)
• What are the contributing factors? (Determinants Research)
• Understanding Potential Responses: What interventions can work (efficacy & effectiveness)? (Efficacy & Effectiveness Studies, Formative & Summative Evaluation, Research Synthesis)
• INPUTS: What interventions and resources are needed? (Needs, Resource, Response Analysis & Input Monitoring)
• ACTIVITIES: What are we doing? Are we doing it right? (Process Monitoring & Evaluation, Quality Assessments)
• OUTPUTS: Are we implementing the program as planned? (Outputs Monitoring)
• OUTCOMES: Are interventions working/making a difference? (Outcome Evaluation Studies)
• OUTCOMES & IMPACTS: Are collective efforts being implemented on a large enough scale to impact the epidemic (coverage; impact)? (Surveys & Surveillance)
The lower levels ask "Are we doing the right things?"; the middle levels, labeled Monitoring & Evaluating National Programs, ask "Are we doing them right?"; the top level, labeled Determining Collective Effectiveness, asks "Are we doing them on a large enough scale?"
(UNAIDS 2008)
Strategic Planning for M&E:
Setting Realistic Expectations
Levels of monitoring and evaluation effort, by number of projects:
• All projects: input/output monitoring
• Most projects: process evaluation
• Some projects: outcome monitoring/evaluation
• Few* projects: impact monitoring/evaluation
*Disease impact monitoring is synonymous with disease surveillance and should be part of all national-level efforts, but cannot be easily linked to specific projects
Monitoring Strategy
• Process → Activities
• Outcome/Impact → Goals and Objectives
Outcome Evaluation
Program vs. Outcome Monitoring
• Program process monitoring: The systematic and continual documentation of key aspects of program performance that assess whether the program is operating as intended or according to some appropriate standard (Process Monitoring)
• Outcome monitoring: The continual measurement of intended outcomes of the program, usually of the social conditions it is intended to improve (A Form of Impact Evaluation)
What is an Outcome?
• Outcome: The state of the target population
or the social conditions that a program is
expected to have changed
1. Outcomes are characteristics of the target
population or social condition, and not of
the program
2. Programs expect change, but this does not necessarily mean that program targets have changed
(Rossi/Lipsey/Freeman 2004)
What is your project’s
outcome?
Outcome vs. Impact
• Outcome level: Status of an outcome at some point in time
• Outcome change: Difference between
outcome levels at different points in time
• Impact/program effect: Proportion of an
outcome change that can be attributed
uniquely to a program as opposed to the
influence of some other factor
(Rossi/Lipsey/Freeman 2004)
Outcome vs. Impact (cont.)
• Outcome level and change:
– Valuable for monitoring program performance
– Limited use for determining program effects
• Impact/program effect: the value added or
net gain that would not have occurred without
the program and the only part of the outcome
for which the program can honestly take credit
– Most demanding evaluation task
– Time-consuming and expensive
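To make these definitions concrete, here is a minimal sketch in Python using invented numbers; the smoking-prevalence figures and the comparison-group change are illustrative assumptions, not data from the course.

```python
# Hypothetical outcome variable: smoking prevalence in the target population.
outcome_level_t1 = 0.40   # outcome level before the program
outcome_level_t2 = 0.30   # outcome level after the program

# Outcome change: difference between outcome levels at two points in time.
outcome_change = outcome_level_t2 - outcome_level_t1          # -0.10

# Change observed over the same period in a similar, unserved comparison group,
# standing in for the influence of factors other than the program.
comparison_change = -0.04

# Impact / program effect: the part of the outcome change attributable to the program.
program_effect = outcome_change - comparison_change           # -0.06

print(f"Outcome change: {outcome_change:+.2f}")
print(f"Program effect (net of other factors): {program_effect:+.2f}")
```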
Outcome Variable
• Outcome variable: A measurable
characteristic or condition of a program’s
target population that could be affected by the
actions of the program
– Examples: amount of smoking, body
weight, school readiness
(Rossi/Lipsey/Freeman 2004)
What are your project’s
outcome variables?
Program Impact Theory
• Useful for identifying and organizing
program outcomes
• Expresses the outcomes of social programs as part of a logic model that connects program theory to proximal (immediate) outcomes, which are expected to lead to distal (long-term) outcomes
Program Impact Theory: Examples
[Figure: examples of program impact theories] (Rossi, P. H. et al., p. 143)
Logic Model
• Visual representation of the expected
sequence of steps going from program
service to client outcome
[Figure: logic model example for a teen mother parenting program] (Rossi, P. H. et al., p. 95)
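Because the figure itself does not reproduce in this transcript, the following Python sketch encodes a logic model as an ordered chain of steps from service to outcome; the steps listed for a teen mother parenting program are illustrative assumptions, not the content of the Rossi et al. figure.

```python
# A logic model as an ordered chain of steps from program service to client outcome.
# The steps are illustrative assumptions for a hypothetical teen mother parenting program.
logic_model = [
    ("service",          "Parenting classes delivered to teen mothers"),
    ("proximal outcome", "Mothers' knowledge of infant care increases"),
    ("proximal outcome", "Mothers adopt recommended care and feeding practices"),
    ("distal outcome",   "Children's health and development outcomes improve"),
]

for step_type, description in logic_model:
    print(f"{step_type:>16}: {description}")
```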
Proximal vs. Distal Outcomes
• Proximal (immediate) outcomes:
– Usually the ones that the program has the greatest capability to affect
– Often easiest to measure and to attribute to program
• Distal (longer-term) outcomes
– Frequently the ones of the greatest political and
practical importance
– Often difficult to measure and to attribute to program
– Usually influenced by many factors outside of the program's control
What are your project’s
proximal/distal outcomes?
Measuring Program
Outcomes
• Select most important outcomes
• Take into account feasibility (e.g. distal ones
may be too difficult or expensive to measure)
• However, both proximal and distal outcomes can be the subject of an outcome evaluation
• Multidimensional outcomes often require multiple measurements (→ composite measures)
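As an illustration of combining multiple measurements into a composite measure, the sketch below standardizes three hypothetical outcome scores and averages them; the measure names and values are assumptions.

```python
from statistics import mean, pstdev

# Hypothetical scores for the same five participants on three outcome measures.
measures = {
    "knowledge_test":  [62, 75, 80, 68, 90],
    "practice_rating": [3.1, 4.0, 4.5, 3.6, 4.8],
    "wellbeing_scale": [40, 55, 60, 48, 70],
}

def standardize(values):
    # Convert raw scores to z-scores so measures on different scales can be combined.
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Composite measure: average of each participant's standardized scores.
z_scores = [standardize(v) for v in measures.values()]
composite = [round(mean(person), 2) for person in zip(*z_scores)]
print(composite)
```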
Monitoring Program Outcomes
• Outcome monitoring:
– Simplest approach to measuring program
outcomes
– Similar to process monitoring with the difference
that the regularly collected information relates to
program outcomes rather than process and
performance
• Requires indicators that are practical to
collect routinely and that are informative with
regard to program effectiveness
Monitoring Strategies
[Diagram: observation schedules over time, where O = observation and X = program intervention]
• O1 X O2 (single before-and-after measurement)
• O1 O2 O3 X O4 O5 O6 (time series with several observations before and after the intervention)
• O1 X O2 X O3 X O4 (repeated interventions with interim observations)
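The same observation-and-intervention notation can be written out as simple schedules; the Python sketch below encodes the three designs as reconstructed above and counts their observation points.

```python
# Monitoring schedules in O/X notation: O = observation, X = program intervention.
designs = {
    "single before-and-after": ["O1", "X", "O2"],
    "time series":             ["O1", "O2", "O3", "X", "O4", "O5", "O6"],
    "repeated intervention":   ["O1", "X", "O2", "X", "O3", "X", "O4"],
}

for name, schedule in designs.items():
    n_obs = sum(step.startswith("O") for step in schedule)
    print(f"{name:<24} {' '.join(schedule):<28} ({n_obs} observations)")
```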
Selecting Outcome Indicators
• Need to be as responsive as possible to
program effects
– Include only members of target population
receiving services
– Not include data on beneficiaries who dropped out of the program (→ service utilization issue)
• The best outcome indicators, short of an
impact evaluation, are:
– Variables that only the program can affect
– Variables that are central to the program’s mission
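To make the service-utilization point concrete, here is a small sketch that computes an outcome indicator only for participants who actually received services; the record layout, attendance threshold, and outcome variable are assumptions for illustration.

```python
# Hypothetical participant records: sessions attended and outcome status at follow-up.
participants = [
    {"id": 1, "sessions_attended": 8, "quit_smoking": True},
    {"id": 2, "sessions_attended": 0, "quit_smoking": False},  # dropped out before services
    {"id": 3, "sessions_attended": 6, "quit_smoking": False},
    {"id": 4, "sessions_attended": 7, "quit_smoking": True},
]

MIN_SESSIONS = 4  # assumed threshold for having "received services"

# Restrict the indicator to members of the target population who received services,
# excluding beneficiaries who dropped out of the program.
served = [p for p in participants if p["sessions_attended"] >= MIN_SESSIONS]
quit_rate = sum(p["quit_smoking"] for p in served) / len(served)
print(f"Quit rate among served participants: {quit_rate:.0%}")
```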
Selecting Outcome Indicators (cont.)
• Concerns with selecting outcome
indicators:
– “Teaching to the test”: Program staff may focus on critical outcome indicators to improve program performance on these measures, which may distort program activities
– “Corruptibility of indicators”: Monitoring data should be collected by an outside evaluator, or with careful processes in place that prevent distortion (Role of participation?)
Advantage of Outcome
Monitoring
• Useful and relatively inexpensive
information about program effects, usually
in a reasonable timeframe (compared to
impact evaluation)
→ Mainly a technique for improving program
administration, and not for assessing its
impact on the social conditions it intends to
benefit
(Rossi/Lipsey/Freeman 2004)
Limitation of Outcome Monitoring
• Requires indicators that identify change
and link that change to the program
• However, often many outside influences
on a social condition (confounding factors)
→ Isolating program effects may require the
special techniques of impact evaluation
Project Monitoring Plan
Objective 1:
Monitoring Strategy:
• What (Indicators)
• How (Methods and Tasks)
• When
• Who
• Where
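To show how the template's columns fit together, the sketch below fills in one hypothetical monitoring-plan row; the objective, indicator, and logistics are invented for illustration.

```python
# One row of a project monitoring plan following the What/How/When/Who/Where template.
# The content is a hypothetical example, not a plan from the course.
monitoring_plan = [
    {
        "objective":       "Increase caregivers' knowledge of child nutrition",
        "what_indicators": "% of caregivers scoring at least 80% on the knowledge test",
        "how_methods":     "Administer written test at the end of each training cycle",
        "when":            "Quarterly",
        "who":             "Field M&E officer",
        "where":           "All participating health centers",
    },
]

for row in monitoring_plan:
    for field, value in row.items():
        print(f"{field:>16}: {value}")
```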
Logframe
• Rows: Goal, Purpose, Outputs, Activities (Inputs), Milestones
• Columns: OVIs (objectively verifiable indicators), MOVs (means of verification), Assumptions/Risks
TAPGR: Development Project Planning
Discussion Questions
• How could the outcome of your program be
monitored?
• What are the critical outcome variables?
• What outcome monitoring strategy is feasible, taking into account the local implementation environment?
• What are the strengths of this methodology?
• What are its weaknesses?
• How would you judge the quality of the collected
data?