Transcript Slide 1

Impact Evaluation in Education
Introduction to Monitoring and Evaluation
Andrew Jenkins
23/03/14
The essence of theory of change – linking activities to
intended outcomes
[Image: "I am cutting rocks" vs. "I am building a temple"]
Theory of change
“the process through which it is expected that
inputs will be converted to expected outputs,
outcomes and impact”
DfID Further Business Case Guidance “Theory of Change”
Theory of change
Start with a RESULTS CHAIN
The results chain: tips
Activities
• We produce; we control; 100% attribution
• We are accountable for them
• Delivered annually; readily changed
Outputs
• We influence; we control; some attribution
Outcomes
• We contribute to; clients control; partial attribution
• We expect them; should occur by end of program or long term
• Less flexibility to change
Monitoring –
activities and outputs
Personal Monitoring Tools
No monitoring - blind and deaf
Monitoring and Evaluation
Monitoring
Efficiency
• Measures how productively inputs (money, time, personnel, equipment) are being used in the creation of outputs (products, results)
• An efficient organisation is one that achieves its objectives with the least expenditure of resources

Evaluation
Effectiveness
• Measures the degree to which results / objectives have been achieved
• An effective organisation is one that achieves its results and objectives
MONITORING: focused on project process (per individual project)
EVALUATION: focused on effectiveness of project process (for many projects)

Inputs (all): resources – staff, funds, facilities, supplies, training
Outputs (most): project deliverables achieved; "count" (quantify) what has been done
Outcomes (some): short and intermediate effects; long-term effects and changes
Resist temptation,
there must be a better way!
• Clear objectives
• Few key indicators
• Quick, simple methods
• Existing data sources
• Participatory methods
• Short feedback loops
• Action results!
Monitoring/Evaluation objectives
must be SMART
• Specific
• Measurable
• Achievable
• Realistic
• Timed
(see 10 Easy Mistakes, page 5)
Evaluation: who evaluates whom?
The value of a joint approach
The Logical Chain
1. Define Objectives (and Methodology)
2. Supply Inputs
3. Achieve Outputs
4. Generate Outcome
5. Identify and Measure Indicators
6. Evaluate by comparing Objectives
with Indicators
7. Redefine Objectives (and Methodology)
Impact Evaluation
• An assessment of the causal effect of a project, program or policy on beneficiaries.
• Uses a counterfactual:
Impacts = Outcomes - what would have happened anyway
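A minimal numerical sketch in Python (illustrative, made-up figures) of how the counterfactual enters the calculation:

# Illustrative (made-up) numbers: attendance rate among beneficiaries.
outcome_with_program = 0.82   # what we observe after the program
counterfactual = 0.74         # estimated attendance had the program not run

impact = outcome_with_program - counterfactual
print(round(impact, 2))       # 0.08: the causal effect, not the raw 0.82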
When to use Impact Evaluation?
Evaluate impact when the project is:
• Innovative
• Replicable / scalable
• Strategically relevant for reducing poverty
• Filling a knowledge gap
• Likely to have substantial policy impact
Also use evaluation within a program to test alternatives and improve the program.
Impact Evaluation Answers
What was the effect of the program on outcomes?
How much better off are beneficiaries because of the program or policy?
How would outcomes change if the program design were changed?
Is the program cost-effective?
Different Methods to Measure Impact
• Randomised Assignment (experimental)
Non-experimental:
• Matching
• Difference-in-Difference
• Regression Discontinuity Design
• Instrumental Variable / Random Promotion
Randomization
• The “gold standard” in evaluating the
effects of interventions
• It allows us to form "treatment" and "control" groups
– identical characteristics
– differing only in the intervention
Counterfactual: the randomized-out group
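A minimal simulation sketch in Python (hypothetical data, numpy only) of why randomization works: with random assignment the two groups are comparable on average, so a simple difference in mean outcomes estimates the impact.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                # hypothetical sample of students

# Random assignment: each student has a 50% chance of receiving the program.
treated = rng.random(n) < 0.5

# Hypothetical outcomes: baseline test score plus a true effect of 5 points if treated.
score = rng.normal(60, 10, n) + 5 * treated + rng.normal(0, 5, n)

# The randomized-out group is a valid counterfactual, so a simple
# difference in means estimates the impact.
impact_estimate = score[treated].mean() - score[~treated].mean()
print(round(impact_estimate, 2))          # close to the true effect of 5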
Matching
• Matching uses large data sets and heavy statistical
techniques to construct the best possible artificial
comparison group for a given treatment group.
• Units are selected on the basis of similarities in observed characteristics
• Assumes no selection bias based on unobservable characteristics
Counterfactual: matched comparison group
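A minimal nearest-neighbour matching sketch in Python (hypothetical data and variable names): each treated unit is paired with the untreated unit closest to it on an observed characteristic, and the matched group serves as the comparison.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one observed characteristic (e.g. baseline income)
# and an outcome, for treated and untreated households.
x_treated = rng.normal(50, 10, 200)
y_treated = 2.0 + 0.1 * x_treated + rng.normal(0, 1, 200)   # true program effect: 2.0

x_control = rng.normal(55, 12, 1000)
y_control = 0.1 * x_control + rng.normal(0, 1, 1000)        # no program effect

# For each treated unit, pick the untreated unit closest on the observed characteristic.
matches = np.array([np.argmin(np.abs(x_control - x)) for x in x_treated])

# The matched group is the counterfactual; this assumes no selection on unobservables.
impact_estimate = (y_treated - y_control[matches]).mean()
print(round(impact_estimate, 2))                            # close to the true effect of 2.0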
Difference-in-difference
• Compares the change in outcomes over time between the treatment group and the comparison group
• Controls for factors that are constant over time in both groups
• Assumes 'parallel trends' in the two groups in the absence of the program
Counterfactual: changes over time for the non-participants
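A minimal difference-in-differences sketch in Python with hypothetical before/after means; under the parallel-trends assumption, subtracting the comparison group's change removes factors that are constant over time in both groups.

# Hypothetical average outcomes (e.g. literacy rates), before and after the program.
treat_before, treat_after = 0.55, 0.70
comp_before, comp_after = 0.50, 0.58

# Change over time in each group.
treat_change = treat_after - treat_before   # 0.15
comp_change = comp_after - comp_before      # 0.08

# The comparison group's change stands in for the counterfactual trend.
did_estimate = treat_change - comp_change
print(round(did_estimate, 2))               # 0.07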
Uses of Different Design
Design: When to use
• Randomization: whenever possible; when an intervention will not be universally implemented
• Random Promotion: when an intervention is universally implemented
• Regression Discontinuity: if an intervention is assigned based on rank
• Diff-in-diff: if two groups are growing at similar rates
• Matching: when other methods are not possible; matching at baseline can be very useful
Qualitative and Quantitative
Methods
Qualitative methods focus on how results
were achieved (or not).
They can be very helpful for process
evaluation.
It is often very useful to conduct a quick
qualitative study before planning an
experimental (RCT) study.
Thank you!