
A REVIEW OF IMPACT EVALUATIONS CONDUCTED IN SA
Benita van Wyk (Williams)
Feedback Research & Analytics
[email protected]
PURPOSE OF THE PRESENTATION
This presentation shares insights from a review of a convenience sample of so-called “Impact Evaluations” commissioned by selected government departments / agencies in South Africa over the past 5 years.
• The purpose is to explore the understanding of the concept “Impact Evaluation” as it is applied in the South African context.
• This practical understanding of impact evaluation as it is implemented on the ground is contrasted with various theoretical understandings of impact evaluation.

BACKGROUND
FOCUS OF THE STUDY
Evaluations called “Impact Evaluation”, or “Evaluation” which also included an “Impact” focus
• Specifically excluded ex-ante “Social Impact Assessments” and “Environmental Impact Assessments”
• Based on document review – TORs, proposals, evaluation reports

DEFINITIONS

“Impact evaluation is intended to determine more broadly whether the program had the desired effects on individuals, households, and institutions and whether those effects are attributable to the program intervention. Impact evaluations can also explore unintended consequences, whether positive or negative, on beneficiaries” (Baker, 2000)
DEFINITIONS
As per the NONIE / DAC definition, “impact” is:
• “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended”. This definition broadens impact evaluation beyond direct effects to include the full range of impacts at all levels of the results chain.

Where do borders around Impact get drawn in reality?
BARRIERS TO IE USE
Deemed to be expensive, time-consuming, and technically complex
• Findings can be politically sensitive, particularly if they are negative
• Difficult to design an IE that ensures timeous answers, to the right questions, with sufficient analytical rigor
• Limited availability and quality of data
CONTEXT

Government
• The Government Wide Monitoring and Evaluation System (GWMES) is still focusing on the roll-out of monitoring systems
• Lack of a formal Government Wide Evaluation Policy, and no government policy on Impact Evaluation
• Sensitization to Impact Evaluation since 2006, encouraging a “thousand flowers blooming”

M&E Community
• Has not made a public statement about its position on Impact Evaluation
• The NONIE statement was disseminated with some limited discussion

Donor Community
• Interested in creating more demand for IE
• Stronger focus on “Outcome Evaluation” than “Impact Evaluation”
SAMPLE

Convenience sample drawn from:
• Government Tender Bulletins over the past 5 years
• Notice on the SAMEA listserv
• Personal appeals to key informants
• Snowball methodology – referrals from initial respondents

CHALLENGES
• Access to information – sensitive findings and careful public officials in an election year
• Knowledge management – “Which impact evaluation?”
• Availability of documents – reports, terms of reference
• Self-screening based on insufficiently clear criteria regarding what constitutes an IE – “But this is not a real impact evaluation”

FINDINGS – RESPONSES
Number of leads:
• Number of leads found: 91
• Number of studies received: 46
Types of responses:
• Incomplete studies – TORs
• Complete studies – reports, presentations, summary reports
QUESTIONS & VARIABLES
FINDINGS – THE QUESTIONS
• Descriptive questions: questions that focus on determining how many, what proportion, etc., for the purpose of describing some aspect of the evaluation context.
• Normative questions: questions that compare outcomes of an intervention against a pre-existing standard or norm.
• Analytic-interpretive questions that build our knowledge base: questions that ask about the state of the debate on issues important for decision making about specific policies.
• Attributive questions: questions that attempt to attribute outcomes directly to an intervention such as a policy change or a programme.
Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. New Directions for Evaluation, 113, 13–33.
FINDINGS – VARIABLES UNDER INVESTIGATION
Independent variables, less clarity
• Impact of uncontrolled independent variables, looking for various kinds of results
• Examples: impact of HIV/AIDS on employment; impact evaluation of ECD; socio-economic impact of gambling
Dependent and independent variables clear
• Impact of controlled independent variables, looking for various kinds of results
• Examples: Child Support Grant on nutrition; public awareness campaign on audience knowledge and attitudes

TIMING
The timing of the evaluation interacts with the questions and the variables under investigation.
[Diagram: “Start?” and “How long?” set against the four question types – descriptive, normative, analytic-interpretive, attributive]
DESIGN & METHODS
(Experimental, Quasi-Experimental, Mixed Methods, Qualitative Methods, etc.)
DESIGNS AND METHODS: EXAMPLES

Whole range of designs – mixed methods, regression, quasi-experimental
• The Impact of Unconditional Cash Transfers on Nutrition: The South African Child Support Grant – Jorge M. Aguero, Michael R. Carter, Ingrid Woolard
• Rapid impact assessment of NMTT's work in Cape Town – Impact Consulting
SA CHILD SUPPORT GRANT EVALUATION
The SA Child Support Grant
• In 1998 the Child Support Grant was implemented – a “no strings” grant paid to the “Primary Care Giver” (PCG) of a child (98% women in the evaluation)
• Payable initially for children (under 7) in households with a monthly income of <R800 (urban) or <R1100 (rural); later the income test was changed to include only the income of the PCG and his / her spouse
• The means test has not changed despite inflation of 40% between 1998 and 2004 (see the rough illustration below)
• The value of the grant was R100 in 1998 and is currently R180
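A rough worked illustration, using only the figures quoted above and treating the 40% as cumulative inflation over 1998–2004: to hold its 1998 real value, the urban threshold would have needed to rise from R800 to about R800 × 1.40 ≈ R1 120, and the rural threshold from R1100 to about R1100 × 1.40 ≈ R1 540. Left unchanged, the R800 threshold is worth only about R800 ÷ 1.40 ≈ R570 in 1998 terms, so the unchanged means test has become progressively stricter in real terms.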
SA CHILD SUPPORT GRANT EVALUATION
Evaluation challenges
• Single national program – no purposefully randomized treatment and control groups existed
• No baseline data existed
• Selection into treatment is not random, and the dosage received is not uniform (delays in enrolling), so a binary treatment variable could not be used
SA CHILD SUPPORT GRANT EVALUATION
• The evaluation focused on the impact of the CSG on the nutritional gain of children during their first 36 months, the “window of nutritional vulnerability”
• Operational definitions (illustrated in the hypothetical sketch below):
• Treatment: the outcomes produced by different “dosages” of the grant, checked using a Continuous Treatment Estimator over the 0–3 year window
• Effect: height-for-age z-score – an ex-post measure of the effect over the 0–3 year window of nutritional vulnerability (height was measured twice, and age was taken from the public health card)
• Control: a Standardized Eagerness measure was developed (did a child enrol quicker than peers in the same locality / age cohort, or not?), plus other covariates – age, education, sex, marital status and employment status
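To make these operational definitions concrete, below is a minimal, hypothetical sketch in Python – synthetic data and illustrative column names throughout, with an ordinary OLS standing in for the authors' actual continuous treatment estimator and data. (A height-for-age z-score is the child's height minus the reference-population median for that age and sex, divided by the reference-population standard deviation; here it is simply simulated.) The point of the sketch is only that the treatment enters as a continuous share of the 36-month window (“dosage”) rather than as a binary enrolled / not-enrolled indicator, with an eagerness measure and other covariates as controls.

# Illustrative continuous-treatment ("dosage") regression sketch.
# All data are synthetic and all column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "locality": rng.integers(0, 10, n),            # locality identifier
    "cohort": rng.integers(1998, 2001, n),         # birth-year cohort
    "delay_months": rng.integers(0, 24, n),        # months before CSG enrolment
    "child_age_months": rng.integers(24, 60, n),   # age when height was measured
    "pcg_age": rng.normal(32, 8, n),               # primary care giver's age
    "pcg_educ_years": rng.integers(0, 13, n),      # primary care giver's education
    "child_female": rng.integers(0, 2, n),         # child's sex
})

# Illustrative "Standardized Eagerness": how much quicker than locality / age-cohort
# peers a child was enrolled (within-group z-score of the negative enrolment delay).
df["eagerness"] = df.groupby(["locality", "cohort"])["delay_months"].transform(
    lambda s: -(s - s.mean()) / s.std())

# Illustrative "dosage": fraction of the 0-36 month window covered by the grant.
exposure_months = np.minimum(36, df["child_age_months"]) - df["delay_months"]
df["dosage"] = np.clip(exposure_months, 0, 36) / 36

# Synthetic height-for-age z-score with a built-in positive dosage effect.
df["haz"] = -1.0 + 0.6 * df["dosage"] + rng.normal(0, 1, n)

# Continuous-treatment OLS: the coefficient on "dosage" is the illustrative estimate
# of the effect of fuller CSG exposure, conditional on eagerness and other covariates.
fit = smf.ols("haz ~ dosage + eagerness + pcg_age + pcg_educ_years + child_female",
              data=df).fit()
print(fit.params["dosage"], fit.pvalues["dosage"])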
SA CHILD SUPPORT GRANT EVALUATION
Findings
• Targeted, unconditional CSG payments have bolstered early childhood nutrition, as signalled by child height-for-age
• Economically and statistically significant effects for large dosages of CSG support
• Effects are insignificant for children who received CSG support for less than 50% of the 36-month window
• The effect even holds across local differences (e.g. in the supply of health-related public goods)
• Income and nutrition appear to be closely connected – perhaps because the grant is assigned to women
RAPID IMPACT ASSESSMENT
Rapid impact assessment of the Niall Mellon Township Trust's work in Cape Town
• Housing project evaluated using a rapid appraisal methodology incorporating MSC (Most Significant Change)
• Income-earning adults: dignity; security from crime
• Grade 11s: safety from fires – school equipment; dignity
• Primary care-givers: psychological well-being; health/hygiene – self and children; dignity
• Senior citizens: psychological well-being; health/hygiene; safety and security
LEARNING
LEARNING

The kinds of learning supported by the conclusions and recommendations from impact evaluations:
• process learning
• organisational learning
• impact learning
• knowledge development and policy learning
INTENDED USE
The intended use supported by impact evaluation – we refer to use as discussed by Marra (2000), Patton (1997), Sandison (2006) and Weiss (1999)

USE – MARRA (2000)
• Instrumental: decision makers have clear goals, seek direct attainment of these goals and have access to relevant information
• Enlightenment: users base their decisions on a gradual accumulation and synthesis of information
USE – SANDISON (2006)
• Instrumental use: direct implementation of findings and recommendations
• Conceptual use: the evaluation exerts influence through new ideas and concepts
• Process use (learning): involves learning on the part of the people and management involved in the evaluation
• Legitimising use: corroborates a decision or understanding that the organisation already holds, providing an independent reference
• Ritual use: where evaluations serve a purely symbolic purpose, representing a desirable organisational quality such as accountability
• Mis-use: involves the suppressing, subverting, misrepresenting or distorting of findings for political reasons or personal advantage
• Non-use: where the evaluation is ignored because users find little or no value in the findings, are not aware of it, or the context has changed dramatically
Sandison (2006)
USE – PATTON (1997)
• Rendering judgements: underpinned by the accountability perspective (summative evaluation, accountability, audits, quality control, cost-benefit decisions, deciding a program's future, accreditation/licensing)
• Facilitating improvements: underpinned by the developmental perspective (formative evaluation, identifying strengths and weaknesses, continuous improvement, quality enhancement, being a learning organisation, managing more effectively, adapting a model locally)
• Generating knowledge: underpinned by the knowledge perspective of academic values (generalisations about effectiveness, extrapolating principles about what works, theory building, synthesising patterns across programs, scholarly publishing, policy making)
FINAL THOUGHTS
• Uptake of IE
• Definitions / discourses around IE
• Capacity for IE