Presentation by Lesli Hoey - Cornell International Institute for Food and Agriculture Development

Transcript

Logic Models: How to Develop, Link to M&E and Adapt
Lesli Hoey
PhD Candidate
Cornell Department of City and Regional Planning
Evaluating Int’l Development Projects: One-Day Skills-Building Workshop on M&E
Cornell International Institute for Food and Agriculture Development
November 5, 2011
Outline
1. How to develop a logic model
2. Using logic models to design M&E
3. M&E across program phases
4. Linear vs. complex interventions
Developing a Logic Model
Step 1: Purpose and use
Why are you developing a logic model? Who will use it? How?
Step 2: Involve others
Who should participate in creating the logic model?
Step 3: Set the boundaries for the logic model
What will the logic model depict: a single, focused endeavor; a
comprehensive initiative; a collaborative process? What level of
detail is needed?
Step 4: Understand the situation
What is the situation giving rise to the intervention? What do
we know about the problem/audience/context?
Adapted from: Taylor-Powell and Henert, 2008
Process Options
1) Everyone identifies resources, activities, participants and outcomes on post-it notes arranged on a wall. Check for “if-then” relationships, edit duplicates, identify gaps, etc. (see the sketch after this list).
2) Small subgroups develop their own logic model of the program. The whole group merges these into one.
3) Participants bring a list of program outcomes. Sort into short- and long-term outcomes by target group. Edit duplicates, identify gaps, etc. Discuss assumptions about the chain of outcomes and external factors. Link resources and activities.
4) Use web-based systems, e-mail or other distance methods.
5) A subcommittee creates the model and reviews it with others.
Adapted from: Taylor-Powell and Henert, 2008
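As an illustration only (not part of the original workshop materials), the duplicate- and gap-checking step in option 1 could be captured in a small script once the post-it notes are typed up. This is a minimal sketch: the names Element, CHAIN and STAGES, and the example notes, are all hypothetical.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    stage: str   # e.g. "resource", "activity", "output", "short-term outcome"
    label: str

# Post-it notes from the wall, typed up (hypothetical example data).
CHAIN = [
    Element("resource", "extension staff time"),
    Element("activity", "farmer field days"),
    Element("output", "200 farmers trained"),
    Element("short-term outcome", "farmers know water-quality practices"),
    Element("short-term outcome", "farmers know water-quality practices"),  # duplicate note
    Element("long-term outcome", "reduced nitrate runoff"),
]

STAGES = ["resource", "activity", "output",
          "short-term outcome", "mid-term outcome", "long-term outcome"]

# Edit duplicates: flag notes that appear more than once.
for element, count in Counter(CHAIN).items():
    if count > 1:
        print(f"Duplicate: {element.label} ({count} notes)")

# Identify gaps: a stage with no notes breaks the if-then chain.
present = {element.stage for element in CHAIN}
for stage in STAGES:
    if stage not in present:
        print(f"Gap: no {stage} identified yet")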
Logic Models & Evaluation
Helps us match evaluation to the program
Helps us know what and when to measure
- Are you interested in process and/or outcomes?
Helps us focus on key, important information
- Where will you spend limited evaluation resources?
- What do we really need to know?
Source: Taylor-Powell and Henert, 2008
Types of Evaluation Mapped Across the Logic Model

Needs/asset assessment: What are the characteristics, needs and priorities of the target population? What are potential barriers/facilitators? What is most appropriate to do?

Process evaluation: How is the program implemented? Are activities delivered as intended? What is the fidelity of implementation? Are participants being reached as intended? What are participant reactions?

Outcome evaluation: To what extent are desired changes occurring? Are goals met? Who is benefiting/not benefiting? How? What seems to work? Not work? What are unintended outcomes?

Impact evaluation: To what extent can changes be attributed to the program? What are the net effects? What are the final consequences? Is the program worth the resources it costs?
Source: Taylor-Powell and Henert, 2008
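As a supplementary illustration (not from the slides), the mapping above can be kept alongside an evaluation plan as a simple lookup from evaluation type to the part of the logic model it examines and its guiding questions. The dictionary structure, key names, and segment labels below are assumptions for illustration, not part of the original material.

# Hypothetical lookup restating the table above: evaluation type ->
# the logic-model segment it examines and its guiding questions.
EVALUATION_TYPES = {
    "needs/asset assessment": {
        "segment": "situation and priorities",
        "questions": [
            "What are the characteristics, needs, priorities of the target population?",
            "What are potential barriers/facilitators?",
            "What is most appropriate to do?",
        ],
    },
    "process evaluation": {
        "segment": "inputs, activities, outputs",
        "questions": [
            "How is the program implemented?",
            "Are activities delivered as intended, with fidelity?",
            "Are participants being reached as intended?",
            "What are participant reactions?",
        ],
    },
    "outcome evaluation": {
        "segment": "short- and mid-term outcomes",
        "questions": [
            "To what extent are desired changes occurring? Goals met?",
            "Who is benefiting or not benefiting, and how?",
            "What seems to work or not work?",
            "What are unintended outcomes?",
        ],
    },
    "impact evaluation": {
        "segment": "long-term outcomes / impact",
        "questions": [
            "To what extent can changes be attributed to the program?",
            "What are the net effects?",
            "What are the final consequences?",
            "Is the program worth the resources it costs?",
        ],
    },
}

# Example use: list the guiding questions when planning a process evaluation.
for question in EVALUATION_TYPES["process evaluation"]["questions"]:
    print(question)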
Water Quality Project Example
[Slide diagram: a water quality project logic model annotated with formative evaluation questions, summative evaluation questions, and indicators]
Source: Taylor-Powell, 2002
Program Phases and Evaluation
Formative:
Initiation – Need dynamic, flexible, rapid feedback about implementation and process. Includes monitoring, post-only feedback, unstructured observation, and sharing of implementation experiences. Mostly qualitative.
Development – Focus on observation, assessment of change in key outcomes, and emerging consistency. Includes pre-post differences. Qualitative or quantitative.

Summative:
Mature – When a program is routinized and stable, compare outcomes with expectations, with performance in alternative programs, or with sites with no program. Includes experimental and quasi-experimental designs and more structured, comparative qualitative approaches.
Dissemination – Focused on transferability, generalizability or external validity. Measure consistency of outcomes across different settings, populations or program variations.
Source: Trochim, 2006
Three ways of conceptualizing and mapping theories of change
1. Linear: Newtonian causality
2. Interdependent: systems relationships
3. Complex: nonlinear dynamics
Source: Patton, 2008
Interdependent Systems Relationships
[Slide diagram: outputs from four departments (Dept 1 through Dept 4) feeding into shared short-term, mid-term, and long-term outcomes]
Adapted from: Chapel, 2006, in Taylor-Powell and Henert, 2008
Complex, Non-Linear Intervention
[Slide diagram: interconnected elements contributing to effective advocacy, including strong, high-capacity coalitions; timely, opportunistic lobbying and judicial engagement; a solid knowledge and research base; collaborating funders and strategic funding; strong national/grassroots coordination; and a disciplined, focused message with effective communications]
Source: Patton, 2008
Conditions that challenge traditional model-testing evaluation
• High innovation
• Ongoing development
• High uncertainty
• Dynamic, rapid change
• Emergent (difficult to plan and predict)
• Systems change
• Interdependence
Adaptive Management
Adapted from: Patton, 2008
Ideal Type Evaluation Models

Traditional:
- Tests models
- Renders definitive judgment of success or failure
- Measures success against predetermined goals
- Evaluator external, objective
- Evaluator determines design
- Design based on linear cause-effect model
- Aims to produce generalizable findings across time and space
- Accountability directed externally, to control
- Engenders fear of failure

Developmental:
- Supports innovation and adaptation
- Provides feedback, generates learning and affirms changes in a certain direction
- Develops new measures and monitoring mechanisms as goals emerge and evolve
- Evaluator part of team, a ‘learning coach’
- Evaluator collaborates on design
- Design captures system dynamics, interdependencies, emergent interconnections
- Aims to produce context-specific understanding to inform ongoing innovation
- Accountability focused on commitment to learning, for responding to lack of control
- Engenders desire to learn
Adapted from: Patton, 2008
Useful Resources
See the CIIFAD website for evaluation institutes and WMU
Visit the University of Wisconsin-Extension website
Look at these books:
Bamberger, M., Rugh, J. and L. Mabry. 2011 (2nd Ed). RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints. Los Angeles: Sage.
Patton, M.Q. 2008 (4th Ed). Utilization-Focused Evaluation. Los
Angeles: Sage.
Patton, M.Q. 2011. Developmental Evaluation: Applying Complexity
Concepts to Enhance Innovation and Use. NY: Guilford Press.
Williams, B. and I. Imam. 2006. Systems Concepts in Evaluation – An Expert Anthology. Point Reyes, CA: Edge Press/AEA.
World Bank. 2006. Conducting Quality Impact Evaluations Under Budget, Time and Data Constraints. Washington, DC: Author.
References Cited
Patton, M.Q. 2008. “Evaluating the complex: Getting to maybe”. PowerPoint presentation, Oslo, Norway. Available online: aidontheedge.files.wordpress.com/2009/09/patton_oslo.ppt
Taylor-Powell, E. and E. Henert. 2008. “Developing a logic model: Teaching and training guide”. Madison: University of Wisconsin – Extension.
Trochim, W. 2007. “Evolutionary perspectives on evaluation: Theoretical and practical implications”. Paper presented at the Colorado Evaluation Network.