Transcript: Evaluation Workshop: Tools, Techniques and Tricks to Improve Your Developmental Evaluation

Developmental Evaluation:
A New Way to Capture Feedback on your Evolving EE Programs
Sue Staniforth, BSc., MSc.
[email protected]
Webinar Objectives
• Briefly review the Main Types of Evaluation
• Explore Developmental Evaluation
– What is it?
– When is it useful?
– How is it practiced?
• Clarify the developmental evaluator's role as a long-term partner with program stakeholders.
• Explore ways to incorporate developmental evaluation to assess and improve your programs.
Why Do We Evaluate?
(Type in your responses.)
• Program/Project Improvement
• Maximize the impact of limited resources
• Project Accountability
• Understand and work effectively within context
• Improve group dynamics and processes
• Build support for programs/projects
• Deal with uncertainty and change
Evaluation: not only to critically analyze, but to provide a positive contribution that helps make programs work better and allocates resources to better programs.
Philip Cox, in a wonderful webinar on Oct. 18, unpacked some of the evaluation terminology around outcomes measurement and presented the main methodologies and tool sets of evaluation:
• The Logic Model
• Risk Analysis
• Monitoring and Evaluation Planning
He also touched on some of the less conventional evaluation models: participatory, developmental, user-focused, etc.
Think back to experiences you have had being evaluated.

Poll Question 1: Have you ever had a negative evaluation experience?
(Use your Yes or No buttons.) If yes, what made it so?
Negatives:
• "Objectivity"
• Not knowing or understanding the context of a program
• Poor evaluation tools
• Inaccurate questions and emphasis
• Powerlessness
• Top-down: having an evaluation done to you, not with you
Poll Question 2: Have you had a good evaluation experience?
(Use your Yes or No buttons.) If yes, what made it so?
Positive Evaluation Experiences:
• The evaluator knew the context in which the evaluation was taking place (culturally, regionally)
• Familiarity with the discipline
• Asked for input from all stakeholders
• Inclusive versus exclusive
• Asked good questions that got to the heart of the program or project
• Supportive environment: success as the goal, not punishment
Historically, evaluations looked at the goals and objectives of a program, developed a set of questions and indicators used as the sole measurement of success or failure, and delivered findings to an administrator, even though the direction and focus of the program may have changed along the way due to changes in the system, the stakeholders and/or the environment:
• e.g. a teachers' strike
• a program worked better at the middle school level than at the original target of Grade 4, so it changed audiences
In evaluation, we are moving from one-off studies to streams:
• Monitoring and evaluation are starting to merge, and analysis and databases are continuous.
• Most situations have multiple players and multiple levels of impacts, actors, systems and actions.
• There is increased transparency: evaluations can no longer be bureaucratically contained.
• New methods are evolving to capture and assess innovations.
Main Types of Evaluation
For many years, evaluators and evaluation methodologies have tended to focus on three broad purposes:
1. Formative Evaluation
2. Summative Evaluation
3. Accountability Evaluation
1. Formative Evaluation
- Used to help improve a program or policy. Formative evaluation produces information that is fed back during the course of a program.
- Its main purpose is to provide information to improve the program under study.
E.g. a pilot program is developed at the Calgary Zoo and implemented with school groups or the public, and staff collect feedback from participants and observers as to how it is working.
2. Summative Evaluation
- Used to judge the merit of a program or policy to determine whether it should be sustained, discontinued or scaled up.
- Done after the program (or a phase of it) is finished, to determine the extent to which anticipated outcomes were produced.
- Intended to provide information about the worth of the program: its effectiveness.
So: should that Zoo program be continued next year? Why/why not?
Scriven simplified this distinction as follows:
"When the cook tastes the soup, that's formative evaluation; when the guest tastes it, that's summative evaluation."
3. Accountability Evaluation
- Used to assess the extent to which an organization or group is 'implementing a detailed model with fidelity' to an already approved, often rigid, blueprint.
E.g. often what we have to provide to our funders: following our proposal methodologies to meet their goals.
(e.g. "the program will reduce carbon emissions of Grade 12 high school students by X%...")
Now, just to get a sense of the experiences we have in the room:

Poll: Have you done a program evaluation?
(Use your Yes or No buttons.)
Was it any one of, or all of, the following?
– Formative
– Summative
– Accountability
Write your answers in the chat box.
These are Outcome Measurement evaluations: based on what the program's goals are, what's happening?

There are plenty of situations where these types of evaluations are not appropriate and may even be counterproductive.
For example, when you are:
• creating an entirely new program or policy;
• adapting a proven program in a fast-moving environment (e.g. a funding model for government contracts to provide funds to many new, evolving climate change NGOs);
• importing a program or policy that proved effective in one context into a new one, from one province or region to another (British textbooks to the colonies...!);
• dealing with complex issues where solutions are uncertain and/or stakeholders are not on the same page.
E.g. the Fraser Salmon and Watersheds program to conserve a river ecosystem: fishers, conservation groups, aboriginal groups, communities, businesses and many levels of government, with very complex and inter-related systems and agendas.
Developmental Evaluation: A New Player on the Block
- An emerging evaluative approach designed to help decision-makers check in on how the program is doing "in-flight" and make corrections.
- Creates fewer expectations up front, and is more about what is happening as the program rolls out.
Systems Thinking and Complexity Theory
- Moving beyond linearity or direct cause and effect to try to capture elements of the many systems we humans operate in.
Using Different System Lenses to Understand a "particular" System:
• Biologic System: emergence; coordination/synergy; structure, process, pattern; vitality
• Economic System: inputs/outputs; cost/waste/value/benefits; customers/suppliers
• Political System: power; governance; citizenship; equity
• Sociologic System: relationships; conversations; interdependence; loose-tight coupling; meaning/sense
• Mechanical/Physical System: flow; temporal sequencing; spatial proximities; logistics; information
• Anthropologic System: values; culture/milieu
• Psychological System: organizing; force fields; ecological/behaviour settings
• Information System: access; speed; fidelity/utility; privacy/security; storage
Michael Q. Patton:
"Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development. Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision-making and course corrections along the emergent path."
- MQP, 2008
DE differs from traditional evaluation in several key ways:

Traditional Evaluation: renders definitive judgments of success or failure.
Developmental Evaluation: provides feedback, generates learnings, supports direction or affirms changes in direction.

Traditional Evaluation: measures success against predetermined goals.
Developmental Evaluation: develops new measures and monitoring mechanisms as goals emerge and evolve.

Traditional Evaluation: the evaluator is external, independent, 'objective'.
Developmental Evaluation: the evaluator is part of a team, a facilitator and learning coach, bringing evaluative thinking to the table, supportive of the organization's goals. A "critical friend".
Large, complex, challenging innovations do not lend themselves to linear or easy prediction, so it is important to be able to:
• track changes as they happen,
• feed the information back to the people doing the work,
• and adjust the program accordingly: in-flight adjustments.
Simple: Following a Recipe
• The recipe is essential
• Recipes are tested to assure replicability of later efforts
• No particular expertise needed; knowing how to cook increases success
• The recipe notes the quantity and nature of "parts" needed
• Recipes produce standard products
• Certainty of the same results every time

Complicated: A Rocket to the Moon
• Formulae are critical and necessary
• Sending one rocket increases assurance that the next will be OK
• High level of expertise in many specialized fields, plus coordination
• Separate into parts and then coordinate
• Rockets are similar in critical ways
• High degree of certainty of outcome

Complex: Raising a Child
• Formulae have only a limited application
• Raising one child gives no assurance of success with the next
• Expertise can help but is not sufficient; relationships are key
• Can't separate the parts from the whole
• Every child is unique
• Uncertainty of outcome remains

- Michael Q. Patton
Complex developments need flexible and adaptable approaches. DE can be a very useful approach when you are working in Environmental Education: it helps to unearth the complexities of the many systems we work in, monitor changes, and provide a more continuous picture of what is happening to your program when it is out in the real world!

Are you scoring some goals, or... did a tidal wave hit?!
When Do You Use Developmental Evaluation?
DE is not appropriate for all situations. Some of the things to ask include:

1. The evaluation should be part of the initial program design:
"Evaluation isn't something to incorporate only after an innovation is underway. The very possibility articulated in the idea of making a major difference in the world ought to incorporate a commitment to not only bringing about significant social change, but also thinking deeply about, evaluating, and learning from social innovation as the idea and process develops."
(from "Getting to Maybe" by Frances Westley, Brenda Zimmerman and Michael Patton, 2006)
When the cook tastes the soup, that's formative evaluation; when the guest tastes it, that's summative evaluation. When the cook is in the market shopping for the best ingredients and developing the recipe, that's part of the developmental evaluation!
2. Fit and Readiness
Does the group want to test new approaches? Are they (you) a learning organization? Is the program flexible enough to be adapted as you go? (There are financial and logistical questions to be answered here.) What about accountability, i.e. are the funders open to changes?
3. Environment
What are the "ripples" that Philip Cox talks about in his splash & ripple analogy: the disturbances that get in the way of activities and outcomes?
With EE being a non-prescribed subject in schools, its development, the varied ways it is implemented and by whom, the range of impacts and the many stakeholders (administrators, teachers, parents, students, NGOs, custodians, etc.) leave lots of room for variation.
4. Is the program socially complex, requiring collaboration among stakeholders from different organizations, systems, and/or sectors?
E.g. the formal school system, different segments of the public, several levels of government, different cultures, NGOs, community groups?
5. Is the program new or evolving, requiring real-time learning and development?
- Do you need to adapt, change course, incorporate new learning from another program, or add new components such as teacher training or community involvement?
- Is it feasible to have an "embedded" evaluator?
HOW IS DEVELOPMENTAL EVALUATION PRACTICED?
The short answer is: any way that works. DE is an adaptive, context-specific approach; as such, there is no prescribed methodology.
A few key entry points and practices that can be applied to your program:

1. Get the Background Story: ORIENTING YOURSELF
• What is the theory of change that is implicit in the program? This needs to be clarified.
• Look at the whys and hows of the decisions and systems that are in place: how did you get to this place, and why?
Review existing documentation, meet with stakeholders, ask questions, conduct mini-interviews, explore related research, take people out for coffee.
Evaluators become part of the institutional memory of an organization. A common myth is 'we knew where we were going'; documenting decisions, course changes, and why they happened is critical to understanding results.
2. BUILDING RELATIONSHIPS
Relationship building is critical to developmental evaluation because of the importance of access to information.
Back to the question of how the group makes decisions: what is the problem, and then, in choosing a solution, what actions are considered and what direction is chosen? Note and document the forks in the road.
3. COLLECTIVE ANALYSIS
The core of evaluation is getting people to engage with the data. In developmental evaluation, meaning-making is a collective process. Shifting responsibility for the meaning-making process from the evaluator to the entire team can help to:
• Build capacity for evaluative thinking among other team members
• Create a sense of ownership
• Increase understanding of the findings
• Increase the likelihood that the findings will actually be used (Patton, 2008)
4. INTERVENING in productive ways:
• Asking "wicked" or good questions: questions that create openings, expose assumptions, push thinking and surface values
• Facilitating: active listening, surfacing assumptions, clarifying, synthesizing, ensuring all voices are heard
• Sourcing and providing information: bringing information and resources into the system
• Reminding groups of their higher-level purpose: refocusing on priorities and goals
• History keeping: tracking past failures and successes to build on what has gone before
• Matchmaking: connecting the group with people, resources, organizations and ideas
In Summary:
Even if you don't do developmental evaluation in its formal sense, there are still many facets of its practice that can contribute to your evolving programs. Consider implementing some of the practices discussed above:

1. Get the Background Story
Clarify your theory of change. Look at the whys and hows of the decision-making and systems that are in place.

2. Relationships: How do you make decisions?
Build relationships; note and document the forks in the road and the processes, strengths, and weaknesses.

3. Collective Analysis
Ensure all program team members are part of any evaluative process, and help them engage with the data. Help build capacity for evaluative thinking, create a sense of ownership, and help ensure the data collected is useful and used.

4. Intervening in Productive Ways
Ask "wicked" questions about the program. Facilitate evaluative discussions amongst team members. Source and provide information, ideas, people and resources. Document what you do and why!