Introduction to RMAFs


Transcript: Introduction to RMAFs

Evaluation as a Tool for Learning
A conversation with PSOs in Winnipeg, Manitoba, on June 4, 2008
Jennifer Birch-Jones
Performance Management Network Inc.
www.pmn.net
Desired Outcomes
• Increased awareness and understanding of:
  - what it is
  - why do it
  - what makes it hard
  - what we can do
• Increased commitment to using evaluation as a tool for learning
What is it?
[Diagram: Strategic Management Cycle. Strategic Analysis (Where are we now?) → Strategic Framework (Where do we want to be?) → Implementation (How do we get there?) → Monitoring & Evaluation (Are we getting there?)]
Source: Kent and Wilkinson, Applied Strategic Planning, 1991.
What is it?
• Evaluation involves looking at a specific aspect of what your organization does, i.e., a program, project or initiative during a specific period of time, and asking:
  1. Is what we are doing working?
  2. How do we know it is working?
  3. Under what conditions does it work best? (Festen & Philbin, 2007)
• It involves a natural, though disciplined, use of three steps:
  1. Asking good questions
  2. Gathering and reviewing information
  3. Sharing the information to foster good decision-making (Gray & Associates, 1998)
What is it?
• The purpose of evaluation is to plan for next year, not to judge what you did in the past
  - It is part of what you put in place beforehand to help you run your programs
• Evaluation should not be:
  - a test or a punishment
  - a scientific research project requiring control groups
  - an occasional activity (e.g., once every five years) or a one-time event
  - only done to benefit the funder
What is it?
• Both outcomes measurement and evaluation:
  - are tools for decision-making
  - allow you to change the conversation from “what did we do” to “what difference have we made”
  - rely on the logic model as the fundamental point of reference
What is it?
• A results chain or logic model is a diagram showing the links from the activities, through the sequence of outcomes, to the final outcomes:
  Inputs → Activities → Outputs → Goals / Outcomes
• Goals / outcomes can be Short-Term → Medium-Term → Long-Term Goals / Outcomes
What is it?
[Diagram: the logic model annotated with guiding questions. WHO? WHERE? HOW? span inputs, activities and outputs (“Operational”); WHAT do we want? spans the direct and intermediate outcomes for users / clients / co-deliverers (“Behavioural Change”); WHY? spans the final outcomes for beneficiaries (“State”).]
Spheres of Influence
Performance needs to be considered in terms of its differing spheres of influence. Actions in the operational sphere should directly lead to changes in targeted groups, which should in turn affect the desired ‘state’.
[Diagram: three nested spheres of influence.
  - HOW? (Operational): Outputs, in your operational environment; you have direct control over the behaviors within this sphere.
  - WHAT do we want, by WHOM? (Behavioral Change): Immediate and Intermediate Outcomes, in your environment of direct influence, e.g., people, groups and organizations in direct contact with your operations.
  - WHY? (State): End Outcomes, in your environment of indirect influence, e.g., individuals and/or communities of interest where you do not make direct contact.]
Sources: Van Der Heijden (1996), Montague (2000).
What is it?
• Outcomes measurement is primarily about the here and now and is descriptive in nature: it provides a look at where we are today and asks how well we are doing in relation to our high-level outcomes. It looks for evidence that the program is moving in its intended direction, helping managers to make mid-course corrections and providing a basis for accountability (Schacter, 2002)
What is it?
• Evaluation takes a longer-term perspective, looking back over a period of years at the performance of a program (or policy). Based on in-depth research and analysis, it goes beyond measuring progress to trying to explain whether the intended outcomes have been achieved (and why / why not), whether the rationale for the program remains valid, and whether there are better alternative ways of achieving program outcomes (Schacter, 2002)
What is it?
Non-profit Work and the Nature of Evaluation (Festen and Philbin, 2007):
• You do work. When you evaluate how well you do what you do, it’s called a process evaluation.
• Your work has results. When you evaluate the results of your work, it’s called an outcome evaluation.
• Lots of work produces multiple outcomes over time. This equals impact, or long-term outcomes.
What is it?
• The reality is that what we often do in sport organizations is a mixture of outcomes measurement and evaluation (project evaluation?)
• Monitoring is a term that is also used to denote the regular collection and use of outcome measures
• You need to have a plan / strategy in advance for measuring / evaluating outcomes
Why Do It?
• Measure progress on priority issues and identify priority areas for improvement
• Set realistic goals by providing information for making and fine-tuning strategic program decisions
• Identify staff and volunteer training and technical assistance needs
• Be accountable and credible to your constituents, your community, your partners, your funders and yourself
• Motivate by providing documentation of your achievements
• Generate support for programs and make the case for added resources
• Guide budget and resource allocations (Gray and Associates, 1998)
What Makes it So Hard?
1. Tension Between Learning and Accountability (philosophical)
2. Gap Between Requirements and Capacity (structural)
What Makes it So Hard?
1. Tension Between Learning and Accountability (did you achieve what you agreed to?)
  • Compliance versus learning (Westley, Zimmerman and Patton, 2006)
  • Learning by the funder versus the organization (Festen and Philbin, 2007)
  • Trust versus mistrust
What Makes it So Hard?
2. Requirements and Capacity Gap
  • Expectations versus reality
  • Recipients need the funds to do “good work” but don’t want to admit that they can’t realistically achieve the expected results given the resources ($ and time)
  • The “gap” also increases when the time frame and / or $ amount changes for the results but the expectations (usually the funder’s) do not
What Makes it So Hard?
2. Requirements and Capacity Gap (Cont’d)
  • Although funders are increasing their demands for outcomes information, very few are providing the necessary funds to organizations to do so (VSERP, 2003), and outcomes are more difficult to measure
  • The evaluation requirements from the funder(s) are the same, regardless of the amount of funding
  • Everyone uses different language
What Can We Do?
• Recognize the tension between accountability and learning and the impact it can have on what and how you evaluate; know how failure is defined and will be considered
• Be clear about what the language / terms mean
• Be realistic about what can be achieved, and renegotiate results expectations when the timeframe, $ and other circumstances affect the achievement of the results
What Can We Do?
• Focus your resources on the most important issues and only collect the information that you need / will use
• Consider “rolling” evaluation: choose one or two questions you want to explore in any given year that relate to a desired outcome and focus only on that topic (Festen and Philbin, 2007)
• Build the capacity of staff to do outcome measurement / evaluation (lots of great free online resources)
What Can We Do?
• Create learning moments at staff and Board meetings; reframe the conversation to become one of questioning (Gray and Associates, 1998):
  “Since our last meeting, what difference have we made in providing a quality sport experience for Manitobans?”
  and / or
  “From the information we have just received, how well is our organization holding our vision and attaining our mission?”
What Can We Do?
• Engage in conversations with your funder(s) on how they can help you better use evaluation as a tool for learning
  1. What excites you the most about the potential for using evaluation as a tool for learning in your PSO?
  2. If you only had evaluation resources to examine one issue in your PSO, what would that issue be?
  3. In what ways can Sport Manitoba help?
Final Thoughts
“The reason we got into this business was to change lives. Now we have all the staff thinking in this orientation. Outcome measurement creates focus in a way no other management tool can do. We used to have forms we filled out that were meaningless. Now we have the same number of forms, but we get real information. We used to count the number of things we did. Now we count results.”
Jon Berry, Executive Director, Freeport West, Minneapolis, Minnesota
Source: Outcome Measurement: Showing Results in the Nonprofit Sector, United Way of America, 1999.
References and Resources
• Applied Strategic Planning (Kent and Wilkinson, 1991)
• Evaluation with Power: A New Approach to Organizational Effectiveness, Empowerment and Excellence (Gray and Associates, 1998)*
• Outcome Measurement: Showing Results in the Nonprofit Sector (United Way of America, 1999)
• Not a “Tool Kit” – Practitioner’s Guide to Measuring the Performance of Public Programs (Schacter, 2002)*
• Assessing Performance: Evaluation Practices & Perspectives in Canada’s Voluntary Sector (Voluntary Sector Evaluation Research Project, 2003)
References and Resources
• Getting to Maybe – How the World Is Changed (Westley, Zimmerman and Patton, 2006)
• Learning from our Evaluation Practice (The J.W. McConnell Family Foundation, 2006)
• Program Evaluation and Performance Measurement (McDavid and Hawthorn, 2006)
• Level Best – How Small and Grassroots Non-Profits Can Tackle Evaluation and Talk Results (Festen and Philbin, 2007)*

* Very practical, PSO-friendly and affordable evaluation resources.