CESSR Workshops in Methods
Introduction to
Program Evaluation
September 24, 2010
Mindy Hightower King, Ph.D.
Research Scientist
Center for Evaluation & Education Policy
Indiana University
CEEP…
•Promotes and supports rigorous program evaluation and
education policy research primarily, but not exclusively, for
educational, human services, and nonprofit organizations.
•Takes a dynamic approach to evaluation and education policy
research, using both quantitative and qualitative
methodologies, including experimental designs.
•Represents a merger of the Indiana Center for Evaluation and
the Indiana Education Policy Center.
CEEP’s Mission
Improve education by providing nonpartisan information,
research, and evaluation on education issues to policymakers
and other education stakeholders
Encourage rigorous program evaluation across a variety of
settings by providing evaluation expertise and services to
diverse agencies, organizations, and businesses
Expand knowledge of effective strategies in evaluation and
policy research by developing, modeling, and disseminating
innovative approaches to program evaluation and policy
research
Current Projects
CEEP researchers currently manage over 60 projects in the
following areas:
•Charter Schools
•Professional Learning Communities
•After School Programming
•Literacy
•Education Policy
•Science, Technology, Engineering and Math
•School Wellness
•Public Health
Presentation Overview
1. What is Evaluation?
2. Tools of the Trade
3. A Few Words on Program Goals/Objectives
What is Evaluation?
• The use of social science research activities
directed at collecting, analyzing, interpreting, and
communicating information about the workings and
effectiveness of programs.
• Differentiated from social science research in that:
1. Interpretation and communication of results are
essential, but less standardized
2. The engagement of stakeholders and the political
nature of evaluation require additional skill sets
Why Conduct Evaluation?
• To assess the utility of new programs or initiatives;
• To satisfy accountability requirements of program
sponsors;
• To increase the effectiveness of program management
and administration;
• To aid in decisions regarding whether programs should
be continued, expanded, or curtailed.
Who Commissions Evaluation?
• Program funders
• Program managers
• Research organizations
• Program designers
• Any combination of the above
Some Background on the Field of Evaluation
The following factors have contributed to the
rise and professionalization of the field:
• Public health programs targeting infectious disease
• The post-WWII boom of federally and privately funded
social programs
• The 1960s War on Poverty
More recently, the driving force behind evaluation has shifted
from social science researchers to consumers of the research
Challenges of Evaluation
• Dynamic nature of social programs
Programs may change as they are implemented, and the
evaluation will often need to do the same in response.
• Scientific versus pragmatic concerns
The challenge is to select the most rigorous techniques that
available resources allow while maintaining utility.
• Diversity of perspectives and approaches
There are rarely absolutes in evaluation. Most often, it involves
finding the approach that best fits the situation.
Formative v. Summative Evaluation
Formative Evaluation
• Deals with program implementation
• Geared toward program improvement
• Provides feedback and advice
• Often focused on and/or involves program managers and staff
Summative Evaluation
• Assesses achievement of program objectives
• Often focuses on the “bottom line”
• Can include cost-efficiency analyses
• Often requested by funders or advisory boards
General Steps
in Conducting a Program Evaluation
(The Evaluation Center, 2001)
• Evaluation Assessment
– Involves determination of purpose, key questions,
intended use, culture/environment, research design.
• Evaluation Study
– Involves instrument design, data collection and
analysis, report development and dissemination,
utilization.
Evaluation Assessment
1. Who are the clients of the evaluation?
2. What are the questions and issues driving the
evaluation?
3. What resources are available to do the
evaluation?
4. What has been done previously?
5. What is the program all about?
Evaluation Assessment - Continued
6. In what kind of environment does the program
operate?
7. Which research designs are desirable and appropriate?
8. What information sources are available and
appropriate, given the evaluation issues and
environment, and the program structure?
9. Given all the issues raised in 1-8, which evaluation
strategy is least problematic?
10. Should the program evaluation be undertaken?
Evaluation Study
1. Develop the measures and collect the data.
2. Analyze the data.
3. Write the report.
4. Disseminate the report / results.
5. Make changes based on the data / utilize the results.
Quick & Dirty Evaluation Design
1. Who wants to know what and why?
2. What resources are available to do the evaluation?
3. What do I need to keep in mind about the context
in which the program operates?
4. What information sources are available and
appropriate?
5. Which evaluation strategy is most feasible and
least problematic?
Evaluation
Tools and Strategies:
Logic Models
Stakeholder Interest Matrices
Data Collection Plans
What is a Logic Model?
• A simplified picture of a program, initiative, or
intervention.
• Shows logical relationships among the resources that
are invested, the activities that take place, and the
benefits or changes that result. (This is often called
program theory or the program's theory of action.)
• It is a "plausible, sensible model of how a program is
supposed to work" (Bickman, 1987).
Logic Model Basics
INPUTS → OUTPUTS → OUTCOMES
• INPUTS (program investments): what is invested
• OUTPUTS: activities (what is done) and participation (who is reached)
• OUTCOMES: short term (learning), intermediate (action/performance),
and long term (impacts/conditions)
Inputs - the resources invested that allow a program to achieve the desired
outputs.
Outputs - activities conducted or products created that reach targeted
participants or populations. Outputs lead to outcomes.
Outcomes - changes or benefits for individuals, families, groups, businesses,
organizations, and communities.
Teaching American History Program
1. Funded by the U.S. Department of Education
2. Provides grants to schools and districts
3. Purpose of the program: increase student and
teacher knowledge of American History
4. Program funds used to provide professional
development to teachers, purchase materials,
support collaborative efforts
PRACTICE EXERCISE: Developing a Logic Model
(Articulate the desired long-term outcomes and work backwards)
• STEP 1: Outcomes (short term, intermediate, long term)
• STEP 2: Outputs (activities and participation)
• STEP 3: Inputs (program investments)
Teaching American History Logic Model
• INPUTS (program investments): Staff, Volunteers, Money, Time,
Materials, Technology, Partners
• OUTPUTS (activities): Teacher professional development, Peer
mentoring, Curriculum development
• OUTPUTS (participation): # of teachers who attend workshops;
# of students impacted by trained teachers
• OUTCOMES (short term): Increased teacher knowledge in American History
• OUTCOMES (long term): Increased student achievement in American History
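For evaluators who keep planning documents in code or spreadsheets, a logic model like the one above translates directly into a small nested data structure. The sketch below is illustrative only (Python, with key names that simply mirror the column headings on this slide); it is not part of the TAH program's or CEEP's materials.

```python
# Illustrative sketch: the Teaching American History logic model as a plain
# Python dictionary. Key names mirror the slide's column headings; they are
# not an official schema.
tah_logic_model = {
    "inputs": ["Staff", "Volunteers", "Money", "Time", "Materials",
               "Technology", "Partners"],
    "outputs": {
        "activities": ["Teacher professional development",
                       "Peer mentoring",
                       "Curriculum development"],
        "participation": ["# of teachers who attend workshops",
                          "# of students impacted by trained teachers"],
    },
    "outcomes": {
        "short_term": ["Increased teacher knowledge in American History"],
        "long_term": ["Increased student achievement in American History"],
    },
}

# Print the model column by column for a quick readability check.
for section, contents in tah_logic_model.items():
    print(section.upper())
    if isinstance(contents, dict):
        for label, items in contents.items():
            print(f"  {label}: {'; '.join(items)}")
    else:
        print(f"  {'; '.join(contents)}")
```

Keeping the model in a form like this makes it easy to check, before the evaluation design is finalized, that every stated outcome can be traced back to at least one listed activity.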
Logic Models:
Easy as pie…or cookies?
Stakeholder Interest Matrix
• Used to identify individuals/groups who may be
interested and/or involved in an evaluation.
• Clarifies stakeholder interests in evaluation results,
potential concerns and/or roadblocks, and opportunities
to increase buy-in.
• Particularly useful for participatory evaluations.
• May also help in identifying potential evaluation
resources.
Stakeholder Interest Matrix
Stakeholder | Interest in the Program | Potential Uses of Evaluation Results | Potential Involvement in the Evaluation
Stakeholder Interest Matrix
Stakeholder | Interest in the Program | Potential Uses of Evaluation Results | Potential Involvement in the Evaluation
USDOE | Accountability for funds; efficacy | Continued program funding | TA for design and implementation
Students | Engagement; interest | Decisions to engage in learning opportunities | Data source; knowledge & feedback
Teachers | Improved teaching skills; improved learning | Decisions to engage in learning opportunities | Data source; track PD information
Community Partners | Reaching students; meeting mission | Decision to partner with schools | Data source on partnerships
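Because the matrix is just a small table, it can also be kept as a short list of records and rendered wherever it is needed. A minimal sketch, assuming Python and field names of our own choosing; the values are copied from the example rows above.

```python
# Illustrative sketch: the stakeholder interest matrix as a list of records.
# Field names are assumptions for this example, not a standard schema.
stakeholder_matrix = [
    {"stakeholder": "USDOE",
     "interest": "Accountability for funds; efficacy",
     "uses_of_results": "Continued program funding",
     "involvement": "TA for design and implementation"},
    {"stakeholder": "Students",
     "interest": "Engagement; interest",
     "uses_of_results": "Decisions to engage in learning opportunities",
     "involvement": "Data source; knowledge & feedback"},
    {"stakeholder": "Teachers",
     "interest": "Improved teaching skills; improved learning",
     "uses_of_results": "Decisions to engage in learning opportunities",
     "involvement": "Data source; track PD information"},
    {"stakeholder": "Community Partners",
     "interest": "Reaching students; meeting mission",
     "uses_of_results": "Decision to partner with schools",
     "involvement": "Data source on partnerships"},
]

# Render the matrix as pipe-delimited rows, e.g. for a report appendix.
for row in stakeholder_matrix:
    print(" | ".join(row.values()))
```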
Data Collection Plans
• Used to summarize evaluation methodology in grant
proposals or for stakeholders.
• Illustrates the connection between project
goals/objectives and data collection strategies.
• Particularly useful when space is limited in
proposals/applications or when evaluation strategies are
multi-dimensional.
Sample Data Collection Plan
A few words on goals
and objectives…
Goals – Objectives – Performance Measures
PROGRAM GOAL
Project Objectives: What your project is doing to support the
overall program goal
Performance Measures: How you measure your progress
toward meeting your objectives
PRACTICE EXERCISE: Developing a Logic Model
(Articulate the desired long-term outcomes and work backwards)
• STEP 1: Outcomes (short term, intermediate, long term)
• STEP 2: Outputs (activities and participation)
• STEP 3: Inputs (program investments)
Program objectives map onto the logic model, with process
measures tracking outputs and outcome measures tracking outcomes.
Performance Measures
A performance measure is a measurable indicator
used to determine how well objectives are being
met.
• How will you assess progress?
• How much progress will constitute success?
• How will you know if your objective or part of
your objective has been achieved?
Relevance of Performance Measures
Objective 1
• Performance Measure 1a
• Performance Measure 1b
• Performance Measure 1c
Components of Performance Measures
1. What will change (or happen)?
2. How much change is expected? (What is the expected
quantity?)
3. Who will achieve the change (or who will the events
involve)?
4. When will the change take place (or happen)?
Performance Measure Examples
Five (how much) charter schools will be
developed in geographic areas with a
concentration of high priority schools (as
defined by state standards) (who/what)
throughout the state each year between
2010 and 2012 (when).
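One way to keep the four components explicit is to record each performance measure as a structured object under its objective, mirroring the Objective 1 / Measures 1a, 1b, 1c structure shown earlier. The sketch below is a hypothetical illustration in Python; the class names, field names, and objective wording are ours, not part of a CEEP template.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: each performance measure records the four
# components named on the previous slide (what, how much, who, when).
@dataclass
class PerformanceMeasure:
    what: str       # what will change or happen
    how_much: str   # expected quantity of change
    who: str        # who will achieve the change / who is involved
    when: str       # when the change will take place

@dataclass
class Objective:
    description: str
    measures: List[PerformanceMeasure] = field(default_factory=list)

# The charter school example above, decomposed into its components
# (the objective wording is invented here for illustration).
objective = Objective(
    description="Expand charter schools serving high-priority areas",
    measures=[
        PerformanceMeasure(
            what="charter schools developed in geographic areas with a "
                 "concentration of high-priority schools",
            how_much="five per year",
            who="the state charter school program",
            when="each year between 2010 and 2012",
        ),
    ],
)
print(objective)
```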
Performance Measure Examples
100% of charter school leaders and CFOs
(expected quantity) will attend the Fiscal
Review Workshop (what will happen/who
will be involved) during years one and two
of their grant period (when will it happen).
Objectives/Performance Measures
Objective:
• To encourage dissemination of best practices within charter schools to
the broader public.
Performance Measures:
• On an annual basis, 100% of charter schools will submit their best
practices to the SEA for inclusion in a catalogue of innovative methods.
• During each year of the grant, at least two venues/partner organizations
will disseminate collected charter school data.
• Follow-up surveys administered at partner organization training events
will show that at least 75% of those attending dissemination workshops
implement new practices based on charter school innovations.
For more information…
Mindy Hightower King, Ph.D.
Center for Evaluation and Education Policy
1900 E. Tenth Street, Room 918
Bloomington, Indiana 47401
812-855-4438
[email protected]
CESSR Workshops in Methods
Introduction to
Program Evaluation
September 24, 2010
Mindy Hightower King, Ph.D.
Research Scientist
Center for Evaluation & Education Policy
Indiana University