Transcript

Introduction to Program Evaluation
Victor Balaban, PhD
Program Evaluation Team (PET)
Field Services and Evaluation Branch (FSEB)
Division of Tuberculosis Elimination (DTBE)
NCHHSTP/CDC
Disclaimer
• The contents and conclusions in this
presentation have not been formally
disseminated by CDC and should not be
construed to represent any agency
determination or policy.
What is Evaluation?
Evaluation
• Evaluation is the systematic investigation of
the merit, worth or significance of an object,
hence assigning “value” to a program’s efforts
means addressing those three inter-related
domains:
• Merit (or quality)
• Worth (or value, i.e., cost-effectiveness)
• Significance (or importance)
source: CDC Framework for Program Evaluation in Public Health:
http://www.cdc.gov/eval/framework/index.htm
Evaluation
• Evaluation is:
• An activity that assists in planning and
measuring programs
• A way of managing, improving, and being
accountable for:
• resources
• activities
• results
• Evaluation answers the question: “Is the
program doing what we intend it to do?”
What Can Be Evaluated?
• Direct service interventions
• Community mobilization efforts
• Research initiatives
• Surveillance systems
• Policy development activities
• Outbreak investigations
• Laboratory diagnostics
• Communication campaigns
• Infrastructure-building projects
• Training and educational services
• Administrative systems
Source: MMWR, 1999, Framework for Program Evaluation in Public Health
Why Do We Evaluate?
• Effectiveness - to determine if a program
achieved its objectives
• Impact - to assess how well program(s) are
working
• Improvement - to modify programs that are
not working according to plan or take
advantage of something that is working
exceptionally well
• Accountability - to report to stakeholders
• To help develop new efforts
How Does Evaluation Differ from
Surveillance?
• Surveillance is the routine tracking of disease
status or behavior over time
• Surveillance is not necessarily in relation to
any specific program or intervention.
• Evaluation is conducted in relation to specific
program(s) or intervention(s)
How Does Evaluation Differ from
Research?
• The purpose of research is to produce
knowledge about how the world works.
• Evaluation studies are used to improve
programs and inform decisions about future
resource allocations.
• The standards for evidence are higher in
research, and the time lines for generating
knowledge can be longer than for evaluation.
adapted from: Michael Patton as interviewed by Lisa Waldick (IDRC), 2002-02-08
Why is Evaluation Important?
• Improve knowledge of program design
• Improve program implementation
• Reporting
• Ensure that a program reaches those who need it most
• Give visibility to work
• Demonstrate accountability
• Share information
• Enhance understanding of what works best and what does not work – and why
Example: TB – Completion of Treatment
• In a State, an organization received funds for
a TB program. The program’s goal was to
increase the proportion of newly diagnosed
TB patients who complete treatment within
12 months to 93.0%.
• Records showed that in the three years since
the program was funded, 85.0% of patients
completed treatment within 12 months.
• Was the program successful?
Example: TB – Completion of Treatment
• The State felt that the target of 93.0% treatment
completion within 12 months was not reached
and therefore the program had failed.
• The program staff, however, were confident
that the program was a success because only
74.0% of patients had completed treatment
within 12 months in the three years before the
program was funded.
• Who is correct?
Example: TB – Completion of Treatment
• Was the program a success or a failure?
• What program management issues does this
example present?
• What information is needed to make
management decisions for the way forward?
• How could evaluation have helped in this
case?
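Before answering, it helps to put the three figures side by side. A minimal sketch in Python, using only the percentages given in the example:

```python
# Percentages from the treatment-completion example above.
baseline = 74.0   # completion in the three years before funding
observed = 85.0   # completion in the three years since funding
target = 93.0     # program goal

gain = observed - baseline               # percentage points gained
shortfall = target - observed            # points still short of target
gap_closed = gain / (target - baseline)  # share of baseline-to-target gap closed

print(f"Gained {gain:.1f} points; {shortfall:.1f} points short of the target")
print(f"Closed {gap_closed:.0%} of the baseline-to-target gap")
```

Against the 93.0% target the program fell short, but it closed well over half of the baseline-to-target gap; which reading is “correct” depends on the evaluation question being asked.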
Remember
• The apparent success or failure of a program
or activity must always be closely examined
• What you measure will determine what you
are able to find out
• Evaluation can help us to do things differently
and better understand the why and how of
program/activity success
Summary
• Evaluation is an activity that helps in program
management
• Evaluation involves assessing a program or
activity to find out:
• What has been achieved
• What progress has been made
• What the successes and challenges are
• What difference has been made by the
program
Types of Evaluation
Types of Evaluation
• Formative Evaluation - planning an effective activity
• Process Evaluation - determining if the activity was implemented as intended
• Outcome Evaluation - determining if the activity caused outcomes
• Impact Evaluation - determining broader impacts
When to Evaluate?
A program can be evaluated at any point in its life cycle, from conception to completion:
• Planning a NEW program
• Assessing a DEVELOPING program
• Assessing a STABLE, MATURE program
• Assessing a program after it has ENDED
Formative Evaluation
• Collects data describing the needs of a
system or population, including the needs
to be addressed by a program or activity.
• Answers questions such as:
• How should the activity be designed or
modified to address participants’ needs?
• What can we learn from pilot-testing our
approach?
• Are the materials we are going to use
appropriate?
Process Evaluation
• Collects more detailed data about the quality
of the activity, factors that affected quality,
and differences between intended and actual
delivery of the activity.
• Answers questions such as:
• Was the activity implemented as intended?
• Did the activity reach the intended audience?
• Why were there differences between
intentions and actual delivery?
Outcome Evaluation
• Collects data to determine if, and by how
much, program activities or services
achieved their intended outcomes among the
targeted population (often with a comparison
or control group).
• Answers questions such as:
• Did the activity result in the expected outcomes?
• Can we attribute observed changes among the
targeted population to the activity?
• Can we indicate what might happen in the
absence of the activity?
Impact Evaluation
• Collects data about a population or region
over time to establish a causal association
between programs and what they aimed to
achieve beyond the outcomes on individuals
targeted by the program(s)
• Answers questions such as:
• What long term effect does the activity,
combined with other initiatives, have?
Selecting an Appropriate Evaluation
Method
Criteria for Selecting Evaluation Method
• What evaluation question needs to be
answered?
• Who needs the data?
• What resources are available for evaluation?
What Information Is Needed?
• Different stakeholders or users have different information
needs based on how they will use the information.
• Information needs also vary at the different stages of a
program and the type of evaluation being conducted.
Input (Resources): staff, funds, materials, facilities, supplies
Activities (Interventions, Services): trainings, services, education, treatments, interventions
Output (Immediate Effects): # staff trained, # condoms distributed, # test kits distributed, # clients served, # tests conducted
Outcomes (Intermediate Effects): provider behavior, risk behavior, service use, behavioral/clinical outcomes, quality of life
Impact (Long-term Effects): TB incidence/prevalence, social norms, STI incidence/prevalence, AIDS morbidity/mortality, economic impact

Process evaluation examines inputs, activities, and outputs; outcome evaluation examines outcomes; impact evaluation examines impact.
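For readers who find structure easier to follow in code, the same logic-model chain can be written down directly. A minimal sketch in Python; the stage names and examples are taken from the table above, and the mapping of evaluation types to stages follows the groupings shown there:

```python
# Stages of the logic model, from resources to long-term effects.
logic_model = {
    "input":      ["staff", "funds", "materials", "facilities", "supplies"],
    "activities": ["trainings", "services", "education", "treatments", "interventions"],
    "output":     ["# staff trained", "# test kits distributed", "# clients served"],
    "outcomes":   ["provider behavior", "risk behavior", "service use", "quality of life"],
    "impact":     ["TB incidence/prevalence", "social norms", "economic impact"],
}

# Which type of evaluation examines which stages.
evaluation_scope = {
    "process evaluation": ["input", "activities", "output"],
    "outcome evaluation": ["outcomes"],
    "impact evaluation":  ["impact"],
}
```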
Levels of Evaluation Effort
Monitoring and Evaluation Pipeline (number of programs at each level of effort):
• All - input/output monitoring
• Most - process evaluation
• Some - outcome evaluation
• Few* - impact evaluation
Adaptation of Rehle/Rugg M&E Pipeline Model, FHI 2001
What Information Is Most Important?
How do you prioritize your evaluation questions?
• Identify the use for the information
• Consider the feasibility of answering questions given the available resources
• Determine what you “need to know” vs. what is “nice to know”
Three Types of Questions
• Descriptive Questions - “What is”
• Describe a program/process
• Normative Questions – Compare “What
is” to “What should be”
• Measuring against a stated standard
• Cause and Effect Questions – “Effect”
• Measure before and after – with and without
comparisons
Main Evaluation Question/Issue
Planning matrix columns: Questions | Sub-Questions | Type of (Sub)Questions | Measures or Indicators | Target or Standard | Baseline Data?
Indicators
• A measurable piece of information that helps
answer your evaluation question
• Indicators are signposts, markers or clues of
change; they are intended to indicate whether
objectives are being achieved
• Provide a reference point for program or
project planning, management, and reporting
• Relates to the objectives of your evaluation
• Can be related to processes or outcomes
Indicators
• Also referred to as a performance measure in
the NTIP (National Tuberculosis Indicators Project)
• Can use existing ones or develop ones tailored
to a particular question
• Allow you to assess trends and identify problems
• Can act as early warning signals for corrective
action
• An indicator is not the actual result, or the data
collection method or tool
Measures vs. Indicators
• Measures are descriptions of program
functioning, while indicators measure one
aspect of a program or a project that is usually
directly related to particular objectives.
• Measures alone do not necessarily provide
enough information to indicate how effective a
program is in reaching its intended results
• Anything can be measured; however, not
every measure is an indicator of program
functioning
Example
• You are buying a used car and want to know
what condition the car is in:
• You can measure many things when you
inspect the car:
• Tire tread
• How clean the oil is
• Wear on brake pads
• Rust on body of car
OR
• You can examine the number of miles the car
has been driven
Example
• You are developing indicators to measure HIV
testing within your TB program:
• You can measure many things
• # of people tested
• # of people diagnosed
• # of test kits purchased
OR
• You can examine the percent of program
participants aged 15–49 receiving HIV test
results in the past 12 months
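A quick sketch of how such an indicator reduces to a numerator and a denominator. This is illustrative Python with hypothetical record fields, not an actual NTIP definition:

```python
from datetime import date, timedelta

# Hypothetical participant records; the field names are illustrative only.
participants = [
    {"age": 32, "hiv_result_received": date(2024, 11, 3)},
    {"age": 47, "hiv_result_received": None},
    {"age": 19, "hiv_result_received": date(2023, 1, 15)},
    {"age": 55, "hiv_result_received": date(2024, 6, 1)},
]

today = date(2024, 12, 31)
cutoff = today - timedelta(days=365)

# Denominator: program participants aged 15-49.
denominator = [p for p in participants if 15 <= p["age"] <= 49]
# Numerator: those who received an HIV test result in the past 12 months.
numerator = [p for p in denominator
             if p["hiv_result_received"] and p["hiv_result_received"] >= cutoff]

indicator = 100.0 * len(numerator) / len(denominator)
print(f"HIV testing indicator: {indicator:.1f}%")  # 33.3% for this toy data
```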
Key Elements of a Good Indicator
• Specific: An indicator must be related to the conditions that the program/project wishes to change
• Measurable: An indicator must be quantifiable and allow for analysis of the data
• Appropriate: An indicator must be necessary and relevant to the information needs of the persons who will use it
• Realistic: An indicator must be attainable at a reasonable cost using appropriate collection methods
• Time-based: An indicator must have a clearly stated time period for collection
Examples of Indicators (from NTIP)
• proportion of patients with newly diagnosed
TB, for whom 12 months or less of treatment
is indicated, who complete treatment within
12 months
• proportion of contacts to sputum AFB
smear-positive TB patients with newly diagnosed
latent TB infection (LTBI) who start treatment
• percent of cooperative agreement recipients
that have a TB training focal point
Targets
• Reasonable expectations about what
“success” means
• Should create one for each indicator
• Based on the current status of an activity
• Consider program requirements
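Putting indicators and targets together: a minimal sketch (hypothetical indicator names and values, not actual program targets) that flags which indicators met their targets:

```python
# Hypothetical indicator values vs. targets, in percent.
indicators = {
    "treatment_completion_12mo": {"value": 85.0, "target": 93.0},
    "ltbi_contacts_starting_treatment": {"value": 91.0, "target": 88.0},
}

for name, d in indicators.items():
    status = "met" if d["value"] >= d["target"] else "not met"
    print(f"{name}: {d['value']:.1f}% vs target {d['target']:.1f}% -> {status}")
```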
Collecting Evaluation Data
Why Use Data?
• Data can help your program evaluate its
effectiveness and keep the focus on
program outcomes
• Data can provide feedback to stakeholders
about what is working, what needs to continue,
and what can be reduced
• Data can convince stakeholders of the need to
change
• Data can uncover problems that might otherwise
remain invisible
Types of Data
• Quantitative Data
• Numbers
• More objective
• Epidemiological data
• Qualitative Data
• Words and/or concepts
• More subjective
• Observations
• Both can be used in evaluation
Data Collection Methods
• Quantitative Data Collection
• Surveys/Questionnaires
• Secondary data
• Surveillance data
• Epidemiological data
• Qualitative Data Collection
• Focus groups
• Interviews/Case studies
• Observations
• Mixed Methods
Comparison of Data Collection Methods
Surveys
• Advantages: anonymity possible; can administer to groups; efficient and cost-effective
• Disadvantages: forced choices limit responses; wording may bias responses; impersonal

Individual interviews
• Advantages: can build rapport; can probe for more information; can get breadth/depth of information
• Disadvantages: time consuming; expensive; interview style may bias responses

Focus groups
• Advantages: can get breadth and depth of information in a short time frame; can convey key information about the program
• Disadvantages: need a trained facilitator; time consuming to analyze responses

Observation
• Advantages: can assess fidelity as activities occur
• Disadvantages: interpretation of behavior is difficult; expensive and time consuming

Document review
• Advantages: information already exists; doesn't disrupt the program
• Disadvantages: depends on quality of information; time consuming
Data Sources
Data Sources
Two Options:
1. Collect information from existing sources:
surveys, program records, databases,
documents, etc.
2. Collect new data
Data Sources
Where or from whom will you get data for each of
your indicators to answer your evaluation questions?
Data Sources and Examples:
• Documents: medical records, meeting minutes, surveillance reports, interview records
• Individuals: staff, providers, partnership members
• Observations: data obtained from observations of staff, environment (reception area), office flow, activities, etc.
Existing vs. New Data
• Be aware that gathering and analyzing new
data can be expensive and time consuming
• Before making any plans to gather new data
make sure to check if there are existing data
sources that have the information you need
• If no existing data sources provide the
information you need, then you may need to
consider collecting new data
Data Needs and Sources
• Needs
• What data do we need to achieve objectives?
• Who will use the data?
• Does the system do what it is supposed to
do?
• What is the timeframe for data use?
Good Data Sources
• Provide the necessary information to answer
your evaluation questions
• Are feasible to access given the available
resources
• Offer confidence in the quality of information
gathered
• Are relevant to the time period you are
interested in
Existing TB Data Sources
• Routinely collected data:
• Record forms at the health facility
• Record and report forms at the
city/county/state level
• Record and report laboratory forms
• Census / Vital statistics
• Surveillance / BRFSS
• NHANES / NHIS
Existing TB Data Sources
• Other data sources at various levels:
• Work plans and budgets
• Annual reports
• Audits
• Meeting reports
• Planning documents
• Procurement records
• Storage facility stock cards
Conducting an Evaluation
Essential Steps to Evaluation
1. Identify program goals and objectives
2. Define the scope of the evaluation
3. Define evaluation questions & indicators
4. Define methods
5. Design instruments and tools
6. Carry out the evaluation
7. Analyze data and write a report
8. Disseminate and use data
source: FHI, Impact, USAID manual
Evaluation Process
Diagram: Defining What to Evaluate → Developing Indicators → Collecting, Analyzing Data, with Involvement of Stakeholders throughout
Gathering the Information You Need
1) Determine your evaluation question
2) Identify the type of data you need to answer
your questions
3) Identify sources where you can find the
information you need
4) Determine the methods you will use to
review existing information or collect new
data
5) Identify the tools you will use to collect new
data
Evaluation Barriers
Evaluation Barriers
• Unrealistic targets/goals
• Objectives not linked to program activities
• Not meaningful to the program
• Measures poorly defined – not useful
Overcoming Barriers
• Include evaluation during planning phase
• Involve key stakeholders from outset
• Establish realistic goals/objectives with time frames
• Establish appropriate, well-defined evaluation measures
• Provide training and/or technical assistance
• Build in feedback loops to program (quality improvement)
• Establish baseline
• Build into existing work processes
If We Remember Nothing Else …
• Evaluation is not surveillance or
research
• Evaluation is an activity to help us
make decisions about a program and
to document its improvements
Thank you
Acknowledgements
• DTBE/FSEB/Program Evaluation Team (PET)
• Awal Khan
• Christina Dahlstrom
• Judy Gibson
• Lakshmy Menon
• Brandy Peterson
• Lauren Polansky
• DTBE/FSEB/Field Services Teams I & II
• Greg Andrews
• Dan Ruggiero
• Bruce Bradley
• Gail Burns-Grant
• Alstead Forbes
• Regina Gore
• Andy Heetderks
• Mark Miner
• Vic Tomlinson
• Dawn Tuckey