
EVALUATION
Presented by:
Elaine A. Borawski, Ph.D.
Center for Health Promotion Research
Dept. of Epidemiology and Biostatistics
Case Western Reserve University
School of Medicine
CFHS Evaluation Workshop, January 2000
PRESENTATION OVERVIEW
• Demand for outcome evaluations and program accountability.
• Different types of evaluations.
• Developing goals and objectives.
• How to operationalize (measure) objectives.
• Preparing your CMH grant application.
Demand
• Increasing demand for agencies to not only be accountable, but demonstrate that what they do is effective.
• The number of people served is no longer sufficient evidence of accountability.
• Funding sources (Federal, State, Local and Foundations) are tying money to outcomes.
HAVE NO FEAR
• No need to fear evaluations.
• A good evaluation can be your best tool for program development and refinement.
• You know your programs the best.
• If you believe your program works, you just need to figure out how to measure it.
• Include evaluation right from the start.
Types of Evaluations
Two Basic Types of Evaluations:
• Program Monitoring and Accountability
• Impact/Outcome Assessment
Monitoring/Accountability
Addresses Two Key Questions:
• Is the program reaching the appropriate target population?
• Is delivery of the program/services consistent with the program design specifications (are we doing what we said we would do?)
Purpose of Monitoring
• Accountability
  • You have a responsibility to demonstrate that you are on top of your project and can document that you are doing what you said you would do.
• Program Management
  • No matter how well planned a program may be, unexpected results and unwanted side effects emerge; discovering these early on is critical.
Types of Program Monitoring
• Program Coverage (is the program reaching its intended target population?)
• Program/Service Delivery (congruence between the program plan and actual program implementation)
• Fiscal Monitoring (how is the money spent? Do the benefits justify the cost?)
• Legal/Regulatory Monitoring
Program Coverage
• Essential to your proposal is defining your target population.
• But you must also have a procedure for determining the extent to which targets actually participate.
• Two issues: coverage and bias.
  • Coverage: does overall participation reach the expected levels of participation?
  • Bias: are some subgroups covered more densely than others?
Program Coverage
• Proposal Points:
  • Clearly define your target population.
  • Clearly describe how you will keep track of program participants (record keeping).
  • Describe how you will periodically monitor participation rates and patterns.
  • Describe how you will keep track of those who drop out of the program/service (attrition rates); a simple calculation sketch follows below.
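Coverage, subgroup bias, and attrition all reduce to simple ratios once the record keeping above is in place. The following is a minimal sketch in Python, assuming a hypothetical participant roster; all field names and figures are illustrative and not part of the workshop materials.

```python
# Illustrative coverage, bias, and attrition arithmetic for program monitoring.
# Field names and numbers are hypothetical, not from the workshop materials.
from dataclasses import dataclass

@dataclass
class Participant:
    subgroup: str        # e.g., "urban" vs. "rural"
    dropped_out: bool    # left the program before completion

def coverage_rate(n_enrolled: int, n_target: int) -> float:
    """Share of the defined target population actually reached."""
    return n_enrolled / n_target

def attrition_rate(roster: list) -> float:
    """Share of enrolled participants who dropped out."""
    return sum(p.dropped_out for p in roster) / len(roster)

def coverage_by_subgroup(roster: list, subgroup_targets: dict) -> dict:
    """Coverage within each subgroup; large gaps between groups suggest bias."""
    enrolled = {}
    for p in roster:
        enrolled[p.subgroup] = enrolled.get(p.subgroup, 0) + 1
    return {g: enrolled.get(g, 0) / n for g, n in subgroup_targets.items()}

roster = [Participant("urban", False)] * 180 + [Participant("rural", True)] * 20
print(coverage_rate(len(roster), n_target=400))                      # 0.5 overall coverage
print(attrition_rate(roster))                                        # 0.1 attrition
print(coverage_by_subgroup(roster, {"urban": 250, "rural": 150}))    # 0.72 vs. ~0.13 -> coverage bias
```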
Delivery of Services
• When a program fails to show an impact, it may be due to failure to deliver the intervention or service in the ways specified in the program design.
• Three types of implementation failures:
  • No treatment / not enough treatment
  • Wrong treatment delivered
  • Treatment is unstandardized/uncontrolled or varies across target populations.
Service Delivery
• Proposal Points:
  • Clearly define each service/program component.
  • What are the tangible features of the program and its setting?
  • Describe how you will monitor each component’s implementation.
Fiscal/Legal
• Fiscal Monitoring
  • Can you match your dollars to specific participants or programs?
  • How will you assess the cost/benefit of the program? (A simple calculation sketch follows below.)
• Legal/Regulatory Monitoring
  • Do you need to comply with any legal or regulatory requirements?
  • How will you monitor and report these?
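The fiscal questions above usually come down to cost per participant and a benefit/cost ratio. A minimal sketch with hypothetical dollar figures, not drawn from any actual program:

```python
# Illustrative fiscal-monitoring arithmetic; all dollar figures are hypothetical.
def cost_per_participant(total_cost: float, n_served: int) -> float:
    return total_cost / n_served

def benefit_cost_ratio(estimated_benefits: float, total_cost: float) -> float:
    """A ratio above 1 means estimated benefits exceed program cost."""
    return estimated_benefits / total_cost

program_cost = 50_000.00        # example annual program budget
participants_served = 200
averted_costs = 65_000.00       # example estimate of avoided downstream costs

print(cost_per_participant(program_cost, participants_served))   # 250.0 per participant
print(benefit_cost_ratio(averted_costs, program_cost))           # 1.3
```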
Impact/Outcome Assessment
Does the intervention or program produce the intended results?
PREREQUISITES
• Objectives must be clearly defined and well articulated before the program begins.
• The evaluation plan (and how outcomes will be measured) must be on board when your program begins.
• The program/service/intervention must be sufficiently well implemented.
If you wait….
• You may miss the most significant change your program makes.
• You may find you can’t measure the program as you’ve defined it.
• You may miss the opportunity to make changes in your program that could benefit your participants.
• Some data cannot be collected retrospectively.
Where to start?
• Evaluate your program goals and objectives.
  • Are they realistic and attainable?
  • Are they measurable?
  • Can you realistically collect the data needed?
  • Do the goals and objectives actually match the program plan?
• If not, revise your goals and objectives.
Doomed to Fail
Program: Early intervention with pregnant teen mothers, to assure prenatal care and reduce premature and low-birthweight births.
Goal: To decrease the rate of prematurity and low-birthweight births in the county.
Why doomed? A county-wide rate depends on far more than this one program, so it is neither an attainable nor a measurable test of the program itself.
Goals and Objectives
• Goals: Abstract, idealized statements of desired outcomes.
  • Increase the rate of prenatal care among teenage mothers.
• Objectives: Operationalization of desired outcomes with measurable criteria for success.
  • Participants will meet at least 75% of scheduled prenatal appointments (a calculation sketch follows below).
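Operationalizing an objective means turning it into a number that can be checked against the stated criterion. A minimal sketch of the 75%-of-appointments criterion above, using made-up visit records:

```python
# Illustrative check of the "at least 75% of scheduled prenatal appointments" objective.
# Visit counts are made-up examples.
def meets_objective(kept: int, scheduled: int, threshold: float = 0.75) -> bool:
    """True if the participant kept at least `threshold` of scheduled visits."""
    return kept / scheduled >= threshold

# One (kept, scheduled) record per participant
records = [(9, 10), (6, 10), (8, 10), (10, 12)]

share_meeting = sum(meets_objective(k, s) for k, s in records) / len(records)
print(share_meeting)   # 0.75 -- three of the four example participants met the objective
```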
Program vs. Impact Goals
Program Goal: Participants will meet at least 75% of scheduled prenatal appointments.
Impact Goal: Participants will have a significantly higher number of prenatal visits when compared to other teen mothers from the same geographic and sociodemographic backgrounds.
Better than status quo
• Your goals need to be defined in terms of both program objectives and impact objectives.
• For the strongest impact assessments, you need a comparable population who did not receive the treatment/program (controls).
• These data may be derived from existing data sources.
Types of Study Designs
• Ideal: Randomized case/control study
  • One group = control = alternative treatment or delayed treatment.
  • Other group = case/experimental = receives the treatment/intervention.
  • Outcomes are observed in both groups.
  • Any differences are attributed to the treatment or intervention.
• Often difficult to attain, particularly randomization.
Control Samples
• Compare against reported data.
• No-treatment controls are not necessary.
• Compare against a group who will get the program at a later time (delayed intervention).
• Compare against a group who gets a different type of program/intervention.
One Shot Design

            Pretest   --INTERVENTION-->   Post-test (Initial Change)
Case        (none)                        X2
Pre/Post Test Design (no control)

            Pretest   --INTERVENTION-->   Post-test (Initial Change)
Case          X1                          X2
Case-Control with Pre/Post

            Pretest   --INTERVENTION-->   Post-test (Initial Change)
Case          X1                          X2
Controls      Y1                          Y2
(Intervention delivered to the case group only.)
Sustained Change

            Pretest   --INTERVENTION-->   Post-test (Initial Change)   Post-test (Sustained Change)
Case          X1                          X2                           X3
Controls      Y1                          Y2                           Y3
(Intervention delivered to the case group only.)
What will this tell us?
• With a case/control comparison and a pre/post test design, we can make the assumption that any change that occurs in the intervention group (and not in the controls) is due to our program intervention.
• Any element of the design that is left out calls this assumption into question.
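Put another way, the estimated program effect under the case-control pre/post design is the change in the intervention group minus the change in the controls, (X2 - X1) - (Y2 - Y1). A minimal sketch with hypothetical group means:

```python
# Illustrative change attribution under the case-control pre/post design.
# X = intervention group means, Y = control group means; all numbers are hypothetical.
def change(pre: float, post: float) -> float:
    return post - pre

x1, x2 = 42.0, 58.0    # intervention group: pretest, first post-test (e.g., a knowledge score)
y1, y2 = 41.0, 45.0    # control group: pretest, first post-test

initial_effect = change(x1, x2) - change(y1, y2)
print(initial_effect)      # 12.0 -- the change not explained by time alone

# Sustained change uses the second post-test (X3, Y3) the same way.
x3, y3 = 55.0, 46.0
sustained_effect = change(x1, x3) - change(y1, y3)
print(sustained_effect)    # 8.0 in this example
```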
Program Example
• Teen Pregnancy Prevention Program
• Primarily school-based programs
• Target population: 7th graders
• Includes both safer-sex and abstinence-centered education programs.
• Classes conducted with both same-gender and mixed classes.
Teen Pregnancy Prevention
Goals:
• To reduce the proportion of students who engage in sexual activity through continued abstinence or postponing sexuality (measured as both intention and actual behavior).
• To increase communication between parents and children on subjects of dating and sexual relationships.
Teen Pregnancy Prevention
• To increase knowledge about transmission of AIDS/HIV.
• To reduce supportive beliefs of stereotypic sexual behaviors.
• To increase knowledge and understanding of linkages between substance use and sexual behavior.
• To increase students’ sense of personal control in resisting sexual urges and advances.
Program Goals
 80%
of abstinent students will report
continued abstinence from the beginning
to the end of the program period.
 50%
of sexually active students will report
being abstinent during the program period.
 But
what % would have remained
abstinent without the program?
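That last question is exactly what the control schools answer: compare the continuation rate among intervention students with the rate among comparable students who have not yet received the program. A minimal sketch with hypothetical counts:

```python
# Illustrative comparison of abstinence-continuation rates; all counts are hypothetical.
def continuation_rate(still_abstinent: int, abstinent_at_pretest: int) -> float:
    return still_abstinent / abstinent_at_pretest

intervention_rate = continuation_rate(still_abstinent=164, abstinent_at_pretest=200)   # 0.82
control_rate = continuation_rate(still_abstinent=150, abstinent_at_pretest=200)        # 0.75

print(intervention_rate >= 0.80)            # program goal (80%) met in this example
print(intervention_rate - control_rate)     # ~0.07 attributable to the program rather than to time
```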
Study Design
• Participating schools were matched on risk behaviors (from previously collected data).
• For each matched pair, the intervention school received the program in the fall, the control school in the spring. (A matching-and-assignment sketch follows below.)
• Pre- and posttest questionnaires were designed to measure the six goals.
• Identifiers were used to match individual pre- and posttest data.
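The matching-and-assignment step can be as simple as sorting schools by a prior risk score, pairing adjacent schools, and randomly picking which member of each pair gets the fall (intervention) slot. A minimal sketch; school names and scores are hypothetical:

```python
# Illustrative matched-pair assignment: sort schools by a prior risk score, pair
# adjacent schools, then randomly assign one of each pair to the fall program
# (intervention) and the other to spring (delayed control).
# School names and scores are hypothetical.
import random

risk_scores = {"School A": 0.31, "School B": 0.58, "School C": 0.29,
               "School D": 0.55, "School E": 0.40, "School F": 0.43}

ranked = sorted(risk_scores, key=risk_scores.get)             # similar schools end up adjacent
pairs = [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked), 2)]

random.seed(2000)
for first, second in pairs:
    fall, spring = random.sample((first, second), 2)          # coin flip within each pair
    print(f"{fall}: fall (intervention)  |  {spring}: spring (delayed control)")
```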
Study Design
• Pre- and posttests were given in both intervention and control schools during the same time period.
• The first posttest is given after the intervention is provided.
• The intervention posttest also included specific questions concerning satisfaction with the program.
Study Design
• Control posttests included questions about any outside programs that students may have attended.
• Booster classes are provided to one-half of the intervention schools.
• The second posttest is given four months after the first posttest to test sustained change.
Your Proposal
• Clearly define your evaluation plan.
• Clearly define goals, program objectives and impact measures.
• Expect to designate 8-10% of your budget to program monitoring and impact assessment.
• Technical and programmatic assistance for all grant recipients is being planned.