Effectiveness of health services


Lecture 7: Evaluation of
interventions
• Types of intervention
• Introduction to social science terminology and
concepts of intervention study design
• Study design
– Experimental
– Quasi-experimental
– Observational
1
Requirements of health care
• Effective
– effectiveness vs efficacy?
• Efficient
– minimize use of resources
• Equitable
– equity in access, use related to need
• Acceptable
– client perception of care
2
Efficacy vs effectiveness
(Definitions from Last’s Dictionary of Epidemiology)
• Efficacy (Can it work?): The extent to which a specific
intervention, procedure, regimen, or service produces a
beneficial result under ideal conditions. Ideally, the
determination of efficacy is based on the results of a
randomized controlled trial.
• Effectiveness (Does it work?): The extent to which a
specific intervention, procedure, regimen, or service, when
deployed in the field, does what it is intended to do for a
defined population. (The main distinction between
effectiveness and efficacy is that effectiveness refers to
average rather than ideal conditions of use.)
3
Types of intervention
• Classified by purpose:
– primary prevention (prevention of onset of
disease)
– secondary prevention (screening, early
detection, and prompt treatment)
– tertiary prevention (of chronic conditions, to
decrease disability and increase quality of life)
4
Types of intervention
• Classified by complexity of technology
involved (technology assessment
paradigm):
– drugs
– devices
– procedures
– systems of care
5
Intervention study or study of an
intervention?
• Intervention study (referring to a study design): An
investigation involving intentional change in some aspect
of the status of the subjects, e.g., introduction of a
preventive or therapeutic regimen, or designed to test a
hypothesized relationship; usually an experiment such as a
randomized controlled trial (Definitions from Last’s
Dictionary of Epidemiology)
• Study of an intervention (referring to the study
purpose): study of a health care intervention; may be
experimental or non-experimental (observational)
6
Level of evaluation
• STRUCTURE: Staff, equipment needed to
deliver intervention.
• PROCESS: is the intervention service
provided as planned? (Interaction between
structure and patient/client)
• OUTCOMES: expected or unexpected
results, either positive or negative.
7
Level of evaluation
• In evaluation of intervention, outcomes are
of primary interest
• To help interpret the results, measures of
structure and process are desirable, e.g.:
– adherence to intervention
– “dose” of intervention actually received
– characteristics of staff who deliver intervention
8
Step 1: intervention objectives
• Specify positive and negative outcomes
expected
• Measurable outcomes
– Changes in natural history
• death, disease, disability, distress
– Behaviors, attitudes (e.g., educational
interventions)
9
Methodological issues in
evaluation of interventions
• Two paradigms:
– epidemiological (clinical and public health
roots)
– social science (sociological roots)
• Two sets of terminology!
10
Internal and external validity of
an intervention study
• Internal validity: The degree to which an observed effect
can be attributed to an intervention.
• External validity: The degree to which an observed effect
that is attributable to an intervention can be generalized to
similar populations and settings (generalizability). Note:
both internal and external validity are aspects of the
validity of a study and should be distinguished from the
validity of measurements.
11
Threats to internal validity
• History
– extraneous events (e.g. breast cancer screening)
• Maturation
– aging (e.g., drug abuse treatment)
• Testing
– e.g., effects of pretesting
• Instrumentation
• Regression (to the mean)
• Selection
• Attrition
12
Threats to external validity
• Is the intervention equally effective in different
populations, including more naturalistic
applications? Usually not. Why?
– Methodological
• Interaction of intervention with pre-testing
• Reactive effects (to testing) - Hawthorne effects
– Differences in intervention
• Characteristics of intervention personnel
• Process of implementation
13
Study designs
• Experimental
– investigator has complete control over
allocation and timing of intervention
– usually randomized
• Quasi-experimental
– investigator has some control over allocation
or timing of intervention
• Observational
– investigator has no control
14
Diagramming Intervention
Evaluation Designs
Campbell and Stanley
• X = program
• O = measurement
• R = randomization
15
Randomized (Experimental)
Designs
• Randomized pre-test post-test control group
design
R  O1  X  O2
R  O3     O4
• Post-test only control group design
R  X  O1
R     O2
16
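To make the notation concrete: in the randomized pre-test post-test design, O1/O3 are the pre-tests and O2/O4 the post-tests in the intervention and control groups. The following Python sketch shows one common analysis (ANCOVA-style regression of the post-test on group, adjusting for the pre-test); the DataFrame and column names (group, pre, post) are hypothetical, and other analyses (e.g., change scores) are equally legitimate.

# Hypothetical analysis of a randomized pre-test post-test control group design
# (R O1 X O2 / R O3 O4). Assumes a pandas DataFrame df with one row per subject
# and columns: group ("control" or "intervention"), pre, post.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_effect(df: pd.DataFrame):
    # Regress the post-test on group, adjusting for the pre-test; with "control"
    # first alphabetically it is the reference level, so the group coefficient
    # estimates the intervention effect.
    model = smf.ols("post ~ group + pre", data=df).fit()
    return model.params, model.conf_int()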
Quasi-experimental study designs
• Investigator has “some control” over timing
or allocation of intervention
– Non-randomized or quasi-randomized trials
– Non-equivalent control group designs (MAY
OR MAY NOT BE RANDOMIZED):
• pre-test and post-test
• post-test only
• Solomon 4 group
17
Some quasi-experimental designs
Pre-test post-test non-equivalent control
group design
O1  X  O2
O3     O4
Recurrent institutional cycle
X O1
O2 X O3
18
Solomon four-group design
R  O1  X  O2
R  O3     O4
R      X  O5
R         O6
19
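The point of the Solomon four-group design is that it crosses pretesting (groups 1 and 2) with the intervention (groups 1 and 3), so the post-tests O2, O4, O5 and O6 can be analyzed as a 2x2 factorial to check whether the pre-test itself, or its interaction with the intervention, influenced the outcome (the "testing" threat listed earlier). A minimal sketch, assuming a hypothetical DataFrame with 0/1 indicators treated and pretested and a post column:

# Hypothetical 2x2 factorial analysis of Solomon four-group post-tests.
# Assumes DataFrame df with columns: post, treated (0/1), pretested (0/1).
import statsmodels.formula.api as smf

def solomon_factorial(df):
    # A significant treated:pretested interaction suggests the pre-test
    # changed how subjects responded to the intervention.
    model = smf.ols("post ~ treated * pretested", data=df).fit()
    return model.summary()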
Examples of pre-test post-test non-equivalent control group design
• Stanford 5-city study of CHD prevention
• Intervention included mass media education
and group interventions for high-risk
individuals
• 5 cities selected with similar characteristics
– those with a shared media market were allocated
to intervention
– isolated cities allocated to control group
20
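With non-randomized communities like these, a common way to estimate the effect is to contrast the pre-to-post change in the intervention group with the change in the control group, i.e. (O2 - O1) - (O4 - O3), a difference-in-differences style comparison. A minimal sketch with made-up summary means (not the actual Stanford results):

# Difference-in-differences estimate for a pre-test post-test
# non-equivalent control group design, using hypothetical group means.
def diff_in_diff(o1, o2, o3, o4):
    # o1, o2: intervention group pre/post means; o3, o4: control group pre/post means
    return (o2 - o1) - (o4 - o3)

# Illustrative (invented) CHD risk-score means:
effect = diff_in_diff(o1=10.0, o2=8.5, o3=10.2, o4=9.8)  # = -1.1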
Other designs: recurrent
institutional cycle design
• Finnish mental hospital study of dietary
intervention to prevent CHD
• 2 hospitals selected, received intervention
sequentially
• Useful design if considered unethical to
withhold intervention
21
Observational designs
• Investigator has NO control over allocation
or timing of intervention:
– Cross-sectional (after only)
– Separate sample pre- post-test
– Time series (trend) designs (single or multiple)
– Cohort studies
– Panel studies
22
Example of trend study:
Health insurance in Quebec
• 1961: universal hospital insurance
– included ER care for accidents
• 1970: universal health insurance (Medicare)
– added MD care including hospital outpatient
clinics and ERs
23
Example of trend study:
Health insurance in Quebec
• Population surveys before and after
• Effects on:
– use of physician services by general population
– physician workload
– use of emergency rooms
– hospitalization and surgery
24
MD visits/person/year by income
(household surveys)
[Bar chart: pre vs post, for all visits and by household income: <3000, 3000–, 5000–, 9000–, 15000+]
25
MD visits/person/year (household surveys)
[Bar chart: pre vs post, by site of visit: all visits, office, hospital OPD/ER, home]
26
MD visits/person/year by income
(household surveys)
[Bar chart: pre vs post, for all visits and by household income: <3000, 3000–, 5000–, 9000–, 15000+]
27
% adults with cough 2+ weeks who consulted
MD (household surveys)
[Bar chart: pre vs post, by household income: <$5000, $5000–$9,000, and total]
28
% children (<17) with tonsillitis or sore throat
and fever who consulted MD
(household surveys)
[Bar chart: pre vs post, by household income: <$5000, $5000–$9,000, and total]
29
% pregnancies with visit in first trimester
(household survey)
[Bar chart: pre vs post, by household income: <$5000, $5000–$9,000, and total]
30
% tried to contact MD before ED visit;
of these, % successful (6 hospital sample)
[Bar chart: pre vs post; tried to contact, and among attempts: spoke to MD, office/answering machine only, unsuccessful]
31
Time series designs
Time series design
O1 O2 O3 X O4 O5 O6
Multiple time series design
O1 O2 O3 X O4 O5 O6
O7 O8 O9   O10 O11 O12
32
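A common analysis for a single interrupted time series (O1 O2 O3 X O4 O5 O6) is segmented regression: model the outcome on time, an indicator for the post-intervention period, and time elapsed since the intervention, which separates an immediate level change from a change in trend. A minimal sketch with hypothetical column names (y, t, post, t_after); a fuller analysis would also handle autocorrelation (e.g., ARIMA or robust standard errors):

# Segmented regression sketch for an interrupted time series.
# Assumes DataFrame ts with columns: y (outcome per period), t (1..n),
# post (0 before the intervention, 1 after), and
# t_after (periods elapsed since the intervention, 0 before it).
import statsmodels.formula.api as smf

def segmented_regression(ts):
    # Coefficient on post = immediate level change at the intervention;
    # coefficient on t_after = change in slope after the intervention.
    model = smf.ols("y ~ t + post + t_after", data=ts).fit()
    return model.params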
Example of time series study:
Tamblyn et al, 2001
• Evaluation of prescription drug cost-sharing
among poor and elderly
• Methods:
– Trend study: multiple pre- and post-measurements
– Cohort study
33
Source: Tamblyn et al, JAMA 2001, 285(4): 421-429
34
Source: Tamblyn et al, JAMA 2001, 285(4): 421-429
35
Some Weak Observational
Designs
• One-shot case-study
X O
• Static group comparison:
X  O1
   O2
36
Time-series design: Home care in
terminal cancer
• Evaluation of home-hospice programme in
Rochester, NY
• Expansion of home-care benefits in 1978
• Hypothesis: home-hospice care in last
month of life reduces hospital days and
costs
• Data sources: Linkage of tumor registry and
health insurance claims databases
37
Epidemiological observational
analytical designs
• Difference in independent and dependent
variables:
– Studies of risk factors:
• independent variable: risk factor
• dependent variable: disease
– Studies of interventions:
• independent variable: intervention
• dependent variable: outcome
40
Cohort study
• Selection of controls: could they receive
either treatment?
• Example: medical vs surgical treatment of
CHD
• Sources of bias:
– confounding by indication
– selection bias
– detection bias (etc.)
41
Cohort study
• Cohorts with and without “exposure”
(intervention) followed to determine
outcomes
• Control cohort - concurrent or historical
(confounding by changes over time in
patient population, aspects of treatment
other than intervention; measurement of
confounders)
42
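In a cohort study of an intervention, the basic comparison is the incidence of the outcome in the cohort that received the intervention versus the cohort that did not, e.g., as a risk ratio. A minimal unadjusted sketch with hypothetical 2x2 counts (a real analysis would also adjust for the confounders noted above):

# Risk ratio from a cohort study of an intervention (unadjusted sketch,
# hypothetical counts).
import math

def risk_ratio(a, b, c, d):
    # Exposed (intervention) cohort: a with outcome, b without.
    # Unexposed (comparison) cohort: c with outcome, d without.
    rr = (a / (a + b)) / (c / (c + d))
    # Approximate 95% CI for log(RR).
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return rr, (math.exp(math.log(rr) - 1.96*se), math.exp(math.log(rr) + 1.96*se))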
Example of cohort study
• Do HMOs reduce hospitalization in
terminal cancer patients, during 6 months
before death?
• Administrative databases and tumor registry
from Rochester, NY
• Cancer deaths in 100 pairs of HMO
members and non-members
• Matched by age, cancer site, months from
diagnosis to death
43
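Because the HMO members and non-members are individually matched, one natural analysis keeps the pairing and compares the outcome (e.g., hospital days in the last 6 months of life) within each pair. The sketch below is only one way such matched data could be analyzed, with hypothetical arrays of per-pair values (not the published analysis):

# Paired comparison of hospital days within matched HMO / non-HMO pairs
# (hypothetical data; one possible analysis).
import numpy as np
from scipy import stats

def paired_comparison(hmo_days, non_hmo_days):
    # Mean within-pair difference and a paired t-test.
    diff = np.asarray(hmo_days) - np.asarray(non_hmo_days)
    t, p = stats.ttest_rel(hmo_days, non_hmo_days)
    return diff.mean(), t, p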
44
Case-control study
• Cases (with outcome) compared to controls
(without outcome) with regard to (previous)
intervention
• Limited to single, categorical outcome
• Sources of bias
– Confounding by selection
– Confounding by indication
– Detection bias
– (For screening programs) Separation of screening tests
from tests done after symptoms appear
45
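Because a case-control study starts from the outcome, the effect of the intervention is estimated as an odds ratio: the odds of prior exposure to the intervention among cases versus controls. A minimal unadjusted sketch with hypothetical counts (real studies address the biases listed above by matching or adjustment):

# Odds ratio from a case-control study of an intervention (unadjusted sketch,
# hypothetical counts).
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Woolf approximate 95% confidence interval on the log scale.
    se = math.sqrt(1/exposed_cases + 1/unexposed_cases
                   + 1/exposed_controls + 1/unexposed_controls)
    return or_, (math.exp(math.log(or_) - 1.96*se), math.exp(math.log(or_) + 1.96*se))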
Case-control study: Examples
• Screening programs:
– screening Pap test and invasive cervical cancer
– screening mammography and breast cancer
deaths
– screening sigmoidoscopy and colon cancer
deaths
• Vaccine effectiveness (e.g., BCG)
• Neonatal intensive care and neonatal deaths
46
Considerations in selection of a
study design
• Cost
• Feasibility
• Ethical issues
• Internal validity
• External validity
• Credibility
47