Performance Management and
Use of Data to Target Areas in
Need of Improvement
Scott P. Novak, Ph.D.
RTI International
Overview
Today:
• Examine new ways to use data to track client outcomes and program
performance
Key Terms
• Performance Monitoring: Collection of client and/or organizational data
• Performance Management: Use of data for continuous quality
improvement efforts
Types of Performance Data
• Outcome data: Measures the impact or benefits of the program on the
client (e.g., abstinence, arrests)
• Output data: Measures the services provided and program performance
(e.g., number of clients served, retention)
Data Collection Mechanisms
• Administrative data:
– Demographic, claims or other forms of data
routinely collected on clients
– Electronic health records, insurance claim
data, patient records
• Consumer data:
– Surveys of clients (self-administered or provider-administered)
Desirable Qualities of
Performance Data
• Quantifiable
• Easy to explain
• Show changes over time (both positive
and negative changes)
• Minimize unintended consequences
Client Populations
• Treatment Setting:
– Detox
– Inpatient (long-term/short-term)
– Outpatient (long-term/short-term)
– Opioid
• Demographics:
– Age (adolescent)/race/gender/homeless
• Co-occurring disorders:
– Substance abuse/mental health
– Variation in MH (personality disorders versus
mood/anxiety/other)
– HIV status and risk behaviors
Data Quality
• KEY to using data for performance management
• Administrative data:
– Variation in record-keeping systems
– Billing codes/diagnostic codes may lack specificity
• Client data:
– Misinterpret question
– Refuse to answer
Types of Missing Data
• Item-level: Respondents were present at the survey wave but did not
answer one or more questions; they have some nonmissing items.
• Wave-level: Respondents were absent at a survey wave, so all items
are missing.
– Attrition: respondents dropped completely out of the study.
– Missing: respondents missed a wave but have data on subsequent
waves.
Frequency of Item-Missing
Data: Random
Frequency of Item-Missing
Data: Nonrandom
Diagnosis of Reasons for
Missingness (Q13)
• Check for correct eligibility/skip patterns
• Review item wording
– Q13: “Please rate your general satisfaction with your health care
provider.”
– Use logistic regression to predict factors associated with item
missingness (outcome: 0 = not missing, 1 = missing; predictors:
race, gender, age, education); see the sketch below.
– Example finding: Hispanics were more likely to be missing on this
item, perhaps due to lack of health insurance coverage
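
A minimal sketch of this diagnostic in Python, assuming a pandas DataFrame
with a Q13 satisfaction item and demographic covariates (all file and
column names here are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("client_survey.csv")  # hypothetical client-level file

# Flag item-missingness on Q13: 1 = missing, 0 = answered.
df["q13_missing"] = df["q13_satisfaction"].isna().astype(int)

# Predict missingness from observed respondent characteristics.
model = smf.logit("q13_missing ~ C(race) + C(gender) + age + education",
                  data=df).fit()
print(model.summary())  # significant predictors point to MAR rather than MCAR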
Missing Completely At
Random (MCAR)
• Likelihood of missing data (item or assessment) is due to
“chance” factors. Missingness unrelated to any specific
survey item (observable)
– Sick on day of test administration (missed
assessment)
– Miss item on survey because not “paying attention”
– No differences on any unobserved factor that is related to the study
outcome (e.g., smokers and nonsmokers are equally likely to be sick
on the day of the assessment)
– Ignorable, in that results will not be biased
Missing At Random (MAR)
• Likelihood of missingness (item or assessment) is due to a
respondent characteristic(s) collected in study.
– Users are more likely to be absent on day of data collection, and
information on usage status is collected.
– People who use are less likely to report income, and users have
lower incomes, in general. Usage is collected in study.
– Analyses will be biased unless appropriate procedures are used,
perhaps including information on use.
– Nonignorable, unless appropriate procedures are in place.
Not Missing at Random (NMAR)
• Likelihood of missing data (item or assessment) is due to factors
not observed in the study.
– Users are more likely to be missing on the income variable, and
users have lower incomes, but usage status is not collected in the
study.
– An unverifiable assumption: MCAR and MAR can be distinguished by
testing whether missingness is related to an observed covariate,
but MAR and NMAR cannot be distinguished, because the reason the
data are missing is purely a hypothesis.
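
As a rough illustration of the three mechanisms, the following sketch
simulates item-missing income data; every number and variable here is
invented for the example:

import numpy as np

rng = np.random.default_rng(1)
n = 1000
user = rng.binomial(1, 0.3, n)                       # 1 = substance user
income = 30000 - 8000 * user + rng.normal(0, 5000, n)

mcar = rng.random(n) < 0.2                           # pure chance
mar = rng.random(n) < np.where(user == 1, 0.4, 0.1)  # driven by observed usage
nmar = rng.random(n) < np.where(income < 25000, 0.4, 0.1)  # driven by income itself

for name, miss in [("MCAR", mcar), ("MAR", mar), ("NMAR", nmar)]:
    print(name, round(income[~miss].mean()))
# MCAR leaves the observed mean roughly unbiased; MAR and NMAR inflate it,
# and only MAR can be corrected using the observed usage variable.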
Missing Data: Solutions
• May bias conclusions if the retained sample is different from
drop-outs/missing cases
• Stratify analyses by completers/non-completers and factors related
to completion status
• Mean substitution
• More complex statistical procedures (imputation); see the sketch below
• Always examine item and follow-up data to characterize respondents.
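
A brief sketch of the last two approaches in scikit-learn; the tiny array
stands in for real client data:

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[25, 40000.0], [30, np.nan], [45, 52000.0], [50, np.nan]])

# Mean substitution: simple, but shrinks variance and can bias estimates.
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# Model-based imputation: predicts each missing value from the other columns.
X_imputed = IterativeImputer(random_state=0).fit_transform(X)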
How to Use Performance
Data?
• Program Marketing: Inform stakeholders on
key successes
• Program Sustainability: Identify service/client
gaps to secure new funding sources
• Process Improvement: Identify areas in need
of attention (e.g., reduce wait times to increase
retention)
• Client Treatment: Use program data to identify
and manage clients
Some Key Things About Data
Statistics don’t lie, but Statisticians do!
• Population data: Collected on all clients, so no statistical tests
are needed.
• Sample data: A targeted subset of the client population; statistical
tests may be used (see the sketch below).
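
For sample data, a two-proportion test is often all that is needed; a
sketch with invented counts, using statsmodels:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical: clients abstinent at follow-up, by gender.
count = [54, 52]   # number abstinent among men, women
nobs = [120, 95]   # number of men, women followed up

stat, pvalue = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")  # small p suggests a real difference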
Data Definitions
• In any data set, it is key to understand how the data are coded
• Skip patterns
• Composite Measures (e.g., National
Outcome Measure of Social
Connectedness)
Skip Patterns: Illustrated
Two-level Question – Injection Drug Use and Shared Syringe
Example: Understanding Employment and Education
Overall outcome definition:
Enrolled full-time or part-time OR employed full-time or part-time.
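
A pandas sketch of both ideas, with hypothetical column names: the
shared-syringe item is asked only of clients reporting injection drug use,
so its blanks must be coded as legitimate skips rather than true missing,
and the overall outcome combines four yes/no items.

import numpy as np
import pandas as pd

df = pd.read_csv("follow_up.csv")  # hypothetical file and columns

# Two-level skip pattern: shared_syringe is asked only when idu == 1.
df["shared_syringe_coded"] = np.where(
    df["idu"] == 0,
    "legit_skip",
    df["shared_syringe"].map({1: "yes", 0: "no"}).fillna("missing"),
)

# Composite outcome: enrolled full/part time OR employed full/part time.
df["emp_edu_positive"] = (
    df[["enrolled_ft", "enrolled_pt", "employed_ft", "employed_pt"]]
    .eq(1).any(axis=1)
)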
Program Marketing
• Present what services
you provide
• Describe who you serve
in your community
• Show the ways your
program is effective
Program Sustainability
• Data may reveal “hidden” populations where
additional services may be expanded
• Use examples in grant applications and
progress reports
Tips for Presenting Data
• Use informative, action-oriented headers
– Not just “Prevalence Rates” but “Improvement in Abstinence Rates
from Intake to 6-Month Follow-Up by Race”
• Present the date the data were extracted (footnote)
• Use graphics as much as possible and diversify the type of graphics
(e.g., bar charts, pie charts)
• Use bullets to communicate key points about charts in call-out boxes
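
A minimal matplotlib sketch of such a chart; the rates shown are invented
placeholders, not real program data:

import matplotlib.pyplot as plt

groups = ["White", "Black", "Hispanic", "Other"]
intake = [12, 10, 9, 11]       # % abstinent at intake (placeholder values)
followup = [38, 35, 30, 36]    # % abstinent at 6-month follow-up (placeholder)

x = range(len(groups))
plt.bar([i - 0.2 for i in x], intake, width=0.4, label="Intake")
plt.bar([i + 0.2 for i in x], followup, width=0.4, label="6-Month Follow-Up")
plt.xticks(list(x), groups)
plt.ylabel("% Abstinent")
plt.title("Improvement in Abstinence Rates from Intake to "
          "6-Month Follow-Up by Race")
plt.legend()
plt.show()  # footnote the data-extraction date on the real chart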
Program Improvement:
Continuous Quality
Improvement (CQI)
• Reduction in variability in products and processes
• A method of evaluating and improving the processes of patient care
that emphasizes a multidisciplinary approach to problem solving and
focuses on the systems of patient care that may be sources of variation
• An approach to studying and improving the processes of providing
health care services to meet the needs of clients
• A continuous process that identifies problems in health care delivery,
tests solutions to those problems, and constantly monitors the
solutions for improvement.
Strategies for CQI
• Provider reminder systems
• Audit and feedback
• Provider education
• Patient education
• Promotion of self-management
• Financial incentives
• Organizational change
Quality Improvement Cycle
PDSA: PLAN
• Identify one specific area
for improvement at a time
• Decide which strategy to
use for that specific area
of improvement
• EX: Increase follow-up
rates through website
(e.g., MySpace)
PDSA: Do
• Implement the planned
change in your treatment
setting (often on a pilot
basis)
• EX: Create basic website
and enroll small number of
clients
PDSA: Study
• Assess the effects,
both positive and
negative
• EX: Examine follow-up
rates and client
outcomes (abstinence)
A note on Study
• Observational: implement program and
track outcomes
• Experimental: In pilot programs, try to
randomize clients to study arms
– Placebo-Control
– Comparative Effectiveness
• Try to make conditions as equal as
possible
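
A small sketch of randomizing pilot clients to two study arms (client IDs
are hypothetical):

import random

clients = ["C001", "C002", "C003", "C004", "C005", "C006"]  # hypothetical IDs
random.seed(42)          # fixed seed keeps the assignment reproducible
random.shuffle(clients)

# Alternating assignment after the shuffle keeps the arms the same size.
arms = {cid: ("intervention" if i % 2 == 0 else "control")
        for i, cid in enumerate(clients)}
print(arms)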
PDSA: Act
• Expand implementation
if the effort is successful
• Consider other
strategies if the effort is
unsuccessful
• If the problem has been
successfully addressed,
address another area
CQI is a Continuous Process
Key is Pilot Phase and Narrow Focus
• Minimize risks, resources, and time
• Reduce disruptions to clients and staff while making changes
• Promote acceptance and reduce resistance
• Learn from ideas that succeed and those that do not
• Select “Change Leaders” to champion the cause
• Involve staff at all levels
Client Treatment
• Use data to identify at-risk clients
• Supplement “clinical experience” with
concrete data
• Promote consistency and coordination
among staff
• Understand Adoption and Adaptation of
evidence-based practices
Key Points for Interpreting Follow-up
Data for Outcomes
• Identify who ‘drops out’ of treatment and who remains (length of stay)
• Clients enter treatment with different initial status on the outcome
measure
• Looking at the total group of clients at intake and again at 6-month
follow-up can mask differences
• The overall outcome measure does not show combinations of responses
to each part
• Not all clients complete follow-up information:
– Those who don’t may be different from those who do
Understanding Drop-Out
• Drop-out: Presenting data only on clients who were successfully
followed up may cloud interpretation of the data
– Hard-to-treat cases often drop out
– A near-100% follow-up rate gives more confidence in the follow-up
data
– Identifying cases lost to follow-up can help targeting/recruitment
efforts
– Need to understand how drop-out is “coded” (administrative
discharge)
Understanding Retention
• Length of Stay: number of days on service
• More treatment contact is better, but at a price
• Stratify analyses by type of service received when presenting
length-of-stay data (see the sketch below)
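
A pandas sketch of that stratification, assuming one row per treatment
episode with hypothetical column names:

import pandas as pd

df = pd.read_csv("episodes.csv")  # hypothetical episode-level file
df["length_of_stay"] = (
    pd.to_datetime(df["discharge_date"]) - pd.to_datetime(df["admit_date"])
).dt.days

# Median is usually safer than the mean for skewed length-of-stay data.
print(df.groupby("service_type")["length_of_stay"]
        .agg(["count", "median", "mean"]))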
Are my 6-month follow-up rates different for any particular group of
clients, such as by gender, race, or age?
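
This question reduces to a simple grouped computation; a pandas sketch,
assuming a 0/1 indicator of a completed 6-month follow-up (column names
hypothetical):

import pandas as pd

df = pd.read_csv("clients.csv")  # hypothetical client-level file

# followed_up: 1 = completed the 6-month follow-up, 0 = lost to follow-up
for group in ["gender", "race", "age_group"]:
    rates = df.groupby(group)["followed_up"].mean().mul(100).round(1)
    print(f"\n6-month follow-up rate by {group} (%):\n{rates}")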
Assessing Program Impact
• Always pair with retention data
• Consider intake and discharge together
• Consider positive and negative rates of change
• Consult program staff and external data sources
(e.g., NSDUH, TEDS) to understand the larger
social context (e.g., increase in unemployment
may drive intakes)
Key Points in Presenting Follow-up
Data on Outcomes
Positive outcomes to negative
outcomes
Negative to Negative
What is my clients’ change in employment/education from intake to
6-month follow-up during 2008?
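
A crosstab of the composite outcome at intake against the same outcome at
follow-up makes the transitions visible, including the positive-to-negative
and negative-to-negative cells noted above (column names hypothetical):

import pandas as pd

df = pd.read_csv("outcomes_2008.csv")  # hypothetical

# emp_edu_intake / emp_edu_6mo: True if enrolled or employed (full or part time)
table = pd.crosstab(df["emp_edu_intake"], df["emp_edu_6mo"], margins=True)
print(table)
# Off-diagonal cells show clients whose status changed in either direction;
# a single overall rate at each wave would mask these shifts.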
How are my clients doing in terms of
employment/education compared with other RCSP
grantees overall?
How does employment/education differ by gender and age?
How does employment/education differ by race and ethnicity?
Types of Employment Outcomes
How does employment vary by living situation?
Ecological Model
External Data Sources
• Help contextualize performance
– Understand trends in factors that influence
access/capacity/treatment (e.g., area
unemployment, drug epidemics)
– National Survey on Drug Use and Health,
Treatment Episode Data Set, Others
• Benchmarking facilitates comparisons
– Similar organizations based on population,
resources, and community
Summary
• Examine combinations of variables to gain insight into program
performance (e.g., housing by employment)
• Examine subgroups to identify “hidden” program effects
• Use data to motivate staff and share successes and areas for
improvement
Challenges
• Tendency to use clinical intuition
– Data are powerful tools of persuasion
• “I’m not a statistician”
– Simple analyses are critical
– Can use available programs (Microsoft Excel)
– Available resources via the web
• Who has the time?
– Assign data coordination as part of job scope
– External evaluators are helpful
Benefits
• Foster staff interest
• Tangible feedback
• Provide insight into clinical and
organizational areas (improvement and
strengths)
• Compelling way to generate funding
streams through grant
applications/newsletters (e.g., identify
population groups in need of services)