
Best Practices in Selecting, Using and Assessing
Indicators for Research, Evaluation and Performance
© Fraser Health Authority, 2009
The Fraser Health Authority (“FH”) authorizes the use, reproduction and/or
modification of this publication for purposes other than commercial redistribution. In
consideration for this authorization, the user agrees that any unmodified
reproduction of this publication shall retain all copyright and proprietary notices. If
the user modifies the content of this publication, all FH copyright notices shall be
removed; however, FH shall be acknowledged as the author of the source
publication.
Reproduction or storage of this publication in any form by any means for the purpose
of commercial redistribution is strictly prohibited.
This publication is intended to provide general information only, and should not be
relied on as providing specific healthcare, legal or other professional advice. The
Fraser Health Authority, and every person involved in the creation of this publication,
disclaims any warranty, express or implied, as to its accuracy, completeness or
currency, and disclaims all liability in respect of any actions, including the results of
any actions, taken or not taken in reliance on the information contained herein.
Outline
 Objectives
 Background
 Anatomy
 Good Indicators
 Variation
Objectives
 Understand and appreciate the purpose of indicators
 Identify criteria for designing, appraising or choosing indicators
 Apply indicators for research, evaluation or quality improvement
Background
 What is an indicator?
 Succinct measures that describe as much as possible about a system
 Why use indicators?
 Understanding – how a system works and how it might be improved (research)
 Performance monitoring – to assess whether the system is performing at an expected level, and to compare and improve a system (improvement)
 Accountability – to inform assessment of effectiveness, efficiency and responsibility (evaluation)
Pop Quiz
Considerations
 Indicators indicate:
 Encourage explicitness
 Help to clarify our understanding of a system
 Facilitate communication of expectations
 Numerically based:
 Rates, ratios etc.
 Cannot capture and reflect the complexity of a system, program or service
 Must be considered within context
 Not specifically designed to detect fault:
 Can be used for interpreting both strengths and weaknesses
Improvement
 Indicators are primarily used to measure systems and outcomes in health care.
 Goal – improvement
 Understanding how things work leads to understanding of how things can be done better
 Measurement alone does not lead to improvement.
Anatomy
 Basic construction
 How to deconstruct and assess
 Metadata (indicator)
 Title, rationale, and how the indicator is defined/constructed
 Data
 Information that is entered into the indicator
Metadata
 Metadata will help assess if an indicator is:
 Important and relevant
 Able to be populated with reliable data
 Likely to have the desired effect when communicated well
Metadata
 What is being measured?
 Why is it being measured?
 How is this indicator defined?
 Who/what does it measure?
 When does it measure?
 Absolute or proportion?
 What is the source of data?
 What is the accuracy/completeness of the data?
 Any considerations, warnings, caveats?
 Are specific tests needed to test the meaning of the data?
Statistics Canada – Health Indicators
Statistics Canada – Catalogue no. 82-221-X
Definition:
 Wait time for hip fracture surgery (same/next day)
 Proportion with surgery same or next day: risk-adjusted proportion of hip fracture patients aged 65 and older who underwent hip fracture surgery on the day of admission or the next day.
 Wait time for surgery following hip fracture provides a measure of access to care. While some hip fracture patients need medical treatment to stabilize their condition before surgery, research suggests patients typically benefit from timely surgery in terms of reduced morbidity, mortality, pain, length of stay in hospital, as well as improved rehabilitation.
 Rates for Quebec are not available due to differences in data collection.
Source(s):
 Canadian Institute for Health Information, Discharge Abstract Database.
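To make the arithmetic behind such an indicator concrete, here is a minimal Python sketch that computes the crude (unadjusted) proportion from hypothetical discharge records. The record layout and values are invented for illustration; the published Statistics Canada figure additionally applies risk adjustment, which is not reproduced here.

    from datetime import date

    # Hypothetical records: (age, admission date, surgery date).
    # Layout and values are invented for illustration only.
    records = [
        (72, date(2009, 3, 1), date(2009, 3, 1)),   # surgery same day
        (81, date(2009, 3, 2), date(2009, 3, 3)),   # surgery next day
        (68, date(2009, 3, 4), date(2009, 3, 7)),   # surgery delayed
        (59, date(2009, 3, 5), date(2009, 3, 5)),   # excluded: under 65
    ]

    # Denominator: hip fracture patients aged 65 and older.
    eligible = [r for r in records if r[0] >= 65]
    # Numerator: surgery on the day of admission or the next day.
    timely = [r for r in eligible if (r[2] - r[1]).days <= 1]

    print(f"Crude proportion: {len(timely) / len(eligible):.1%}")  # 66.7%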
Group Activity
 Deconstruct the Wait Time for Hip Fracture Surgery indicator using the Metadata Assessment Questions in the handout.
 Identify information that you would need to fully understand the composition of this indicator.
 Report back.
Data
 Is the indicator populated with the best available data?
 Reliable
 Valid
 Bias
 Completeness
 Error
 Convenience
Good Indicators – Acronyms
SMART
 S – Specific
 M – Measurable
 A – Achievable
 R – Relevant
 T – Time-bound
DUMB
 D – Doable
 U – Useable
 M – Measurable
 B – Believable
Criteria for Good Indicators
1. Importance and relevance
2. Validity
3. Feasibility
4. Meaning
5. Implications
Adapted from: The Good Indicators Guide: Understanding how to use and choose indicators. NHS Institute for Innovation and Improvement, 2008.
http://www.apho.org.uk/resource/item.aspx?RID=44584
1. Importance and Relevance
 Does the indicator measure what is relevant in the system?
 Indicators must relate to objectives of the system
 Key parts of process and/or outcome should have associated indicators
 Is there balance?
 All important system components are covered
 No over- or under-represented components
 Will the indicators promote consensus?
 Well-formed indicators can be helpful in developing consensus on the objectives of the system
2. Validity
 Does the indicator actually measure what it is intended to measure?
 Accuracy and degree to which the indicator actually measures the construct
 May require validation if existing valid indicators are not available or appropriate
• Translation validity
 Face validity
 Content validity
• Criterion-related validity
 Predictive validity
 Concurrent validity
 Convergent validity
 Discriminant validity
http://www.socialresearchmethods.net/kb/measval.php
3. Feasibility
 Are reliable data available to populate the indicator?
 Are data available at appropriate times?
 Are comparator data similarly available?
 Can the cost/resource allocation be justified?
4. Meaning
 Will the indicator be sensitive enough to detect and demonstrate meaningful changes in the system? (It should not flag random variation.)
 Can high and low indicator values provide appropriate signals to take action?
 Can the indicator be deconstructed to provide insight about specific results or patterns?
 Will others accept the indicator as having meaning?
5. Implications
 What will you do if an indicator suggests further action?
 Will indicator results induce perverse incentives and unintended uses?
 Is there sufficient lag time in measurement so as to allow interventions to have an effect?
McNamara Fallacy
 The McNamara Fallacy is named after Robert McNamara, the US Secretary of Defense in the 1960s, who was obsessed with quantifying the Vietnam War in a way that tended to ignore what was truly going on.
 McNamara argued that the ratio of Viet Cong losses to US losses was an important measure of effectiveness.
McNamara Fallacy
The first step is to measure whatever can be easily measured. This is OK as far as it goes.
The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading.
The third step is to presume that what can’t be measured easily really isn’t important. This is blindness.
The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
Charles Handy, ‘The Empty Raincoat’, page 219.
Qualitative Indicators
 Qualitative indicators such as perceptions are important.
 Qualitative indicators may enable or hamper improvement/change.
 It is important to balance quantitative with qualitative indicators to provide context.
SPICED Indicators
Subjective: Informants have a special position or experience that gives them unique insights which may yield a very high return on the investigator’s time.
Participatory: Indicators should be developed together with those best placed to assess them.
Interpreted and Communicable: Locally defined indicators may not mean much to other stakeholders, so they often need to be explained.
Cross-Checked and Compared: The validity of assessment needs to be cross-checked by comparing different indicators and progress, and by using different informants, methods, and researchers.
Empowering: The process of setting and assessing indicators should be empowering in itself and allow groups and individuals to reflect critically on their changing situation.
Diverse and Disaggregated: There should be a deliberate effort to seek out different indicators from a range of groups. This information needs to be recorded in such a way that differences can be assessed over time.
Roche (2002)
Program Theory
A statement of goals accompanied by the underlying assumptions that guide a system/program/service delivery strategy and are believed to be critical to producing the desired outcomes.
Program Theory Considerations
 Who are you serving?
 What are you striving to accomplish for the population that you are serving?
 What strategies do you believe will help you successfully accomplish your goals?
Logic Model
 “A program logic model links outcomes with program activities … and the theoretical principles of the program” (Kellogg, 2001)
 Thus, logic models set up both formative and summative evaluation questions
 Evaluative answers are “useful” when they reduce the risks of making the wrong decision
Types of Evaluation
 Formative*
 “Improve”
 Periodic and timely
 Focus on program activities and outputs
 Leads to early recommendations for program improvement
 Summative
 “Prove”
 Were resources committed worthwhile?
 Focus on outcomes and impact
 Measures value of program based on impact
* Kellogg logic model development guide
Example Logic Model
Program Goal: To improve the oral health of low-income children who receive primary care in a community health center

Resources:
 Dental Clinic Coordinator
 Community Health Director
 Staff dentist
 Staff pediatrician
 Medical providers
 Money for supplies

Activities:
 Training
 •Develop curriculum
 •Two one-hour didactic trainings to medical providers in oral health assessment
 •One-on-one training to medical providers on oral health
 Outreach
 •Order dental supplies for packets
 •Make up packets
 •Distribute to parents at end of each visit

Outputs:
 Training
 •# of two-hour trainings held
 •# of one-on-one trainings held
 •# of medical providers trained
 Outreach
 •# of parents/children receiving packets

Outcomes:
 Medical providers demonstrate accurate oral health assessment, education and prevention activities
 Parents/children are more knowledgeable about oral health and caring for children’s teeth
 More children receive high-quality oral health assessment, education and prevention activities during well-child visits
 Reduced incidence of caries in children at the community health center
Example – Logic Model Linked with Indicators Framework (figure)
Indicators Linked with Program Framework (figure)
Group Activity
 Using the DoctorDad logic model:
 Develop one indicator for an anticipated output
 Develop one indicator for an anticipated outcome
 Create a definition and identify sources of data
 Consider your indicator with respect to the Indicator Quality and Planning checklist.
 Report back.
Variation
Indicators indicate – variation informs action
Objectives:
 Understand variation.
 Know when to investigate.
 Know how to respond to variation.
Statistical Process Control
 Common Cause Variation
 Everyday and inevitable variation which tends to be small, with observed values close to the average.
 Special Cause Variation
 Variation outside the historical experience base.
 Signal of some important change in the system.
Statistical Process Control
 Statistical Process Control (SPC) is an effective method of monitoring a process through the use of control charts.
 Control charts enable the use of objective criteria for distinguishing background variation from events of significance based on statistical techniques.
Statistical Process Control
 Allows you to determine:
 That the system is working with an acceptable level of performance and there are no outliers
• No action needed
 That the system is working with an acceptable level of performance and there are outliers
• Address outliers
 That the system’s average level of performance is not acceptable
• Address entire system
SPC Run Chart
 Time-ordered presentation of observations
 A centre line (average or median) of all the observations plotted
SPC Control Chart
 Three basic components:
 A centre line, usually the average of all the samples plotted.
 Upper and lower statistical control limits that define the constraints of common cause variation.
 Performance data plotted over time.
SPC Run Chart Example
 Steps to create a run chart:
 Ideally, there should be a minimum of 15 data points.
 Draw a horizontal line (the x-axis), and label it with the unit of time.
 Draw a vertical line (the y-axis), and scale it to cover the current data, plus sufficient room to accommodate future data points. Label it with the outcome.
 Plot the data on the graph in time order and join adjacent points with a solid line.
 Calculate the mean or median of the data (the centre line) and draw this on the graph.
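These steps translate directly into a few lines of code. The following Python sketch uses matplotlib and an invented series of 16 weekly counts; a median centre line is drawn, though the mean could be used instead, as noted above.

    import statistics
    import matplotlib.pyplot as plt

    # Hypothetical weekly counts -- at least 15 points, as recommended above.
    data = [12, 9, 11, 14, 10, 13, 8, 12, 15, 11, 9, 13, 12, 10, 14, 11]
    centre = statistics.median(data)  # centre line (median)

    # Plot in time order, joining adjacent points with a solid line.
    plt.plot(range(1, len(data) + 1), data, marker="o")
    plt.axhline(centre, linestyle="--", label=f"Median = {centre}")
    plt.xlabel("Week")             # x-axis: unit of time
    plt.ylabel("Count of events")  # y-axis: the outcome
    plt.legend()
    plt.show()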
Run Chart
 Run chart of the number of red beads drawn across 25 draws (figure)
Run Chart – What to look for:
 Useful Observations – number of observations that do not fall directly on the centre line.
 Run – sequence of one or more consecutive useful observations on the same side of the centre line.
 Trend – sequence of successive increases or decreases in useful observations.
Run Chart Rules – Identifying Special Cause Variation
Number of Runs
 If there are too few or too many runs in the process.
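A minimal sketch, using the same hypothetical data as above, of how useful observations and runs can be counted in Python. The thresholds for "too few or too many" come from published run-chart tables, which are not reproduced here.

    import statistics

    data = [12, 9, 11, 14, 10, 13, 8, 12, 15, 11, 9, 13, 12, 10, 14, 11]
    centre = statistics.median(data)

    # Useful observations: points that do not fall directly on the centre line.
    useful = [x for x in data if x != centre]

    # A new run starts each time the series crosses the centre line.
    sides = [x > centre for x in useful]
    runs = 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

    print(f"{len(useful)} useful observations, {runs} runs")
    # Compare 'runs' against the tabulated lower/upper limits for this many
    # useful observations; a count outside that range is a special cause signal.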
Run Chart Rules – Identifying Special Cause Variation
Shift
 If the number of successive useful observations falling on the same side of the centre line is greater than 7.
Run Chart Rules – Identifying Special Cause Variation
Trend
 If the number of successive useful observations increasing or decreasing is greater than 7.
Run Chart Rules – Identifying Special Cause Variation
Zig-Zag
 If the number of useful observations alternately increasing and decreasing (zig-zag pattern) is greater than 14.
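The shift, trend and zig-zag rules are all checks on the longest streak of some condition, so they can share one helper. A sketch in Python, using the thresholds stated in these slides (other references use slightly different cut-offs):

    def longest_streak(flags):
        """Length of the longest unbroken run of True values."""
        best = current = 0
        for f in flags:
            current = current + 1 if f else 0
            best = max(best, current)
        return best

    def run_chart_signals(data, centre):
        useful = [x for x in data if x != centre]  # drop points on the centre line
        diffs = [b - a for a, b in zip(useful, useful[1:])]

        # Shift: > 7 successive useful observations on the same side of centre.
        same_side = [(a > centre) == (b > centre) for a, b in zip(useful, useful[1:])]
        shift = longest_streak(same_side) + 1 > 7

        # Trend: > 7 successive useful observations increasing or decreasing.
        trend = (longest_streak([d > 0 for d in diffs]) + 1 > 7
                 or longest_streak([d < 0 for d in diffs]) + 1 > 7)

        # Zig-zag: > 14 useful observations alternately increasing and decreasing.
        alternating = [d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:])]
        zigzag = longest_streak(alternating) + 2 > 14

        return {"shift": shift, "trend": trend, "zig-zag": zigzag}

Calling run_chart_signals(data, statistics.median(data)) on the hypothetical series above returns False for all three rules, as expected for stable data.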
Control Chart Rules – Identifying Special Cause Variation
Control Limits – 3SD above or below centre line
 If there is one or more observations beyond the control limits.
Warning Limits – 2SD above or below centre line
 If there are two successive observations beyond the warning limits.
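A sketch of how these two checks might look in Python. For simplicity the sample standard deviation is used as sigma; real control charts derive sigma from chart-specific formulas (for example, moving ranges on an XmR chart), so treat the arithmetic below as illustrative only.

    import statistics

    data = [12, 9, 11, 14, 10, 13, 8, 12, 15, 11, 9, 13, 12, 10, 14, 11]
    centre = statistics.mean(data)
    sd = statistics.stdev(data)  # illustrative sigma estimate

    lower_c, upper_c = centre - 3 * sd, centre + 3 * sd  # control limits (3SD)
    lower_w, upper_w = centre - 2 * sd, centre + 2 * sd  # warning limits (2SD)

    # Rule 1: one or more observations beyond the control limits.
    beyond_control = [x for x in data if not lower_c <= x <= upper_c]

    # Rule 2: two successive observations beyond the warning limits.
    outside = [not lower_w <= x <= upper_w for x in data]
    two_successive = any(a and b for a, b in zip(outside, outside[1:]))

    print(bool(beyond_control), two_successive)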
Example – uniform system underperforming (figure)
Control Chart Resource
http://www.indicators.scot.nhs.uk/SPC/SPC.html
Addressing Special Cause Variation
When you detect a special cause:
 Control any damage or problems with an immediate, short-term fix.
 Once a quick fix is in place, search for the cause. Ask people in the process what was different that time. What was out of the ordinary? It might not have been much – an unexpected emergency, a change in schedules, or new employees.
 Once you have discovered the special cause, you can develop a longer-term remedy. Most special causes have a negative impact on the output of the process and need to be removed.
 Occasionally, a special cause can have a positive impact depending on the nature of the process. If this is the case, find ways to capture and integrate it into the system.
Avoid these mistakes:
 Viewing short-term fixes as a permanent solution; if you do, the process will never be improved.
 Changing the process to accommodate the special cause. This usually adds cost and bureaucracy.
 Blaming individuals.
 Warning employees to do better. People can only do as well as the system allows.
Addressing Common Cause Variation
Common cause variation is the ideal situation for quality improvement:
 That a process indicator is stable, or in statistical control, does not mean that its results are satisfactory.
 An indicator may be very consistent but still not meet an expected outcome.
 Variation can be systematically reduced, even in stable processes, enabling a gradual tightening of control limits and an overall increase in quality.