Understanding Hospital Mortality Indicators
Paul Aylin
Clinical Reader in Epidemiology and Public Health
Dr Foster Unit at Imperial College London
Dec 2013
[email protected]
Contents
Why monitor mortality?
HSMR: effect of changing its construction
What does the HSMR mean?
The SHMI
HSMR and SHMI compared
Individual diagnosis/procedure based SMRs
Interpretation of hospital mortality figures
What to measure
• Process vs. Outcome
• The debate is still simmering!
The case FOR measuring outcomes
• It really matters to the patient
• A common endpoint in RCTs etc. for assessing treatments
• A ‘hard’, i.e. objective, measure
• Often readily available in routinely collected data
And the case AGAINST
• Casemix adjustment is needed for fair comparison and is difficult
• Can be affected by artefact, e.g. deaths post discharge, inter-hospital transfers
• All-cause mortality is not the same thing as preventable mortality
• Attribution: which hospital?
Hospital Standardised Mortality Ratio
Originally developed by Brian Jarman
• Jarman et al. “Explaining Differences in English Hospital Death Rates Using Routinely Collected Data,” BMJ 1999;318:1515-1520
Covers the diagnoses leading to 80% of all in-hospital deaths (56 diagnosis groups)
We use a set of 56 casemix adjustment models
Florence Nightingale
Uniform hospital statistics would:
“Enable us to ascertain the relative mortality of different hospitals as well as of different diseases and injuries at the same and at different ages, the relative frequency of different diseases and injuries among the classes which enter hospitals in different countries, and in different districts of the same country”
Nightingale 1863
Purpose and rationale of (H)SMRs
• Many studies show a link between quality of care and mortality
• Case note review suggests around 5% of in-hospital deaths are preventable, but this will vary by hospital
• We can’t spot these deaths in routine data, so we monitor all deaths
Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012 Sep;21(9):737-45. doi: 10.1136/bmjqs-2011-001159.
Process vs. Outcome
• Review of the relation between quality of care and death rates (36 studies):
• 26/51 processes: good care -> low death rates
• 16/51 processes: no relation found
• 9/51 processes: good care -> high death rates…
Pitches D, Mohammed MA, Lilford R. What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality of care? A systematic review of the literature. BMC Health Serv Res 2007;7:91
HSMR construction
• Build one regression model per diagnosis group
• Attach a predicted risk of death to each admission based on their age, sex etc.
• Sum these risks within each diagnosis group to get the expected number of deaths E
• Compare this with the observed deaths O: SMR for a diagnosis (CCS) group = O/E
• HSMR = (sum of the 56 groups’ O) / (sum of the 56 groups’ E) x 100
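The O/E arithmetic above can be sketched in a few lines. This is a minimal illustration with invented admissions and risks; in reality the predicted risks come from the 56 fitted casemix models.

```python
# Minimal sketch of the SMR/HSMR arithmetic (toy data, not real risk models).

def hsmr(admissions):
    """admissions: iterable of (dx_group, predicted_risk, died) tuples.
    Returns per-group SMRs and the overall HSMR, scaled so 100 = as expected."""
    observed, expected = {}, {}
    for dx, risk, died in admissions:
        observed[dx] = observed.get(dx, 0) + died       # O: actual deaths
        expected[dx] = expected.get(dx, 0.0) + risk     # E: summed predicted risks
    smrs = {dx: 100 * observed[dx] / expected[dx] for dx in observed}
    overall = 100 * sum(observed.values()) / sum(expected.values())
    return smrs, overall

# Toy example with two diagnosis groups (risks are invented).
adms = [("pneumonia", 0.10, 1), ("pneumonia", 0.10, 0), ("pneumonia", 0.10, 0),
        ("stroke", 0.20, 1), ("stroke", 0.20, 1)]
smrs, overall = hsmr(adms)
```

Note that the overall HSMR is the ratio of total observed to total expected deaths, not an average of the group SMRs, so high-volume diagnosis groups carry more weight.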
Interpretation
• 100 is the national average
• HSMRs > 100 mean more deaths than predicted
• O minus E is NOT the number of preventable deaths. Reasons (N, O, P, Q, R):
• Noise in the data
• Organisation factors – transfers, step-down care
• Patient factors – unaccounted-for casemix
• Quality of care – only after excluding the others
• Random variation
Current casemix adjustment model for each diagnosis group
HSMRs are adjusted for:
• Age (<1, 1-4, 5-9, …, 85-89, 90+)
• Sex
• Elective status
• Areal socio-economic deprivation (Carstairs)
• Diagnosis group and subgroups
• Co-morbidity – Charlson index (switching to Elixhauser + dementia)
• Number of emergency admissions in previous 12 months
• Palliative care (secondary dx code or specialty)
• Year of discharge
• Month of admission
• Source of admission, e.g. patient’s own home, other hospital, care home
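As a rough illustration of how such adjusters feed a regression model, a few of them can be one-hot encoded into covariates. The field names and abbreviated age bands here are hypothetical; the actual models are logistic regressions fitted separately for each of the 56 diagnosis groups.

```python
# Illustrative encoding of a few casemix adjusters (hypothetical field names;
# age bands abbreviated - the real model uses <1, 1-4, 5-9, ..., 90+).

AGE_BANDS = ["<1", "1-4", "85-89", "90+"]

def encode(adm):
    """One-hot encode selected adjusters into a covariate vector."""
    feats = [1.0 if adm["age_band"] == band else 0.0 for band in AGE_BANDS]
    feats.append(1.0 if adm["sex"] == "F" else 0.0)        # sex
    feats.append(1.0 if adm["elective"] else 0.0)          # elective status
    feats.append(1.0 if adm["palliative_care"] else 0.0)   # palliative care coding
    feats.append(float(adm["prev_emergency_adms"]))        # prior emergency admissions
    return feats

x = encode({"age_band": "90+", "sex": "F", "elective": False,
            "palliative_care": False, "prev_emergency_adms": 2})
```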
HSMRs with and without adjusting for Charlson
[Scatter plot: HSMR calculated with vs. without Charlson adjustment, English acute trusts 2007/8. The two versions agree closely (linear fit y = 0.9872x + 1.4207, R² = 0.937); reference lines show identical match and ±10% and ±25% deviations from the standard HSMR.]
Palliative care – with and without adjustment
[Scatter plot: regular HSMR vs. HSMR without adjustment for palliative care, English acute trusts 2008/9. Linear fit y = 0.9979x, R² = 0.8426; reference lines show identical match and ±10% and ±25% deviations from the standard HSMR.]
56 diagnosis groups HSMR vs. all 259 diagnosis groups
[Scatter plot: HSMR based on the 56 diagnosis groups vs. all 259 diagnosis groups, English acute trusts 2008/9. Linear fit y = 0.9743x + 2.2451, R² = 0.9666; reference lines show identical match and ±10% and ±25% deviations from the standard HSMR.]
Excluding zero-day emergency survivors
[Scatter plot: regular HSMR vs. HSMR excluding unplanned zero-day live discharges, English acute trusts 2008/9. Linear fit y = 0.9998x, R² = 0.9805; reference lines show identical match and ±10% and ±25% deviations from the standard HSMR.]
An HSMR by month (random hospital)
…and the same by quarter: less fluctuation
The SHMI: Summary Hospital-level Mortality Indicator
• Owned by the NHS Information Centre
• Developed from the DH Mortality Technical Working Group
• Informed by limited further modelling commissioned by the NHSIC
• Ongoing review
• Intended to be supported by the private sector, who help Trusts understand their data
SHMI v HSMR

Admissions and deaths included
• SHMI: all in-hospital deaths for inpatients and day cases, plus deaths within 30 days of discharge; all diagnosis supergroups except births and stillbirths
• HSMR: all in-hospital deaths for inpatients and day cases; the 56 diagnosis (CCS) groups accounting for 80% of deaths

Risk adjustment
• SHMI: age group, comorbidities, admission method, gender, discharge year (1-3); models based on just three years of data and rerun each quarter; diagnosis groups much broader
• HSMR: age group, comorbidities, admission method, gender, palliative care coding, diagnosis/procedure subgroup, deprivation, number of previous emergency admissions, discharge year, month of admission, source of admission; models based on 11 years; includes interaction terms between age and co-morbidity

Assigning deaths
• SHMI: the last acute provider in the superspell
• HSMR: every provider in the superspell prior to death
Casemix
• HSMR casemix adjustment seems to account for more variation
• 30% more trusts lie above the 99.8% control limit with the SHMI
Do we need both?
• Neither is “right”
• Neither is “wrong”
• As they look at the situation from different perspectives, they are both useful if you understand the perspective: “multiple lenses” (Berwick)
• Other quality measures are also needed
Reasons for high or low values
• Noise in the data
• Organisation factors – transfers, step-down care, on-site hospice
• Patient factors – unaccounted-for casemix
• Quality of care – only after excluding the others
• Random variation
What can HSMRs and SHMIs tell us?
• A high (or rising) value suggests a potential quality of care issue (Mid Staffs)
• Some of this could relate to other parts of the care pathway, e.g. primary care
• …or to a failure to submit all the data (e.g. no secondary dx coding)
• A falling value suggests potential care improvement or various artefacts
What can HSMRs and SHMIs not tell us?
• Whether there is definitely a problem in the hospital
• Whether the hospital is good/safe in all clinical areas
• The number of preventable deaths
• What to do next other than “investigate”
Some suggestions for investigating your SHMI
• Split it by hospital site
• Break it down into diagnosis-specific SMRs
• Check electronic then paper data
• Look at place of death
• Pathways – transfers, LCP
• Weekday v weekend, elective v emergency
• Audit etc. as usual
Keogh Review
• The 14 hospital trusts covered by the review were selected using national mortality measures as a “warning sign” or “smoke alarm” for potential quality problems
• 11 of the 14 trusts were placed into special measures by Monitor and the NHS Trust Development Authority.
In summary
• HSMR
• Summary figure, developed by Imperial and produced by DFI
• Observed/Expected for the 56 diagnosis groups accounting for 80% of all in-hospital deaths
• Screening tool, so various possible reasons for a high/low figure
• Not a measure of avoidable deaths
• Detailed breakdowns and analyses available
• SHMI
• Summary figure produced by the Information Centre
• Also a screening tool and not a measure of avoidable deaths
• More limited casemix adjustment: dx, age, sex, emerg/elec, Charlson comorbidity
• Some minor differences cf. HSMR (100% vs. 80% of admissions)
• Some larger differences cf. HSMR (out-of-hospital deaths, attribution of death to the final acute centre)
Detecting outliers
“Even if all surgeons are equally good, about half will have below average results, one will have the worst results, and the worst results will be a long way below average”
• Poloniecki J. BMJ 1998;316:1734-1736
Adjusted (EuroSCORE) mortality rates for primary isolated CABGs by centre (3 years’ data up to March 2005) using SCTS data, with 95% and 99.8% control limits based on mean national mortality rates
[Funnel plot: adjusted mortality rate (0-6%) against number of operations (0-4000) by centre, with 95% and 99.8% control limits.]
Funnel plots
No ranking
Visual relationship with volume
Takes account of the increased variability of smaller centres
…but not as useful for continuous surveillance, and less sensitive to sudden increases in mortality
http://www.erpho.org.uk/statistical_tools.aspx
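The control limits on such a plot can be sketched with the normal approximation to the binomial. The benchmark rate and volumes below are invented for illustration; published funnel plots often use exact binomial limits instead.

```python
# Sketch of funnel-plot control limits via the normal approximation to the
# binomial (illustrative benchmark rate; real plots may use exact limits).
import math

def funnel_limits(p0, n, z):
    """Lower/upper limits around benchmark rate p0 for a centre with n cases."""
    half_width = z * math.sqrt(p0 * (1 - p0) / n)
    return max(0.0, p0 - half_width), p0 + half_width

p0 = 0.02  # e.g. a 2% national mortality rate
limits_95 = {n: funnel_limits(p0, n, 1.96) for n in (100, 1000, 4000)}   # ~95%
limits_998 = {n: funnel_limits(p0, n, 3.09) for n in (100, 1000, 4000)}  # ~99.8%
```

The limits narrow as volume grows, which is exactly why the plot forms a funnel: small centres are allowed more random scatter around the benchmark before they signal.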
Risk-adjusted log-likelihood CUSUM charts
• STEP 1: estimate a pre-op/admission risk for each patient, given their age, sex etc. This may be the national average or another benchmark
• STEP 2: order patients chronologically by date of operation
• STEP 3: choose chart threshold(s) of acceptable “sensitivity” and “specificity” (via simulation)
• STEP 4: plot a function of each patient’s actual outcome vs. pre-op risk, and see if – and why – the threshold(s) is crossed
More details
• Based on a log-likelihood CUSUM to detect a predetermined increase in risk of interest
• Taken from Steiner et al. (2000); pre-op risks derived from logistic regression of national data
• The CUSUM statistic is the log-likelihood test statistic for binomial data, based on the predicted risk of the outcome and the actual outcome
• The model uses administrative data and adjusts for age, sex, emergency status, socio-economic deprivation etc.
Bottle A, Aylin P. Intelligent Information: a national system for monitoring clinical performance. Health Services Research (in press).
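The four steps above can be sketched as follows, using the form of the Steiner et al. patient scores tuned to detect a doubling of the odds of death. The predicted risks and the threshold h are invented for illustration; in practice the threshold is calibrated by simulation as described in STEP 3.

```python
# Sketch of a risk-adjusted log-likelihood CUSUM in the style of Steiner
# et al. (2000). Risks and threshold h are illustrative, not calibrated.
import math

def score(p, died, odds_ratio=2.0):
    """Log-likelihood-ratio increment for one patient with predicted risk p."""
    denom = 1 - p + odds_ratio * p   # normalising term under the alternative
    return math.log(odds_ratio / denom) if died else math.log(1 / denom)

def run_cusum(patients, h=4.5, odds_ratio=2.0):
    """patients: chronologically ordered (predicted_risk, died) pairs.
    Returns chart values and the index of the first threshold crossing."""
    chart, x, signal = [], 0.0, None
    for i, (p, died) in enumerate(patients):
        x = max(0.0, x + score(p, died, odds_ratio))  # chart resets at zero
        chart.append(x)
        if signal is None and x > h:
            signal = i
    return chart, signal

# A run of deaths among low-risk patients drives the chart over the threshold;
# survivors pull it back towards zero.
chart, signal = run_cusum([(0.1, 1)] * 20)
```

Deaths contribute positive scores (larger when the predicted risk was low) and survivals contribute small negative scores, so the chart climbs only when mortality runs persistently above what the casemix predicts.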
How do you investigate a signal?
Imperial College mortality alerts
• Look at alerts generated at a 0.1% statistical false alarm rate (the default in Real Time Monitoring is 1%)
• Write to trusts showing a doubling of the odds of death over the previous 12 months
Francis report 2013
• Recognised the role that our work on HSMRs and our surveillance system of mortality alerts had to play in identifying Mid Staffs as an outlier
• Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry 2013. Volume 1, pages 458-466. http://www.midstaffspublicinquiry.com/report
• “All healthcare provider organisations should develop and maintain systems which give effective real-time information on the performance of each of their services, specialist teams and consultants in relation to mortality, patient safety and minimum quality standards.”
• Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry 2013. Executive Summary, Recommendation 262: http://www.midstaffspublicinquiry.com/report
• “Summary hospital-level mortality indicators should be recognised as official statistics.”
• Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry 2013. Executive Summary, Recommendation 271: http://www.midstaffspublicinquiry.com/report
Further reading
• Bottle A, Jarman B, Aylin P. Hospital Standardised Mortality Ratios: sensitivity analyses on the impact of coding. Health Serv Res 2011; 46(6): 1741-1761
• Bottle A, Jarman B, Aylin P. Hospital Standardised Mortality Ratios: Strengths and Weaknesses. BMJ 2011; 342: c7116
• Campbell MJ, Jacques RM, Fotheringham J, Maheswaran R, Nicholl J. Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ 2012; 344: e1001