Do Hospital Standardized Mortality Ratios Measure Patient Safety? HSMRs in the Winnipeg Regional Health Authority

HealthcarePapers 8(4): 8-24; discussion 69-75. February 2008.
DOI: 10.12927/hcpap.2008.19972. Source: PubMed.
The Canadian Institute for Health Information began publishing hospital standardized mortality ratio (HSMR) data for select Canadian hospitals in November 2007. This paper describes the experience of the Winnipeg Regional Health Authority in assessing the validity of the HSMR through statistical analysis, coding definitions and chart audits. We found a lack of empirical evidence supporting the use of the HSMR in measuring reductions in preventable deaths. We also found that limitations in standardization as well as differences in palliative care coding and place of death make inter-facility comparisons of HSMRs invalid. The results of our chart audit show that the HSMR is not a sensitive measure of adverse events as defined by "unexpected death" in the Canadian Adverse Events Study. It should not be viewed as an important indicator of patient safety or quality of care. We discuss the cumulative sum statistic as an alternative to the HSMR in monitoring in-hospital mortality.
Robert B. Penfold, PhD
Division of Research and Applied Learning, Winnipeg Regional Health Authority
Department of Community Health Sciences, University of Manitoba
Stafford Dean, PhD
Director, Health Systems Analysis Unit
Quality Improvement and Health Information, Calgary Health Region
Ward Flemons, MD
Vice-President, Quality, Safety and Health Information
Calgary Health Region
Clinical Professor of Medicine, University of Calgary
Michael Moffatt, MD
Director, Research and Applied Learning
Winnipeg Regional Health Authority
Department of Community Health Sciences, University of Manitoba
Some in-hospital deaths that are judged
to have been avoidable result from a complex
series of contributing factors; some of these
factors include errors of omission or commission made by healthcare providers, while others are latent conditions existing within an organization – often as a result of the policies that it establishes. One of the goals of the Safer Health Care Now! campaign is to reduce avoidable deaths in Canadian hospitals. The hospital
standardized mortality ratio (HSMR) is a
central tool in this campaign. Developing
tools and processes to monitor and prevent
these deaths is imperative. However, identify-
ing which in-hospital deaths are preventable
and which are expected is a non-trivial task.
Aggregating all the deaths at a facility into
a valid, informative safety measure is even
more difficult. How then should this aspect of
patient safety be monitored and improved?
There is a substantial body of literature
that discusses the relative merits of using
risk-adjusted mortality rates and standardized
mortality ratios to monitor and evaluate the
quality of care in hospitals (Austin et al. 2004;
Baker et al. 2002; Jarman et al. 2005; Wright
et al. 2006). It is well known that differ-
ent statistical approaches to standardization
have a measurable impact on hospital scores
and ranks (Delong et al. 1997; Glance et al.
2006; Goldman and Brender 2000; Julious
et al. 2001; Li et al. 2007). The creator of the
HSMR has also argued that several significant
predictors of in-hospital mortality are outside
the control of hospital policy (Jarman et al.
1999). Others have found that differences in
in-hospital mortality are usually not related
to differences in quality of care (Iezzoni
et al. 1996; Park et al. 1990; Thomas and
Hofer 1999). Moreover, hospital administra-
tors often have difficulty using the mortality
measures because the data are too aggregate
(Mehrotra et al. 2003) – limiting their utility
with respect to quality improvement.
In November 2007, the Canadian
Institute for Health Information (CIHI)
began publishing HSMRs for hospitals with
more than 2,500 HSMR cases in each of
the fiscal years 2004–2005, 2005–2006 and
2006–2007. This initiative follows similar
projects in Britain and the United States,
where publishing HSMRs motivated admin-
istrators to examine in-hospital mortality
more closely and to introduce interventions
to try and reduce HSMRs. However, publish-
ing in-hospital mortality data has also been
followed by increases in 30-day mortality in
some cases – hypothesized to be the result
of differences in discharge rates. This highlights the importance of understanding how
the HSMR is derived and the need to disen-
tangle the discharge rate from the mortality
rate (Austin et al. 2004; Farsi and Ridder
2006). Publishing Canadian HSMRs will
presumably motivate hospital administrators
to examine in-hospital mortality, with simi-
lar pressure to reduce it as was seen in other
countries; however, whether this will actually
lead to a wise investment of time and atten-
tion by healthcare leaders – with the ultimate
goal of reducing ‘preventable’ deaths – remains
an open question.
This paper describes the experience of
the Winnipeg Regional Health Authority
(WRHA) in assessing the validity of the
HSMR through statistical analysis, coding
definitions and chart audits. The first section of
this paper evaluates the rationale for using the
HSMR and the empirical evidence in support
of using the HSMR as a tool to learn from in-
hospital death. The following section presents
HSMRs for comparable facilities in Winnipeg
and Calgary during the development phase
of the Canadian HSMR and the results of
a WRHA chart audit. In the final section,
we discuss caveats for facility and regional
administrators in using the HSMR to make
decisions. We also revisit the cumulative sum
statistic as a method of learning from deaths
that involves patient-level statistical analysis.
Rationale for Monitoring and
Publishing HSMRs in Canada
The HSMR is thought to be an indicator
of patient safety and hospital quality of care
(Institute for Healthcare Improvement [IHI]
2003). It is also argued that death is a “definite
event” – presumably making “safety” easier
to define, measure and monitor. Leeb et al.
(2005) state that the HSMR provides a means
for hospitals to track changes in adverse
events. As such, it is meant to be an indicator
of avoidable deaths, that is, deaths that could
have been prevented if different (higher-
quality, safer) care were given.
Adverse events are not directly incor-
porated into calculating the HSMR. The
number of avoidable deaths is inferred from
“excess” mortality – the number of deaths that
would be prevented if a facility were to have
the same distribution of deaths that occurs
nationally. Logistic regression is used to model
the likelihood of death for all patients in
Canada, limited to the 65 disease codes that
account for 80% of in-hospital mortality. The
national model controls for age, sex, length of
stay, admission category, Charlson index and
patient transfer. Patient risk factors within
diagnostic groups (e.g., Acute Physiology
and Chronic Health Evaluation [APACHE]
scores for patients in intensive care) are not
modelled. The results of the logistic model-
ling are then used to derive the number
of expected deaths at each hospital given
the patient characteristics mentioned. The
HSMR is simply the ratio of observed (actual)
deaths to expected deaths, multiplied by 100.
HSMR = (observed number of deaths / expected number of deaths) × 100
For example, the observed number of
deaths at Health Sciences Centre (HSC) in
Winnipeg in the fiscal year 2006–2007 was
231. The expected number of deaths (derived
from logistic modelling) was 312. Thus, the
HSMR is 231/312 × 100 = 74. CIHI also produces 95% confidence intervals for the HSMR.
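As a sketch, the calculation described above amounts to a ratio of observed deaths to the sum of patient-level predicted probabilities of death; the worked example below uses the Health Sciences Centre figures from the text, not CIHI's actual model output.

```python
def hsmr(observed_deaths, expected_deaths):
    """Hospital standardized mortality ratio: observed deaths divided
    by expected deaths, multiplied by 100. Expected deaths come from
    summing patient-level predicted probabilities of death produced
    by the national logistic regression model."""
    return observed_deaths / expected_deaths * 100

# Worked example from the text: Health Sciences Centre, FY 2006-2007,
# with 231 observed deaths against 312 expected deaths.
print(round(hsmr(231, 312)))  # 74
```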
Administrators in facilities with an
HSMR >100 (the national benchmark) are
encouraged to investigate the reasons why
their HSMR is above average. It is further
argued that monitoring the HSMR over time
allows a facility to measure the effectiveness of
initiatives to improve quality of care or reduce
the occurrence of adverse events.
To date, there are no peer-reviewed
studies validating the HSMR as an indica-
tor of the occurrence of adverse events. Two
published studies in Britain have examined
predictors of changes (decreases) in HSMRs
over time (Jarman et al. 2005; Wright et al.
2006) and concluded that policy interven-
tions reduced in-hospital mortality. Other
studies lead to similar conclusions (Leeb et
al. 2005). No robust rule for predicting an
individual hospital’s HSMR could be found in
an American study (Whittington et al. 2005).
It is also important to note that none of these
studies specifically examined adverse events.
Nevertheless, these studies are provided as
examples of how the HSMR is effective in
monitoring patient safety.
Evidence Supporting the HSMR as a
Quality Monitoring Tool
Closer examination of the data that are
used to justify using the HSMR reveals that
evidence of their utility is weak. Figure 1
shows HSMRs for the Walsall NHS Trust
facility between 1996 and 2006. The data
up to 2004 have been presented as evidence
that improvements starting in 2000 reduced
[Figure 1. Walsall Hospitals NHS Trust HSMRs, 1996–2006. HSMR = hospital standardized mortality ratio. Source: based on data from Dr. Foster Intelligence (2005, 2007).]
in-hospital mortality for this facility from
130 in 2000 to 92.8 in 2004; a discussion
of the period from 1996 to 1999 is absent.
While there are certainly five years of declin-
ing values beginning in 2000, the first three
years of this trend seem to return the HSMR
to an average level. The conclusion that these
changes are associated with improved qual-
ity of care or a reduction in adverse events
appears unwarranted when the 1996 (pre-
intervention) and 2003 (post-intervention)
HSMRs are statistically identical. Moreover,
the Walsall HSMRs returned to average in
2005 and 2006, further eroding the contention
that changes in 2000 led to any improvement
in patient safety that is measurable with the
HSMR. As Wright et al. (2006) acknowledge,
the observed trend could be due to chance,
regression to the mean, coding changes, differ-
ent discharge policies or referral of compli-
cated patients to other facilities (i.e., a change
in admission policies). It could also be true
that the HSMR does not measure avoidable
deaths but, rather, trends in overall mortality.
Some evidence of this latter point may be
found in Wright et al. (2006). For the facil-
ity they discuss, Figure 2 shows the HSMRs
between 1996 and 2005. The authors argue
that the mortality reduction program, begun
in 2002, was responsible for a decrease in
mortality at that facility. The data in Figure
2 show that mortality had been decreasing
at this facility for at least two years prior to
the mortality reduction program. Indeed, the
rate of decrease between 2002 and 2004 is
similar to the rate of reduction in the 1996–
1998 period, when no program was in place.
Further still, Figure 3 shows that the quarterly
HSMRs were virtually unchanged between
1996 and 2005 (the 95% CIs overlap).
[Figure 2. Annual HSMRs for facility discussed in Wright et al. (2005). HSMR = hospital standardized mortality ratio. Source: Wright et al. (2005).]
Applying common heuristics (Hart and
Hart 2002; Hart et al. 2003, 2004) of statisti-
cal process control to Figures 1 and 2, there
is preliminary evidence that mortality is
“under control” at both facilities. Nearly all
the observations lie within the three stand-
ard deviation control limits. In other words,
the annual HSMRs are fluctuating around
the facility average, and we observe the
annual variation one would expect to occur
randomly. However, these annual data may
only be considered preliminary since there are
not enough observations to establish reliable
control limits (25 observations are needed)
(Lee and McGreevey 2002).
Assuming that the mortality process is
stable, only one data point (the HSMR for
2005 in Figure 2) lies outside the three sigma
limits ordinarily used to identify an “out of
control” process. Figure 2 does show evidence
of a trend (seven consecutive falling values),
but this trend starts three years before the
mortality reduction program. As such, it is
unclear whether the HSMR is actually meas-
uring avoidable deaths.
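The two control-chart heuristics invoked above can be made concrete. This is a minimal sketch: the three-sigma limits are computed from the plotted values themselves rather than from a properly established baseline period, as the text notes would be required.

```python
from statistics import mean, stdev

def beyond_three_sigma(values):
    """Indices of points outside mean +/- 3 standard deviations,
    the usual signal of an 'out of control' process."""
    m, s = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - m) > 3 * s]

def falling_run(values, length=7):
    """True if `length` consecutive values strictly decrease --
    the trend rule applied to Figure 2 above."""
    run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur < prev else 1
        if run >= length:
            return True
    return False
```

A run of seven steadily declining annual HSMRs trips the trend rule even when every point sits inside the control limits, which is exactly the situation described for Figure 2.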
HSMR Development
WRHA managers, researchers, auditors,
analysts and clinicians participated in the
development of the HSMR with CIHI.
WRHA's participation spanned three versions of the HSMR; definitions, coding, inclusion criteria, statistical methodology and validation were revisited for each version.
[Figure 3. Quarterly HSMRs for facility discussed in Wright et al. (2005). HSMR = hospital standardized mortality ratio. Source: Wright et al. (2005).]
Standardization and Coding
CIHI’s HSMR version 3.0 selects cases based
on 65 diagnosis groups (most responsible
diagnosis) accounting for the top 80% of
in-hospital deaths in Canada (CIHI 2007b).
This definition is important because the
methodology selects cases based on the diag-
noses for which the most deaths occur rather
than the diagnoses for which the most avoid-
able deaths occur.
An interesting example of the differ-
ence can be found in the version 3.0 HSMR
compared with previous versions. Version
3.0 excludes “neonates less than 750 grams.”
This change arose from findings in WRHA
that these deaths were almost always planned
terminations. The decision to exclude these
cases is appropriate but demonstrates that a
collection of diagnoses that accounts for 80%
of deaths does not account for 80% of avoid-
able deaths.
A second important issue that arose
during consultation with WRHA involved
the inclusion and exclusion criteria around
patients receiving palliative care (also known
as comfort care). Notably, IHI in the United
States specifically excludes all patients receiv-
ing comfort care [Boxes 1 and 2 in Figure 1
of Whittington et al. (2005)] from HSMR
follow-up with their global trigger tool
(Whittington et al. 2005). This is important
because an above-average HSMR for a facil-
ity may be a signal that this facility has more
adverse events, but it may also signal a higher
propensity to admit patients to manage the
dying process (Seagroatt and Goldacre 2004).
WRHA (recognizing the IHI methodology in
the United States) argued that patients receiv-
ing palliative care should be excluded from the
HSMR calculation since these deaths were
expected and the patients involved usually
had a designation of alternate level of care or
withdrawal of treatment. It is not a failure in
patient safety when these patients expire. Many
patients included in the HSMR were termi-
nally ill. High HSMR levels may in fact repre-
sent a failure to reorient the healthcare system
to facilitate more home and hospice deaths.
CIHI now produces separate HSMRs:
one that includes patients receiving pallia-
tive care and one that excludes these patients.
Separate logistic models are used to calculate expected deaths. However, this approach
does not entirely solve the problem because
there are wide discrepancies in the coding of
palliative care. It is well known that measures
of hospital performance are influenced by
variable coding (Austin et al. 2005). A new
palliative care coding standard is currently in
place, but the HSMRs for 2004–2005 are not
calculated based on this standard. Fiscal year
2004–2005 is the reference year for facilities
to monitor their HSMR. All standardiza-
tion (logistic modelling) was performed using
2004–2005 national data.
An example of what happens when this
standard is applied is shown in Figure 4.
Figure 4 illustrates the HSMRs at HSC before
and after the coding change. The coding of
palliative care has an enormous impact on
facility HSMR. The HSMR goes from 148 in
the fourth quarter of fiscal year 2004–2005 to
49 in the first quarter of fiscal year 2005–2006.
Looking at fiscal year 2005–2006, the HSMR
with palliative care is 118 (unchanged from
2004–2005) and 55 without palliative care. It
is unclear whether HSC is 18% worse or 45%
better than the national average.
The coding problem becomes more
prominent when comparing other facilities in
Winnipeg. In fiscal year 2004–2005, all the
HSMRs with palliative care are lower than
those that exclude palliative care. In fiscal
year 2006–2007, all the HSMRs with pallia-
tive care are higher than those that exclude
palliative care. In fiscal year 2005–2006, the
HSMRs with palliative care in four of six
facilities are lower than the HSMRs without
palliative care (but higher in the two teach-
ing facilities). This is an example of how both
temporal and inter-facility differences in
admission for end-of-life care or differences in
coding make comparisons of HSMRs difficult.
A coding issue separate from the national
standard is the timing of a palliative designa-
tion. For example, should the HSMR exclude
only those patients who are admitted with a
palliative care designation or should it exclude
anyone who is palliative at any time during
an admission? If the latter criterion is chosen,
when is the palliative designation valid? If a
patient initially receives all possible life-saving
measures and subsequently has care with-
drawn, should this patient be included in the
HSMR under “with palliative care” or “with-
out palliative care”? Should this line be drawn
at 72 hours? One week? One month? Clinical
decisions concerning alternate levels of care
have been shown to vary between Canadian
cities (Cook et al. 2001). In any of these cases,
clinical discretion (which is arguably desir-
able) regarding the withdrawal of care seri-
ously undermines the validity of the HSMR
in measuring avoidable deaths.
There are two remaining arguments
against using the HSMR as a marker of
“definite” mortality events. First, discharge
decisions and policies vary regionally. Thus,
it is unclear whether in-hospital mortality or
30-day mortality is a better measure of quality
and safety. The arbitrariness of the decision
to discharge means that “death” is not definite because it is associated with a particular place (the hospital).

[Figure 4. Health Sciences Centre HSMRs, FY 2004–2005 to FY 2006–2007. All values based on HSMR version 3.0. FY = fiscal year; HSMR = hospital standardized mortality ratio. Source: data from CIHI reports created May 18, 2007 (FY 2004–2005), May 23, 2007 (FY 2005–2006) and September 11, 2007 (FY 2006–2007).]

There is some evidence
to suggest that regional differences in the
proportion of people who die in hospital affect
HSMR calculations (Harley 2004; Seagroatt
and Goldacre 2004). As such, the HSMR is
very likely an indicator of the degree to which
death management is performed in hospital
versus elsewhere.
Finally, it is well known that patients
within a diagnostic group are not homo-
geneous with respect to the likelihood of
dying; yet, the HSMR does not adequately
standardize for within-diagnosis variation.
For example, the CIHI 30-day acute myocar-
dial infarction (AMI) in-hospital mortality
indicator calculates expected deaths using
parameters for age, sex, shock, diabetes, heart
failure, cancer, cerebrovascular disease, pulmo-
nary edema, acute renal disease, chronic renal
disease and cardiac dysrhythmias. All of these
elements affect the likelihood of death due to
AMI, but none are used in the standardiza-
tion of International Classification of Diseases
Tenth Revision code I21 (AMI) – a diagnos-
tic group included in the HSMR calculations.
Instead, the expectations of death are derived
from the national average of deaths in this
category. This uncontrolled between-patient
variability is compounded across 65 diagnostic
categories when calculating HSMR. Inter-
hospital comparisons would only be valid if
the mortality rates across the 65 diagnostic
strata had a consistent relationship (i.e., all
65 mortality rates at one facility were a scalar
multiple of those at the comparison facility
and this scalar were approximately constant
across diagnostic groups) (Breslow and Day
1987; Wolfenden 1923).
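A small numerical illustration of this condition, with hypothetical figures: two facilities whose stratum-specific mortality ratios are identical (double the national rate in one stratum, half of it in the other) still report very different overall ratios once their case mixes differ.

```python
def smr(observed_by_stratum, expected_by_stratum):
    """Overall standardized mortality ratio across diagnostic strata."""
    return sum(observed_by_stratum) / sum(expected_by_stratum) * 100

# Identical stratum-level performance at both facilities: mortality is
# double the national rate in stratum 1 and half of it in stratum 2.
ratios = [2.0, 0.5]
expected_a = [80, 20]   # facility A: case mix dominated by stratum 1
expected_b = [20, 80]   # facility B: case mix dominated by stratum 2

observed_a = [r * e for r, e in zip(ratios, expected_a)]  # [160.0, 10.0]
observed_b = [r * e for r, e in zip(ratios, expected_b)]  # [40.0, 40.0]

print(smr(observed_a, expected_a))  # 170.0
print(smr(observed_b, expected_b))  # 80.0
```

Only when the stratum-specific ratios are a constant multiple across facilities, as Breslow and Day require, does the overall ratio become independent of case mix.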
Inter-hospital Comparisons
CIHI has been clear to point out that inter-
hospital comparisons of HSMRs are not valid
(personal communication, June 21, 2006).
CIHI does not want to produce hospital
rankings as has been done in other countries.
Further evidence showing that HSMRs are
not adequately standardized to make inter-
hospital comparisons comes from considera-
tion of the patient populations at HSC in
Winnipeg and Foothills Medical Centre in
Calgary. Both of these facilities provide terti-
ary level care and are the university medical
centres for large urban and rural populations.
However, in 2004–2005 the HSMR (without
palliative care) for Foothills Medical Centre
was 60 (95% CI 54–67) while the HSMR for
HSC was 125 (95% CI 113–137).
Table 1 shows the counts and distribution
of HSMR cases for fiscal year 2005–2006 at
these two facilities. The HSC cases do not
include any patients in age categories one
(zero to four) or two (five to 14). (Note that
the version 3.0 methodology uses age in years
at time of admission, not age categories.)
This means that people <15 years of age are
excluded from the mortality rates at HSC. (A
children’s hospital is administratively part of
HSC but is considered a separate pediatric
facility for purposes of CIHI administrative
data, and pediatric facilities are excluded.)
Table 1. Age distributions for FMC and HSC HSMR cases, fiscal year 2005–2006

Age Group    FMC (n)   FMC (%)   HSC (n)   HSC (%)
0–4              979      5.13         0       0
5–14             556      2.92         0       0
15–44          3,016     15.82     1,843    10.8
45–64          4,487     23.53     3,938    23.2
65–74          3,227     16.92     3,293    19.4
75–84          4,345     22.79     4,904    28.8
85–120         2,457     12.89     2,972    17.5

FMC = Foothills Medical Centre (Calgary); HSC = Health Sciences Centre (Winnipeg); HSMR = hospital standardized mortality ratio.
However, this subpopulation makes up >8%
of cases at Foothills Medical Centre. Since the
national model of expected mortality includes
people 14 and under, there is serious mis-speci-
fication error associated with expected deaths
at HSC, and two otherwise highly comparable
facilities have incomparable HSMRs. This is
because the age distributions of the two facili-
ties, and therefore the distributions of expected
numbers of deaths, are disparate (Breslow and
Day 1987). The inconsistency of mortality
rates across population subgroups (and there-
fore insufficient standardization of HSMRs) is
one reason that Sir Brian Jarman has avoided
making comparisons of HSMRs between UK
and US hospitals (personal communication,
June 2, 2006). Facility HSMRs are not compa-
rable internationally.
Further evidence that HSMRs in Calgary
and Winnipeg are not comparable comes from
a comparison of place of death. Seagroatt and
Goldacre (2004) found that hospitals with
the highest HSMRs were in regions where
a large percentage of people died in hospital.
Hospitals with the lowest HSMRs tended to
be in regions where the percentage of people
dying in hospital was low. In-hospital death
ranged from 45% in Plymouth to more than
60% in Walsall.
Table 2. Distribution of place of death in the Calgary and Winnipeg Health Regions

Place of Death             Calgary (%)   Winnipeg (%)
Acute care hospital            35.3          54.6
Palliative hospital            20.5          13.6
Long-term care facility        22.1          19.1
Home                           18.3          11.3
Other locations                 3.8           1.5

Source: Canadian Institute for Health Information (2007a).
Table 2 shows the distributions of place of
death for the Calgary and Winnipeg Health
Regions in fiscal year 2003–2004 (CIHI
2007a). Nearly 20% more deaths occurred in
hospital in Winnipeg than in Calgary, and
these figures are comparable to differences
found in Britain.
Audit Results
WRHA conducted a chart audit of fourth
quarter HSMR cases in 2005–2006 to learn
from these deaths and to determine whether
the HSMRs were truly indicating that a
higher than average number of preventable
deaths was occurring in Winnipeg hospitals.
In phase one of the HSMR audit, 553 charts
from six hospitals were identified for pre-
screening. Of the 553, 245 were selected to be
audited in phase two. A chart was selected if
a patient was categorized as receiving non-
comfort care based on two sets of criteria.
First, the patient was not in the process of
being panelled (scheduled for transfer to a
personal care/nursing home), did not have
an alternate level of care designated, was not
transferred from a Winnipeg personal care
home (nursing home) and was under 75 or
admitted to an intensive care unit (ICU)
during the stay. Second, the auditor desig-
nated a patient as receiving comfort care
if anywhere in the chart any of the terms
comfort care, poor prognosis or informal palliative appeared or if there were any documented conversations with the family regarding palliation, comfort care or withdrawal of treatment.
A patient may also have been designated as
receiving comfort care if he or she had an
advance care plan (ACP1, ACP2 or ACP3;
see Appendix for WRHA definitions). The
245 patients selected for phase two of the
audit were those receiving non-comfort care,
according to at least one of the definitions. Of
the 245 cases, 55 (22%) met both definitions
of non-comfort care and 190 (78%) met at
least one definition.
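In pseudo-logic, the two-stage screen above amounts to the following. The field names are hypothetical illustrations, since the audit worked from paper charts rather than structured data.

```python
# Hypothetical sketch of the WRHA phase-one screen; field names are
# illustrative, not an actual data dictionary.
COMFORT_TERMS = ("comfort care", "poor prognosis", "informal palliative")

def non_comfort_by_criteria(chart):
    """First definition: objective chart criteria."""
    return (not chart["being_panelled"]
            and not chart["alternate_level_of_care"]
            and not chart["transferred_from_pch"]
            and (chart["age"] < 75 or chart["admitted_to_icu"]))

def non_comfort_by_auditor(chart):
    """Second definition: auditor review of the chart narrative."""
    notes = chart["notes"].lower()
    comfort = (any(term in notes for term in COMFORT_TERMS)
               or chart["family_palliation_discussion"]
               or chart["advance_care_plan"] in ("ACP1", "ACP2", "ACP3"))
    return not comfort

def select_for_phase_two(chart):
    """Selected if non-comfort care under at least one definition."""
    return non_comfort_by_criteria(chart) or non_comfort_by_auditor(chart)
```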
Level of Care
Of the 245 patients, 144 (59%) died on the
ward and 76 (31%) died in the ICU. The
remaining 25 deaths (10%) occurred in the
following locations: ward transfer from ICU
(nine), emergency room (five), observation
(three) and other (eight). A do not resuscitate
(DNR) order was in place for 190 (78%) of
the 245 patients. Eighty-three (44%) of the
190 patients with DNR orders were located
on the ward when the DNR was ordered.
Twelve percent of the patients with a DNR
order had it in place before being admitted
to the hospital. Sixty-three (33%) patients
had a DNR order on the day of or previous
to admission. One hundred thirty-six (72%)
of the 190 patients with DNR orders were
designated DNR by the end of the third day
of their hospital stay. Of the 53 patients with
DNR orders after three days of admission, 30
(57%) had cancer. Twenty-four (10%) of 245
patients had an advance care directive. While
HSC does not use the ACP form for designa-
tion of ACP or DNR level, 103 (42%) patients
had an ACP form in their chart. Of these, 43 (42%) had a designation of ACP 3 (the second highest level of care).
Treatment Withdrawn
Fifty-one (21%) of the 245 patients had
their treatment withdrawn. At HSC, 45%
of patients had their treatment withdrawn.
Forty-one (80%) of the 51 patients had a
DNR order. The average age of patients who
had their treatment withdrawn was 70 years.
Of the 51 patients, 18 (35%) had their treat-
ment withdrawn for the reason of poor prog-
nosis or poor status.
One hundred thirty (53%) of the 245
patients were informally palliative. The infor-
mal designation of a patient being considered
palliative occurred if any of the terms pallia-
tive, comfort care or poor prognosis was used in
the physician’s or nurse’s progress notes. This
could also include patients taking a turn
for the worse or discussions with the family
regarding withdrawing treatment. Eighty-one
percent of Victoria Hospital patients were
informally palliative.
There was anticipation of death within 72
hours of death noted for 82 (33%) of the 245
patients. For 45% of HSC patients and 43%
of St. Boniface patients, concerns about death
were documented within 72 hours prior to
death occurring.
Global Triggers
The Canadian Adverse Events Study (CAES)
(Baker et al. 2004) refined 18 screening crite-
ria for detecting adverse events. The study
was designed to describe the frequency and
type of adverse events in patients admitted to
Canadian acute care hospitals and to compare
the rate of these adverse events across types
of hospitals and between medical and surgical
care. One of the screening criteria is “unex-
pected death.” Of the 3,745 cases in the study,
75 (2%) were flagged as unexpected death.
A modified version of the CAES tool was
applied to the 245 HSMR cases audited (we
added several dozen other screening crite-
ria). If many of the deaths in these cases were
preventable, we would expect the proportion
of unexpected death triggers to be high. In
the sample of 553 charts in Winnipeg, 1.8%
(10 of 553) of cases had an unexpected death
trigger. Of these, three involved unwitnessed
arrests and six involved sepsis. The propor-
tion of unexpected deaths is similar to that
in the CAES study; thus, the HSMR does
not appear to be a sensitive tool for detecting
unexpected death.
The results of the audit performed in
WRHA indicate that most of the deaths used
in calculating HSMRs were expected rather
than preventable. Over half of the patients
had an alternate level of care, and death was
anticipated in one third of cases. Further, only
1.8% of patients had a patient safety concern.
The adverse event trigger tool provides some
confirmation that unexpected death was rare,
given that death in 1.8% of the 553 cases
reviewed was classified as unexpected. This
is not to argue that 1.8% is an acceptable
number but, rather, that the HSMR is not a
sensitive tool for detecting the occurrence of
adverse events or unexpected death.
How Can Administrators Best Learn
from Deaths?
The HSMR should not be viewed as an
important indicator of patient safety or quality
of care for several reasons. The indicator is
highly aggregate and difficult to adequately
standardize given the large number of diagno-
sis groups. This makes inter-hospital compari-
sons invalid. The definition of palliative care
and differences in discharge rates make in-
hospital and out-of-hospital mortality diffi-
cult to distinguish. Many changes in policy
unrelated to patient safety could significantly
change a facility’s HSMR. As we have seen
in the case of HSC, when the intended use is
to compare one facility’s HSMRs over time,
the choice of reference year makes interpreta-
tion difficult and cause-effect relationships
between the HSMRs and patient safety
programs even more difficult. How then can
facilities best learn from in-hospital mortality?
Use of the cumulative sum (CUSUM)
statistic is a better approach to monitoring
in-hospital deaths. It has been used both
by Jarman et al. (2005) and Wright et al.
(2006) to monitor mortality over time. The
CUSUM statistic allows one to differentiate
between variations in performance that are
due to chance and variations that are greater
than what would be expected from a random
process and therefore a possible cause for
concern (Yap et al. 2007). The measure can be
risk-adjusted in a variety of ways (Steiner et
al. 2000) and involves recording instances of
“failures” (deaths) and comparing the likeli-
hood of this failure to the likelihood of death
in all patients from the beginning of the time
series (hence the name cumulative sum). The
main advantage of the CUSUM over quar-
terly or annual HSMRs is that the CUSUM
statistic is calculated for each patient. As such,
it allows administrators to focus on a subset
of patients within a narrow time frame in
which deaths were occurring at a higher rate
than expected. Armed with this information
on a small number of patients (as opposed to
all patients at a hospital within a given fiscal
quarter), auditors or mortality review commit-
tees can focus their attention on what may
have been happening during this period of
elevated mortality. The CUSUM approach
also provides “signal alarms” that sound when
mortality is occurring at a higher-than-
expected rate within a given time frame. Thus,
it can run quietly in the background until a
problem arises.
Following Steiner et al. (2000), use of the
CUSUM statistic involves setting an "alarm"
odds ratio for death, RA. For example, RA would
be set at 2 if administrators desired to detect a
doubling in the odds of death. Defining R0 as
the current odds of death (usually set at 1), the
CUSUM sequentially tests (for each patient)
H0: odds ratio = R0; versus the alternative,
HA: odds ratio = RA. CUSUM scores are
then calculated based on the individual risk
factors of each patient (likelihood of death
based on prior risk factors such as severity of
illness), the alarm odds ratio and the current
odds ratio. The CUSUM scores are added
with each discharge and an “alarm” signals
when the sum of these scores is greater than
a predefined control limit. The control limit
varies depending on the data characteristics at
each hospital.
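The mechanics above can be sketched in a few lines of Python. This is a minimal illustration of the risk-adjusted CUSUM weights described by Steiner et al. (2000); the function names, the patient risks and the control limit of 4.5 are illustrative choices, not values from WRHA or CIHI.

```python
import math

def cusum_weight(y, p, r_a=2.0, r0=1.0):
    """Log-likelihood-ratio score for one discharge (Steiner et al. 2000).
    y: 1 if the patient died in hospital, 0 otherwise.
    p: the patient's predicted probability of death (prior risk factors).
    r_a: alarm odds ratio (2.0 detects a doubling in the odds of death).
    r0: current, in-control odds ratio (usually 1.0).
    """
    num = (1 - p + r0 * p) * r_a ** y
    den = (1 - p + r_a * p) * r0 ** y
    return math.log(num / den)

def run_cusum(outcomes, risks, r_a=2.0, h=4.5):
    """Accumulate scores discharge by discharge; return the CUSUM path
    and the 1-based index of the first alarm (None if no alarm)."""
    x, path, alarm_at = 0.0, [], None
    for i, (y, p) in enumerate(zip(outcomes, risks), start=1):
        x = max(0.0, x + cusum_weight(y, p, r_a))  # chart resets at zero
        path.append(x)
        if alarm_at is None and x > h:
            alarm_at = i  # evidence the odds of death have reached r_a
    return path, alarm_at
```

With RA = 2 and a control limit of 4.5, a run of deaths among patients whose predicted risk was only 10% triggers the alarm after eight discharges. Because the statistic is computed per patient, the alarm points auditors to a narrow window of admissions rather than an entire fiscal quarter.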
Both Jarman et al. (2005) and Wright
et al. (2006) use the odds ratio version of
the CUSUM statistic to measure changes in
mortality. Jarman et al. (2005) set their graph
to detect a doubling in the odds of death (RA
= 2; a rising CUSUM is bad news). Wright
et al. (2006) set their graph to detect a halv-
ing in the odds of death (RA = 0.5; a falling
CUSUM is bad news). These authors set an
alarm at approximately 4.5; thus, each time
the graph crossed this line, there was suffi-
cient evidence to conclude that a new level of
mortality had been reached (e.g., the odds of
death had doubled).
The CUSUM statistic detects changes
in mortality when the HSMR does not.
Whereas the quarterly HSMRs in Figure
3 do not show much change in observed
mortality compared with expected mortal-
ity, the CUSUM (Figure 2 in Wright et al.
2006) shows that in-hospital mortality was,
in fact, lower afterward. As such, the CUSUM is a
more effective tool for monitoring changes in
mortality rates over time.
As noted above, use of the CUSUM
statistic does not avoid the need to adjust for
individual patient risk factors beyond diag-
nosis, age, sex, length of stay and comorbid
conditions. For example, Steiner et al. (2000)
use the Parsonnet score to control for illness
severity when calculating the CUSUM statis-
tic. A variety of other risk-adjustment meth-
ods are available to control for the generic
severity of illness, such as scores on the Duke
University Severity of Illness (DUSOI) scale
and APACHE. One study found that a
patient’s first APACHE III score explained
90% of the variations in hospital mortal-
ity among critically ill patients (Knaus et al.
1991). One way to interpret this finding is
that 10% of in-hospital mortality is related to
errors, quality of care or other factors.
Of the 245 charts audited, 43.7% (107
of 245) involved an ICU admission and had
an APACHE score. As of 2006, four of the
six hospitals in the Winnipeg Health Region
have calculated APACHE scores for patients
when they are admitted to the Department of
Medicine. When ICU patients are included,
approximately 70–80% of patients admit-
ted to these four hospitals have at least one
APACHE score. Use of APACHE scores to
derive expected deaths would significantly
improve the ability to control for between-
patient (within-diagnosis) variations in the
risk of death. A limitation of using APACHE
scores is that these scores are not included in
the discharge abstract database. However, the
data are routinely collected in databases and
could easily be linked to the HSMR data.
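As a sketch of what such a linkage might look like, the fragment below joins hypothetical discharge-abstract records to a table of first APACHE III scores by chart number. All field names and values are invented for illustration, since the real databases have their own record layouts and linkage keys.

```python
# Hypothetical discharge-abstract records (invented field names).
abstracts = [
    {"chart_no": 101, "diagnosis_group": "sepsis",    "died": 1},
    {"chart_no": 102, "diagnosis_group": "stroke",    "died": 0},
    {"chart_no": 103, "diagnosis_group": "pneumonia", "died": 1},
]

# First APACHE III score on admission, keyed by chart number.
apache = {101: 91, 103: 64}

# Left join: every discharge is kept, and a missing score stays None,
# so the 70-80% coverage described above can be audited directly.
linked = [dict(rec, apache_iii=apache.get(rec["chart_no"]))
          for rec in abstracts]
coverage = sum(r["apache_iii"] is not None for r in linked) / len(linked)
```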
A weakness of the CUSUM approach
is that, like the HSMR, it does not provide
a reason for why mortality might be higher.
More detailed investigations of deaths are still
required since the CUSUM is only an indica-
tor or signal. However, a major advantage of
the CUSUM approach is that administra-
tors would be able to identify small groups
of patients and short time frames in which
patients expired at a higher rate than expected
(as opposed to having one HSMR for the
fiscal quarter). Narrowing the field of patient
deaths that are contributing to higher-than-
expected deaths would permit more focused
chart audits and process evaluations.
Policy Implications in WRHA
In the absence of better standardization
of coding for palliative care and within-
diagnostic group variations in level of illness,
we reiterate our contention that HSMRs
cannot be fairly compared between hospitals.
As such, the argument for publishing HSMRs
in tables that facilitate such comparisons is
weak. Further, there is pressure on administra-
tors to decrease their HSMRs in the absence
of detailed and actionable data to do so. This
may encourage “gaming” through admission
and discharge policies. Such games might
include discharging a patient multiple times
to inflate the number of "live" discharges in a
diagnostic group, recoding patients as pallia-
tive, discharging patients to long-term care
more quickly or refusing to admit critically
ill patients from personal care homes. These
examples, though perverse, illustrate the ease
with which HSMRs can be manipulated
when there is an incentive to do so.
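A toy calculation shows how little recoding it takes to move the ratio. All numbers here are invented: a quarter with 120 observed and 100 expected deaths, in which 15 deaths are recoded as palliative, removing those cases (and their summed predicted risk of 6 expected deaths) from the eligible cohort.

```python
def hsmr(observed_deaths, expected_deaths):
    """HSMR = 100 x observed / expected deaths among eligible cases."""
    return 100.0 * observed_deaths / expected_deaths

# Hypothetical quarter: 120 deaths observed against 100 expected.
baseline = hsmr(120, 100.0)

# Recoding 15 deaths as palliative removes those cases entirely:
# observed falls by 15, expected by those patients' summed risk (6.0).
after_recoding = hsmr(120 - 15, 100.0 - 6.0)
```

Without any change in quality of care, the facility's HSMR drops by more than eight points, which is why admission and discharge policies can move the measure so easily.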
Even if the HSMR is eventually shown
to be a valid, sensitive and specific indicator
for unrecorded adverse events or deficiencies
in quality of care, it is not clear that focus-
ing attention on death is the best route to
improving patient safety or hospital quality.
The vast majority of patients who experi-
ence adverse events do not die from their
injury. Thus, focusing on death as a means
of improving quality is highly inefficient.
Data regarding deaths are simply convenient
because deaths cannot go unreported. Since
mortality has been chosen as the pathway
to improving quality, efforts would be better
focused on measuring safety and quality at
the sub-hospital level. For example, hospital
administrators in the United Kingdom are
provided monthly with adjusted standardized
mortality ratios for 78 diagnostic groups and
128 procedures, including CUSUM charts
and mortality odds ratios (B. Jarman, personal
communication, 2007). This information is
highly actionable – giving administrators a
much better sense of where problems might
exist. CIHI should consider developing
similar data to augment the distribution of
HSMRs. The experience in WRHA is that
identifying specific problems via HSMRs is
difficult and costly.
Because problem areas are difficult to
identify, WRHA is making extraordinary
efforts to investigate deaths. The new WRHA
Mortality Diagnostic Process involves rapid
screening of all deaths by a nurse auditor and
referral of obvious problems directly to the
Critical Incident Review Process and/or the
Chief Medical Officer. Periodically, groups
of questionable cases are referred to an inter-
disciplinary diagnostic team in which two
members perform a more detailed review
of a chart. The budget for the Mortality
Diagnostic Process is $125,000. A compre-
hensive formative evaluation of the entire
process will be conducted by the Research and
Evaluation Unit at WRHA, and in a year or
two there will be detailed information about
the value of such a process and the learning
that has occurred.
CIHI and the Canadian Patient Safety
Institute should be commended for bringing
heightened awareness and scrutiny to in-
hospital mortality in Canada. They are clearly
concerned about the welfare of Canadians and
improving patient safety. Most hospitals are
now reviewing, resurrecting or re-designing
death review processes. However, the HSMR
must be interpreted and used with caution; it
is difficult to interpret without better patient,
hospital and regional level risk stratification.
Our chart audit found that the HSMR does
not appear to be a sensitive measure of adverse
events or unexpected death. Further, the meas-
ure is difficult to use because the information
is too aggregated. While the CUSUM approach
also has weaknesses, it provides a much better
starting place for auditors and administrators.
Mortality data by diagnostic group and proce-
dure would be even more helpful.
The publication of HSMRs from facilities
across Canada will inevitably lead to contin-
ued public (media) comparisons between
different hospitals across the country. We
hope the arguments presented here will rein-
force CIHI’s stated caveat that these compari-
sons should be avoided. It is simply not a valid
conclusion that differences in HSMRs are due
to differences in quality of care. Such compar-
isons are also not a useful way for patients to
choose a hospital for care. The usefulness of
the HSMR lies entirely in prompting facilities
to investigate deaths and make incremental
improvements over time. But it must be kept
in mind that even tracking trends over time is
complicated by the use of a baseline year that
precedes the development of a palliative care
coding guideline.
The authors gratefully acknowledge the
contributions of Gerry Taylor, Lisa Kaita and
Christy Rogowski from Audits and Quality
Analysis at WRHA. We would also like to
thank Evelyn Fondse and Anne Hakansson
from Health Information Services. The
thoughtful and thorough technical advice
provided by our team made this study possible.
Austin, P.C., D.A. Alter, G.M. Anderson and J.V.
Tu. 2004. "Impact of the Choice of Benchmark on
the Conclusions of Hospital Report Cards.” American
Heart Journal 148(6): 1041–46.
Austin, P.C., J.V. Tu, D.A. Alter and C.D. Naylor.
2005. "The Impact of Under Coding of Cardiac
Severity and Comorbid Diseases on the Accuracy of
Hospital Report Cards.” Medical Care 43(8): 801–89.
Baker, D.W., D. Einstadter, C.L. Thomas, S.S. Husak,
N.H. Gordon and R.D. Cebul. 2002. “Mortality
Trends during a Program That Publicly Reported
Hospital Performance.” Medical Care 40(10): 879–90.
Baker, G.R., P.G. Norton, V. Flintoft, R. Blais, A.
Brown, J. Cox, E. Etchells, W.A. Ghali, P. Hebert, S.R.
Majumdar, M. O’Beirne, L. Palacios-Derflingher, R.J.
Reid, S. Sheps and R. Tamblyn. 2004. "The Canadian
Adverse Events Study: The Incidence of Adverse
Events among Hospital Patients in Canada.” Canadian
Medical Association Journal 170(11): 1678–86.
Breslow, N.E. and N.E. Day. 1987. Statistical Methods
in Cancer Research: Vol. II. The Design and Analysis of
Cohort Studies. New York: Oxford University Press.
Canadian Institute for Health Information. 2007a.
Health Care Use at the End of Life in Western Canada.
Ottawa, ON: Author.
Canadian Institute for Health Information. 2007b.
HSMR Technical Notes. Ottawa, ON: Author.
Retrieved October 12, 2007. <
Cook, D.J., G. Guyatt, G. Rocker, P. Sjokvist,
B. Weaver, P. Dodek, J. Marshall, D. Leasa, M.
Levy, J. Varon, M. Fisher and R. Cook. 2001.
“Cardiopulmonary Resuscitation Directives on
Admission to Intensive-Care Unit: An International
Observational Study.” Lancet 358(9297): 1941–45.
DeLong, E.R., E.D. Peterson, D.M. DeLong,
L.H. Muhlbaier, S. Hackett and D.B. Mark. 1997.
“Comparing Risk-Adjustment Methods for Provider
Profiling.” Statistics in Medicine 16(23): 2645–64.
Dr. Foster Intelligence. 2005. The Hospital Guide,
December 2005. London: Author. Retrieved August
2, 2007. <
Dr. Foster Intelligence. 2007. How Healthy Is Your
Hospital? Special Edition Hospital Guide. London:
Author. Retrieved August 2, 2007. <www.drfoster.>.
Farsi, M. and G. Ridder. 2006. “Estimating the Out-
of-Hospital Mortality Rate Using Patient Discharge
Data.” Health Economics 15(9): 983–95.
Glance, L.G., A. Dick, T.M. Osler, Y. Li and D.B.
Mukamel. 2006. “Impact of Changing the Statistical
Methodology on Hospital and Surgeon Ranking: The
Case of the New York State Cardiac Surgery Report
Card.” Medical Care 44(4): 311–19.
Goldman, D.A. and J.D. Brender. 2000. “Are
Standardized Mortality Ratios Valid for Public Health
Data Analysis?” Statistics in Medicine 19(8): 1081–88.
Harley, M.J. 2004. “Hospital Mortality League
Tables: Influence of Place of Death.” BMJ April 20.
Available at:
ters/328/7450/1235. Accessed June 10, 2008.
Hart, M.K., J.W. Robertson, R.F. Hart and K.Y.
Lee. 2004. “Application of Variables Control Charts
to Risk-Adjusted Time-Ordered Healthcare Data.”
Quality Management in Health Care 13(2): 99–119.
Hart, M.K., K.Y. Lee, R.F. Hart and J.W. Robertson.
2003. “Application of Attribute Control Charts to
Risk-Adjusted Data for Monitoring and Improving
Health Care Performance.” Quality Management in
Health Care 12(1): 5–19.
Hart, M.K. and R.F. Hart. 2002. Statistical Process
Control for Health Care. Pacific Grove, CA: Duxbury/
Thomson Learning.
Iezzoni, L.I., A.S. Ash, M. Shwartz, J. Daley, J.S.
Hughes and Y.D. Mackiernan. 1996. “Judging
Hospitals by Severity-Adjusted Mortality Rates:
The Influence of the Severity-Adjustment Method.”
American Journal of Public Health 86(10): 1379–87.
Institute for Healthcare Improvement. 2003. Move
Your Dot: Measuring, Evaluating, and Reducing
Hospital Mortality Rates (IHI Innovation Series White
Paper). Boston: Author. Retrieved August 3, 2007.
Jarman, B., A. Bottle, P. Aylin and M. Browne. 2005.
“Monitoring Changes in Hospital Standardised
Mortality Ratios.” BMJ 330(7487): 329.
Jarman, B., S. Gault, B. Alves, A. Hider, S. Dolan, A.
Cook, B. Hurwitz and L.I. Iezzoni. 1999. “Explaining
Differences in English Hospital Death Rates Using
Routinely Collected Data.” BMJ 318(7197): 1515–20.
Julious, S.A., J. Nicholl and S. George. 2001. “Why
Do We Continue to Use Standardized Mortality
Ratios for Small Area Comparisons?” Journal of Public
Health Medicine 23(1): 40–46.
Knaus, W.A., D.P. Wagner, E.A. Draper, J.E.
Zimmerman, M. Bergner, P.G. Bastos, C.A. Sirio,
D.J. Murphy, T. Lotring and A. Damiano. 1991. “The
APACHE III Prognostic System. Risk Prediction
of Hospital Mortality for Critically Ill Hospitalized
Adults.” Chest 100(6): 1619–36.
Lee, K. and C. McGreevey. 2002. “Using Control
Charts to Assess Performance Measurement Data.”
Joint Commission Journal on Quality Improvement
28(2): 90–101.
Leeb, K., J. Zelmer, G. Webster and I. Pulcins. 2005.
“Safer Care – Measuring to Manage and Improve.”
Healthcare Quarterly 8(Special Issue): 86–89.
Li, Y., A.W. Dick, L.G. Glance, X. Cai and D.B.
Mukamel. 2007. “Misspecification Issues in Risk
Adjustment and Construction of Outcome-Based
Quality Indicators.” Health Services and Outcomes
Research Methodology 7(1–2): 39–56.
Mehrotra, A., T. Bodenheimer and R.A. Dudley.
2003. "Employers' Efforts to Measure and Improve
Hospital Quality: Determinants of Success.” Health
Affairs 22(2): 60–71.
Park, R.E., R.H. Brook, J. Kosecoff, J. Keesey, L.
Rubenstein, E. Keeler, K.L. Kahn, W.H. Rogers
and M.R. Chassin. 1990. “Explaining Variations
in Hospital Death Rates. Randomness, Severity
of Illness, Quality of Care.” Journal of the American
Medical Association 264(4): 484–90.
Seagroatt, V. and M.J. Goldacre. 2004. “Hospital
Mortality League Tables: Influence of Place of
Death.” BMJ 328(7450): 1235–36.
Steiner, S.H., R.J. Cook, V.T. Farewell and T.
Treasure. 2000. “Monitoring Surgical Performance
Using Risk-Adjusted Cumulative Sum Charts.”
Biostatistics 1(4): 441–52.
Thomas, J.W. and T.P. Hofer. 1999. “Accuracy of
Risk-Adjusted Mortality Rate as a Measure of
Hospital Quality of Care.” Medical Care 37(1): 83–92.
Whittington, J., T. Simmonds and D. Jacobsen.
2005. Reducing Hospital Mortality Rates: Part 2 (IHI
Innovation Series White Paper). Cambridge, MA:
Institute for Healthcare Improvement. Available
pitalMortalityRates2WhitePaper2005.pdf. Accessed
June 10, 2008.
Wolfenden, H.H. 1923. “On the Methods of
Comparing the Mortalities of Two Communities, and
the Standardization of Death-Rates.” Journal of the
Royal Statistical Society 86(3): 399–411.
Wright, J., B. Dugdale, I. Hammond, B. Jarman, M.
Neary, D. Newton, C. Patterson, L. Russon, P. Stanley,
R. Stephens and E. Warren. 2006. “Learning from
Death: A Hospital Mortality Reduction Programme.”
Journal of the Royal Society of Medicine 99(6): 303–08.
Yap, C.H., M.E. Colson and D.A. Watters. 2007.
“Cumulative Sum Techniques for Surgeons: A Brief
Review.” ANZ Journal of Surgery 77(7): 583–86.
Appendix 1. Definitions for Levels
in the Winnipeg Regional Health
Authority’s Advance Care Plan
Advance Care Plan 1
Advance care plan 1 is often referred to
as palliative or comfort care. It focuses on
aggressive relief of pain and discomfort. No
cardiopulmonary resuscitation (CPR:
intubation, assisted ventilation, defibrillation,
chest compressions or advanced life
support medications) is performed. There are
also no life-sustaining or curative treatments,
such as intensive care, tube feedings, transfusions,
dialysis, intravenous (IV) hookups and
certain medications. All available tests and
treatments necessary for palliation are done,
including medications and transfer to hospital
if necessary.
Advance Care Plan 2
Advance care plan 2 provides palliative and
comfort care, as above, but also allows for
treatment of reversible conditions (e.g.,
pneumonia, blood clot) that may have developed.
There is no CPR (intubation, assisted
ventilation, defibrillation, chest compressions,
advanced life support medications). Intensive
care, all available tests and treatments for
reversible conditions are offered, based on
medical assessment, except for CPR. Certain
tests and treatments for any reversible condi-
tions (e.g., tube feedings, dialysis, intensive
care, transfusions, IV hookups, certain medications,
certain tests, transfer to hospital, etc.)
may be refused based on the patient’s values.
Advance Care Plan 3
Advance care plan 3 provides any necessary
palliative and comfort care, as above, plus
available treatment of all conditions, both
reversible and nonreversible, with no restric-
tions, except for CPR. There is no CPR
(intubation, assisted ventilation, defibrillation,
chest compressions, advanced life support
medications). As above, a person may elect to
refuse any tests or treatments for both nonre-
versible and reversible conditions. If so, these
should be listed.
Advance Care Plan 4
Advance care plan 4 provides for all available
treatment of all conditions, and includes full
CPR.