Changes in Patient Sorting to Nursing
Homes under Public Reporting:
Improved Patient Matching or Provider Gaming?
Rachel M. Werner, R. Tamara Konetzka, Elizabeth A. Stuart,
and Daniel Polsky
Objective. To test whether public reporting in the setting of postacute care in nursing
homes results in changes in patient sorting.
Data Sources/Study Setting. All postacute care admissions from 2001 to 2003 in the
nursing home Minimum Data Set.
Study Design. We test changes in patient sorting (or the changes in the illness severity
of patients going to high- versus low-scoring facilities) when public reporting was ini-
tiated in nursing homes in 2002. We test for changes in sorting with respect to pain,
delirium, and walking and then examine the potential roles of cream skimming and
downcoding in changes in patient sorting. We use a difference-in-differences frame-
work, taking advantage of the variation in the launch of public reporting in pilot and
nonpilot states, to control for underlying trends in patient sorting.
Principal Findings. There was a significant change in patient sorting with respect to
pain after public reporting was initiated, with high-risk patients being more likely to go
to high-scoring facilities and low-risk patients more likely to go to low-scoring facilities.
There was also an overall decrease in patient risk of pain with the launch of public
reporting, which may be consistent with changes in documentation of pain levels (or
downcoding). There was no significant change in sorting for delirium or walking.
Conclusions. Public reporting in this setting appears to improve the matching of high-risk
patients to high-quality facilities. However, efforts should be made to reduce the in-
centives for downcoding by nursing facilities.
Key Words. Public reporting, quality of care, nursing home quality
Publicly reporting quality information is a commonly adopted strategy that is
typically thought to improve quality of care through two pathways (Berwick,
James, and Coye 2003). First, having quality information publicly available
makes it possible for health care consumers (or their agents and advocates) to
© Health Research and Educational Trust
Health Services Research
shop on quality, thus increasing the number of patients choosing high-quality
providers. Second, in response to the increased demand for high-quality care,
public reporting may induce health care providers to invest in and improve
the quality of care they deliver. Prior work has documented quality improve-
ment from public reporting through both of these pathways (Marshall et al.
2000; Fung et al. 2008).
While these consumer- and provider-driven pathways may represent the
primary mechanisms by which public reporting improves care, public reporting may also change care in
additional ways. One possibility is that public reporting affects the sorting of
patients to providers. Whereas typical demand-side changes under public re-
porting imply that the number of patients choosing high-scoring providers in-
creases, sorting may change the type of patients choosing high-scoring providers
(e.g., changing the case mix of patients going to high- versus low-scoring facilities).
These changes may be patient driven (e.g., if high-risk patients seek
out high-scoring providers). They may also be provider driven, either resulting
from cream skimming (i.e., if low-scoring providers are more likely to seek out
low-risk patients) or downcoding (i.e., if low-scoring providers change their
documentation to make their patients [and their quality] appear better).
Our objective in this paper is to test whether public reporting in the
setting of postacute care in nursing homes results in changes in
patient sorting, and to test whether changes in patient sorting are driven by
patient or provider behavior.
BACKGROUND AND PRIOR LITERATURE
Public reporting may shift the type of patients seen at high- and low-scoring
providers for several reasons. First, patient–provider matching may improve
under public reporting, as high-risk patients may be more likely
Address correspondence to Rachel M. Werner, M.D., Ph.D., Center for Health Equity Research
and Promotion, Philadelphia VAMC, Division of General Internal Medicine, University of
Pennsylvania School of Medicine, 1230 Blockley Hall, 423 Guardian Drive, Philadelphia, PA
19104; e-mail: firstname.lastname@example.org. R. Tamara Konetzka, Ph.D., is with the Department of
Health Studies, University of Chicago, Chicago, IL. Elizabeth A. Stuart, Ph.D., is with the
Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore,
MD. Daniel Polsky, Ph.D., is with the Division of General Internal Medicine, University of
Pennsylvania School of Medicine, Leonard Davis Institute of Health Economics, University
of Pennsylvania, Philadelphia, PA.
556 HSR: Health Services Research 46:2 (April 2011)
to seek out high-scoring providers if they stand to gain more by finding a high-
quality provider (or lose more if they are cared for by a low-quality provider).
Limited prior work has examined whether public reporting results in better
matching of patients to providers.
Second, certain types of providers, such as low-quality providers, may
benefit from improving their report card scores by either seeking out lower-risk
patients (cream skimming) or by changing their documentation of patient char-
acteristics to make patients appear healthier (downcoding). Cream skimming is
possible if quality measures are inadequately risk adjusted or if providers have better information about patient risk than the
risk adjustment used in public reporting. Cream skimming may result in fewer
high-risk patients being treated overall by providers subject to public reporting
(particularly if there are provider or treatment substitutes available), and down-
coding may result in the appearance of lower risk on average (without changes
in the true underlying risk levels). However, cream skimming and downcoding
may cause the appearance of patient matching when improved patient match-
ing does not exist or exaggerate estimates of improvements in true patient
matching. This might happen if the likelihood of cream skimming or down-
coding differs by provider type (e.g., low-quality providers may be more likely
to engage in cream skimming or downcoding to improve their quality scores).
Prior work has documented cases of both cream skimming (Omoigui
et al. 1996) and downcoding (Green and Wintfeld 1995) in the setting of public reporting. One prior study
looked for increased admission of low-risk patients by examining trends in six clinical characteristics among
newly admitted nursing home residents.
declined after the initiation of public reporting (the percentage of newly ad-
mitted residents in pain) and that this decline was larger among facilities
reported to have poor pain control (Mukamel et al. 2009). While suggestive of
cream skimming, this descriptive study did not attempt to differentiate patient
matching, provider cream skimming, and downcoding.
We focus our analyses on postacute care (or short-stay) residents of nursing
homes. Postacute care provides a transition from hospital to home
(or another long-term care setting) for over 5.1 million Medicare beneficiaries
annually (MedPAC 2008), providing health care services, including rehabil-
itation, skilled nursing, and other ancillary services in a variety of health care
settings (MedPAC 2009). Over 40 percent of Medicare beneficiaries use post-
acute care annually and the largest proportion of postacute care occurs in
nursing homes (or skilled nursing facilities [SNFs]), with over 2.5 million SNF stays in 2007 for which
Medicare paid over U.S.$21 billion. Approximately 11 percent of nursing
home beds are filled by Medicare postacute care patients at any given time;
these services provide an important revenue stream for nursing homes.
Poor quality of care has been pervasive in nursing homes for decades
(Institute of Medicine 1986). In an attempt to address these quality deficits,
the Department of Health and Human Services announced the formation of
the Nursing Home Quality Initiative in 2001, with a major goal of improving the
information available to consumers on the quality of care at nursing homes. As
part of this effort, the Centers for Medicare and Medicaid Services (CMS) re-
leased Nursing Home Compare, a guide detailing quality of care at over 17,000
Medicare- and/or Medicaid-certified nursing homes (Centers for Medicare and
Medicaid 2008). Nursing Home Compare was first launched as a pilot program
in April 2002 in six states (Colorado, Florida, Maryland, Ohio, Rhode Island, and
Washington). Seven months later, in November 2002, Nursing Home Compare
was launched nationally. At that time, Nursing Home Compare included four
measures of postacute care quality: the proportion of postacute care residents
with moderate to severe pain, delirium (with and without adjustment for facility
admissions profile), and improvement in walking. The facility-adjusted delirium
measure was dropped soon thereafter, leaving three postacute care measures.
Several recent studies have examined whether quality improved under
Nursing Home Compare. These studies have found that report card scores
improved on some clinical measures but not others (Mukamel et al. 2008b;
Werner et al. 2009). For measures where quality scores improved, such as the
proportion of nursing home residents with pain, the size of the improvement
was modest (Werner et al. 2009).
In this paper, we build on this prior work by first testing for a
change in patient sorting (or the illness severity of patients going to high-
versus low-scoring facilities) after the initiation of public reporting, and then
examining the role of cream skimming and downcoding in changes in patient
sorting. We use a difference-in-differences framework, taking advantage of the
variation in release of Nursing Home Compare in pilot and nonpilot states, to
control for underlying trends in patient sorting. The use
of pilot states provides a stronger empirical test of the association between
Nursing Home Compare and observed changes in patient sorting and illness
severity than just relying on within-state pre–post changes in patient sorting.
We focus on the postacute care population in nursing homes for several
reasons. First, the postacute population has a high turnover rate and less
cognitive impairment compared with the chronic care nursing home popu-
lation. This makes it feasible empirically to find changes in patient sorting in
response to public reporting over a short timeframe, if they exist. Second,
postacute care residents can be linked with important variables used in case
mix adjustment from the qualifying Medicare-covered hospitalization, en-
abling more complete risk adjustment when estimating
changes in quality. Finally, with the high spending on postacute care, im-
proving the quality of postacute care has become a high priority for Medicare.

METHODS
We include all nursing homes that were persistently included in public
reporting for postacute care measures over this period (SNFs with fewer than
20 eligible patients over a 6-month period are excluded from Nursing Home
Compare [Morris et al. 2003]). While approximately one-half of SNFs are
excluded from our analyses due to their small size, only 6 percent of SNF
discharges are excluded. Among SNFs exposed to public reporting, we in-
cluded all Medicare fee-for-service beneficiaries age 65 years or older.
Our primary data source is the nursing home Minimum Data Set (MDS). The
MDS contains detailed clinical data collected at regular intervals for every
resident in a Medicare- and/or Medicaid-certified nursing home. These data
are collected and used by nursing homes to assess needs and develop a plan of
care unique to each resident and by the CMS to calculate Medicare prospec-
tive reimbursement rates. Because of the reliability of these data (Gambassi
et al. 1998; Mor et al. 2003) and the detailed clinical information contained
therein, they are the data source for the report card measures included in
Nursing Home Compare. We also used two secondary sources of data. We
obtained facility characteristics from the Online Survey, Certification and
Reporting dataset. We used the 100 percent MedPAR data (with all Part A
claims) to calculate health care utilization covariates used in risk adjustment.
Dependent Variables: Patient Risk
We define our dependent variables of patient risk in three ways: patients with
moderate or severe pain on admission; delirium on admission; and difficulty
walking on admission. These are baseline assessments that are not publicly
reported. However, they correspond to the three postacute clinical areas for
which quality is measured from 14-day assessments and publicly reported.
Main Independent Variable: Facility Quality Report Card Scores
Our main independent variables are the three facility quality report card
scores used by Nursing Home Compare: percent of short-stay patients who
did not have moderate or severe pain; percent of short-stay patients without
delirium; and percent of short-stay patients whose walking remained inde-
pendent or improved. (All report card scores are scaled so that higher levels
indicate higher quality.) We calculate these report card scores directly from
MDS (rather than using the scores that were publicly reported), enabling us to
consistently measure facility report card scores both before and after public
reporting of these scores was initiated.
We calculated report card scores for each facility following the method
used to calculate the publicly reported postacute care report card scores on
Nursing Home Compare (Morris et al. 2003): Each measure is based on patient
assessments 14 days after admission; is calculated based on assessments over a
6-month target period, including only patients who stay long enough
to have a 14-day assessment; is calculated only at facilities with at least 20 cases
during the target time period; and is based on data collected 3–9 months before
the score’s calculation. To ensure accurate replication of the report card scores,
we benchmarked our calculated report card scores against the report cards
available from CMS. Our calculated scores did not significantly differ from those
published by CMS, indicating an
excellent match between the results of our calculations and the publicly reported
scores. Note that we measure patient risk at admission and that facility quality is
based on patient outcomes on the 14th day of their postacute care stay.
We include patient- and facility-level characteristics as covariates in all
analyses. Patient characteristics include sociodemographic characteristics
(age, sex, and race), comorbidities (including cognitive performance, func-
tional status, and indicators of medical diagnoses), resource utilization group,
and prior health care utilization (including prior hospital
stays and total Medicare charges). Facility characteristics include profit status,
size, occupancy, hospital based, and payer mix. We also include postacute
care market fixed effects (defined by the Dartmouth Atlas Hospital Service
Areas) and time quarter fixed effects.
All analyses include SNF admissions between the years 2001 and 2003, span-
ning the launch of the first published reports from Nursing Home Compare in
April and November 2002, for pilot and nonpilot states, respectively.
Our first goal is to test for changes in patient sorting related to the launch
of public reporting using two empirical approaches. First, we describe quar-
terly estimates of patient sorting using an interrupted time-series design and
test whether we observe changes in patient sorting in the quarters when
Nursing Home Compare was launched. Using a linear probability model, we
estimate the following:
Risk_{i,j,m,t} = α·ReportCardScore_{j,t−1} + β·Qtr_t + γ·(ReportCardScore_{j,t−1} × Qtr_t) + ζ·X_{i,j,t} + Z_m + ε_{i,j,m,t}   (1)
where Risk_{i,j,m,t} is the admission risk of patient i at facility j in market m and quarter t, regressed on the facility's lagged report card score, a vector
of quarterly time dummies, and the interaction between the two. We also in-
clude a vector of time-varying patient and facility characteristics and market
fixed effects. We lag the report card score by one-quarter to allow consumers to
respond to the report card score from the prior quarter. The coefficient on the
report card score (α) represents patient sorting in the baseline and omitted time
period (the first quarter of 2001) and reflects the correlation between a facil-
ity’s report card score in a given clinical area and the status of newly admitted
patients in that area. The coefficients on the interaction terms (γ)
represent quarterly changes in patient sorting after the baseline period. Before
public reporting, we expect this correlation to be low and unchanged from
quarter to quarter, as consumers were not aware of scores and facilities had little
incentive to engage in cream skimming or downcoding. After public reporting,
we expect this correlation to strengthen. We estimate the model separately by location in
a pilot versus nonpilot state, as Nursing Home Compare was initiated 7 months
earlier in the pilot states, hypothesizing that γ will be higher in post-Nursing
Home Compare quarters than in the respective preceding quarters, with the
change beginning in the quarter in which Nursing Home Compare was launched
(second quarter of 2002 for pilot states and fourth quarter of 2002 for nonpilot
states). We estimate this specification separately for the clinical areas of pain,
delirium, and walking.
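To make the specification concrete, the quarterly sorting model in equation (1) can be sketched on synthetic data. This is a minimal illustration, not the authors' implementation: the sample size, parameter values, launch quarter, and the plain NumPy least-squares fit (in place of the paper's covariate-rich regressions) are all our own assumptions.

```python
import numpy as np

# Sketch of equation (1): a linear probability model of admission risk on a
# facility's lagged report card score, quarter dummies, and their interaction.
# All numeric values below are illustrative assumptions, not the paper's data.
rng = np.random.default_rng(0)
n, n_q, launch_q = 100_000, 8, 4           # admissions, quarters, "launch" quarter

quarter = rng.integers(0, n_q, n)          # quarter of each admission
score = rng.uniform(0.5, 1.0, n)           # facility's lagged report card score

# Synthetic truth: baseline sorting of -0.05 per unit of score; sorting
# strengthens by 0.15 in quarters at or after the launch of public reporting.
gamma_true = np.where(np.arange(n_q) >= launch_q, 0.15, 0.0)
p = 0.30 - 0.05 * score + gamma_true[quarter] * score
risk = rng.binomial(1, np.clip(p, 0.0, 1.0))   # admission pain indicator

# Design matrix: intercept, score, quarter dummies (baseline quarter omitted),
# and score x quarter interactions -- mirroring alpha, beta, gamma in eq. (1).
Q = np.eye(n_q)[quarter][:, 1:]
X = np.column_stack([np.ones(n), score, Q, Q * score[:, None]])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

alpha_hat = beta[1]                        # baseline sorting (alpha)
gamma_hat = beta[2 + (n_q - 1):]           # quarterly changes in sorting (gamma)
print(round(alpha_hat, 3), gamma_hat.round(3))
```

In the paper, the same interaction coefficients are estimated separately for pain, delirium, and walking, with patient and facility covariates and market fixed effects added; those are omitted here for brevity.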
Second, to more directly test whether changes in patient sorting are
attributable to Nursing Home Compare, we use a difference-in-differences
specification that mirrors that used in equation (1) but pools pilot and nonpilot
states and adds an interaction between a post-Nursing Home Compare
indicator variable and facility report card scores:
Risk_{i,j,m,t} = α·ReportCardScore_{j,t−1} + β·Qtr_t + γ·(ReportCardScore_{j,t−1} × Qtr_t) + λ·NHC + δ·(NHC × ReportCardScore_{j,t−1}) + ζ·X_{i,j,t} + Z_m + ε_{i,j,m,t}   (2)
The Nursing Home Compare indicator variable (NHC) equals 1 after public
reporting was launched and zero otherwise; it thus varies between pilot and
nonpilot states because of the different timing of the launch of Nursing Home
Compare. The coefficient on the second interaction term (δ) represents a
difference-in-differences estimate of change in patient sorting between pilot
and nonpilot states (controlling for secular trends). Identification using this
specification comes from the 7-month gap in launching public reporting na-
tionwide after it was initiated in the pilot states. As above, we estimate changes
in patient sorting separately for pain, delirium, and walking.
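Equation (2) can likewise be sketched on synthetic data. Everything here (the 50/50 split of admissions between pilot and nonpilot states, effect sizes, and quarter indexing) is an illustrative assumption; the point of the sketch is that δ is identified only by the 7-month gap in launch timing, as in the paper. Market and state effects are omitted for brevity.

```python
import numpy as np

# Sketch of the difference-in-differences model in equation (2). The NHC
# indicator turns on at quarter index 5 (2002q2) for pilot states and quarter
# index 7 (2002q4) for nonpilot states; delta is identified by that gap.
# All parameter values are illustrative assumptions, not estimates from the paper.
rng = np.random.default_rng(1)
n, n_q = 300_000, 12                       # admissions, quarters 2001q1..2003q4

pilot = rng.binomial(1, 0.5, n)            # admission in a pilot state (assumed 50/50)
quarter = rng.integers(0, n_q, n)
score = rng.uniform(0.5, 1.0, n)           # facility's lagged report card score
nhc = np.where(pilot == 1, quarter >= 5, quarter >= 7).astype(float)

delta_true = 0.10                          # assumed true change in sorting under NHC
p = 0.30 - 0.05 * score + 0.01 * nhc + delta_true * nhc * score
risk = rng.binomial(1, np.clip(p, 0.0, 1.0))

# Pooled design: score, quarter dummies, score x quarter, NHC, NHC x score.
Q = np.eye(n_q)[quarter][:, 1:]
X = np.column_stack([np.ones(n), score, Q, Q * score[:, None], nhc, nhc * score])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

delta_hat = beta[-1]                       # difference-in-differences estimate of delta
print(round(delta_hat, 3))
```

Because the quarter-by-score interactions absorb common time trends in sorting, the NHC-by-score term is pinned down by the pilot/nonpilot contrast in the two quarters where exposure differs.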
Our second goal is to explore whether identified changes in patient
sorting could be due to provider behavior (cream skimming or downcoding).
First, we look for changes in admission severity, or the overall incidence of
pain, delirium, and difficulty walking on admission to postacute care in SNFs.
We test for these incidence changes using the same difference-in-differences
estimation strategy described above, estimating differences in patient risk
between pilot and nonpilot states.
Second, we test whether any observed declines in admission severity are
more likely due to cream skimming or downcoding. To do this we use data
from the prereporting period and regress a patient’s admission risk on 33
predictors of admission risk from MDS and MedPAR (including demograph-
ics, prior SNF and hospital utilization, RUG groups, clinical characteristics,
and comorbidities) and then use the coefficients from these regressions to
predict each patient’s admission risk in the postreporting period. If patients
admitted after public reporting was launched were truly at lower risk of
poor outcomes (and risk levels were correlated with other observable patient
characteristics), we expect that predicted admission risk would also decline
after Nursing Home Compare was launched. This would suggest that ob-
served declines in risk were due to true declines in patient severity, which
might be due to cream skimming. On the other hand, if actual patient risk did
not change but SNFs engaged in downcoding after Nursing Home Compare
was launched, we expect that predicted risk would remain stable while
observed risk declines.
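The logic of this predicted-risk check can be illustrated with a small simulation in which the synthetic truth is pure downcoding: patient observables are stable over time, but coded pain falls 2 percentage points after launch. The number of predictors, sample sizes, and effect size are illustrative assumptions (the paper uses 33 MDS and MedPAR predictors).

```python
import numpy as np

# Sketch of the downcoding vs. cream-skimming check: fit a risk model on the
# prereporting period, predict risk in the postreporting period, and compare
# observed and predicted declines. All values are illustrative assumptions.
rng = np.random.default_rng(2)
n_pre = n_post = 50_000
k = 10                                     # stand-in for the paper's 33 predictors

X_pre = rng.normal(size=(n_pre, k))        # patient observables, stable over time
X_post = rng.normal(size=(n_post, k))
w = rng.normal(scale=0.03, size=k)         # true link from observables to pain risk

y_pre = rng.binomial(1, np.clip(0.30 + X_pre @ w, 0.01, 0.99))
y_post = rng.binomial(1, np.clip(0.28 + X_post @ w, 0.01, 0.99))  # downcoded

# Fit a linear probability model on the prereporting period only...
A = np.column_stack([np.ones(n_pre), X_pre])
b, *_ = np.linalg.lstsq(A, y_pre, rcond=None)
# ...then predict each postreporting patient's admission risk from observables.
pred_post = np.column_stack([np.ones(n_post), X_post]) @ b

observed_drop = y_pre.mean() - y_post.mean()     # coded risk fell
predicted_drop = y_pre.mean() - pred_post.mean() # observables predict no fall
print(round(observed_drop, 3), round(predicted_drop, 3))
```

A stable predicted risk alongside a falling observed risk points to downcoding; a parallel fall in predicted risk would instead point to true changes in case mix, consistent with cream skimming.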
Third, we test whether any observed declines in admission risk might be
driving the main result of changes in patient sorting. To do this, we redefine
our dependent variable with a simulated variable that takes on the values of
observed admission risk in the prereporting period and predicted risk in the
postreporting period (starting in 2002q2 in pilot states and 2002q4 in nonpilot
states). We then use this simulated dependent variable to test whether changes
in patient sorting remain, using the same difference-in-differences model
defined in equation (2).

RESULTS
A total of 8,139 SNFs from Nursing Home Compare were included in the
study, covering 4,437,746 postacute care admissions. Characteristics of these
SNFs and SNF stays are summarized in Table 1, stratified by location in pilot
versus nonpilot states.
Quarterly estimates of patient sorting based on equation 1 (i.e., the cor-
relation between patient risk and report card scores) are displayed in Figure 1.
For pain, low-scoring facilities are more likely to serve high-risk patients at
baseline (as evidenced by the negative correlation between patient risk and
report card scores) and this sorting did not significantly change quarter to
quarter before the initiation of public reporting. However, the correlation
between patient risk and report card scores significantly increased in the
quarter after these scores became publicly available via the launch of Nursing
Home Compare in both pilot and nonpilot states, suggesting that patients with
admission pain were more likely to choose higher-scoring facilities when
quality information became publicly available. Evidence of improved match-
ing remained for four quarters after public reporting was initiated, but then
declined to prereporting levels. For delirium, although there was an increase
in the correlation between patient risk and report card scores at the launch of
Nursing Home Compare in the pilot states, the change was not statistically
significant and was not seen in nonpilot states. There was no change in patient
sorting with respect to difficulty walking.
Difference-in-differences estimates of changes in patient sorting in re-
sponse to Nursing Home Compare are presented in Table 2. The estimated
change in matching from Nursing Home Compare for pain is 0.095, sug-
gesting that after Nursing Home Compare was launched a 10-point higher
facility report card score was associated with an approximately 1-percentage
point higher admission pain level in the following quarter. Changes in match-
ing for delirium and walking were not statistically significant.
There was also a decline in the incidence of moderate to severe pain
across all SNFs at the time of admission to postacute care after Nursing Home
Compare was launched, with the incidence of admission pain declining by
2.01 percentage points on a base of approximately 30 percent (standard error
0.19; p-value <.001). The changes in admission incidence of delirium and
difficulty walking were small and statistically nonsignificant.

Table 1: Characteristics of SNFs and SNF Admissions in Study Sample,
Stratified by Participation in the Nursing Home Compare Pilot Program.
[Values are not recoverable from the source. Columns: Pilot States, Nonpilot
States. Rows: number (%) of SNFs; number (%) of SNF admissions; number (%)
of SNF admissions with moderate or severe pain; facility report card scores,
mean (SD), for % residents without pain, % residents without delirium, and
% residents who are independent in walking or whose walking improved;
ownership (not for profit), n (%); total number of beds, mean (SD); occupancy
rate, mean (SD); hospital based, n (%); and payer mix (% private pay), mean (SD).]
SNF, skilled nursing facility.

Figure 1: Correlation between Patient Risk on Admission and Facility Report
Card Score (or Patient Sorting) before and after Report Card Scores Were
Publicly Reported, by quarter, 2001–2003.
Notes. Results are stratified by location in a pilot versus nonpilot state because
public reporting was launched at different times in pilot and nonpilot states.
The vertical lines show the timing of the launch of public reporting in pilot
states (in black) and nonpilot states (in gray). Results are from the multivariate
regression described in equation (1). *0.05 ≤ p-value < 0.10; **0.01 ≤ p-value < 0.05;
***p-value < 0.01; p-values test for statistically significant changes in sorting
from the first quarter of 2001 (the baseline period).
While the observed incidence of pain on admission declined by 2 per-
centage points after Nursing Home Compare was launched, the predicted
incidence of admission pain did not significantly change. The change in the
predicted value of admission pain after Nursing Home Compare was
launched was close to zero (change 0.07 percentage points, standard error
0.04; p-value .09). However, when substituting this predicted value for the
dependent variable in the postpublic reporting period, the estimate of patient
sorting did not change substantially. With the simulated dependent variable,
public reporting was associated with a statistically significant change of patient
sorting of 0.093 (standard error 0.013; p-value <.001).
Table 2: Difference-in-Differences Estimates (from Equation 2) of Changes
in Matching between High-Quality SNFs and High-Severity Patients after
NHC Was Launched.
[Coefficient values are not recoverable from the source. Rows: report card
score × post-NHC†; quarterly estimates of report card scores; quarterly fixed
effects; market fixed effects; number of observations.]
Notes. Key coefficients are highlighted in bold. All regressions include patient
and facility characteristics (profit status, number of total beds, occupancy,
hospital based, and payer mix). Robust standard errors in parentheses.
†Report card score is defined as the percent of residents without pain, without
delirium, and who are independent in walking or whose walking improved, for
regressions of changes in admission-level pain, delirium, and difficulty walking,
respectively. These report card scores are included in Nursing Home Compare.
We include a one-quarter lag of the report card scores to allow consumers time
to respond to the scores that were reported in the prior quarter.
‡Nursing Home Compare was launched in April 2002 (or 2002q2) in pilot
states and November 2002 (or 2002q4) in nonpilot states.
NHC, Nursing Home Compare; SNF, skilled nursing facility.

DISCUSSION
We find a significant change in patient sorting with respect to pain after public
reporting was initiated, with high-risk patients being more likely to go to high-
scoring facilities and low-risk patients more likely to go to low-scoring facil-
ities. We also find that the incidence of admission pain levels decreases after
the launch of public reporting in a way that is not predicted by other patient
characteristics, suggesting that facilities were downcoding high-risk patients.
Nonetheless, even after accounting for potential downcoding, significant
evidence of patient sorting remained.
Although we find evidence of changes in patient sorting and changes in
admission risk profiles with respect to pain, we find little evidence of either
patient sorting or changes in admission risk with respect to delirium or walk-
ing. There are plausible explanations for these discrepant findings. First,
admission delirium and difficulty walking had very low and very high prevalence,
respectively, leaving little room to detect
changes in patient sorting, if they exist. Second, to the extent that improved
matching is due to patient behavior, the report card measure of pain control
may be more salient and thus patients (or their agents) may be more likely to
respond to it. Because of the low levels of within-facility correlation between
these quality measures, improved matching on one measure would not be expected to
spill over to improved matching on another uncorrelated measure.
We also find that changes in patient sorting on pain waned over four
quarters in both pilot and nonpilot states. One explanation for this waning
is related to the inadequate risk adjustment of the quality measures used in
Nursing Home Compare (Arling et al. 2007; Mukamel et al. 2008a). If quality
measures are poorly risk adjusted, then as high-scoring facilities attract high-risk
patients their measured quality will decline, eroding the observed sorting.
Alternatively, the waning
of improved sorting that we document may be related to the delay built into
the calculation of the report card scores: scores are calculated
three quarters after the data are collected; we include a one-quarter lag of the
report card scores to allow consumers a chance to respond to the information.
Thus, the changes in patient illness severity that occur at the time of Nursing
Home Compare would take four quarters to appear in the report card score.
We make several important contributions to the existing literature. To
our knowledge, we are the first to directly examine changes in patient sorting
in response to Nursing Home Compare. While it is usually assumed that
Changes in Patient Sorting under Public Reporting567
public reporting will improve quality of care by increasing the market share of
high-quality providers and/or giving providers incentive to improve the qual-
ity of care they deliver, changes in patient sorting suggest an alternative
mechanism to improve quality of care, implying that there are changes in the
type of patients a provider sees, rather than or in addition to the number of
patients a provider sees. Improved matching suggests that the quality effect of
public reporting may be largest among the sickest patients.
While prior work has described a decline in the incidence of admission
pain after Nursing Home Compare was launched (Mukamel et al. 2009), we
affirm these findings using a robust methodological approach that controls for
underlying secular trends. Even when controlling for underlying trends in
states where Nursing Home Compare was not simultaneously released, we
find clinically meaningful declines in levels of admission pain. However, our
analyses suggest that these declines are most consistent with downcoding
rather than cream skimming. While it remains possible that nursing homes
engage in cream skimming, particularly in ways that are unobservable to us in
the data, we find that observable patient characteristics are predictive of
higher, and stable, levels of admission pain after Nursing Home Compare was
launched. Researchers have found evidence of downcoding in the presence of
public reporting (Green and Wintfeld 1995). In addition, prior evidence sug-
gests the reliability of the pain measure may be low and varies with patient
characteristics (Wu et al. 2005a,b). Despite the evidence in support of down-
coding in this setting, downcoding does not substantially alter our estimates of
sorting. Even after controlling for changes in coding we find significant
changes in patient sorting in association with public reporting.
A few study limitations should be considered. First, the relationship
we estimate between a facility’s report card score and admission severity may
be endogenous, particularly in the presence of inadequate risk adjustment
where the severity of patients admitted influences that facility’s report
card score. Although our lagged report card scores account in part for this
endogeneity, some may remain. Second, we
limit our analyses to the postacute care patients in nursing homes, which may
limit the generalizability of our results. However, these results provide impor-
tant information about the potential for public reporting to induce patient
matching and downcoding in any health care sector. Third, our difference-
in-differences model depends on the assumption that secular trends in pilot
and nonpilot states are the same, and potential violations of this assumption
make causal attribution of changes in sorting and case mix to Nursing Home
Compare less certain.
Despite these limitations, our findings have important implications.
Public reporting may have the largest impact on improving quality for the
sickest patients. Thus, looking for changes in quality on average, rather than
among subsets of patients, may lead to an underestimate of the effect of public
reporting on quality of care. Although improved matching of patients to pro-
viders under public reporting is good news, it is accompanied by the possi-
bility that public reporting may also induce downcoding by providers.
Changes in coding may be a justified response to data inaccuracies that must
be fixed to more accurately measure quality. However, these changes obfus-
cate true changes in quality in response to quality improvement incentives.
Joint Acknowledgment/Disclosure Statement: This research was funded by a grant
from the Agency for Healthcare Research and Quality (R01 HS016478-01).
Disclaimers: The content of this article does not reflect the views of the
VA or of the U.S. Government.

REFERENCES
Arling, G., T. Lewis, R. L. Kane, C. Mueller, and S. Flood. 2007. ‘‘Improving Quality
Assessment through Multilevel Modeling: The Case of Nursing Home Com-
pare.’’ Health Services Research 42 (3, part 1): 1177–99.
Berwick, D. M., B. James, and M. J. Coye. 2003. ‘‘Connections between Quality Mea-
surement and Improvement.’’ Medical Care 41 (1, suppl): I-30–8.
Centers for Medicare and Medicaid. 2008. ‘‘Nursing Home Compare’’ [accessed on Sep-
tember 19, 2008]. Available at http://www.medicare.gov/Nhcompare/Home.asp
Dranove, D., D. Kessler, M. McClellan, and M. Satterthwaite. 2003. ‘‘Is More Infor-
mation Better? The Effects of ‘Report Cards’ on Health Care Providers.’’ Journal
of Political Economy 111 (3): 555–88.
Fung, C. H., Y.-W. Lim, S. Mattke, C. Damberg, and P. G. Shekelle. 2008. ‘‘Systematic
Review: The Evidence That Publishing Patient Care Performance Data Im-
proves Quality of Care.’’ Annals of Internal Medicine 148 (2): 111–23.
Gambassi, G., F. Landi, L. Peng, C. Brostrup-Jensen, K. Calore, J. Hiris, L. Lipsitz,
V. Mor, and R. Bernabei. 1998. ‘‘Validity of Diagnostic and Drug Data in Stan-
dardized Nursing Home Resident Assessments: Potential for Geriatric Pharmaco-
epidemiology.’’