Do Hospitals Alter Patient Care Effort
Allocations under Pay-for-Performance?
Lauren Hersch Nicholas, Justin B. Dimick, and
Theodore J. Iwashyna
Objective. To determine whether hospitals increase efforts on easy tasks relative to
difficult tasks to improve scores under pay-for-performance (P4P) incentives.
Data Source. The Centers for Medicare and Medicaid Services Hospital Compare
data from Fiscal Years 2003 through 2005 and 2003 American Hospital Association
Annual Survey data.
Study Design. We classified measures of process compliance targeted by the Premier
Hospital Quality Incentive Demonstration as easy or difficult to improve based on
whether they introduce additional per-patient costs. We compared process compliance
on easy and difficult tasks at hospitals eligible for P4P bonus payments relative to
hospitals engaged in public reporting using random effects regression models.
Principal Findings. P4P hospitals did not preferentially increase efforts for easy tasks
in patients with heart failure or pneumonia, but they did exhibit modestly greater effort
on easy tasks for heart attack admissions. There is no systematic evidence that effort was
allocated toward easier processes of care and away from more difficult tasks.
Conclusions. Despite perverse P4P incentives to change the allocation of effort across tasks, hospitals did not respond to P4P incentives as hypothesized. Alternative incentive structures may motivate greater response by targeted hospitals.
Payers and policy makers are increasingly turning to pay-for-performance (P4P)
and other value-based purchasing strategies in an attempt to control rapidly
growing health care costs and improve quality of care. P4P seeks to improve
quality by incentivizing hospitals to allocate additional effort toward specific
elements of care that are rewarded with bonus payments. However, it is unclear how payers and regulators should design P4P incentives to motivate hospitals to improve quality; existing approaches, such as the diagnosis-related group (DRG)-based payments used in the Medicare program, fail to provide such incentives.
© Health Research and Educational Trust
Health Services Research
The Centers for Medicare and Medicaid Services (CMS) have been charged with developing value-based purchasing strategies for the Medicare program. A recent CMS-sponsored demonstration program, the Premier Hospital
Quality Incentive Demonstration (PHQID), was introduced with the goal of
significantly improving quality in incentivized areas in response to P4P incentive payments tied to hospital performance measured for five common medical and surgical conditions. Evaluations of the demonstration find mixed evidence that the demonstration improved compliance with targeted process of care measures relative to hospital public reporting (without financial incentives) of the same measures (Glickman et al.
2007; Lindenauer et al. 2007). Ryan (2009) finds no improvements in 30-day
mortality for Medicare beneficiaries hospitalized with targeted conditions at
P4P hospitals. Studies of physician P4P have also identified little response to
P4P incentives (Rosenthal et al. 2005; Mullen, Frank, and Rosenthal 2009).
Relatively little is known about why P4P strategies fail to meet expectations.
In this paper, we use data from the Premier demonstration to consider a
possible explanation for the failure of P4P incentives to motivate improved
patient outcomes. We test whether the P4P incentive structure encourages
hospitals to maximize the scores used to determine bonus payments by
focusing on low-cost, easy-to-improve components of the composite score.
We find that P4P hospitals score about 1 percentage point higher than
unincentivized hospitals on easy tasks. However, we fail to find consistent
evidence that hospitals strategically shift resources to improve scores across
three incentivized medical admissions as hypothesized.
CMS introduced the PHQID program in October 2003. The demonstration
built on a voluntary reporting initiative, the Hospital Quality Alliance (HQA),
established by a collaboration among the American Hospital Association (AHA), CMS, the Joint Commission on Accreditation of Healthcare Organizations, and several consumer groups (Jha 2005). HQA facilitates public reporting of hospital quality measures. Hospital reports are disseminated through the Hospital Compare website (http://www.hospitalcompare.hhs.gov). The Medicare Modernization Act required all hospitals to report to Hospital Compare in October 2004 in order to receive annual payment rate updates.

Address correspondence to Lauren Hersch Nicholas, Ph.D., M.P.P., Institute for Social Research and Center for Healthcare Outcomes and Policy, University of Michigan, 426 Thompson St, Ann Arbor, MI 48104; e-mail: email@example.com. Justin B. Dimick, M.D., M.P.H., is with the Department of Surgery and Center for Healthcare Outcomes and Policy, University of Michigan Medical School, Ann Arbor, MI. Theodore J. Iwashyna, M.D., Ph.D., is with the Division of Pulmonary & Critical Care, University of Michigan Medical School, Ann Arbor, MI.

HSR: Health Services Research 46:1, Part I (February 2011)
Hospitals that already subscribed to Premier, a quality reporting and purchasing collective, were invited to participate in this voluntary P4P demonstration; participating hospitals needed at least 30 annual admissions for targeted conditions. Of particular importance from an evaluation perspective, P4P hospitals were already subscribed to a quality reporting service and may be more motivated to improve quality of care than hospitals engaged in reporting only because of the pay-for-reporting efforts. Four hundred and twenty-one hospitals were invited to participate, and 255 completed the 3-year demonstration (Lindenauer et al. 2007).
The Premier demonstration incentivized the three medical admissions
targeted by Hospital Compare public reporting and two surgical admissions,
coronary artery bypass graft and hip/knee replacement. The P4P demonstration rewards performance on a condition-specific composite measure composed of a subset of observable process and outcomes measures. Outcome measures are included in the two surgical composites and for acute myocardial infarction (AMI). Figure 1 details the incentivized measures
included in this study, which focuses only on the medical admissions. We
analyze process of care measures because hospitals have more certainty about
performance on these measures. In an effort to improve hospital quality for all patients, the demonstration collects data on all admissions for targeted conditions and uses this all-patient data to rank hospitals. However, Medicare pays bonus payments only for Medicare-covered admissions. Hospitals are ranked and paid bonuses separately for each targeted condition.

Figure 1: Classification of Task Difficulty: Public Reporting and Pay-for-Performance Measures for Three Targeted Medical Hospitalizations
PHQID hospital quality scores are calculated using a two-stage process
for conditions with process and outcome measures. The process component
score uses an opportunity model reflecting the number of successfully com-
pleted tasks divided by the number of patients eligible for each measure. The
outcome component is calculated similarly. The full composite score is a
simple average of the process and outcome scores, weighted by the relative
number of measures included in each. For example, the full AMI composite
score includes eight process measures and one outcome. Thus, the total composite score is (8/9) × (Total Process Successes/Total Process Opportunities) + (1/9) × (Total Outcome Successes/Total Outcome Opportunities).
Composite scores are calculated without adjustments for the difficulty or cost of individual measures. Although a hospital would have to devote more resources to ensure that a left ventricular assessment was performed on an eligible heart failure patient than to complete a low-cost task, each of those encounters would contribute one success and one opportunity to the process composite.
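The scoring arithmetic above can be sketched in code. Below is a minimal Python sketch of the opportunity model and the weighted full composite, using the AMI weighting (8/9 process, 1/9 outcome); the patient counts are hypothetical.

```python
# Sketch of the PHQID two-stage composite score (opportunity model).
# Weights follow the paper's AMI example: 8 process measures and 1 outcome
# measure, so weights are 8/9 and 1/9. All counts below are illustrative.

def opportunity_score(successes, opportunities):
    """Share of opportunities successfully completed, pooled across measures."""
    return sum(successes) / sum(opportunities)

def full_composite(process_succ, process_opp, outcome_succ, outcome_opp,
                   n_process_measures, n_outcome_measures):
    total = n_process_measures + n_outcome_measures
    process = opportunity_score(process_succ, process_opp)
    outcome = opportunity_score(outcome_succ, outcome_opp)
    return (n_process_measures / total) * process \
        + (n_outcome_measures / total) * outcome

# Hypothetical AMI hospital-year: 8 process measures, 1 outcome measure.
process_succ = [90, 85, 70, 88, 92, 60, 75, 80]    # successes per measure
process_opp = [100, 100, 80, 95, 100, 70, 90, 85]  # eligible patients per measure
score = full_composite(process_succ, process_opp, [93], [100], 8, 1)
```

Note that the composite gives no extra weight to costly measures: a success on an inexpensive task raises the score exactly as much as a success on an expensive one.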
Hospitals in the highest decile of composite score for each condition
receive an annual bonus of 2 percent of the Medicare diagnosis-related group
(DRG) payment for Medicare-covered admissions with the incentivized con-
dition. Hospitals in the second-highest performance decile receive a 1 percent
bonus. Hospitals that fail to improve above the lowest quintile of initial performance face financial penalties in the demonstration's final year.
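The payment rule just described can be summarized as a small lookup; this is a sketch under the assumptions stated in the text, with the penalty size left as a parameter because this passage does not quantify it.

```python
# Hypothetical sketch of the PHQID payment rule described above: top decile
# of the condition-specific composite earns a 2 percent DRG bonus, the
# second decile earns 1 percent, and hospitals that stay below the baseline
# low-performance cutoff face a penalty (size assumed, passed as parameter).

def bonus_rate(percentile_rank, below_baseline_cutoff, penalty=0.01):
    """Return the DRG payment adjustment for one condition.

    percentile_rank: hospital's rank in the composite-score distribution (0-100).
    below_baseline_cutoff: True if the score failed to rise above the lowest
    quintile of initial (Year 1) performance.
    """
    if percentile_rank >= 90:      # top decile
        return 0.02
    if percentile_rank >= 80:      # second decile
        return 0.01
    if below_baseline_cutoff:      # penalty applies in the final year
        return -penalty
    return 0.0
```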
Hospital Response to P4P Incentives
Payers anticipate that P4P incentives will alter hospital behavior. Because the
majority of hospitals eligible for the P4P payments are nonprofit, we follow
Horwitz and Nichols (2007) and conceptualize the hospital’s problem as
choosing a level of quality that maximizes an objective function containing
quality and other priorities such as total service volume and revenue. Chosen
levels of quality vary across conditions. We assume that hospitals already
engage in quality improvement efforts that generate positive return on in-
vestment (across monetary or nonmonetary elements of the objective func-
tion). P4P offers additional incentives (payments or fines) to indirectly
motivate a higher level of quality than hospitals would otherwise select. This is
achieved through process compliance and, for some conditions, inclusion of
patient outcomes in the composite score.
For hospitals to increase effort on incentivized processes and outcomes, expected bonuses must outweigh the opportunity cost of improvement to the hospital. Hospitals can change performance on two dimensions for each measure: the number of eligible patients and the number of
successes. In practice, admissions patterns for the medical conditions targeted
by P4P and public reporting would be difficult to manipulate.1 We posit that performance pay will motivate greater effort on tasks that can be completed at low marginal cost. For example, the change in the composite score and bonus payment would be the same for an increase in the percentage of AMI patients receiving smoking cessation counseling (which can be accomplished by distributing an antismoking booklet during patient registration) or a same-sized improvement in inpatient survival, which may require changes on multiple tasks and utilize additional resources.
To illustrate the trade-offs across easy and difficult tasks, let N_A be the number of patients eligible for measure A, S_A be the number of successes on measure A, and C_A be the marginal cost of obtaining a success on measure A. If the goal is to maximize the P4P score while minimizing cost, we would expect substitution from high cost-of-success activities to low cost-of-success activities. Consider the case where the hospital is choosing between allocating enough resources to obtain S_A successes or S′_A successes. The change in the composite score for a change in the number of successes is (S′_A − S_A)/N_A. The cost of those additional successes is C_A × (S′_A − S_A). Thus, the cost of a 1 percentage point improvement in one's composite score by doing better on measure A is {C_A × (S′_A − S_A)}/{(S′_A − S_A)/N_A}, which simplifies to N_A × C_A. The key decision-making variable is the ratio of the costs across alternative measures of making equivalent changes in score. Whenever (N_A × C_A)/(N_B × C_B) < 1, we expect hospitals to substitute toward A and away from B if they are solely seeking to maximize their P4P benefit, as the improvement in the score per expenditure is greater under A than under B.
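The N_A × C_A result above can be checked numerically. Here is a minimal sketch with hypothetical eligibility counts and marginal costs; the variable names mirror the notation above.

```python
# Numerical check of the cost-per-point algebra above, with hypothetical
# eligibility counts (N) and marginal costs per success (C). The cost of
# improving the composite via measure A simplifies to N_A * C_A, so a
# score-maximizing hospital compares N_A * C_A against N_B * C_B.

def cost_per_point(n_eligible, cost_per_success):
    # Cost of extra successes is C * dS; the score gain is dS / N, so the
    # cost per unit of score is N * C, independent of how many successes.
    return n_eligible * cost_per_success

# Hypothetical: easy task A (many eligible patients, cheap successes)
# vs. difficult task B (few eligible patients, expensive successes).
n_a, c_a = 590, 1.0   # e.g., booklet-level cost per success
n_b, c_b = 100, 8.0   # e.g., staff-time cost per success

ratio = cost_per_point(n_a, c_a) / cost_per_point(n_b, c_b)
substitute_toward_a = ratio < 1  # substitute toward A when its cost per point is lower
```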
Because bonus payments and fines are much more relevant for hospitals at the tails of the initial performance distribution, response to P4P should be concentrated among initially high- and low-performing hospitals. We also note that all hospitals face incentives to improve process compliance scores during the
study period due to the CMS public reporting requirements that also began at
the onset of P4P. Hospitals may gain or lose volume if patients and payers
respond to posted quality information. Thus, the relevant P4P effect is im-
provement above and beyond secular trends related to public reporting,
driven either by improvements in process of care or improvements in record
keeping. Reduced effort on costly tasks may have adverse consequences for
the hospital in ways that are not directly related to P4P bonus payments such
as diminished reputation. P4P hospitals may balance multiple incentives by
concentrating improvements among low-cost processes.
DATA AND METHODS
This study uses Hospital Compare measures collected under the CMS Re-
porting Hospital Quality Data for Annual Payment Update initiative, covering Fiscal Years 2003–2005 (reported in 2004–2006). Hospital Compare
measures are posted with a 9-month lag. Data are available for 243 Premier
hospitals and 3,100 non-Premier hospitals. Because public reporting and P4P
begin simultaneously, pre-P4P performance data are unavailable. We use
Hospital Compare data to assess the effect of P4P on hospital process com-
pliance relative to public-reporting only.
We impose sample restrictions to reduce the possibility that our estimated P4P effects are driven by unobserved differences between P4P and comparison hospitals. We first limit the sample to hospitals reporting to Hospital Compare with at least 30 admissions for each of the incentivized medical conditions in all 3 years of the sample. These restrictions generate an analytic sample with 145 P4P
hospitals (the treatment group) and 1,089 comparison hospitals. We exclude
98 small P4P hospitals that do not have sufficient sample size in all years.
We augment the Hospital Compare data with survey data from the 2003 AHA Annual Survey. AHA survey data include baseline hospital characteristics, which may reflect hospitals' interests and abilities to comply with evidence-based measures. We also control for the percentage of admissions covered by Medicare, because P4P bonuses will be larger for hospitals that are more reliant on Medicare.
The AHA survey also asks hospitals whether they are involved in quality reporting. Because we lack pre-period process compliance data, this variable helps to isolate a control group that is engaged
in some form of quality measurement at baseline. Three-quarters of P4P and
non-P4P hospitals report participating in quality reporting in 2003. Our pre-
ferred control group for the 145 P4P hospitals is the 842 ‘‘early adopter’’
hospitals that are already engaged in some form of reporting as of the 2003
survey. By comparing P4P hospitals to early adopters, we minimize bias re-
lated to differential knowledge of or engagement in quality measurement and
improvement between P4P and comparison hospitals at baseline.
Our analysis is limited to 13 Hospital Compare measures covering the three initial conditions, which are consistently reported during the initial years of P4P. Process-of-care measures are the proportion of eligible patients receiving the appropriate care. We calculate overall composite scores for each condition and condition-specific
composites for easy and hard processes. Composite scores are calculated following PHQID methodology as an opportunity model, which is the proportion of opportunities where the appropriate measure was provided.
A panel of physician health services researchers, including a cardiologist, classified the incentivized tasks as easy or difficult to improve (Figure 1). Panelists were instructed
to classify tasks that would impose minimal additional per-patient costs as
easy to improve and those that would impose additional costs, for example,
by requiring additional staff time (either from existing staff or new hiring)
as difficult to improve. Hospital Compare data are used to create com-
posite hospital performance scores separately for each of the three conditions
and for easy and difficult tasks within conditions. Hospitals are assigned
quintiles of initial performance based on where their process compliance
composite score falls in the P4P hospital process compliance distribution
in Year 1.
We estimate random effects regressions of hospital process compliance with easy and difficult tasks during the first 3 years of P4P using generalized least squares regression:

P_hte = α·P4P_h + β·Q_h + γ·(P4P_h × Q_h) + τ·H_h + ρ·T + δ·(P4P_h × T) + a_h + e_hte    (1)

The dependent variable P_hte is the average process compliance in hospital h in year t on easy (difficult) tasks e. P4P_h is an indicator for participation in the
P4P demonstration; Q_h is a vector of dummy variables indicating the hospital's condition-specific initial performance quintile relative to the omitted median quintile; P4P_h × Q_h is a vector of interaction terms that allow the P4P response to vary with initial hospital ranking; H_h is a vector of baseline hospital characteristics from the AHA survey; T is a vector of year fixed effects relative to the first year; P4P_h × T is an interaction term that allows the time effect to vary for P4P and non-P4P hospitals; a_h is a hospital random effect uncorrelated with the other variables; and e_hte is an error term. We test the hypothesis that P4P incentives motivate hospitals to increase efforts on easy tasks and decrease efforts on difficult tasks. We examine whether this response is concentrated among hospitals more likely to qualify for bonus payments or face larger potential bonuses.
Equation (1) is estimated twice for each of the three incentivized conditions. Within each condition, the model is estimated separately for the easy and difficult composite scores. Hospital rankings Q_h are condition specific. Our preferred specification compares P4P hospitals only to those hospitals engaged in some form of voluntary reporting at baseline and allows for heterogeneous response to P4P incentives by initial level of performance. We also estimate models that control for hospital volume; the volume regressions include indicator variables for hospitals in the lowest and highest quartiles of condition-specific volume.
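The estimation just described can be illustrated on simulated data. The paper fits random-effects GLS; as a simplified stand-in, the sketch below builds the design matrix for a pared-down version of equation (1) (P4P indicator, year dummies, and P4P × year interactions, omitting the quintile dummies and AHA controls) and fits it by ordinary least squares with numpy. All data are simulated.

```python
import numpy as np

# Simplified sketch of equation (1) on a simulated hospital-year panel.
# The paper uses random-effects GLS; pooled OLS is shown here only to
# illustrate the design matrix (P4P, year dummies, P4P x year interactions).
rng = np.random.default_rng(0)
n_hospitals, n_years = 200, 3
p4p = rng.random(n_hospitals) < 0.15   # ~15% of hospitals treated

rows, y = [], []
for h in range(n_hospitals):
    a_h = rng.normal(0.0, 1.0)         # hospital random effect
    for t in range(n_years):
        treated = float(p4p[h])
        x = [1.0, treated,
             1.0 if t == 1 else 0.0, 1.0 if t == 2 else 0.0,   # year dummies
             treated if t == 1 else 0.0, treated if t == 2 else 0.0]  # P4P x year
        rows.append(x)
        # true effects: P4P level shift = 1 point, secular trend = 3 points/year
        y.append(80.0 + 1.0 * treated + 3.0 * t + a_h + rng.normal(0.0, 0.5))

X, y = np.array(rows), np.array(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_p4p = coef[1]   # estimate of the P4P main effect
```

With the hospital effect uncorrelated with treatment, OLS recovers the same coefficients GLS targets, just less efficiently.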
RESULTS

Process compliance increased between Year 1 and Year 3 of the P4P demonstration project both in hospitals receiving financial incentives and in other hospitals that were only subject to public reporting. P4P hospitals experience larger unadjusted gains on some but not all targeted measures (Table 1).
As shown in Table 1, P4P and reporting-only hospitals increased performance across both easy and difficult measures. Overall gains on the easy AMI composite are nearly identical across the two groups (3.2 percentage points among early adopters). Early adopter hospitals actually make larger gains in use of ACE-inhibitors for left ventricular systolic dysfunction (LVSD), which is classified as a difficult task. P4P hospitals do exhibit larger gains in composite scores for both heart failure (7.8 versus 6.8 percentage points) and pneumonia (11.5 versus 10.1 percentage points) relative to the early adopter non-P4P hospitals. In contrast
to the predicted behavior for P4P hospitals to reduce efforts on difficult tasks, incentivized hospitals make larger gains on hard tasks for both heart failure and pneumonia than comparison hospitals do.

Table 1: Unadjusted Average Process Compliance by Pay-for-Performance (P4P) and Public-Reporting-Only Hospitals, Fiscal Years 2003–2005

AMI easy composite: aspirin at arrival (E); aspirin at discharge (E); b-blockers at arrival (E); b-blockers at discharge (E)
AMI difficult composite: ACE-inhibitors for LVSD
Heart failure (HF) easy composite: discharge instructions (E); smoking cessation advice (E)
HF difficult composite: ACE-inhibitors for LVSD; left ventricular assessment
Pneumonia easy composite: pneumonia vaccination status; oxygenation assessment (E)
Pneumonia difficult composite: blood culture preantibiotics
% Medicare patient days

Note. Standard deviations in parentheses. (E) indicates easy tasks. Summary statistics of process compliance for P4P and public-reporting-only hospitals reporting on at least 30 patients with targeted admissions annually. Public-reporting column includes early adopters.
AMI, acute myocardial infarction; LVSD, left ventricular systolic dysfunction; RN, registered nurse.
Table 2 reports regression results from the first set of random effects regressions comparing P4P hospitals to public reporting early adopters. P4P hospitals score higher on easy tasks than control hospitals for AMI (a = 0.93 percentage points, SE = 0.36), heart failure (a = 3.12, SE = 2.68), and pneumonia (a = 0.05, SE = 0.21), though only the AMI effect is statistically significant. The differences between P4P and control hospitals for difficult tasks are small and insignificant. The P4P coefficient for heart failure is negative (a = −0.44, SE = 0.90) as expected, but positive for heart attack (a = 0.44, SE = 1.48) and pneumonia (a = 1.04, SE = 0.72). The P4P_h × time effects are positive and statistically significant for the hard pneumonia composites, indicating that P4P hospitals improve more rapidly on difficult tasks than unincentivized hospitals, contrary to our expectations.
The regression evidence confirms our observation from the descriptive statistics that any P4P response is modest; the difference in the easy AMI composite represents about 1 percent of the Year 1 mean score.
The P4P incentives in PHQID are most relevant for high and low performers. Contrary to our expectations, we fail to find statistically significant effects for P4P hospitals at either end of the initial quality distribution relative to hospitals with average scores. In sensitivity analysis, we fail to observe significant effects under alternative definitions of initial performance. We compare P4P hospitals to early-adopter hospitals because we are concerned that other unobserved hospital characteristics, such as motivation to improve and prior quality measurement experience, differ between the P4P and full reporting samples. Early adopter public reporting hospitals have somewhat higher initial composite quality scores for all three incentivized conditions. In sensitivity analysis, models are estimated using the full public-reporting sample as a control group (Table SA1). Our results are essentially unchanged, though P4P coefficients are slightly larger in magnitude.
P4P incentives may be more salient for larger hospitals that are eligible for larger potential bonus payments. Table 3 reports regression results controlling for hospital volume and a volume × P4P_h interaction. We first omit the initial performance quintiles, which were insignificant determinants of process score in the first set of regressions. P4P main effects are positive and statistically significant for the easy heart attack (SE = 0.43) and heart failure (a_HF = 5.2, SE = 2.52) composites. While the P4P effect remains small and statistically insignificant for the easy pneumonia composite, P4P hospitals
Table 2: Pay-for-Performance (P4P) Participation, Initial Performance, and Hospital Process Compliance, Early Reporters Only: Fiscal Years 2003–2005

Columns: Heart Attack (Easy, Hard); Heart Failure (Easy, Hard); Pneumonia (Easy, Hard)
Rows: P4P; P4P × Quintile 1; P4P × Quintile 2; P4P × Quintile 4; P4P × Quintile 5; P4P × Year 2; P4P × Year 3; % Admissions Medicare; RN per admission

Notes. Robust standard errors in parentheses. 987 hospitals engaged in some form of voluntary public reporting in 2003.
*Significant at 5%. **Significant at 1%.
RN, registered nurses.
Table 3: Pay-for-Performance (P4P) Participation, Volume, and Hospital Process Compliance among Early Reporters, Fiscal Years 2003–2005

Columns: Heart Attack (Easy, Hard); Heart Failure (Easy, Hard); Pneumonia (Easy, Hard)
Rows: P4P; P4P × Year 2; P4P × Year 3; % Medicare admits; RN per admission; Low volume quartile; High volume quartile; P4P × low volume; P4P × high volume

Note. Robust standard errors in parentheses. 987 hospitals engaged in some form of voluntary public reporting in 2003. Initial performance quintile and interaction terms omitted.
*Significant at 5%. **Significant at 1%.
RN, registered nurses.
exhibit significantly higher performance on the difficult pneumonia tasks. Adding the initial performance quintiles Q_h and P4P_h × Q_h leaves these patterns largely intact. Only the AMI easy P4P effect remains statistically significant, indicating a 1 percentage point higher process compliance score among P4P hospitals relative to public reporting only. P4P hospitals also improve compliance with hard measures of pneumonia care by an additional 2 percentage points in each of the second 2 years of the demonstration, the only significant difference in performance over time. The full set of estimates for the easy and pneumonia hard measures, including all interaction terms, is both statistically insignificant and inconsistently signed for most combinations of hospital size, year, and initial performance quintile.
We sought additional evidence as to whether hospitals strategically substitute toward easy tasks in order to improve their scores. In Table 4, we report the ratio of patients eligible for difficult versus easy task pairs. Hospitals have, on average, 5.9 times as many patients eligible for aspirin at admission for AMI (an easy measure) as are eligible for an ACE-inhibitor among those with LVSD (a difficult measure). This implies that if the average hospital faces marginal costs to provide an ACE-inhibitor for those with LVSD that are more than 5.9
Table 4: Task Substitution Ratios of Difficult versus Easy Tasks

Difficult task (ACE-I for LVSD) paired with each easy task; mean and median ratios reported:
ACE-I for LVSD vs. aspirin at admission
ACE-I for LVSD vs. aspirin at discharge
ACE-I for LVSD vs. b-blocker at admission
ACE-I for LVSD vs. b-blocker at discharge
ACE-I for LVSD vs. discharge instructions (0.7, 0.8, 0.4, 0.8, 1.9)

LVSD, left ventricular systolic dysfunction.
times the marginal costs of aspirin at admission, they should substitute efforts toward the easy task. It seems implausible that the marginal cost ratio is not greater than 5.9 for the average hospital in practice, but substitution is not observed to have occurred. For some task pairs, the easy:difficult eligibility ratio is less than 1. Unless the harder task was substantially cheaper (at the margin) than the easy task, we would expect score-maximizing hospitals to have fully substituted toward the easier task by Year 3.
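The 5.9 eligibility ratio implies a concrete substitution threshold, which can be worked through directly; the costs below are hypothetical.

```python
# Worked example of the substitution condition using the eligibility ratio
# quoted in the text: hospitals average 5.9 times as many patients eligible
# for aspirin at admission (easy) as for ACE-inhibitors with LVSD (difficult).
# Substituting toward the easy task pays whenever N_easy * C_easy is below
# N_difficult * C_difficult, i.e. whenever the difficult task's marginal cost
# exceeds 5.9x the easy task's. The cost figures below are hypothetical.

def should_substitute_toward_easy(n_ratio_easy_to_difficult, c_easy, c_difficult):
    return n_ratio_easy_to_difficult * c_easy < c_difficult

# Plausible magnitudes: an antismoking booklet vs. clinician time for an
# ACE-inhibitor assessment.
sub_high = should_substitute_toward_easy(5.9, 1.0, 10.0)  # cost ratio 10 > 5.9
sub_low = should_substitute_toward_easy(5.9, 1.0, 4.0)    # cost ratio 4 < 5.9
```

With any cost ratio above 5.9, a pure score-maximizer would shift effort toward the easy task; the paper's point is that this shift is not observed.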
In regression analyses, we confirm that hospitals which face a lower marginal cost ratio for substitution (and therefore greater incentives to substitute) were not more likely to substitute toward easier tasks under P4P. We estimate our comprehensive specification of equation (1), including the full set of initial performance, year, and volume P4P interactions, separately for each of the 13 measures. P4P hospitals score higher on two easy AMI measures (aspirin at admission and discharge) and one of the easy pneumonia measures (vaccination status). P4P hospitals providing heart failure care exhibit higher scores for one easy (smoking cessation counseling, a = 6.8 percentage points, SE = 2.7) and one difficult measure (left ventricular assessment, a = 3.0 percentage points, SE = 1.46).
We conducted additional sensitivity analyses to confirm our results. Our main findings, that P4P is associated with a 1 percentage point gain in compliance for easy AMI tasks but not related to performance on heart failure or pneumonia measures, are robust across multiple specifications. Findings
persist when we reestimate equation (1) using the natural logarithm of com-
pliance score as the dependent variable and in a seemingly unrelated regres-
sion model with the change in score as the dependent variable, which allows
the error terms to correlate across conditions.
DISCUSSION

Despite limited empirical evidence of its effectiveness, public and private
payers continue to view P4P as a promising vehicle for quality improvement
and cost savings (Petersen 2006; IOM 2007). If P4P strategies are to achieve
these goals, however, they must motivate hospitals to respond in the desired
manner. To aid understanding of the apparent failure of P4P programs to
motivate changes in health care quality, we tested whether hospitals rationally responded to incentives created by the PHQID. Despite incentives to game
the system and boost scores at low cost, we found that hospitals display no
consistent shift in efforts to easier tasks.
Previous studies evaluating P4P yield mixed results. In a national study
using clinical registry data, Glickman et al. (2007) find an improvement on
some processes of care for AMI but no significant impact on a composite of all
processes or on risk-adjusted mortality. Perhaps the most comprehensive
study evaluating the Premier P4P program was conducted by Ryan (2009)
using national Medicare data. Ryan demonstrated no impact of P4P on risk-
adjusted mortality and 90-day Medicare payments for all five incentivized
conditions. Our study extends prior work to show that when there is a response, the efforts are concentrated among easy tasks. We also demonstrate that the improvement in easy tasks does not come at the expense of decreased effort on the more difficult tasks.
There are several limitations to our analysis. Because preperiod data are
unavailable, we may underestimate the total P4P effect on effort allocation.
Although we control for many observable hospital characteristics, participa-
tion in the Premier demonstration was nonrandom and other unobserved
factors may simultaneously influence P4P participation and task allocation.
We only observe a subset of incentivized tasks, so we are unable to assess, for
example, whether P4P hospitals allocated more or less effort to distributing
smoking cessation brochures to AMI patients (easy) or ensuring that they
received thrombolytics within 30 minutes of arrival.
We lack comprehensive information about whether hospitals are participating in other P4P or public reporting efforts during the study period. Although additional programs are operated by other payers, including state Medicaid agencies, these programs are unlikely to alter our results because they tend to be small in scope. The current literature lacks examples of P4P programs that led to meaningful differences in hospital performance, so it is unlikely that our results are driven by other programs.
Our results have important implications for payers and policy makers
considering ways to expand the role of P4P in reimbursement. We note that
nonresponse to P4P incentives is the optimal response for many hospitals
when incentives are based on relative performance rankings. Hospitals with
average composite scores are unlikely to qualify for bonus payments, so there
is no expected return on investments in improved process compliance. We
cannot rule out the possibility that hospitals do not change effort allocations to maximize bonus scores because changes in efforts on incentivized tasks would adversely affect overall quality of care or another hospital objective would
suffer. Other incentives and market factors such as reputation and private
payer expectations likely balance out explicit incentives for gaming introduced by P4P.

Policy makers should consider the relative difficulty of improvement across measures when designing P4P programs. The heavy representation of process compliance measures in the PHQID scoring methodology provides incentives for hospitals to improve their scores by devoting additional efforts to easy tasks.
This danger would characterize any composite score methodology that does
not adjust for task difficulty or relative return. This is especially important
when composite quality measures include both processes and outcomes. A
rational hospital would improve its scores by increasing compliance on easy
processes of care, rather than focusing on improving outcomes, which many
agree is the ‘‘gold standard’’ for documenting the effectiveness of quality im-
provement. Our results highlight the need to consider the difficulty of tasks
when creating performance measures.
Policy makers also need to consider the size of the incentives in P4P. Two percent of DRG payments, the maximum bonus, is between U.S.$300 and U.S.$500 for Medicare patients. Uncertainty about the probability of bonus receipt would further reduce the expected value of bonus payments to hospitals assessing the costs and benefits of response to the P4P incentives. Although P4P reporting requirements cover all admissions for targeted conditions, bonus payments are only made for the roughly two-thirds of admissions experienced by Medicare beneficiaries. In contrast, the financial incentives associated with public reporting are larger: hospitals that fail to report forgo annual payment rate updates across all conditions for noncompliance. CMS provides implicit incentive payments for lower quality outcomes through the outlier payment system, which reimburses additional hospital costs for the sickest and longest-staying patients (including those triggered by hospital-acquired conditions and complications), and the potential for readmissions.
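The back-of-envelope figures above (2 percent of the DRG payment, roughly U.S.$300 to U.S.$500 per Medicare admission, paid on about two-thirds of targeted admissions) extend naturally to an expected-value calculation; the admission count and win probability below are hypothetical.

```python
# Back-of-envelope expected bonus revenue, extending the figures in the text:
# the maximum bonus of 2 percent of the DRG payment is roughly $300-500 per
# Medicare admission, and bonuses are paid only on the ~2/3 of targeted
# admissions covered by Medicare. Admission count and the probability of a
# top-decile finish are hypothetical inputs.

def expected_bonus(total_admissions, medicare_share, bonus_per_admission,
                   prob_top_decile):
    return total_admissions * medicare_share * bonus_per_admission * prob_top_decile

# Hypothetical hospital: 300 targeted admissions, a $400 per-admission
# bonus if it wins, and a 10% chance of a top-decile finish.
ev = expected_bonus(300, 2 / 3, 400, 0.10)
```

Under these assumptions the expected bonus is on the order of a few thousand dollars per condition, which helps explain why the paper concludes the payments were too small to change behavior.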
Hospitals could raise comparable levels of revenue by modestly in-
creasing patient volume by attracting new patients (possibly by signaling high
quality) or readmitting patients postdischarge (particularly among lower qual-
ity hospitals). Even when hospitals can improve P4P scores at very low marginal cost, the response is modest. While it is beyond the scope of this paper to
assess the level of bonus payment that would motivate hospital response, our
findings suggest that the Premier payments were inadequate to generate
changes on intended or unintended dimensions.
78HSR: Health Services Research 46:1, Part I (February 2011)
In conclusion, our evaluation of the PHQID program highlights lessons
all points in the quality distribution. Second, the quality score should align
scientific knowledge about the production process (process measures) with
economic incentives to improve or maintain high quality (patient outcomes).
Finally, the program must provide a large enough bonus payment to trigger
provider response. Our findings suggest that the financial rewards (up to 2
percent of DRG payments for Medicare patients) are insufficient to motivate
hospitals to behave strategically as predicted by P4P’s motivating logic. In-
centive payments large enough to motivate hospital response may exceed
public and private payers’ willingness to pay for higher quality. Future dem-
onstrations could assess hospital response to bonus payments of larger sizes.
Joint Acknowledgment/Disclosure Statement: The authors acknowledge funding
from the National Institute on Aging (Nicholas, AG000221-17), The National
Heart, Lung and Blood Institute (Iwashyna, K08HL091249), and the Agency
for Healthcare Research and Quality (Dimick, 5K08HS017765-02). The find-
ings and conclusions are those of the authors and do not necessarily represent
the official views of any of the funding agencies. The authors appreciate com-
ments from participants at the 2009 National Bureau of Economic Research
Summer Institute and the 2009 AcademyHealth Health Economics Interest
1. For example, Ryan (2010) finds scant evidence that the Premier demon-
stration caused hospitals to reduce service to minority patients.
Centers for Medicare and Medicaid Services (CMS). 2008. ‘‘Premier Hospital Quality
Incentive Demonstration Fact Sheet’’ [accessed April 23, 2009]. Available
Do Hospitals Alter Patient Care Effort Allocations under Pay-for-Performance?79
Centers for Medicare and Medicaid Services (CMS). 2009. ‘‘Roadmap for
Implementing Value Driven Healthcare in the Traditional Medicare
Fee-for-Service Program’’ [accessed March 4, 2010]. Available at https://
Glickman, S. W., F. Ou, E. R. DeLong, M. T. Roe, B. L. Lytle, J. Mulgund, J. S.
Rumsfeld, W. B. Gibler, E. M. Ohman, K. A. Schulman, and E. D. Peterson.
2007. ‘‘Pay for Performance, Quality of Care and Outcomes in Acute Myocar-
dial Infarction.’’ Journal of the American Medical Association 297 (21): 2373–80.
Horwitz, J., and A. Nichols. 2007. What Do Non-Profits Maximize? Non-Profit
Hospital Service Provision and Market Ownership Mix. NBER Working Paper
Institute of Medicine. 2007. Rewarding Provider Performance. Washington, DC: National
Jha, A. 2005. ‘‘Care in U.S. Hospitals——The Hospital Quality Alliance Program.’’ New
England Journal of Medicine 353 (3): 265–74.
Lindenauer, P. K., D. Remus, S. Roman, M. B. Rothberg, E. M. Benjamin, A.
Ma, and D. W. Bratzler. 2007. ‘‘Public Reporting and Pay for Performance
in Hospital Quality Improvement.’’New England Journal of Medicine 356(5): 486–
Pay-for-Performance and the Quality of Healthcare Providers. NBER Working
Petersen, L. A., L. D. Woodard, T. Urech, C. Daw, andS. Sookanan. 2006. ‘‘Does Pay-
for-Performance Improve the Quality of Health Care?’’ Annals of Internal
Medicine 145 (4): 265–72.
Rosenthal, M. B., R. G. Frank, Z. Li, and A. M. Epstein. 2005. ‘‘Early Experience with
Pay-for-Performance from Concept to Practice.’’ Journal of the American Medical
Association 294 (14): 1788–93.
Ryan, A. 2009. ‘‘Effects of the Premier Hospital Quality Incentive Demonstration on
Medicare Patient Mortality and Cost.’’ Health Services Research 44 (3): 821–42.
—— —— ——. 2010. ‘‘Has Pay-for-Performance Decreased Access for Minority Patients?’’
Health Services Research 45 (1): 6–23.
Additional supporting information may be found in the online version of this
Appendix SA1: Author Matrix.
Table SA1: Pay-for-Performance Participation and Hospital Process
Compliance,Voluntary Hospital Compare Reporters:Fiscal Years2003–2005.
80HSR: Health Services Research 46:1, Part I (February 2011)
Table SA2: Pay-for-Performance Participation and Hospital Process Download full-text
Compliance among Early Reporters, Fiscal Years 2003–2005, Individual
Measure Regression Coefficients.
Please note: Wiley-Blackwell is not responsible for the content or func-
tionality of any supporting materials supplied by the authors. Any queries
(other than missing material) should be directed to the corresponding author
for the article.
Do Hospitals Alter Patient Care Effort Allocations under Pay-for-Performance?81