Attributing sources of variation in patients' experiences of ambulatory care.

Department of Health Services, School of Public Health, University of California, Los Angeles, Los Angeles, California 90095-1772, USA.
Medical Care (Impact Factor: 2.94). 09/2009; 47(8):835-41. DOI: 10.1097/MLR.0b013e318197b1e1
Source: PubMed

ABSTRACT: Public reporting and pay-for-performance programs increasingly rely on patient experience data to evaluate individual physicians and guide quality improvement efforts. The extent to which performance variation is attributable to physicians versus other system-level units, however, remains unclear.
Using ambulatory care experience survey data from 61,839 patients of 1729 primary care physicians in California (response rate = 39.1%), this study assesses the proportion of explainable performance variation attributable to various organizational units in composite measures of physician-patient interaction, organizational features of care, and global assessments of care. For each measure, multilevel regression models estimated the proportion of explainable performance variation attributable to each system-level unit; the models controlled for respondent characteristics and used random effects to account for the clustering of patients within physicians, physicians within care sites, care sites within medical groups, and medical groups within primary care service areas.
System-level factors explained between 27.9% and 47.7% of variation, with the highest proportion explained for the access to care composite and the lowest explained for the quality of chronic care composite. Physicians accounted for the largest proportion of explainable variance for all measures (range: 35.1%-49.0%). Care sites and primary care service areas explained substantial proportions of variance (>20% each) for the access to care and care coordination measures. Medical groups explained the largest proportions of variation (>20%) for global assessments of care.
Individual physicians and their care sites are the most important foci for patient experience improvement efforts. Because markets contribute substantially to performance variation on organizational features of care, future research should clarify the extent to which associated performance deficits are modifiable.
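The variance-partitioning logic described above can be sketched as follows: fit a model with a random intercept for each nesting level, then express each level's variance component as a share of the total system-level (explainable) variance. This is a minimal arithmetic sketch with hypothetical variance components, not the study's actual estimates or model code:

```python
# Hypothetical system-level variance components (illustrative values only,
# not the study's estimates): random-intercept variances for physician,
# care site, medical group, and primary care service area (PCSA).
sigma2 = {"physician": 0.40, "site": 0.20, "group": 0.15, "pcsa": 0.10}
sigma2_patient = 2.0  # within-physician (patient-level + error) variance

total_system = sum(sigma2.values())
total = total_system + sigma2_patient

# Share of *explainable* (system-level) variance attributable to each unit,
# mirroring the paper's decomposition.
shares = {unit: v / total_system for unit, v in sigma2.items()}

# Fraction of all variation that is system-level at all.
system_fraction = total_system / total

print({unit: round(s, 3) for unit, s in shares.items()})
print(round(system_fraction, 3))
```

By construction the unit shares sum to 1, so each share can be read directly as "percentage of explainable variation," which is the quantity the abstract reports for physicians, sites, groups, and service areas.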


Available from: Hector P Rodriguez, Jun 14, 2015
  • Source
    ABSTRACT: No, but they need further refinement
    BMJ (online) 10/2010; 341(oct12 1):c4783. DOI:10.1136/bmj.c4783 · 16.38 Impact Factor
  • Source
    ABSTRACT: To assess the robustness of patient responses to a new national survey of patient experience as a basis for providing financial incentives to doctors. Analysis of the representativeness of the respondents to the GP Patient Survey compared with those who were sampled (5.5 million patients registered with 8273 general practices in England in January 2009) and with the general population. Analysis of non-response bias looked at the relation between practice response rates and scores on the survey. Analysis of the reliability of the survey estimated the proportion of the variance of practice scores attributable to true differences between practices. The overall response rate was 38.2% (2.2 million responses), which is comparable to that in surveys using similar methodology in the UK. Men, young adults, and people living in deprived areas were under-represented among respondents. However, for questions related to pay for performance, there was no systematic association between response rates and questionnaire scores. Two questions that triggered payments to general practitioners were reliable measures of practice performance, with average practice-level reliability coefficients of 93.2% and 95.0%. Fewer than 3% of practices had too few responses to achieve the conventional reliability level of 90%, and fewer than 0.5% had too few to achieve 70%. A change to the payment formula in 2009 resulted in an increase in the average impact of random variation in patient scores on payments to general practitioners compared with payments made in 2007 and 2008. There is little evidence to support the concern of some general practitioners that low response rates and selective non-response bias have led to systematic unfairness in payments attached to questionnaire scores.
The study raises issues relating to the validity and reliability of payments based on patient surveys and provides lessons for the UK and for other countries considering the use of patient experience as part of pay for performance schemes.
    BMJ (online) 09/2009; 339:b3851. DOI:10.1136/bmj.b3851 · 16.38 Impact Factor
  • Source
    ABSTRACT: To explore whether responses to questions in surveys of patients that purport to assess the performance of general practices or doctors reflect differences between practices, doctors, or the patients themselves. Secondary analysis of data from a study of access to general practice, combining data from a survey of patients with information about practice organisation and doctors consulted, and using multilevel modelling at practice, doctor, and patient level. Nine primary care trusts in England. 4573 patients who consulted 150 different doctors in 27 practices. Overall satisfaction; experience of wait for an appointment; reported access to care; satisfaction with communication skills. The experience-based measure of wait for an appointment was more discriminating between practices (practice level accounted for 20.2% (95% confidence interval 9.1% to 31.3%) of variance) than was the overall satisfaction measure (practice level accounted for 4.6% (1.6% to 7.6%) of variance). Only 6.3% (3.8% to 8.9%) of the variance in the doctors' communication skills measure was due to differences between doctors; 92.4% (88.5% to 96.4%) of the variance occurred at the level of the patient (including differences between patients' perceptions and random variation). At least 79% of the variance on all measures occurred at the level of the patient, and patients' age, sex, ethnicity, and housing and employment status explained some of this variation. However, adjustment for patients' characteristics made very little difference to practices' scores or the ranking of individual practices. Analyses of surveys of patients should take account of the hierarchical nature of the data by using multilevel models. Measures related to patients' experience discriminate more effectively between practices than do measures of general satisfaction.
Surveys of patients' satisfaction fail to distinguish effectively between individual doctors because most of the variation in doctors' reported performance is due to differences between patients and random error rather than differences between doctors. Although patients' reports of satisfaction and experience are systematically related to patients' characteristics such as age and sex, the effect of adjusting practices' scores for the characteristics of their patients is small.
    BMJ (online) 10/2010; 341:c5004. DOI:10.1136/bmj.c5004 · 16.38 Impact Factor