Article

Variability of interpretive accuracy among diagnostic mammography facilities.

Department of Internal Medicine, University of Washington School of Medicine, Box 359854, Seattle, WA 98104, USA.
Journal of the National Cancer Institute 07/2009; 101(11):814-27. DOI: 10.1093/jnci/djp105
Source: PubMed

ABSTRACT Interpretive performance of screening mammography varies substantially by facility, but performance of diagnostic interpretation has not been studied.
Facilities performing diagnostic mammography within three registries of the Breast Cancer Surveillance Consortium were surveyed about their structure, organization, and interpretive processes. Performance measures (false-positive rate, sensitivity, and the likelihood of cancer among women referred for biopsy [positive predictive value of biopsy recommendation {PPV2}]) were prospectively collected from January 1, 1998, through December 31, 2005. Logistic regression and receiver operating characteristic (ROC) curve analyses, adjusted for patient and radiologist characteristics, were used to assess the association between facility characteristics and interpretive performance. All statistical tests were two-sided.
Forty-five of the 53 facilities completed a facility survey (85% response rate), and 32 of the 45 facilities performed diagnostic mammography. The analyses included 28 100 diagnostic mammograms performed as an evaluation of a breast problem, and data were available for 118 radiologists who interpreted diagnostic mammograms at the facilities. Performance measurements demonstrated statistically significant interpretive variability among facilities (sensitivity, P = .006; false-positive rate, P < .001; and PPV2, P < .001) in unadjusted analyses. However, after adjustment for patient and radiologist characteristics, only false-positive rate variation remained statistically significant and facility traits associated with performance measures changed (false-positive rate = 6.5%, 95% confidence interval [CI] = 5.5% to 7.4%; sensitivity = 73.5%, 95% CI = 67.1% to 79.9%; and PPV2 = 33.8%, 95% CI = 29.1% to 38.5%). Facilities reporting that concern about malpractice had moderately or greatly increased diagnostic examination recommendations at the facility had a higher false-positive rate (odds ratio [OR] = 1.48, 95% CI = 1.09 to 2.01) and a non-statistically significantly higher sensitivity (OR = 1.74, 95% CI = 0.94 to 3.23). Facilities offering specialized interventional services had a non-statistically significantly higher false-positive rate (OR = 1.97, 95% CI = 0.94 to 4.1). No characteristics were associated with overall accuracy by ROC curve analyses.
Variation in diagnostic mammography interpretation exists across facilities. Failure to adjust for patient characteristics when comparing facility performance could lead to erroneous conclusions. Malpractice concerns are associated with interpretive performance.
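
For reference, here is a minimal sketch (not the study's analysis code; the field names are illustrative assumptions) of how the three performance measures named above, sensitivity, false-positive rate, and PPV2, are typically computed from per-examination outcomes:

```python
# Minimal sketch, not the study's analysis code. Field names are assumptions.

from dataclasses import dataclass

@dataclass
class Exam:
    positive: bool            # interpretation recommended further work-up
    biopsy_recommended: bool  # biopsy or surgical consultation advised
    cancer: bool              # cancer diagnosed within the follow-up window

def performance(exams: list[Exam]) -> dict[str, float]:
    tp = sum(e.positive and e.cancer for e in exams)
    fn = sum(not e.positive and e.cancer for e in exams)
    fp = sum(e.positive and not e.cancer for e in exams)
    tn = sum(not e.positive and not e.cancer for e in exams)
    biopsy_recs = [e for e in exams if e.biopsy_recommended]
    return {
        # share of cancers that were called positive
        "sensitivity": tp / (tp + fn),
        # share of non-cancers that were called positive
        "false_positive_rate": fp / (fp + tn),
        # share of biopsy recommendations in which cancer was found (PPV2)
        "ppv2": sum(e.cancer for e in biopsy_recs) / len(biopsy_recs),
    }
```
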

Full-text available from: Berta M Geller, Jun 08, 2015

ABSTRACT: Mammography has been shown to improve outcomes of women with breast cancer, but it is subject to inter-reader variability. One well-documented source of such variability is in the content of mammography reports. The mammography report is of crucial importance, since it documents the radiologist's imaging observations, interpretation of those observations in terms of likelihood of malignancy, and suggested patient management. In this paper, we define an incompleteness score to measure how incomplete the information content is in the mammography report and provide an algorithm to calculate this metric. We then show that the incompleteness score can be used to predict errors in interpretation. This method has 82.6% accuracy at predicting errors in interpretation and can possibly reduce total diagnostic errors by up to 21.7%. Such a method can easily be modified to suit other domains that depend on quality reporting.
AMIA Annual Symposium Proceedings 01/2014; 2014:1758-67.
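
The abstract above does not specify how the incompleteness score is actually computed, so the following is only a hypothetical sketch of the general idea, assuming the score is the fraction of expected report elements that are missing; the element list and threshold are chosen purely for illustration:

```python
# Hypothetical sketch only: the abstract does not give the actual scoring
# rule, so the expected-element list and the threshold below are assumptions.

EXPECTED_ELEMENTS = {
    "breast_composition",         # e.g., BI-RADS density category
    "finding_description",        # the imaging observations
    "assessment_category",        # BI-RADS 0-6 assessment
    "management_recommendation",  # suggested patient management
    "comparison_to_priors",
}

def incompleteness_score(report_elements: set[str]) -> float:
    """Fraction of expected elements missing from a report (0.0 = complete)."""
    missing = EXPECTED_ELEMENTS - report_elements
    return len(missing) / len(EXPECTED_ELEMENTS)

def flag_for_review(report_elements: set[str], threshold: float = 0.4) -> bool:
    """Flag a report as being at elevated risk of interpretive error."""
    return incompleteness_score(report_elements) > threshold
```
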
ABSTRACT: Facilities serving vulnerable women have higher false-positive rates for diagnostic mammography than facilities serving nonvulnerable women. False positives lead to anxiety, unnecessary biopsies, and higher costs. Our objective was to examine whether the availability of on-site breast ultrasound or biopsy services, academic medical center affiliation, or profit status explains these differences. We examined 78,733 diagnostic mammograms performed to evaluate breast problems at Breast Cancer Surveillance Consortium facilities from 1999 to 2005. We used logistic-normal mixed effects regression to determine if adjusting for facility characteristics accounts for observed differences in false-positive rates. Facilities were characterized as serving vulnerable women based on the proportion of mammograms performed on racial/ethnic minorities, women with lower educational attainment, limited household income, or rural residence. Although the availability of on-site ultrasound and biopsy services was associated with greater odds of a false positive in most models (odds ratios [OR] ranging from 1.24 to 1.88; P < 0.05), adjustment for these services did not attenuate the association between vulnerability and false-positive rates. Estimated ORs for the effect of vulnerability indexes on false-positive rates, unadjusted for facility services, were: lower educational attainment (OR 1.33; 95% confidence interval [CI], 1.03-1.74); racial/ethnic minority status (OR 1.33; 95% CI, 0.98-1.80); rural residence (OR 1.56; 95% CI, 1.26-1.92); and limited household income (OR 1.38; 95% CI, 1.10-1.73). After adjustment, estimates remained relatively unchanged. On-site diagnostic service availability may contribute to unnecessary biopsies, but does not explain the higher diagnostic mammography false-positive rates at facilities serving vulnerable women.
Medical Care 12/2011; 50(3):210-6. DOI: 10.1097/MLR.0b013e3182407c8a
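
As a simplified illustration of the adjustment step described above: the study used logistic-normal mixed effects regression with facility-level random effects, whereas this sketch fits ordinary logistic regressions, and all column names are assumptions rather than the study's variables:

```python
# Simplified illustration of unadjusted vs. adjusted models. The study used
# logistic-normal mixed effects regression with facility-level random effects;
# this sketch uses ordinary logistic regression, and all column names are
# assumed (binary 0/1 columns in a per-mammogram DataFrame).

import pandas as pd
import statsmodels.formula.api as smf

def vulnerability_effect(df: pd.DataFrame) -> tuple[float, float]:
    unadjusted = smf.logit(
        "false_positive ~ serves_vulnerable", data=df
    ).fit(disp=False)
    adjusted = smf.logit(
        "false_positive ~ serves_vulnerable + onsite_ultrasound + onsite_biopsy"
        " + academic_affiliation + for_profit",
        data=df,
    ).fit(disp=False)
    # If facility services explained the gap, the vulnerability coefficient
    # would shrink toward zero after adjustment; the abstract reports that it
    # remained essentially unchanged.
    return (
        unadjusted.params["serves_vulnerable"],
        adjusted.params["serves_vulnerable"],
    )
```
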
ABSTRACT: We investigated the association between radiologist interpretive volume and diagnostic mammography performance in community-based settings. This study received institutional review board approval and was HIPAA compliant. A total of 117,136 diagnostic mammograms that were interpreted by 107 radiologists between 2002 and 2006 in the Breast Cancer Surveillance Consortium were included. Logistic regression analysis was used to estimate the adjusted effect on sensitivity and the rates of false-positive findings and cancer detection of four volume measures: annual diagnostic volume, screening volume, total volume, and diagnostic focus (percentage of total volume that is diagnostic). Analyses were stratified by the indication for imaging: additional imaging after screening mammography or evaluation of a breast concern or problem. Diagnostic volume was associated with sensitivity; the odds of a true-positive finding rose until a diagnostic volume of 1000 mammograms was reached; thereafter, they either leveled off (P < .001 for additional imaging) or decreased (P = .049 for breast concerns or problems) with further volume increases. Diagnostic focus was associated with false-positive rate; the odds of a false-positive finding increased until a diagnostic focus of 20% was reached and decreased thereafter (P < .024 for additional imaging and P < .001 for breast concerns or problems with no self-reported lump). Neither total volume nor screening volume was consistently associated with diagnostic performance. Interpretive volume and diagnostic performance have complex multifaceted relationships. Our results suggest that diagnostic interpretive volume is a key determinant in the development of thresholds for considering a diagnostic mammogram to be abnormal. Current volume regulations do not distinguish between screening and diagnostic mammography, and doing so would likely be challenging.
Radiology 11/2011; 262(1):69-79. DOI: 10.1148/radiol.11111026
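
One way to represent the rise-then-plateau volume effect described above is a logistic regression with a linear spline in annual diagnostic volume. The single knot at 1,000 examinations and the column names below are assumptions for illustration, not the published model specification:

```python
# Sketch of a threshold-type volume effect: the odds of a true-positive
# finding rise with annual diagnostic volume up to roughly 1,000 examinations
# and then level off or decline. Single-knot spline and column names assumed.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

KNOT = 1000  # annual diagnostic volume at which the slope may change

def fit_volume_model(df: pd.DataFrame):
    # df: one row per diagnostic mammogram in a woman later diagnosed with
    # cancer, with a binary true_positive outcome and the interpreting
    # radiologist's annual diagnostic volume.
    df = df.copy()
    df["vol_below_knot"] = np.minimum(df["diagnostic_volume"], KNOT)
    df["vol_above_knot"] = np.maximum(df["diagnostic_volume"] - KNOT, 0)
    model = smf.logit("true_positive ~ vol_below_knot + vol_above_knot", data=df)
    result = model.fit(disp=False)
    # A positive coefficient on vol_below_knot with a near-zero or negative
    # coefficient on vol_above_knot reproduces the rise-then-plateau pattern.
    return result
```
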