Article

When radiologists perform best: the learning curve in screening mammogram interpretation.

Group Health Research Institute, Group Health Cooperative, 1730 Minor Ave, Suite 1600, Seattle, WA 98101, USA.
Radiology (Impact Factor: 6.21). 09/2009; 253(3):632-640. DOI: 10.1148/radiol.2533090070
Source: PubMed

ABSTRACT: To examine changes in screening mammogram interpretation as radiologists with and without fellowship training in breast imaging gain clinical experience.
In an institutional review board-approved HIPAA-compliant study, the performance of 231 radiologists who interpreted screen-film screening mammograms from 1996 to 2005 at 280 facilities that contribute data to the Breast Cancer Surveillance Consortium was examined. Radiologists' demographic data and clinical experience levels were collected by means of a mailed survey. Mammograms were grouped on the basis of how many years the interpreting radiologist had been practicing mammography, and the influence of increasing experience on performance was examined separately for radiologists with and those without fellowship training in breast imaging, taking into account case-mix and radiologist-level differences.
A total of 1 599 610 mammograms were interpreted during the study period. Performance for radiologists without fellowship training improved most during their first 3 years of clinical practice, when the odds of a false-positive reading dropped 11%-15% per year (P < .015) with no associated decrease in sensitivity (P > .89). The number of women recalled per breast cancer detected decreased from 33 for radiologists in their first year of practice to 24 for radiologists with 3 years of experience and to 19 for radiologists with 20 years of experience. Radiologists with fellowship training in breast imaging showed no learning curve and reached desirable performance goals during their first year of practice.
Radiologists' interpretations of screening mammograms improve during their first few years of practice and continue to improve throughout much of their careers. Additional residency training and targeted continuing medical education may help reduce the number of work-ups of benign lesions while maintaining high cancer detection rates.
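
The performance measures above are all simple functions of four outcome counts, and the number of women recalled per cancer detected is just the reciprocal of the positive predictive value of a recall (PPV1). The sketch below makes those relationships explicit; the counts in the example are hypothetical, chosen only for illustration, and are not data from the study.

    # Standard screening-mammography performance measures, computed from
    # the four outcome counts. Hypothetical numbers only; not study data.

    def screening_metrics(tp, fp, tn, fn):
        """tp: recalled exams with cancer in follow-up; fp: recalled, no cancer;
        tn: not recalled, no cancer; fn: not recalled, cancer in follow-up."""
        total = tp + fp + tn + fn
        recalled = tp + fp
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "recall_rate": recalled / total,
            "cdr_per_1000": 1000 * tp / total,
            "ppv1": tp / recalled,
            "recalls_per_cancer": recalled / tp,  # equals 1 / PPV1
        }

    # Hypothetical year: 10 000 screens, 40 cancers detected, 960 false positives.
    print(screening_metrics(tp=40, fp=960, tn=8990, fn=10))
    # -> recalls_per_cancer = 1000 / 40 = 25, inside the 19-33 range above.

On these made-up counts, 1000 recalls for 40 detected cancers give 25 recalls per cancer; cutting false-positive recalls while holding cancer detection steady is exactly what moves that number from 33 toward 19 in the study's results.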

Cited by
ABSTRACT: Using a combination of performance measures, we updated previously proposed criteria for identifying physicians whose performance interpreting screening mammography may indicate suboptimal interpretation skills. In this study, six expert breast imagers used a method based on the Angoff approach to update criteria for acceptable mammography performance on the basis of two sets of combined performance measures: set 1, sensitivity and specificity for facilities with complete capture of false-negative cancers; and set 2, cancer detection rate (CDR), recall rate, and positive predictive value of a recall (PPV1) for facilities that cannot capture false-negative cancers but have reliable cancer follow-up information for positive mammography results. Decisions were informed by normative data from the Breast Cancer Surveillance Consortium (BCSC). Updated combined ranges for acceptable sensitivity and specificity of screening mammography are sensitivity ≥ 80% and specificity ≥ 85%, or sensitivity 75-79% and specificity 88-97%. Updated ranges for CDR, recall rate, and PPV1 are: CDR ≥ 6 per 1000, recall rate 3-20%, and any PPV1; CDR 4-6 per 1000, recall rate 3-15%, and PPV1 ≥ 3%; or CDR 2.5-4.0 per 1000, recall rate 5-12%, and PPV1 3-8%. Using the original criteria, 51% of BCSC radiologists had acceptable sensitivity and specificity, and 40% had acceptable CDR, recall rate, and PPV1. Using the combined criteria, 69% had acceptable sensitivity and specificity, and 62% had acceptable CDR, recall rate, and PPV1. The combined criteria improve on the previous single-measure criteria by considering the interrelationships of multiple performance measures, and they broaden the acceptable performance ranges.
American Journal of Roentgenology 04/2015; 204(4):W486-W491. DOI: 10.2214/AJR.13.12313 · 2.74 Impact Factor
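
Because the updated criteria are a small decision rule over a handful of measures, they can be encoded directly. Below is a minimal sketch of that rule as quoted above; the function names are my own, and the handling of values that fall exactly on a band boundary (e.g., a CDR of exactly 4 per 1000, which appears in two ranges) is an assumption the abstract does not resolve.

    # Sketch of the updated acceptability criteria quoted above.
    # Rate inputs are in percentage points; cdr is cancers per 1000 exams.
    # Boundary handling between overlapping bands is assumed, not specified.

    def acceptable_set1(sensitivity, specificity):
        """Set 1: facilities with complete capture of false-negative cancers."""
        return (sensitivity >= 80 and specificity >= 85) or \
               (75 <= sensitivity < 80 and 88 <= specificity <= 97)

    def acceptable_set2(cdr, recall_rate, ppv1):
        """Set 2: facilities with reliable follow-up of positive results only."""
        if cdr >= 6:
            return 3 <= recall_rate <= 20            # any PPV1
        if 4 <= cdr < 6:
            return 3 <= recall_rate <= 15 and ppv1 >= 3
        if 2.5 <= cdr < 4:
            return 5 <= recall_rate <= 12 and 3 <= ppv1 <= 8
        return False

    print(acceptable_set1(sensitivity=78, specificity=90))      # True
    print(acceptable_set2(cdr=4.5, recall_rate=10, ppv1=4.5))   # True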
Journal of the American College of Radiology: JACR 07/2014; 11(9). DOI: 10.1016/j.jacr.2014.05.006 · 2.28 Impact Factor
ABSTRACT: To determine whether the mammographic technologist has an effect on the radiologists' interpretative performance of screening mammography in community practice. In this institutional review board-approved retrospective cohort study, we included Carolina Mammography Registry data from 372 radiologists and 356 mammographic technologists who performed 1,003,276 screening mammograms from 1994 to 2009. Measures of interpretative performance (recall rate, sensitivity, specificity, positive predictive value [PPV1], and cancer detection rate [CDR]) were ascertained prospectively, with cancer outcomes collected from the state cancer registry and pathology reports. To determine if the mammographic technologist influenced the radiologists' performance, we used mixed effects logistic regression models, including a radiologist-specific random effect and taking into account the clustering of examinations across women, separately for screen-film mammography (SFM) and full-field digital mammography (FFDM). Of the 356 mammographic technologists included, 343 performed 889,347 SFM examinations, 51 performed 113,929 FFDM examinations, and 38 performed both SFM and FFDM examinations. A total of 4328 cancers were reported for SFM and 564 cancers for FFDM. The technologists had a statistically significant effect on the radiologists' recall rate, sensitivity, specificity, and CDR for both SFM and FFDM (P values < .01). For PPV1, variability by technologist was observed for SFM (P value < .0001) but not for FFDM (P value = .088). The interpretative performance of radiologists in screening mammography varies substantially by the technologist performing the examination. Additional studies should aim to identify technologist characteristics that may explain this variation.
Academic Radiology 11/2014; 22(3). DOI: 10.1016/j.acra.2014.09.013 · 2.08 Impact Factor
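
The abstract above describes mixed effects logistic regression with a radiologist-specific random effect. As an illustration only, the sketch below fits a model of that general shape using statsmodels' Bayesian mixed GLM; the column names and input file are hypothetical, and the study's actual covariates, clustering adjustments (examinations within women), and software are not given here and almost certainly differ.

    # Illustrative only: recall (0/1) modeled with a technologist fixed
    # effect and a radiologist-specific random intercept. Hypothetical
    # column names; the study's real specification is richer than this.
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    df = pd.read_csv("screening_exams.csv")  # hypothetical exam-level file

    model = BinomialBayesMixedGLM.from_formula(
        "recall ~ C(technologist_id)",               # fixed effects
        {"radiologist": "0 + C(radiologist_id)"},    # random intercepts
        df,
    )
    result = model.fit_vb()  # variational Bayes fit
    print(result.summary())

Comparing fits with and without the technologist terms is one way to assess, in the spirit of the P values reported above, whether technologists explain meaningful variation in the radiologists' performance.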
