Educational Interventions to Improve Screening Mammography Interpretation: A Randomized Controlled Trial

American Journal of Roentgenology (Impact Factor: 2.73). 06/2014; 202(6):W586-96. DOI: 10.2214/AJR.13.11147
Source: PubMed


Objective:
To conduct a randomized controlled trial of educational interventions created to improve performance of screening mammography interpretation.

Materials and methods:
We randomly assigned physicians who interpret mammography to one of three groups: self-paced DVD, live expert-led educational seminar, or control. The DVD and seminar interventions used mammography cases of varying difficulty and provided associated teaching points. Interpretive performance was compared using a pretest-posttest design. Sensitivity, specificity, and positive predictive value (PPV) were calculated relative to two outcomes: cancer status and consensus of three experts about recall. The performance measures for each group were compared using logistic regression adjusting for pretest performance.
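For readers who want to see the shape of such an analysis, the sketch below is a minimal illustration, not the study's code: it computes sensitivity, specificity, and PPV against a reference standard and fits a logistic regression comparing study arms while adjusting for pretest performance. The column names, the synthetic data, and the use of pandas/statsmodels are assumptions made for demonstration.

```python
# Illustrative sketch (not the study's code): test-set performance measures and
# a pretest-adjusted logistic regression comparing arms. Column names and data
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def performance(recall: pd.Series, truth: pd.Series) -> dict:
    """Sensitivity, specificity, and PPV of recall decisions against a
    reference standard (cancer status or expert-consensus recall)."""
    tp = int(((recall == 1) & (truth == 1)).sum())
    fn = int(((recall == 0) & (truth == 1)).sum())
    fp = int(((recall == 1) & (truth == 0)).sum())
    tn = int(((recall == 0) & (truth == 0)).sum())
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Synthetic reader-by-case data in long format (one row per posttest response).
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "arm": rng.choice(["control", "dvd", "seminar"], size=n),
    "correct_pre": rng.integers(0, 2, size=n),   # correct on the matched pretest case?
})
df["correct_post"] = (rng.random(n) < 0.45 + 0.2 * df["correct_pre"]).astype(int)

# Example: performance against a synthetic cancer-status reference.
recall = pd.Series(rng.integers(0, 2, size=n))
cancer = pd.Series(rng.integers(0, 2, size=n))
print(performance(recall, cancer))

# Odds of a correct posttest call by arm, adjusted for pretest correctness.
fit = smf.logit("correct_post ~ C(arm, Treatment('control')) + correct_pre",
                data=df).fit(disp=False)
print(np.exp(fit.params))      # adjusted odds ratios versus the control arm
print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale
```

Exponentiating the fitted coefficients and their confidence bounds yields adjusted odds ratios of the kind reported in the Results section below.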

Results:
One hundred two radiologists completed all aspects of the trial. After adjustment for preintervention performance, the odds of improved sensitivity for correctly identifying a lesion relative to expert recall were 1.34 times higher for DVD participants than for control subjects (95% CI, 1.00-1.81; p = 0.050). The odds of an improved PPV for correctly identifying a lesion relative to both expert recall (odds ratio [OR] = 1.94; 95% CI, 1.24-3.05; p = 0.004) and cancer status (OR = 1.81; 95% CI, 1.01-3.23; p = 0.045) were significantly higher for DVD participants than for control subjects, with no significant change in specificity. For the seminar group, specificity was significantly lower than that of the control group (OR relative to expert recall = 0.80; 95% CI, 0.64-1.00; p = 0.048; OR relative to cancer status = 0.79; 95% CI, 0.65-0.95; p = 0.015).

Conclusion:
In this randomized controlled trial, the DVD educational intervention resulted in a significant improvement in screening mammography interpretive performance on a test set, which could translate into improved interpretive performance in clinical practice.

  • ABSTRACT: To describe recruitment, enrollment, and participation in a study of US radiologists invited to participate in a randomized controlled trial of two continuing medical education (CME) interventions designed to improve interpretation of screening mammography. We collected recruitment, consent, and intervention-completion information as part of a large study involving radiologists in California, Oregon, Washington, New Mexico, New Hampshire, North Carolina, and Vermont. Consenting radiologists were randomized to a 1-day live, expert-led educational session, a self-paced DVD with similar content, or a control group (delayed intervention). The impact of the interventions was assessed using a preintervention-postintervention test set design. All activities were institutional review board approved and HIPAA compliant. Of 403 eligible radiologists, 151 (37.5%) consented to participate in the trial and 119 of 151 (78.8%) completed the preintervention test set, leaving 119 available for randomization to one of the two intervention groups or to controls. Female radiologists were more likely than male radiologists to consent to and complete the study (P = .03). Consenting radiologists who completed all study activities were more likely to have been interpreting mammography for 10 years or less compared with radiologists who consented but did not complete all study activities or who did not consent at all. The live intervention group was more likely to report their intent to change their clinical practice as a result of the intervention compared to those who received the DVD (50% versus 17.6%, P = .02). The majority of participants in both intervention groups felt the interventions were a useful way to receive CME mammography credits. Community radiologists found interactive interventions designed to improve interpretive mammography performance acceptable and useful for clinical practice. This suggests CME credits for radiologists should, in part, be for examining practice skills.
    Academic Radiology 11/2013; 20(11):1389-1398. DOI:10.1016/j.acra.2013.08.017 · 1.75 Impact Factor
  • ABSTRACT: Purpose: To examine radiologists' screening performance in relation to the number of diagnostic work-ups performed after abnormal findings are discovered at screening mammography by the same radiologist or by different radiologists. Materials and methods: In an institutional review board-approved HIPAA-compliant study, the authors linked 651,671 screening mammograms interpreted from 2002 to 2006 by 96 radiologists in the Breast Cancer Surveillance Consortium to cancer registries (standard of reference) to evaluate the performance of screening mammography (sensitivity, false-positive rate [FPR], and cancer detection rate [CDR]). Logistic regression was used to assess the association between the volume of recalled screening mammograms ("own" mammograms, where the radiologist who interpreted the diagnostic image was the same radiologist who had interpreted the screening image, and "any" mammograms, where the radiologist who interpreted the diagnostic image may or may not have been the radiologist who interpreted the screening image) and screening performance and whether the association between total annual volume and performance differed according to the volume of diagnostic work-up. Results: Annually, 38% of radiologists performed the diagnostic work-up for 25 or fewer of their own recalled screening mammograms, 24% performed the work-up for 26-50, and 39% performed the work-up for more than 50. For the work-up of recalled screening mammograms from any radiologist, 24% of radiologists performed the work-up for 0-50 mammograms, 32% performed the work-up for 51-125, and 44% performed the work-up for more than 125. With increasing numbers of radiologist work-ups for their own recalled mammograms, the sensitivity (P = .039), FPR (P = .004), and CDR (P < .001) of screening mammography increased, yielding a stepped increase in women recalled per cancer detected from 17.4 for 25 or fewer mammograms to 24.6 for more than 50 mammograms. Increases in work-ups for any radiologist yielded significant increases in FPR (P = .011) and CDR (P = .001) and a nonsignificant increase in sensitivity (P = .15). Radiologists with a lower annual volume of any work-ups had consistently lower FPR, sensitivity, and CDR at all annual interpretive volumes. Conclusion: These findings support the hypothesis that radiologists may improve their screening performance by performing the diagnostic work-up for their own recalled screening mammograms and directly receiving the feedback afforded by the outcomes associated with their initial decision to recall. Arranging for radiologists to work up a minimum number of their own recalled cases could improve screening performance but would require systems to facilitate this workflow.
    Radiology 06/2014; 273(2):132806. DOI:10.1148/radiol.14132806 · 6.87 Impact Factor
  • ABSTRACT: While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules ("majority", "quorum", and "weighted quorum") when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence.
    PLoS ONE 08/2015; 10(8):e0134269. DOI:10.1371/journal.pone.0134269 · 3.23 Impact Factor
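
As an illustration of the aggregation rules named in the PLoS ONE abstract above, the sketch below expresses majority, quorum, and weighted-quorum voting over binary recall decisions. It is not the authors' implementation; the quorum threshold and the per-reader weights are placeholder assumptions, not values from that study.

```python
# Illustrative sketch of the three collective-intelligence rules applied to
# binary recall decisions (1 = recall, 0 = no recall). Threshold and weights
# are placeholder assumptions, not values from the PLoS ONE study.
from typing import Sequence

def majority(votes: Sequence[int]) -> int:
    """Recall if more than half of the readers vote to recall."""
    return int(sum(votes) > len(votes) / 2)

def quorum(votes: Sequence[int], k: int) -> int:
    """Recall if at least k readers vote to recall (k = 1 means 'any reader')."""
    return int(sum(votes) >= k)

def weighted_quorum(votes: Sequence[int],
                    weights: Sequence[float],
                    threshold: float) -> int:
    """Recall if the weighted share of recall votes reaches a threshold
    fraction of the total weight; weights might reflect past reader accuracy."""
    total = sum(weights)
    recall_weight = sum(w for v, w in zip(votes, weights) if v == 1)
    return int(recall_weight >= threshold * total)

# Example: five readers assess one mammogram.
votes = [1, 0, 1, 1, 0]
weights = [0.9, 0.6, 0.8, 0.7, 0.5]          # hypothetical reliability weights
print(majority(votes))                        # 1: three of five vote to recall
print(quorum(votes, k=2))                     # 1: at least two recalls
print(weighted_quorum(votes, weights, 0.5))   # 1: weighted recall share >= 50%
```

In a weighted quorum, readers whose past calls were more accurate count for more; weighting by historical performance is one common choice, and the study itself should be consulted for the exact weighting and thresholds it evaluated.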