Article

Association between time spent interpreting, level of confidence, and accuracy of screening mammography.

Department of Family Medicine, Oregon Health & Science University, Portland, 97239, USA.
American Journal of Roentgenology (Impact Factor: 2.9). 04/2012; 198(4):970-8. DOI: 10.2214/AJR.11.6988
Source: PubMed

ABSTRACT The objective of this study was to examine the effects of time spent viewing images and of level of confidence in the interpretation on performance on a screening mammography test set.
Radiologists from six mammography registries participated in this study and were randomized to interpret one of four test sets and complete 12 survey questions. Each test set had 109 cases of digitized four-view screening screen-film mammograms with prior comparison screening views. Viewing time for each case was defined as the cumulative time spent viewing all mammographic images before recording which visible feature, if any, was the "most significant finding." Log-linear regression, fit via generalized estimating equations, was used to test the effects of viewing time and of level of confidence in the interpretation on test-set sensitivity and false-positive rate.
One hundred nineteen radiologists completed a test set and contributed data on 11,484 interpretations. The radiologists spent more time viewing cases that had significant findings or cases for which they had less confidence in their interpretation. Each additional minute of viewing time increased the probability of a true-positive interpretation among cancer cases by a factor of 1.12 (95% CI, 1.06-1.19; p < 0.001), regardless of confidence in the assessment. Among radiologists who were very confident in their assessment, each additional minute of viewing time increased the adjusted risk of a false-positive interpretation among noncancer cases by a factor of 1.42 (95% CI, 1.21-1.68), and this viewing-time effect diminished with decreasing confidence.
Longer interpretation times and higher levels of confidence in an interpretation are both associated with higher sensitivity and false-positive rates in mammography screening.

    ABSTRACT: To evaluate a self-test for Dutch breast screening radiologists introduced as part of the national quality assurance programme. A total of 144 radiologists were invited to complete a test set of 60 screening mammograms (20 malignancies). Participants assigned findings such as location, lesion type, and BI-RADS category. We determined areas under the receiver operating characteristic (ROC) curves (AUC), case and lesion sensitivity and specificity, agreement (kappa), and correlation between reader characteristics and case sensitivity (Spearman correlation coefficients). A total of 110 radiologists completed the test (76%). Participants read a median of 10,000 screening mammograms per year. Median AUC was 0.93, case and lesion sensitivity were 91%, and case specificity was 94%. We found substantial agreement for recall (κ = 0.77) and laterality (κ = 0.80), moderate agreement for lesion type (κ = 0.57) and BI-RADS (κ = 0.45), and no correlation between case sensitivity and reader characteristics. AUC, case sensitivity, and lesion sensitivity were satisfactory, and recall agreement was substantial. However, agreement in lesion type and BI-RADS could be improved; further education might aim to reduce interobserver variation in the interpretation and description of abnormalities. We offered individual feedback on interpretive performance and overall feedback at group level. Future research will determine whether performance has improved.
    • We introduced and evaluated a self-test for Dutch breast screening radiologists.
    • ROC curves, case and lesion sensitivity, and recall agreement were all satisfactory.
    • Agreement in BI-RADS interpretation and description of abnormalities could be improved.
    • These are areas that should be targeted with further education and training.
    • We offered individual feedback on interpretative performance and overall group feedback.
    European Radiology 09/2013; Impact Factor: 4.34
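The measures used in the self-test study above, chance-corrected agreement (Cohen's kappa) and area under the ROC curve, can be computed directly with scikit-learn. The reader decisions and suspicion scores below are tiny made-up examples, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Recall decisions (1 = recall) from two hypothetical readers on 10 cases
reader_a = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
reader_b = [1, 0, 0, 0, 1, 0, 0, 1, 1, 1]
# Kappa corrects raw agreement (here 8/10) for chance agreement
kappa = cohen_kappa_score(reader_a, reader_b)  # -> 0.6

# AUC from one reader's suspicion scores against truth (1 = cancer)
truth = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc_score(truth, scores)  # -> 0.75
print(kappa, auc)
```

With both readers recalling 5 of 10 cases, chance agreement is 0.5, so kappa = (0.8 − 0.5)/(1 − 0.5) = 0.6, which would count as "moderate" on the scale the abstract uses.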
    ABSTRACT: OBJECTIVE. The objective of our study was to conduct a randomized controlled trial of educational interventions that were created to improve performance of screening mammography interpretation.
    MATERIALS AND METHODS. We randomly assigned physicians who interpret mammography to one of three groups: self-paced DVD, live expert-led educational seminar, or control. The DVD and seminar interventions used mammography cases of varying difficulty and provided associated teaching points. Interpretive performance was compared using a pretest-posttest design. Sensitivity, specificity, and positive predictive value (PPV) were calculated relative to two outcomes: cancer status and consensus of three experts about recall. The performance measures for each group were compared using logistic regression adjusting for pretest performance.
    RESULTS. One hundred two radiologists completed all aspects of the trial. After adjustment for preintervention performance, the odds of improved sensitivity for correctly identifying a lesion relative to expert recall were 1.34 times higher for DVD participants than for control subjects (95% CI, 1.00-1.81; p = 0.050). The odds of an improved PPV for correctly identifying a lesion relative to both expert recall (odds ratio [OR] = 1.94; 95% CI, 1.24-3.05; p = 0.004) and cancer status (OR = 1.81; 95% CI, 1.01-3.23; p = 0.045) were significantly improved for DVD participants compared with control subjects, with no significant change in specificity. For the seminar group, specificity was significantly lower than for the control group (OR relative to expert recall = 0.80; 95% CI, 0.64-1.00; p = 0.048; OR relative to cancer status = 0.79; 95% CI, 0.65-0.95; p = 0.015).
    CONCLUSION. In this randomized controlled trial, the DVD educational intervention resulted in a significant improvement in screening mammography interpretive performance on a test set, which could translate into improved interpretive performance in clinical practice.
    AJR: American Journal of Roentgenology 06/2014; 202(6):W586-96.
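Sensitivity, specificity, and PPV, the three performance measures these trials compare, all come from a 2×2 table of interpretations against cancer status. The counts below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical test-set counts (not from any of the studies above)
tp, fn = 45, 15    # cancer cases: recalled vs missed
fp, tn = 90, 850   # noncancer cases: recalled vs correctly passed

sensitivity = tp / (tp + fn)  # fraction of cancers recalled: 45/60
specificity = tn / (tn + fp)  # fraction of noncancers passed: 850/940
ppv = tp / (tp + fp)          # fraction of recalls that are cancer: 45/135
print(sensitivity, round(specificity, 3), round(ppv, 3))
```

Note how PPV depends on both the reader and the cancer prevalence in the test set: with far more noncancer cases, even a modest false-positive rate pulls PPV well below sensitivity.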
    ABSTRACT: To describe recruitment, enrollment, and participation in a study of US radiologists invited to participate in a randomized controlled trial of two continuing medical education (CME) interventions designed to improve interpretation of screening mammography. We collected recruitment, consent, and intervention-completion information as part of a large study involving radiologists in California, Oregon, Washington, New Mexico, New Hampshire, North Carolina, and Vermont. Consenting radiologists were randomized to a 1-day live, expert-led educational session; a self-paced DVD with similar content; or a control group (delayed intervention). The impact of the interventions was assessed using a preintervention-postintervention test set design. All activities were institutional review board approved and HIPAA compliant. Of 403 eligible radiologists, 151 (37.5%) consented to participate in the trial and 119 of 151 (78.8%) completed the preintervention test set, leaving 119 available for randomization to one of the two intervention groups or to controls. Female radiologists were more likely than male radiologists to consent to and complete the study (p = .03). Consenting radiologists who completed all study activities were more likely to have been interpreting mammography for 10 years or less than radiologists who consented but did not complete all study activities or who did not consent at all. The live intervention group was more likely than the DVD group to report intent to change their clinical practice as a result of the intervention (50% vs 17.6%, p = .02). The majority of participants in both intervention groups felt the interventions were a useful way to receive CME mammography credits. Community radiologists found interactive interventions designed to improve interpretive mammography performance acceptable and useful for clinical practice. This suggests that CME credits for radiologists should, in part, be awarded for examining practice skills.
    Academic Radiology 11/2013; 20(11):1389-1398; Impact Factor: 2.09
