Article

Developing and using a rubric for evaluating evidence-based medicine point-of-care tools

Margaret J. Foster, Instructional Service Librarian, Medical Sciences Library, Texas A&M University, 4462 TAMU, College Station, TX 77843-4462, USA.
Journal of the Medical Library Association (JMLA). 07/2011; 99(3):247-254. DOI: 10.3163/1536-5050.99.3.012
Source: PubMed

ABSTRACT: The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.
The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate a tool's ability to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on two sets of criteria: criteria for a general evaluation of the tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching the tools for treatments of common diagnoses and evaluating the summaries for quality control).
Differences among the EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process.
Because EBM tools are constantly updated and evolving, they need to be evaluated frequently. Standards for evaluating EBM tools need to be established, one method being the use of objective rubrics; a sketch of how such a rubric might be scored follows this abstract. In addition, EBM tools need to provide more information about authorship, reviewers, methods of evidence collection, and the grading systems employed.
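As an illustration of the rubric approach, the sketch below tallies criterion-level scores for a single tool into an overall percentage rating. The criterion names mirror the general-evaluation rubric described above, but the 0-3 scale, the scores, and the function name are hypothetical assumptions for illustration, not the authors' instrument.

```python
# Hypothetical criterion scores (0 = absent, 3 = excellent) for one EBM
# point-of-care tool; the criteria mirror the general-evaluation rubric
# described in the abstract, but the scale and values are illustrative.
rubric_scores = {
    "content coverage": 3,
    "search options": 2,
    "quality control": 1,
    "grading of evidence": 2,
}

MAX_PER_CRITERION = 3  # assumed top score for each criterion


def rubric_percentage(scores: dict[str, int], max_score: int = MAX_PER_CRITERION) -> float:
    """Overall rubric rating as a percentage of the maximum possible points."""
    return 100 * sum(scores.values()) / (max_score * len(scores))


print(f"Overall rating: {rubric_percentage(rubric_scores):.0f}%")  # Overall rating: 67%
```

A percentage like this makes tools comparable at a glance, which is the same rationale the mobile-app rubric study below gives for its 23%-88% score range.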

  • ABSTRACT: Standards for evaluating evidence-based medicine (EBM) point-of-care (POC) summaries of research are lacking. The authors developed a "Critical Appraisal for Summaries of Evidence" (CASE) worksheet to help assess the evidence in these tools and then evaluated the worksheet's reliability. The CASE worksheet was developed with 10 questions covering specificity, authorship, reviewers, methods, grading, clarity, citations, currency, bias, and relevancy. Two reviewers independently assessed a random selection of 384 EBM POC summaries using the worksheet, and their responses were compared using a kappa score (a worked example of this statistic appears after this list). The kappa statistic demonstrated overall moderate agreement (κ = 0.44) between the reviewers across the 384 summaries. The three categories of evaluation questions on which the reviewers disagreed most often were citations (κ = 0), bias (κ = 0.11), and currency (κ = -0.18). The CASE worksheet provided an effective checklist for critically analyzing a treatment summary. While the reviewers agreed on worksheet responses for most questions, variation occurred in how the raters navigated the tool and interpreted some of the questions. Further validation of the form by other groups of users should be investigated.
    Journal of the Medical Library Association (JMLA). 07/2013; 101(3):192-198. DOI: 10.3163/1536-5050.101.3.008
  • Journal of PeriAnesthesia Nursing. 10/2013; 28(5):300-309. DOI: 10.1016/j.jopan.2013.07.003
  • ABSTRACT: The rapid development and frequent updating of mobile medical resource applications (apps) highlight the need for an evaluation tool to assess the content of these resources. The purpose of the study was to develop and test a new evaluation rubric for medical resource apps. The rubric was designed using the existing literature and through a collaborative effort between a hospital librarian and an academic librarian. Testing yielded scores ranging from 23% to 88% across the apps. The rubric proved able to distinguish levels of quality within each content component of the apps, demonstrating potential for standardizing medical resource app evaluations.
    Medical Reference Services Quarterly. 01/2015; 34(1):75-87. DOI: 10.1080/02763869.2015.986794
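Since the CASE study above reports inter-rater reliability as kappa scores, a minimal, self-contained sketch of Cohen's kappa for two raters may help: it compares observed agreement against the agreement expected by chance. The sample responses below are hypothetical, not data from the study.

```python
from collections import Counter


def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance.

    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement,
    and negative values for worse-than-chance agreement (as in the
    currency category above, kappa = -0.18).
    """
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical yes/no answers from two raters to one worksheet question
# across six summaries (not data from the CASE study).
print(cohen_kappa(["y", "y", "n", "y", "n", "y"],
                  ["y", "n", "n", "y", "y", "y"]))  # ≈ 0.25, "fair" agreement
```

Correcting for chance is what lets kappa fall below zero: two raters can agree on many items yet still agree less often than their answer frequencies alone would predict.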