Article

Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project.

Health Information Research Unit, Department of Clinical Epidemiology & Biostatistics, McMaster University, Hamilton, Ontario, Canada.
Journal of Clinical Epidemiology (Impact Factor: 5.48). 05/2008; 61(5):449-54. DOI: 10.1016/j.jclinepi.2007.10.016
Source: PubMed

ABSTRACT: To describe physicians' ratings and use of high-quality, clinically pertinent original articles and systematic reviews from over 110 clinical journals and the Cochrane Database of Systematic Reviews (CDSR).
Prospective observational study. Data were collected during the McMaster Premium LiteratUre Service (PLUS) trial via an online clinical rating system for the relevance and newsworthiness of quality-filtered clinical articles, and via an online delivery service for practicing physicians. Clinical ratings of articles in the McMaster Online Rating of Evidence (MORE) system by over 1,900 physicians were compared, and usage rates of these articles over 13 months by physicians who were not raters were examined.
Systematic reviews were rated significantly higher than original articles for relevance (P<0.001) but significantly lower for newsworthiness (P<0.001). Reviews published in the CDSR had significantly lower ratings for both relevance (P<0.001) and newsworthiness (P<0.001) than reviews published in other journals. Participants accessed reviews more often than original articles (P<0.001) and accessed reviews from journals more often than from the CDSR (P<0.001).
Physician ratings and the use of high-quality original articles and systematic reviews differed, generally favoring systematic reviews over original articles. Reviews published in journals were rated higher and accessed more often than Cochrane reviews.

  • ABSTRACT: To assess the clinical relevance and newsworthiness of reports funded by the UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Retrospective cohort study. The cohort included 311 NIHR HTA Programme funded reports published in HTA in the period 1 January 2007-31 December 2012. The McMaster Online Rating of Evidence (MORE) system independently assessed the clinical relevance and newsworthiness of NIHR HTA publications and non-NIHR HTA publications. The MORE system involves over 4,000 physicians rating publications on a scale of relevance (the extent to which articles are relevant to practice) and a scale of newsworthiness (the extent to which articles contain news or something clinicians are unlikely to know). The proportion of reports published in HTA meeting MORE inclusion criteria and the mean relevance and newsworthiness ratings were calculated and compared with publications from the same studies published outside HTA and with non-NIHR HTA funded publications. 286/311 (92.0%) of NIHR HTA reports were assessed by MORE, of which 192 (67.1%) passed MORE criteria. The average clinical relevance rating for NIHR HTA reports was 5.48, statistically significantly higher than the 5.32 rating for non-NIHR HTA publications (mean difference=0.16, 95% CI 0.04 to 0.29, p=0.01). Average newsworthiness ratings were similar between NIHR HTA reports and non-NIHR HTA publications (4.75 and 4.70, respectively; mean difference=0.05, 95% CI -0.18 to 0.07, p=0.402). NIHR HTA-funded original research reports were rated statistically significantly higher for newsworthiness than reviews (5.05 compared with 4.64; mean difference=0.41, 95% CI 0.18 to 0.64, p=0.001). Funding research of clinical relevance is important in maximising the value of research investment. The NIHR HTA Programme is successful in funding projects that generate outputs of clinical relevance.
    BMJ Open 01/2014; 4(5):e004556 (Impact Factor: 2.06).
  • Rev Cient Cienc Med. 01/2009; 12(2):38-41.
  • ABSTRACT: It is difficult to foster research utilization among allied health professionals (AHPs). Tailored, multifaceted knowledge translation (KT) strategies are now recommended but are resource intensive to implement. Employers need effective KT solutions, but little is known about the impact and viability of multifaceted KT strategies using an online KT tool, their effectiveness with AHPs, or their effect on evidence-based practice (EBP) decision-making behavior. The study aim was to measure the effectiveness of a multifaceted KT intervention, including a customized KT tool, in changing the EBP behavior, knowledge, and attitudes of AHPs. This was an evaluator-blinded, cluster randomized controlled trial conducted in an Australian community-based cerebral palsy service. 135 AHPs (physiotherapists, occupational therapists, speech pathologists, psychologists, and social workers) from four regions were cluster randomized (n = 4) to either the KT intervention group (n = 73 AHPs) or the control group (n = 62 AHPs), using computer-generated random numbers concealed in opaque envelopes by an independent officer. The KT intervention included a three-day skills training workshop and multifaceted workplace supports to redress barriers (paid EBP time, mentoring, system changes, and access to an online research synthesis tool). The primary outcome (self- and peer-rated EBP behavior) was measured using the Goal Attainment Scale (individual level). Secondary outcomes (knowledge and attitudes) were measured using exams and the Evidence Based Practice Attitude Scale. The intervention group's primary outcome scores improved relative to the control group; however, when clustering was taken into account, the findings were non-significant: self-rated EBP behavior [effect size 4.97 (95% CI -10.47, 20.41) (p = 0.52)]; peer-rated EBP behavior [effect size 5.86 (95% CI -17.77, 29.50) (p = 0.62)]. Statistically significant improvements in EBP knowledge were detected [effect size 2.97 (95% CI 1.97, 3.97) (p < 0.0001)]. Change in EBP attitudes was not statistically significant. Improvement in EBP behavior was not statistically significant after adjusting for the cluster effect; however, similar improvements in peer ratings suggest behaviorally meaningful gains. The large variability in behavior observed between clusters suggests barrier assessments and subsequent KT interventions may need to target subgroups within an organization. Trial registration: Australian New Zealand Clinical Trials Registry (ACTRN12611000529943).
    Implementation Science 11/2013; 8(1):132 (Impact Factor: 2.37).