
Interventions to improve the use of systematic reviews in decision-making by health system managers, policy makers and clinicians

UK Cochrane Centre, National Institute for Health Research, Oxford, UK.
Cochrane Database of Systematic Reviews (Online) (Impact Factor: 5.94). 01/2012; 9(9):CD009401. DOI: 10.1002/14651858.CD009401.pub2
Source: PubMed

ABSTRACT
Background: Systematic reviews provide a transparent and robust summary of existing research. However, health system managers, national and local policy makers and healthcare professionals can face several obstacles when attempting to use this evidence. These include constraints operating within the health system, the large volume of research evidence, and difficulties in adapting evidence from systematic reviews so that it is locally relevant. In an attempt to increase the use of systematic review evidence in decision-making, a number of interventions have been developed. These include summaries of systematic review evidence designed to improve the accessibility of review findings (often referred to as information products) and changes to organisational structures, such as employing specialist groups to synthesise the evidence to inform local decision-making.
Objectives: To identify and assess the effects of information products based on the findings of systematic review evidence, and of organisational supports and processes, designed to support the uptake of systematic review evidence by health system managers, policy makers and healthcare professionals.
Search methods: We searched The Cochrane Library, MEDLINE, EMBASE, CINAHL, Web of Science and the Health Economic Evaluations Database. We also handsearched two journals (Implementation Science and Evidence & Policy), Cochrane Colloquium abstracts, the websites of key organisations and the reference lists of studies considered for inclusion. Searches were run from 1992 to March 2011 on all databases; an update search to March 2012 was run on MEDLINE only.
Selection criteria: Randomised controlled trials (RCTs), interrupted time series (ITS) studies and controlled before-after (CBA) studies of interventions designed to aid the use of systematic reviews in healthcare decision-making were considered.
Data collection and analysis: Two review authors independently extracted the data and assessed study quality. For each study we extracted the median value across similar outcomes and reported the range of values alongside each median. Where an even number of outcomes was reported, we calculated the median as the mean of the two middlemost values.
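As an illustration, the median-of-outcomes summary described above can be sketched in a few lines. This is a minimal sketch, not the review authors' actual analysis code, and the effect sizes below are hypothetical:

```python
# Illustrative sketch: summarising multiple similar outcomes per study
# by taking the median effect size and reporting the range.
# The effect-size values used below are hypothetical.

def summarise_outcomes(effect_sizes):
    """Return (median, min, max) of a study's effect sizes.

    For an even number of outcomes, the median is the mean of the
    two middlemost values, as in the review's methods.
    """
    values = sorted(effect_sizes)
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        median = values[mid]
    else:
        median = (values[mid - 1] + values[mid]) / 2
    return median, values[0], values[-1]

# Example: four hypothetical effect sizes (percentage-point changes)
median, low, high = summarise_outcomes([-11.2, 1.5, 6.9, 18.2])
print(f"median {median:.1f}%, range {low}% to {high}%")
# → median 4.2%, range -11.2% to 18.2%
```

Reporting the range alongside the median, as here, preserves a sense of the spread that a single summary value would otherwise hide.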
Main results: We included eight studies evaluating the effectiveness of different interventions designed to support the uptake of systematic review evidence. The overall quality of the evidence was very low to moderate.

Two cluster RCTs evaluated the effectiveness of multifaceted interventions, which included access to systematic reviews relevant to reproductive health, to change obstetric care; high baseline performance on some of the key clinical indicators limited the findings of these studies. There were no statistically significant effects on clinical practice for all but one of the clinical indicators in selected obstetric units in Thailand (median effect size 4.2%, range -11.2% to 18.2%) and for none in Mexico (median effect size 3.5%, range 0.1% to 19.0%). In the second cluster RCT there were no statistically significant differences in selected obstetric units in the UK (median effect RR 0.92; range RR 0.57 to RR 1.10).

One RCT evaluated the perceived understanding and ease of use of summary of findings tables in Cochrane Reviews. The median difference in responses on the acceptability of including summary of findings tables in Cochrane Reviews, versus not including them, was 16% (range 1% to 28%). One RCT evaluated the effect of an analgesic league table derived from systematic review evidence and found no statistically significant effect on self-reported pain. Only one RCT evaluated an organisational intervention (which included a knowledge broker, access to a repository of systematic reviews and provision of tailored messages); it reported no statistically significant difference in evidence-informed programme planning.

Three interrupted time series studies evaluated the dissemination of printed bulletins based on evidence from systematic reviews. A statistically significant reduction was reported in the rates of surgery for glue ear in children under 10 years (mean annual decline -10.1%; 95% CI -12.3 to -7.9) and in children under 15 years (quarterly reduction -0.044; 95% CI -0.080 to -0.011). The distribution to general practitioners of a bulletin on the treatment of depression was associated with a statistically significantly lower prescribing rate each quarter than that predicted from the rates observed before distribution of the bulletin (8.2%; P = 0.005).
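The interrupted time series logic reported above, i.e. comparing observed post-intervention rates against those predicted from the pre-intervention trend, can be sketched as follows. All rates are invented for illustration; this is not the included studies' analysis:

```python
# Hypothetical sketch of an interrupted time-series comparison:
# fit a linear trend to pre-intervention quarterly rates, extrapolate it,
# and measure how far observed post-intervention rates fall below it.
# All rates below are invented for illustration.

def fit_trend(ys):
    """Ordinary least-squares line through (0, ys[0]), (1, ys[1]), ..."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
             / sum((t - t_mean) ** 2 for t in ts))
    return slope, y_mean - slope * t_mean

pre = [12.0, 12.3, 12.1, 12.6, 12.8, 13.0]   # quarterly rates before the bulletin
post = [12.5, 12.2, 11.9, 11.6]              # quarterly rates after

slope, intercept = fit_trend(pre)
predicted = [intercept + slope * t for t in range(len(pre), len(pre) + len(post))]

# Mean shortfall of observed post-bulletin rates relative to the pre-trend forecast
shortfall = sum(p - o for p, o in zip(predicted, post)) / len(post)
print(f"mean quarterly shortfall vs. predicted trend: {shortfall:.2f} percentage points")
# → mean quarterly shortfall vs. predicted trend: 1.42 percentage points
```

The published analyses additionally test whether such a shortfall is statistically significant (e.g. via segmented regression), which this sketch omits.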
Authors' conclusions: Mass mailing a printed bulletin that summarises systematic review evidence may improve evidence-based practice when there is a single clear message, the change is relatively simple to accomplish, and there is growing awareness among users of the evidence that a change in practice is required. If the intention is to develop awareness and knowledge of systematic review evidence, and the skills for implementing this evidence, a multifaceted intervention addressing each of these aims may be required, although there is insufficient evidence to support this approach.

ABSTRACT: Clinical scientists sit at the unique interface between laboratory science and frontline clinical practice, supporting clinical partnerships for evidence-based practice. In an era of molecular diagnostics and personalised medicine, evidence-based laboratory practice (EBLP) is also crucial in helping clinical scientists keep up to date with this expanding knowledge base. However, there are recognised barriers to the implementation of EBLP and its training. The aim of this review is to provide a practical summary of potential strategies for training the next generation of clinical scientists. Current evidence suggests that clinically integrated evidence-based medicine (EBM) training is effective. Tailored e-learning EBM packages and evidence-based journal clubs have been shown to improve EBM knowledge and skills. Moreover, e-learning is no longer restricted to computer-assisted learning packages; social media platforms such as Twitter, for example, have been used to complement existing journal clubs and provide additional post-publication appraisal information for journals. In addition, the delivery of an EBLP curriculum influences its success. Although e-learning of EBM skills is effective, having EBM-trained teachers available locally promotes the implementation of EBM training. Training courses, such as Training the Trainers, are now available to help trainers identify and make use of EBM training opportunities in clinical practice. Peer-assisted learning and trainee-led support networks can also strengthen self-directed learning of EBM and research participation among clinical scientists in training. Finally, we emphasise the need to evaluate any EBLP training programme using validated assessment tools to help identify the most crucial ingredients of effective EBLP training. In summary, we recommend on-the-job training of EBM with additional focus on overcoming barriers to its implementation. Future studies evaluating the effectiveness of EBM training should use validated outcome tools, endeavour to achieve adequate power, and consider the effects of EBM training on the learning environment and patient outcomes.
The Clinical Biochemist Reviews 08/2013; 34(2):93-103.
ABSTRACT: Change agency in its various forms is one intervention aimed at improving the effectiveness of the uptake of evidence. Facilitators, knowledge brokers and opinion leaders are examples of change agency strategies used to promote knowledge utilization. This review adopts a realist approach and addresses the following question: what change agency characteristics work, for whom do they work, in what circumstances and why? The literature reviewed spanned the period 1997-2007. Change agency was operationalized as roles aimed at effecting successful change in individuals and organizations. A theoretical framework, developed through stakeholder consultation, formed the basis for a search for relevant literature. Team members, working in subgroups, independently themed the data and developed chains of inference to form a series of hypotheses regarding change agency and its role in knowledge use. The search strategies initially returned 24,478 electronic references. Preliminary screening of the article titles reduced the list of potentially relevant papers to 196, and review of the full document versions resulted in a final list of 52 papers. The findings add to the knowledge of change agency: they raise issues pertaining to how change agents function, how individual change agent characteristics affect evidence-informed health care, the influence of interaction between the change agent and the setting, and the overall effect of change agency on knowledge utilization. Particular issues are raised, such as how the accessibility of the change agent, their cultural compatibility and their attitude mediate overall effectiveness. The findings also indicate the importance of promoting reflection on practice and role modeling. The findings of this study are limited by the complexity and diversity of the change agency literature, poor indexing of the literature and a lack of theory-driven approaches. This is the first realist review of change agency. Although the effectiveness evidence is weak, change agent roles are evolving, as is the literature, which requires more detailed description of interventions, outcome measures, context, intensity and the levels at which interventions are implemented in order to understand how change agent interventions affect evidence-informed health care.
Implementation Science 09/2013; 8(1):107. DOI: 10.1186/1748-5908-8-107 (Impact Factor: 3.47)