Article

A pragmatic cluster randomised trial evaluating three implementation interventions

Centre for Health Related Research, School of Healthcare Sciences, Bangor University, Ffriddoedd Road, Bangor, UK.
Implementation Science (Impact Factor: 3.47). 08/2012; 7(1):80. DOI: 10.1186/1748-5908-7-80
Source: PubMed

ABSTRACT Implementation research is concerned with bridging the gap between evidence and practice through the study of methods to promote the uptake of research into routine practice. Good quality evidence has been summarised into guideline recommendations to show that peri-operative fasting times could be considerably shorter than patients currently experience. The objective of this trial was to evaluate the effectiveness of three strategies for the implementation of recommendations about peri-operative fasting.
A pragmatic cluster randomised trial underpinned by the PARIHS framework was conducted between 2006 and 2009 with a national sample of UK hospitals, using a time series design with a mixed-methods process evaluation and cost analysis. Hospitals were randomised to one of three interventions: standard dissemination (SD) of a guideline package, SD plus a web-based resource championed by an opinion leader, and SD plus plan-do-study-act (PDSA). The primary outcome was the duration of fluid fast prior to induction of anaesthesia. Secondary outcomes included duration of food fast, patients' experiences, and stakeholders' experiences of implementation, including influences. ANOVA was used to test differences over time and between interventions.
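As context for the analytic approach mentioned above (ANOVA across timepoints and intervention arms), the sketch below shows what such a test could look like in Python with statsmodels. The data frame, column names, and model specification are illustrative assumptions only; the trial's actual analysis is described in the full paper and would also need to handle the clustered (hospital-level) randomisation.

```python
# Illustrative sketch only: fabricated data and assumed column names,
# not the POISE trial's actual dataset or analysis plan.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 300  # pretend fasting-duration observations

df = pd.DataFrame({
    "intervention": rng.choice(["SD", "SD+web", "SD+PDSA"], size=n),
    "timepoint": rng.choice([0, 1, 2, 3], size=n),  # repeated audit timepoints
})
# Hypothetical fluid fasting durations in hours
df["fluid_fast_hours"] = 6.0 + rng.normal(0.0, 2.5, size=n)

# Two-way ANOVA: do fasting times differ by intervention, timepoint,
# or their interaction? (A full cluster-randomised analysis would also
# need to account for clustering of patients within hospitals.)
model = smf.ols("fluid_fast_hours ~ C(intervention) * C(timepoint)", data=df).fit()
print(anova_lm(model, typ=2))
```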
Nineteen acute NHS hospitals participated. Across timepoints, 3,505 duration-of-fasting observations were recorded. No significant effect of the interventions was observed for either fluid or food fasting times. For the change in fluid fasting time, the effect size was 0.33 for the web-based intervention compared with SD alone and 0.12 for PDSA compared with SD alone. The process evaluation showed different types of impact, including changes to practices, policies, and attitudes. A rich picture of the implementation challenges emerged, including inter-professional tensions and a lack of clarity about decision-making authority and responsibility.
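The effect sizes quoted above (0.33 and 0.12) are standardised mean differences. As a reminder of how such a figure is derived, the snippet below computes Cohen's d, the difference in group means divided by the pooled standard deviation, on simulated data; it is a generic illustration, not a reproduction of the trial's calculation.

```python
# Generic standardised-mean-difference (Cohen's d) calculation; the data
# below are simulated for illustration and are not from the trial.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = (
        (na - 1) * group_a.var(ddof=1) + (nb - 1) * group_b.var(ddof=1)
    ) / (na + nb - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(7)
# Hypothetical change in fluid fasting time (hours) for two arms
web_arm = rng.normal(-1.6, 2.5, 150)  # SD plus web-based resource
sd_arm = rng.normal(-0.8, 2.5, 150)   # standard dissemination alone

print(f"Cohen's d = {cohens_d(web_arm, sd_arm):.2f}")  # magnitude roughly 0.2-0.4
```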
This was a large, complex study and one of the first national randomised controlled trials conducted within acute care in implementation research. The evidence base for fasting practice was accepted by those participating in this study and its messages were simple; however, implementation and practical challenges influenced the interventions' impact. A set of conditions for implementation emerges from the findings of this study, presented as theoretically transferable propositions with international relevance.
Trial registration: ISRCTN18046709 - Peri-operative Implementation Study Evaluation (POISE).

  • ABSTRACT: BACKGROUND: The case has been made for more and better theory-informed process evaluations within trials in an effort to facilitate insightful understandings of how interventions work. In this paper, we provide an explanation of implementation processes from one of the first national implementation research randomized controlled trials with an embedded process evaluation conducted within acute care, and a proposed extension to the Promoting Action on Research Implementation in Health Services (PARIHS) framework. METHODS: The PARIHS framework was prospectively applied to guide decisions about intervention design, data collection, and analysis processes in a trial focussed on reducing peri-operative fasting times. In order to capture a holistic picture of implementation processes, the same data were collected across the 19 participating hospitals irrespective of allocation to intervention. This paper reports findings from data collected from a purposive sample of 151 staff and patients pre- and post-intervention. Data were analysed using content analysis within, and then across, data sets. RESULTS: A robust and uncontested evidence base was a necessary, but not sufficient, condition for practice change, in that individual staff and patient responses such as caution influenced decision making. The implementation context was challenging: individuals and teams were bounded by professional issues, communication challenges, power, and a lack of clarity about the authority and responsibility for practice change. Progress was made in sites where processes were aligned with existing initiatives. Additionally, facilitators reported engaging in many intervention implementation activities, some of which resulted in practice changes but not in significant improvements to outcomes. CONCLUSIONS: This study provided an opportunity for reflection on the comprehensiveness of the PARIHS framework. Consistent with the underlying tenet of PARIHS, a multi-faceted and dynamic story of implementation was evident. However, the prominent role that individuals played as part of the interaction between evidence and context is not currently explicit within the framework. We propose that successful implementation of evidence into practice is a planned, facilitated process involving an interplay between individuals, evidence, and context to promote evidence-informed practice. This proposal will enhance the potential of the PARIHS framework for explanation and ensure that theoretical development both informs and responds to the evidence base for implementation. Trial registration: ISRCTN18046709 - Peri-operative Implementation Study Evaluation (PoISE).
    Implementation Science 03/2013; 8(1):28. DOI:10.1186/1748-5908-8-28 · 3.47 Impact Factor
  • ABSTRACT: Background: National quality registries (NQRs) purportedly facilitate quality improvement, but neither the extent nor the mechanisms of such a relationship are fully known. The aim of this case study is to describe the experiences of local stakeholders in order to identify the elements that facilitate and hinder clinical quality improvement in relation to participation in a well-known and established NQR on stroke in Sweden. Methods: A strategic sample of 8 hospitals in 4 county councils was drawn, representing a variety of settings and outcomes according to the NQR's criteria. Semi-structured telephone interviews were conducted with 25 managers, physicians in charge of Riks-Stroke, and registered nurses registering local data at the hospitals. The interviews, which covered barriers and facilitators within the NQR and the local context, were analysed with content analysis. Results: An NQR can provide vital support for evidence-based practice, for example local data based on national guidelines, which can be used for comparisons over time within the organisation or with other hospitals. Major effort is required to ensure that data entries are accurate and valid, so ensuring the trustworthiness of local data output competes for resources with everyday clinical stroke care and quality improvement initiatives. Local stakeholders with knowledge of, and interest in, both the medical area (in this case stroke) and quality improvement can apply the NQR data to effectively initiate, carry out, and evaluate quality improvement, if supported by managers and co-workers, a common stroke care process, and an operational management system that embraces and engages with the NQR data. Conclusion: While quality registries are assumed to support adherence to evidence-based guidelines around the world, this study proposes that an NQR can facilitate improvement of care, but neither the registry itself nor the reporting of data initiates quality improvement. Rather, the local and general evidence provided by the NQR must be considered relevant and must be applied in the local context. Further, the quality improvement process needs to be facilitated by stakeholders collaborating within and outside the context, who know how to initiate, perform, and evaluate quality improvement, and who have the resources to do so.
    BMC Health Services Research 08/2014; 14(1):354. DOI:10.1186/1472-6963-14-354 · 1.66 Impact Factor
  • ABSTRACT: Background: The need for high-quality evidence that is applicable in real-world, routine settings continues to increase. Pragmatic trials are designed to evaluate the effectiveness of interventions in real-world settings, whereas explanatory trials aim to test whether an intervention works under optimal conditions. There is a continuum between explanatory and pragmatic trials; most trials have aspects of both, making it challenging to label and categorize a trial and to evaluate its potential for translation into practice. Methods: We summarize our experience applying the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS), combined with external validity items based on the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework, to three studies to provide a more robust and comprehensive assessment of trial characteristics related to translation of research. We summarize lessons learned in using domains from the combined frameworks for study planning, evaluating specific studies, and reviewing the literature, and we make recommendations for future use. Results: A variety of coders can be trained to use the PRECIS and RE-AIM domains. These domains can also be used for diverse purposes, content areas, and study types, but are not without challenges. Both PRECIS and RE-AIM domains required modification in two of the three studies so that domains could be evaluated and rated for the specific study type. Lessons learned concerned dedicating enough time to training activities related to the domains; using reviewers with a range of familiarity with the specific study protocols; how best to adapt ratings to reflect complex study designs; and differences of opinion regarding the value of creating a composite score for these criteria. Conclusions: Combining both frameworks can help identify specifically where and how a study is, and is not, pragmatic. Using both PRECIS and RE-AIM allows standard reporting of key study characteristics related to pragmatism and translation. Such measures should be used more consistently to help plan more pragmatic studies, evaluate progress, increase transparency of reporting, and integrate literature to facilitate translation of research into practice and policy.
    Implementation Science 08/2014; 9(1):96. DOI:10.1186/s13012-014-0096-x · 3.47 Impact Factor