Implementation Science (IMPLEMENT SCI)

Publisher: BioMed Central Ltd

Journal description

Implementation Science is an open access, peer-reviewed online journal that aims to publish research relevant to the scientific study of methods to promote the uptake of research findings into routine healthcare in both clinical and policy contexts. Biomedical research constantly produces new findings, but these are often not routinely translated into health care practice. Implementation research is the scientific study of methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence to improve the quality and effectiveness of health care. It includes the study of influences on healthcare professional and organisational behaviour.

Current impact factor: 4.12

Impact Factor Rankings

2016 Impact Factor Available summer 2017
2014 / 2015 Impact Factor 4.122
2013 Impact Factor 3.47
2012 Impact Factor 2.372
2011 Impact Factor 3.1
2010 Impact Factor 2.514
2009 Impact Factor 2.485
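
For reference, the impact factors above follow the standard Journal Citation Reports definition (a general formula, not a recalculation of this journal's reported values): the two-year impact factor for year y is

\[
\mathrm{IF}_{y} = \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

For illustration only, with hypothetical counts: 250 such citations against 100 citable items would give an impact factor of 250/100 = 2.50. The 5-year impact and immediacy index listed under Additional details below are the analogous ratios taken over a five-year window and over the publication year itself, respectively.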

Additional details

5-year impact: 4.62
Cited half-life: 3.90 years
Immediacy index: 0.44
Eigenfactor: 0.01
Article influence: 1.45
Website: Implementation Science website
Other titles: IS
ISSN: 1748-5908
OCLC: 65431651
Material type: Document, Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

BioMed Central

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Publisher's version/PDF may be used
    • Eligible UK authors may deposit in OpenDepot
    • Creative Commons Attribution License
    • Copy of License must accompany any deposit.
    • All titles are open access journals
    • 'BioMed Central' is an imprint of 'Springer Verlag (Germany)'
  • Classification
    • Green

Publications in this journal

  • Source
    ABSTRACT: Background: The current syphilis epidemic among urban men who have sex with men (MSM) has serious implications for those co-infected with human immunodeficiency virus (HIV). Routine and frequent syphilis screening has the potential to ensure early detection and treatment, minimize disease burden, and help control the ongoing spread of syphilis and HIV. We aim to enhance syphilis screening among HIV-positive men by conducting a clinic-based intervention that incorporates opt-out syphilis testing into routine HIV laboratory evaluation for this population. Trial objectives are to determine the degree to which the intervention (1) increases the detection rate of untreated syphilis, (2) increases screening coverage, (3) increases screening frequency, and (4) reaches men at highest risk according to sexual behaviors. Methods/design: The trial is a pragmatic, stepped wedge cluster-randomized controlled trial that introduces the intervention stepwise across four urban HIV clinics in Ontario, Canada. The intervention includes standing orders for syphilis serological testing whenever a male in HIV care undergoes HIV viral load testing, which typically occurs every 3-6 months. The control condition is the maintenance of current, provider-initiated syphilis testing practice. Approximately 3100 HIV-positive men will be followed over 30 months. Test results will be obtained from the centralized provincial laboratory in Ontario and will be supplemented by a standardized clinical worksheet and medical chart review at the clinics. Detailed clinical, psychosocial, and behavioral data is available for a subset of men receiving HIV care who are also participants of the province-wide Ontario HIV Treatment Network Cohort Study. Process evaluation plans include audit and feedback of compliance of the participating centers to identify potential barriers to the introduction of this type of practice into routine care. Health economic components include evaluation of the impact and cost-effectiveness of the intervention. Discussion: This trial will be the first of its kind in Canada and will provide evidence regarding the feasibility, clinical effectiveness, and cost-effectiveness of a clinic-based intervention to improve syphilis screening among HIV-positive men. Involvement of knowledge users in all stages of trial design, conduct, and analysis will facilitate scale-up should the intervention be effective. Trial registration: ClinicalTrials.gov NCT02019043.
    Preview · Article · Jan 2016 · Implementation Science
  • Source
    ABSTRACT: Background: Policy has a tremendous potential to improve population health when informed by research evidence. Such evidence, however, typically plays a suboptimal role in policymaking processes. The field of policy dissemination and implementation research (policy D&I) exists to address this challenge. The purpose of this study was to: (1) determine the extent to which policy D&I was funded by the National Institutes of Health (NIH), (2) identify trends in NIH-funded policy D&I, and (3) describe characteristics of NIH-funded policy D&I projects. Methods: The NIH Research Portfolio Online Reporting Tool was used to identify all projects funded through D&I-focused funding announcements. We screened for policy D&I projects by searching project title, abstract, and term fields for mentions of "policy," "policies," "law," "legal," "legislation," "ordinance," "statute," "regulation," "regulatory," "code," or "rule." A project was classified as policy D&I if it explicitly proposed to conduct research about the content of a policy, the process through which it was developed, or outcomes it produced. A coding guide was iteratively developed, and all projects were independently coded by two researchers. ClinicalTrials.gov and PubMed were used to obtain additional project information and validate coding decisions. Descriptive statistics, stratified by funding mechanism, Institute, and project characteristics, were produced. Results: Between 2007 and 2014, 146 projects were funded through the D&I funding announcements, 12 (8.2 %) of which were policy D&I. Policy D&I funding totaled $16,177,250, equivalent to 10.5 % of all funding through the D&I funding announcements. The proportion of funding for policy D&I projects ranged from 14.6 % in 2007 to 8.0 % in 2012. Policy D&I projects were primarily focused on policy outcomes (66.7 %), implementation (41.7 %), state-level policies (41.7 %), and policies within the USA (83.3 %). Tobacco (33.3 %) and cancer (25.0 %) control were the primary topics of focus. Many projects combined survey (58.3 %) and interview (33.3 %) methods with analysis of archival data sources. Conclusions: NIH has made an initial investment in policy D&I research, but the level of support has varied between Institutes. Policy D&I researchers have utilized a variety of designs, methods, and data sources to investigate the development processes, content, and outcomes of public and private policies. [An illustrative keyword-screening sketch appears after this publication list.]
    Preview · Article · Jan 2016 · Implementation Science
  • Source
    ABSTRACT: In their article on "Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices," Prasad and Ioannidis (IS 9:1, 2014) referred to extra-scientific "entrenched practices and other biases" that hinder evidence-based de-implementation. Using the case example of the de-implementation of radical mastectomy, we disaggregated "entrenched practices and other biases" and analyzed the historical, economic, professional, and social forces that presented resistance to de-implementation. We found that these extra-scientific factors operated to sustain a commitment to radical mastectomy, even after the evidence slated the procedure for de-implementation, because the factors holding radical mastectomy in place were beyond the control of individual clinicians. We propose to expand de-implementation theory through the inclusion of extra-scientific factors. If the outcome to which we aim is appropriate and timely de-implementation, social scientific analysis will illuminate the context within which the healthcare practitioner practices and, in doing so, facilitate de-implementation by pointing to avenues that lead to systems change. The implications of our analysis lead us to contend that intervening in the broader context in which clinicians work (the social, political, and economic realms), rather than focusing on healthcare professionals' behavior, may indeed be a fruitful approach to effect change.
    Full-text · Article · Dec 2015 · Implementation Science
  • Source
    ABSTRACT: Background: Scaling up complex health interventions to large populations is not a straightforward task. Without intentional, guided efforts to scale up, it can take many years for a new evidence-based intervention to be broadly implemented. For the past decade, researchers and implementers have developed models of scale-up that move beyond earlier paradigms that assumed ideas and practices would successfully spread through a combination of publication, policy, training, and example. Drawing from the previously reported frameworks for scaling up health interventions and our experience in the USA and abroad, we describe a framework for taking health interventions to full scale, and we use two large-scale improvement initiatives in Africa to illustrate the framework in action. We first identified other scale-up approaches for comparison and analysis of common constructs by searching for systematic reviews of scale-up in health care, reviewing those bibliographies, speaking with experts, and reviewing common research databases (PubMed, Google Scholar) for papers in English from peer-reviewed and “gray” sources that discussed models, frameworks, or theories for scale-up from 2000 to 2014. We then analyzed the results of this external review in the context of the models and frameworks developed over the past 20 years by Associates in Process Improvement (API) and the Institute for Healthcare Improvement (IHI). Finally, we reflected on two national-scale improvement initiatives that IHI had undertaken in Ghana and South Africa that were testing grounds for early iterations of the framework presented in this paper. Results: The framework describes three core components: a sequence of activities that are required to get a program of work to full scale, the mechanisms that are required to facilitate the adoption of interventions, and the underlying factors and support systems required for successful scale-up. The four steps in the sequence include (1) Set-up, which prepares the ground for introduction and testing of the intervention that will be taken to full scale; (2) Develop the Scalable Unit, which is an early testing phase; (3) Test of Scale-up, which then tests the intervention in a variety of settings that are likely to represent different contexts that will be encountered at full scale; and (4) Go to Full Scale, which unfolds rapidly to enable a larger number of sites or divisions to adopt and/or replicate the intervention. Conclusions: Our framework echoes, amplifies, and systematizes the three dominant themes that occur to varying extents in a number of existing scale-up frameworks. We call out the crucial importance of defining a scalable unit of organization. If a scalable unit can be defined, and successful results achieved by implementing an intervention in this unit without major addition of resources, it is more likely that the intervention can be fully and rapidly scaled. When tying this framework to quality improvement (QI) methods, we describe a range of methodological options that can be applied to each of the four steps in the framework’s sequence.
    Preview · Article · Dec 2015 · Implementation Science
  • Jo Rycroft-Malone · Christopher R Burton · Joyce Wilkinson · Gill Harvey · Brendan McCormack · Richard Baker · Sue Dopson · Ian D. Graham · Sophie Staniszewska · Carl Thompson · Steven Ariss · Lucy Melville-Richards · Lynne Williams
    ABSTRACT: Background: Increasingly, it is being suggested that translational gaps might be eradicated or narrowed by bringing research users and producers closer together, a theory that is largely untested. This paper reports a national study to fill a gap in the evidence about the conditions, processes and outcomes related to collaboration and implementation. Methods: A longitudinal realist evaluation using multiple qualitative methods case studies was conducted with three Collaborations for Leadership in Applied Health Research and Care (England). Data were collected over four rounds of theory development, refinement and testing. Over 200 participants were involved in semi-structured interviews, non-participant observations of events and meetings, and stakeholder engagement. A combined inductive and deductive data analysis process was focused on proposition refinement and testing iteratively over data collection rounds. Results: The quality of existing relationships between higher education and local health service, and views about whether implementation was a collaborative act, created a path dependency. Where implementation was perceived to be removed from service and there was a lack of organisational connections, this resulted in a focus on knowledge production and transfer, rather than co-production. The collaborations’ architectures were counterproductive because they did not facilitate connectivity and had emphasised professional and epistemic boundaries. More distributed leadership was associated with greater potential for engagement. The creation of boundary spanning roles was the most visible investment in implementation, and credible individuals in these roles resulted in cross-boundary work, in facilitation and in direct impacts. The academic-practice divide played out strongly as a context for motivation to engage, in that ‘what’s in it for me’ resulted in variable levels of engagement along a co-operation-collaboration continuum. Learning within and across collaborations was patchy depending on attention to evaluation. Conclusions: These collaborations did not emerge from a vacuum, and they needed time to learn and develop. Their life cycle started with their position on collaboration, knowledge and implementation. More impactful attempts at collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.
    No preview · Article · Dec 2015 · Implementation Science
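
The NIH policy D&I study abstracted above screened project titles, abstracts, and term fields for policy-related keywords before two researchers independently coded each candidate project. As a rough illustration of that kind of keyword screen (a hypothetical sketch, not the study authors' code; the field names and sample records below are invented), a minimal Python version might look like this:

import re

# Keyword list as reported in the abstract above.
POLICY_KEYWORDS = [
    "policy", "policies", "law", "legal", "legislation",
    "ordinance", "statute", "regulation", "regulatory", "code", "rule",
]

# Whole-word, case-insensitive patterns so that, e.g., "rule" does not match "ruler".
PATTERNS = [re.compile(rf"\b{re.escape(kw)}\b", re.IGNORECASE) for kw in POLICY_KEYWORDS]

def is_policy_candidate(project: dict) -> bool:
    """Return True if any screened field mentions a policy-related keyword."""
    text = " ".join(project.get(field, "") for field in ("title", "abstract", "terms"))
    return any(pattern.search(text) for pattern in PATTERNS)

if __name__ == "__main__":
    # Invented example records standing in for project entries.
    sample_projects = [
        {"title": "Dissemination of smoke-free policies in rural counties",
         "abstract": "We study adoption of local tobacco control ordinances.",
         "terms": "tobacco; policy; implementation"},
        {"title": "Team training to improve surgical checklist use",
         "abstract": "A hospital-level implementation trial.",
         "terms": "surgery; checklist"},
    ]
    for project in sample_projects:
        label = "candidate" if is_policy_candidate(project) else "not a candidate"
        print(f"{project['title']!r}: {label}")

In the study itself, a keyword match was only a first pass: a project was classified as policy D&I only if it explicitly proposed research on a policy's content, development process, or outcomes, and all projects were coded independently by two researchers.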