Implementation Science Journal Impact Factor & Information

Publisher: BioMed Central Ltd, BioMed Central

Journal description

Implementation Science is an open access, peer-reviewed online journal that aims to publish research relevant to the scientific study of methods to promote the uptake of research findings into routine healthcare in both clinical and policy contexts. Biomedical research constantly produces new findings, but these are often not routinely translated into healthcare practice. Implementation research is the scientific study of methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence to improve the quality and effectiveness of health care. It includes the study of influences on healthcare professional and organisational behaviour.

Current impact factor: 4.122

Impact Factor Rankings

2015 Impact Factor Available summer 2016
2014 Impact Factor 4.122
2013 Impact Factor 3.47
2012 Impact Factor 2.372
2011 Impact Factor 3.1
2010 Impact Factor 2.514
2009 Impact Factor 2.485

Impact factor over time

[Chart of the yearly impact factor values listed above]
Additional details

5-year impact 4.62
Cited half-life 3.90
Immediacy index 0.44
Eigenfactor 0.01
Article influence 1.45
Website Implementation Science website
Other titles IS
ISSN 1748-5908
OCLC 65431651
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

BioMed Central

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Publisher's version/PDF may be used
    • Eligible UK authors may deposit in OpenDepot
    • Creative Commons Attribution License
    • Copy of License must accompany any deposit.
    • All titles are open access journals
    • 'BioMed Central' is an imprint of 'Springer Verlag (Germany)'

Publications in this journal

  • Source
    ABSTRACT: In their article on "Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices," Prasad and Ioannidis (IS 9:1, 2014) referred to extra-scientific "entrenched practices and other biases" that hinder evidence-based de-implementation. Using the case example of the de-implementation of radical mastectomy, we disaggregated "entrenched practices and other biases" and analyzed the historical, economic, professional, and social forces that resisted de-implementation. We found that these extra-scientific factors operated to sustain a commitment to radical mastectomy even after the evidence had slated the procedure for de-implementation, because the factors holding radical mastectomy in place were beyond the control of individual clinicians. We propose to expand de-implementation theory through the inclusion of extra-scientific factors. If the outcome at which we aim is appropriate and timely de-implementation, social scientific analysis will illuminate the context within which the healthcare practitioner practices and, in doing so, facilitate de-implementation by pointing to avenues that lead to systems change. The implications of our analysis lead us to contend that intervening in the broader context in which clinicians work (the social, political, and economic realms), rather than focusing on healthcare professionals' behavior, may indeed be a fruitful approach to effecting change.
    Implementation Science 12/2015; 10(1). DOI:10.1186/s13012-015-0211-7
  • Source
    ABSTRACT: Evidence suggests that systematic reviews are used infrequently by physicians in clinical decision-making. One proposed solution is to create filtered resources so that information is validated and refined in order to be read quickly. Two shortened systematic review formats were developed to enhance their use in clinical decision-making. To prepare for a full-scale trial, we conducted a pilot study to test methods and procedures in order to refine the processes. A recruitment email was sent to physicians practicing full- or part-time in family medicine or general internal medicine. The pilot study took place in an online environment, and eligible physicians were randomized to one of the systematic review formats (shortened or full-length) and instructed to read the document. Participants were asked to provide the clinical bottom line and apply the information presented to a clinical scenario. Participants' answers were evaluated independently by two investigators against "gold standard" answers prepared by an expert panel. Fifty-six clinicians completed the pilot study within a 2-month period, with a response rate of 4.3 %. Agreement between investigators in assessing participants' answers was determined by calculating a kappa statistic. Two questions were assessed separately, and a kappa statistic was calculated at 1.00 (100 % agreement) for each. Agreement between investigators in assessing participants' answers is satisfactory. Although recruitment for the pilot study was completed in a reasonable time frame, response rates were low, so a full-scale trial will require contacting large numbers of physicians. The results indicate that conducting a full-scale trial is feasible. Trial registration: NCT02414360.
    Implementation Science 12/2015; 10(1):118. DOI:10.1186/s13012-015-0303-4
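The pilot study above reports inter-rater agreement between two investigators as a kappa statistic of 1.00. As an illustration only, a minimal Python sketch of Cohen's kappa for two raters follows; the ratings and category labels below are invented toy data, not the study's actual answers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters assigned categories independently.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    if expected == 1.0:
        # Both raters used a single identical category; kappa is
        # conventionally undefined, so treat as perfect agreement here.
        return 1.0
    return (observed - expected) / (1 - expected)

# Two hypothetical investigators scoring 10 answers as correct/incorrect
a = ["correct"] * 7 + ["incorrect"] * 3
b = ["correct"] * 6 + ["incorrect"] * 4
k = cohens_kappa(a, b)  # 9/10 observed agreement, 0.54 expected by chance
```

A kappa of 1.00, as reported in the study, means observed agreement was perfect; the toy data above gives a lower value because one item is scored differently.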
  • Source

    Implementation Science 12/2015; 10(1). DOI:10.1186/s13012-015-0336-8
  •
    ABSTRACT: Background: Practice facilitation has been associated with meaningful improvements in disease prevention and quality of patient care. Using practice facilitation, the Improved Delivery of Cardiovascular Care (IDOCC) project aimed to improve the delivery of evidence-based cardiovascular care in primary care practices across a large health region. Our goal was to evaluate IDOCC's impact on adherence to processes of care delivery. Methods: A pragmatic stepped wedge cluster randomized trial recruiting primary care providers in practices located in Eastern Ontario, Canada (NCT00574808). Participants were randomly assigned by region to one of three steps. Practice facilitators were intended to visit practices every 3-4 weeks (year 1, intensive) or every 6-12 weeks (year 2, sustainability) to support changes in practice behavior. The primary outcome was mean adherence to indicators of evidence-based care measured at the patient level. Adherence was assessed by chart review of a randomly selected cohort of 66 patients per practice in each pre-intervention year, as well as in year 1 and year 2 post-intervention. Results: Eighty-four practices (182 physicians) participated. On average, facilitators had 6.6 (min: 2, max: 11) face-to-face visits with practices in year 1 and 2.5 (min: 0, max: 10) visits in year 2. We collected chart data from 5292 patients. After adjustment for patient and provider characteristics, there was a 1.9 % (95 % confidence interval (CI): −2.9 to −0.9 %) and 4.2 % (95 % CI: −5.7 to −2.6 %) absolute decrease in mean adherence from baseline to the intensive and sustainability years, respectively. Conclusions: IDOCC did not improve adherence to best-practice guidelines. Our results showed a small, statistically significant decrease in mean adherence of questionable clinical significance. Potential reasons for this result include implementation challenges, competing priorities in practices, a broad focus on multiple chronic disease indicators, and use of an overall index of adherence. These results contrast with findings from previously reported facilitation trials and highlight the complexities and challenges of translating research findings into clinical practice. Trial registration: NCT00574808
    Implementation Science 12/2015; 10(1). DOI:10.1186/s13012-015-0341-y
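The IDOCC trial above uses a stepped wedge cluster randomized design: every cluster begins in the control condition and crosses over to the intervention at a randomly assigned step, so all clusters are eventually exposed. A minimal sketch of generating such a schedule follows; the cluster names, the three-step split, and the helper function are illustrative assumptions, not the trial's actual allocation procedure:

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=None):
    """Randomly assign clusters to crossover steps and return, per cluster,
    its 0/1 exposure status over n_steps + 1 measurement periods
    (period 0 is the all-control baseline)."""
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    # Split the shuffled clusters as evenly as possible into n_steps groups.
    groups = [shuffled[i::n_steps] for i in range(n_steps)]
    schedule = {}
    for step, group in enumerate(groups, start=1):
        for cluster in group:
            # 0 = control, 1 = intervention; this cluster crosses over
            # at period `step` and stays exposed thereafter.
            schedule[cluster] = [0] * step + [1] * (n_steps + 1 - step)
    return schedule

# Hypothetical example: six regions randomized to three steps
regions = [f"region{i}" for i in range(1, 7)]
sched = stepped_wedge_schedule(regions, 3, seed=42)
```

By construction every row of the schedule starts at 0 (control baseline) and ends at 1 (exposed), which is what lets each cluster serve as its own control.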
  •
    ABSTRACT: Background: Considerable racial and socio-economic disparities exist in breast cancer. In spite of the existence of numerous evidence-based interventions (EBIs) aimed at reducing breast cancer screening barriers among the underserved, there is a lack of uptake, or sub-optimal uptake, of EBIs in community and clinical settings. This study evaluates a theoretically based, systematically designed implementation strategy to support adoption and implementation of a patient navigation-based intervention, called the Peace of Mind Program (PMP), aimed at improving breast cancer screening among underserved women. Methods/design: The PMP will be offered to federally qualified health centers and charity clinics in the Greater Houston area using a non-randomized stepped wedge design. Due to the practical constraints of implementing and adopting in the real world, randomization of start times and blinding will not be used. Any potential confounding or bias will be controlled for in the analysis. Outcomes such as appointment adherence, patient referral to diagnostics, time to diagnostic referral, patient referral to treatment, time to treatment referral, and budget impact of the intervention will be assessed. Constructs from the consolidated framework for implementation research (CFIR) will be assessed during implementation and at the end of the study (sustainment) at each participating clinic. Data will be analyzed using descriptive statistics (chi-square tests) and generalized estimating equations (GEE). Discussion: While parallel group randomized controlled trials (RCTs) are considered the gold standard for evaluating EBI efficacy, withholding an effective EBI in practice can be unethical and/or impractical. The stepped wedge design addresses this issue by enabling all clinics to eventually receive the EBI during the study and allowing each clinic to serve as its own control, while maintaining strong internal validity. We expect that the PMP will prove to be a feasible and successful strategy for reducing appointment no-shows in underserved women. Trial registration: NCT02296177
    Implementation Science 12/2015; 10(1). DOI:10.1186/s13012-015-0334-x