Why Don’t We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition

Kaiser Permanente Colorado, USA.
American Journal of Public Health (Impact Factor: 4.55). 09/2003; 93(8):1261-7. DOI: 10.2105/AJPH.93.8.1261
Source: PubMed


The gap between research and practice is well documented. We address one of the underlying reasons for this gap: the assumption that effectiveness research naturally and logically follows from successful efficacy research. These 2 research traditions have evolved different methods and values; consequently, there are inherent differences between the characteristics of a successful efficacy intervention versus those of an effectiveness one. Moderating factors that limit robustness across settings, populations, and intervention staff need to be addressed in efficacy studies, as well as in effectiveness trials. Greater attention needs to be paid to documenting intervention reach, adoption, implementation, and maintenance. Recommendations are offered to help close the gap between efficacy and effectiveness research and to guide evaluation and possible adoption of new programs.

    • "Enhancing guideline adherent mammography routines among these women may be important to address this disparity [3]. While a number of effective evidence-based interventions (EBIs) exist for addressing barriers to mammography screening, like other EBIs, their uptake and use in community settings have been limited [4] [5] [6] [7]. Reasons for lack of uptake include cancer planners' anticipation of a misfit between interventions tested in controlled efficacy trials and the needs of their settings [8– 10]. "
    ABSTRACT: Breast cancer mortality disparities continue, particularly for uninsured and minority women. A number of effective evidence-based interventions (EBIs) exist for addressing barriers to mammography screening; however, their uptake and use in community settings have been limited. Few cancer-specific studies have evaluated adapted EBIs in new contexts, and fewer still have considered implementation. This study sought to (1) evaluate the effectiveness of an adapted mammography EBI in improving appointment keeping among African American women and (2) describe processes of implementation in a new practice setting. We used a type 1 hybrid design to test effectiveness and implementation within a quasi-experimental design. Logistic regression and intent-to-treat analysis were used to evaluate mammography appointment attendance. The no-show rate was 44% (comparison) versus 19% (intervention). The adjusted odds of a woman in the intervention group attending her appointment were 3.88 (p < 0.001). The adjusted odds of a woman attending her appointment in the intent-to-treat analysis were 2.31 (p < 0.05). Adapted EBI effectiveness was 3.88 (adjusted OR) versus 2.10 (OR) for the original program, indicating an enhanced program effect. A number of implementation barriers and facilitators were identified. Our findings support previous studies noting that sequentially measuring EBI efficacy and effectiveness, followed by implementation, may miss important contextual information.
    BioMed Research International 08/2015; DOI:10.1155/2015/240240 · 2.71 Impact Factor
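    For readers less familiar with the odds-ratio measure reported in the abstract above, the sketch below shows how a crude (unadjusted) odds ratio follows from the reported no-show rates (44% comparison, 19% intervention). Note this is only an illustration of the arithmetic: the study's 3.88 is an adjusted OR from logistic regression, so the crude value computed here differs from it, and the `odds_ratio` helper is our own, not from the study.

    ```python
    def odds_ratio(p_event_a: float, p_event_b: float) -> float:
        """Odds ratio comparing group A to group B for the same event.

        Odds = p / (1 - p); the OR is the ratio of the two groups' odds.
        """
        odds_a = p_event_a / (1 - p_event_a)
        odds_b = p_event_b / (1 - p_event_b)
        return odds_a / odds_b

    # Attendance probabilities implied by the reported no-show rates.
    attend_intervention = 1 - 0.19  # 81% attended in the intervention group
    attend_comparison = 1 - 0.44    # 56% attended in the comparison group

    crude_or = odds_ratio(attend_intervention, attend_comparison)
    print(round(crude_or, 2))  # crude OR of about 3.35, below the adjusted 3.88
    ```

    The gap between this crude value and the reported adjusted OR reflects the covariates controlled for in the study's regression model.
    
    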
    • "By definition, efficacy trials are those that test the impact of an intervention under optimum conditions and therefore tend to provide one type of setting with expert staff and resources for a defined length of time, and to limit reach to a homogenous population through the use of eligibility and exclusion criteria. Comparatively, effectiveness trials test the impact of an intervention under real-world conditions with participants from a broad population, and are therefore conducted in multiple settings, use existing resources and/or procedures, rely on regular staff to implement the intervention, and are intended to be maintained, assuming there are positive results (Glasgow et al., 2003). For the interventions reviewed herein, the majority of the efficacy trials had durations of less than a school year and they were more likely to measure PA outcomes with objective measures, whereas the effectiveness trials were more likely to be longer than a school year and most of them used self-report or observation techniques to record PA levels. "
    ABSTRACT: An identified limitation of existing reviews of physical activity (PA) interventions in school-aged youth is the lack of reporting on issues related to the translatability of the research into health promotion practice. This review used the RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation and Maintenance) framework to determine the extent to which intervention studies promoting PA in youth report on factors that inform generalizability across settings and populations. Methods and Results: A systematic search for controlled interventions conducted within the last ten years identified 50 studies that met the selection criteria. Based on RE-AIM criteria, most of these studies focused on statistically significant findings and internal validity rather than on issues of external validity. Because of this lack of information, it is difficult to determine whether reportedly successful interventions are feasible and sustainable in an uncontrolled, real-world setting. Areas requiring further research include costs associated with recruitment and implementation, adoption rates, and the representativeness of participants and settings. This review adds data to support recommendations that interventions promoting PA in youth should include assessment of adoption and implementation issues.
    Preventive Medicine 04/2015; 76. DOI:10.1016/j.ypmed.2015.04.006 · 3.09 Impact Factor
    • "Accordingly, Rychetnik et al. (Rychetnik et al., 2004) distinguish between three types of evidence in public health: evidence pointing to the fact that 'something should be done', to determine 'what should be done' and informing on 'how it should be done'. The attention for the latter kind of evidence has given rise to a research investigating the quality and the processes of implementation (Glasgow et al., 2003; Breitenstein et al., 2010; Rabin et al., 2010; Palinkas et al., 2011). Although there is no consensus with regard to the conceptual and methodological frameworks to be used to study implementation, various strategies have been proposed to enhance the quality of implementation. "

