August 2003, Vol 93, No. 8 | American Journal of Public Health Glasgow et al. | Peer Reviewed | Public Health Matters | 1261
PUBLIC HEALTH MATTERS
Why Don't We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition

| Russell E. Glasgow, PhD, Edward Lichtenstein, PhD, and Alfred C. Marcus, PhD

The gap between research and practice is well documented. We address one of the underlying reasons for this gap: the assumption that effectiveness research naturally and logically follows from successful efficacy research. These 2 research traditions have evolved different methods and values; consequently, there are inherent differences between the characteristics of a successful efficacy intervention and those of an effectiveness one. Moderating factors that limit robustness across settings, populations, and intervention staff need to be addressed in efficacy studies, as well as in effectiveness trials. Greater attention needs to be paid to documenting intervention reach, adoption, implementation, and maintenance. Recommendations are offered to help close the gap between efficacy and effectiveness research and to guide evaluation and possible adoption of new programs. (Am J Public Health. 2003;93:1261–1267)

Despite a growing literature documenting prevention and health promotion interventions that have proven successful in well-controlled research, few of these interventions are consistently implemented in applied settings. This is true across preventive counseling services for numerous target behaviors, including tobacco use, dietary change, physical activity, and behavioral health issues (e.g., alcohol use, depression). Several recent reviews and meta-analyses have documented this gap,1,2 and the task forces on both clinical preventive services and community preventive services have noted that in several areas there is insufficient applied evidence available to make recommendations at present.3–5 Most of the Healthy People 2000 objectives6 were not met, and the even more ambitious goals in Healthy People 2010 are similarly unlikely to be met without significant changes in the status quo.7,8 To meet these challenges, we will need substantially more demonstrations of how to effectively implement recommendations in typical settings and in locations serving minority, low-income, and rural populations facing health disparities.

This situation is not unique to preventive interventions, as strikingly documented in the recent Institute of Medicine report Crossing the Quality Chasm,9 which summarizes the similar state of affairs regarding many medical and disease management interventions. For example, there is increasing consensus on evidence-based diabetes management practices to prevent complications and on the importance and cost-effectiveness of these practices.10 However, these recommendations, especially those related to lifestyle counseling and behavioral issues, are poorly implemented in practice.11–14

This gap between research and practice is the result of several interacting factors, including practitioners' limited time and resources, insufficient training,15 lack of feedback and incentives for use of evidence-based practices, and inadequate infrastructure and systems organization to support translation.8,16 In this article, we focus on another reason for the slow and incomplete translation of research findings into practice: the logic and assumptions behind the design of efficacy and effectiveness research trials.

EFFICACY AND EFFECTIVENESS

Many of the methods used in current prevention science are based on 2 influential papers published in the 1980s: Greenwald and Cullen's17 description of the phases of cancer control research and Flay's analysis of efficacy and effectiveness research.18 Both papers argued for a logical progression of research designs through which promising intervention ideas should proceed. These papers had many positive effects in helping to establish prevention research and in enhancing its acceptability among other disciplines. However, they may also have had an important and inadvertent negative consequence that derives from the assumption that the best candidates for effectiveness studies (and later dissemination) are interventions that prove successful in certain types of efficacy research. We argue that this assumption, or at least the way in which it has been operationalized over the past 15 years, has often led to interventions that have a low probability of success in real-world settings.

To understand this point, it is necessary first to briefly review the seminal papers by Flay18 and Greenwald and Cullen.17 Efficacy trials are defined by Flay as a test of whether a "program does more good than harm when delivered under optimum conditions."18(p451) Efficacy trials are characterized by strong control, in that a standardized program is delivered in a uniform fashion to a specific, often narrowly defined, homogeneous target audience. Owing to the strict standardization of efficacy trials, any positive (or negative) effect can be directly attributed to the intervention being studied.

Effectiveness trials are defined as a test of whether a "program does more good than harm when delivered under real-world conditions."18(p451) They typically standardize availability and access among a defined population while allowing implementation and levels of participation to vary on the basis of real-world conditions. The primary goal of an effectiveness trial is to determine whether an intervention works among a broadly defined population. Effectiveness trials that result in no change may reflect a lack of proper implementation or weak acceptance of, or adherence to, the intervention by participants.18,19

Greenwald and Cullen17 proposed 5 phases of intervention research presumed to unfold in
1. Clarke GN. Improving the transition from basic efficacy research to effectiveness studies: methodological issues and procedures. J Consult Clin Psychol. 1995;63:
2. Weisz JR, Weisz B, Donenberg GR. The lab versus the clinic: effects of child and adolescent psychotherapy. Am Psychol. 1992;47:1578–1585.
3. Briss PA, Zaza S, Papaioanou M, et al. Developing an evidence-based Guide to Community Preventive Services—methods. Prev Med. 2000;18(suppl 1):
4. Centers for Disease Control and Prevention. The Guide to Community Preventive Services. 2002. Available at: http://www.thecommunityguide.org. Accessed March 11, 2003.
5. Whitlock EP, Orleans CT, Pender N, Allan J. Evaluating primary care behavioral counseling interventions: an evidence-based approach. Am J Prev Med.
6. Department of Health and Human Services. Healthy People 2000. 2002. Available at: http://www. htm. Accessed March 11, 2003.
7. Smedley BD, Syme SL. Promoting health: intervention strategies from social and behavioral research. Am J Health Promot. 2001;15:149–166.
8. Integration of Health Behavior Counseling Into Routine Medical Care. Washington, DC: Center for the Advancement of Health; 2001.
9. Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy
10. Joyner L, McNeeley S, Kahn R. ADA's provider recognition program. HMO Pract. 1997;11:168–170.
11. Glasgow RE, Strycker LA. Level of preventive practices for diabetes management: patient, physician, and office correlates in two primary care samples. Am J Prev Med. 2000;19:9–14.
12. Health Behavior Change in Managed Care: A Status
Report. Washington, DC: Center for the Advancement
of Health; 2000.
13. Kottke TE, Edwards BS, Hagen PT. Counseling:
implementing our knowledge in a hurried and complex
world. Am J Prev Med. 1999;17:295–298.
14. Woolf SH, Atkins D. The evolving role of prevention in health care: contributions of the US Preventive Services Task Force. Am J Prev Med. 2001;20:13–20.
15. Orlandi MA. Promoting health and preventing dis-
ease in health care settings: an analysis of barriers.
Prev Med. 1987;16:119–130.
16. Green LW. From research to "best practices" in other settings and populations. Am J Health Behav.
17. Greenwald P, Cullen JW. The new emphasis in cancer control. J Natl Cancer Inst. 1985;74:543–551.
18. Flay BR. Efficacy and effectiveness trials (and
other phases of research) in the development of health
promotion programs. Prev Med. 1986;15:451–474.
19. Basch CE, Sliepcevich EM, Gold RS. Avoiding
type III errors in health education program evaluations.
Health Educ Q. 1985;12:315–331.
20. King AC. The coming of age of behavioral re-
search in physical activity. Ann Behav Med. 2001;23:
21. Glasgow RE, Bull SS, Gillette C, Klesges LM, Dze-
waltowski DA. Behavior change intervention research
in health care settings: a review of recent reports with
emphasis on external validity. Am J Prev Med. 2002;
22. Oldenburg B, Ffrench BF, Sallis JF. Health behav-
ior research: the quality of the evidence base. Am J
Health Promot. 2000;14:253–257.
23. Hiatt RA, Rimer BK. A new strategy for cancer
control research. Cancer Epidemiol Biomarkers Prev.
24. Kerner JF. Closing the Gap Between Discovery and
Delivery. Washington, DC: National Cancer Institute;
25. Teutsch SM. A framework for assessing the effec-
tiveness of disease and injury prevention. MMWR
Recomm Rep. 1992;41(RR-3):1–12.
26. Glasgow RE, Vogt TM, Boles SM. Evaluating the
public health impact of health promotion interventions:
the RE-AIM framework. Am J Public Health. 1999;89:
27. Glasgow RE, McKay HG, Piette JD, Reynolds KD.
The RE-AIM framework for evaluating interventions:
what can it tell us about approaches to chronic illness
management? Patient Educ Couns. 2001;44:119–127.
28. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull
SS, Estabrooks P. The future of health behavior change
research: what is needed to improve translation of re-
search into health promotion practice? Ann Behav Med.
29. Estabrooks PA, Dzewaltowski DA, Glasgow RE, Klesges LM. How well has recent literature reported on important issues related to translating school-based health promotion research into practice? J School Health.
30. Rogers EM. Diffusion of Innovations. 4th ed. New
York, NY: Free Press; 1995.
31. Mook DG. In defense of external invalidity. Am Psychol.
32. Axelrod R, Cohen MD. Harnessing Complexity: Or-
ganizational Implications of a Scientific Frontier. New
York, NY: Simon & Schuster; 2000.
33. Biglan A, Glasgow RE, Singer G. The need for a
science of larger social units: a contextual approach.
Behav Ther. 1990;21:195–215.
34. Gleser GC, Cronbach LJ, Rajaratnam N. Generaliz-
ability of scores influenced by multiple sources of vari-
ance. Psychometrika. 1965;30:1373–1385.
35. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, Mass: Houghton Mifflin;
36. Brunswik E. Representative design and probabilis-
tic theory in functional psychology. Psychol Rev. 1955;
37. Murray DM. Statistical models appropriate for de-
signs often used in group-randomized trials. Stat Med.
38. Cook TD, Campbell DT. Quasi-Experimentation:
Design and Analysis Issues for Field Settings. Chicago,
Ill: Rand McNally; 1979.
39. Brewer MB. Research design and issues of valid-
ity. In: Reis HT, Judd CM, eds. Handbook of Research
Methods in Social and Personality Psychology. New York,
NY: Cambridge University Press; 2000:3–39.
40. Oldenburg BF, Sallis JF, Ffrench ML, Owen N.
Health promotion research and the diffusion and insti-
tutionalization of interventions. Health Educ Res. 1999;
41. Skinner CS, Campbell MK, Rimer BK, Curry S,
Prochaska JO. How effective is tailored print communi-
cation? Ann Behav Med. 1999;21:290–298.
42. Kreuter MW, Strecher VJ, Glassman B. One size
does not fit all: the case for tailoring print materials.
Ann Behav Med. 1999;21:276–283.
43. Glasgow RE, Toobert DJ, Hampson SE, Strycker
LA. Implementation, generalization, and long-term re-
sults of the “Choosing Well” diabetes self-management
intervention. Patient Educ Couns. 2002;48:115–122.
44. Abrams DB, Emmons KM, Linnan L, Biener L.
Smoking cessation at the workplace: conceptual and
practical considerations. In: Richmond R, ed. Interven-
tions for Smokers: An International Perspective. New
York, NY: Williams & Wilkins; 1994:137–169.
45. Prochaska JO, Velicer WF, Fava JL, Rossi JS, Tsoh
JY. Evaluating a population-based recruitment approach
and a stage-based expert system intervention for smok-
ing cessation. Addict Behav. 2001;26:583–602.
46. Jeffery RW. Risk behaviors and health: contrasting
individual and population perspectives. Am Psychol.
47. Lichtenstein E, Glasgow RE. A pragmatic frame-
work for smoking cessation: implications for clinical
and public health programs. Psychol Addict Behav.
48. Elbourne DR, Campbell MK. Extending the
CONSORT statement to cluster randomized trials:
for discussion. Stat Med. 2001;20:489–496.
49. Kolbe LJ. Increasing the impact of school health
promotion programs: emerging research perspectives.
Health Educ. 1986;17:49–52.
50. Moher D, Schulz KF, Altman D. The CONSORT
statement: revised recommendations for improving the
quality of reports. JAMA. 2001;285:1987–1991.
51. Zaza S, Lawrence RS, Mahan CS, Fullilove M, et
al. Scope and organization of the Guide to Community
Preventive Services. Task Force on Community Preven-
tive Services. Am J Prev Med. 2000;18(suppl 1):27–34.
52. Bull SS, Gillette C, Glasgow RE, Estabrooks P. Worksite health promotion research: to what extent can we generalize the results and what is needed to translate research to practice? Health Educ Behav. In press.
53. Davidson K, Goldstein M, Kaplan R, et al. Evi-
dence-based behavioral medicine: what is it and how
do we get there? Ann Behav Med. In press.
54. Green LW, Kreuter MW. Commentary on the
emerging Guide to Community Preventive Services
from a health promotion perspective. Am J Prev Med.
55. Institute of Medicine. Promoting Health: Interven-
tion Strategies From Social and Behavioral Research.
Washington, DC: National Academy Press; 2000.
56. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 3rd ed. Mountain View, Calif: Mayfield Publishing Co; 1999.