RE-AIMing Research for Application: Ways to
Improve Evidence for Family Medicine
Russell E. Glasgow, PhD
Objective: To outline changes in clinical research design and measurement that should enhance the
relevance of research to family medicine.
Methods: Review of the traditional efficacy research paradigm and discussion of why this needs to be
expanded. Presentation of practical clinical and behavioral trials frameworks, and of the RE-AIM model
for planning, evaluating, and reporting studies.
Results: Recommended changes to improve the external validity and relevance of research to family
medicine include studying multiple clinical practices, realistic alternative program choices, heteroge-
neous and representative patients, and multiple outcomes including cost, behavior change of patients
and staff, generalization, and quality of life.
Conclusions: The methods and procedures discussed can help program planners, evaluators, and
readers of research articles to evaluate the replicability, consistency of effects, and likelihood of wide-
spread adoption of interventions. (J Am Board Fam Med 2006;19:11–9.)
Family medicine is by nature pragmatic and con-
textual. It deals with making practical decisions on
complex and multiple issues in ways that are con-
gruent with family values and situations.1,2 Like
other areas of medicine, it is also adopting evidence-based medicine as a key feature of its current
and future direction.1 Unfortunately, the evidence
available for family medicine often does not address
the above issues. Most available evidence comes
from studies that attempt to rule out threats to
validity by studying isolated issues and controlling
or standardizing contextual factors.3,4 This creates
a gap between the available evidence and the situations and context in which the evidence needs to be applied.

This article has 2 primary purposes. First, it
discusses how future primary care research might
be “Re-Aimed” to be more relevant and practical.
Second, it provides a series of questions that family
physicians can ask to determine the applicability of
research reports to their setting and to help plan
primary care programs that have broad impact.
Why Change and What Might Be Changed?
The gap between research and practice has been
extensively documented5,6 and is increasingly the
topic of meetings and initiatives.7–9 However, few
projects have addressed one of the fundamental
causes of the gap between research and practice:
Many family physicians and health system decision
makers do not see much of the available research
evidence as applicable to their setting. Specific issues concern the types of patients, settings, and
resources available (including time), and outcomes assessed.
Table 1 summarizes key differences between the
traditional “efficacy study” evidence most often
available, and the evidence from practical effective-
ness studies needed to help integrate research into
practice.4 There are both conceptual and philosophy of science differences, and methodological/
design differences between available efficacy stud-
ies and those that are needed to inform family
medicine. As shown, the traditional efficacy ap-
proach attempts to maximize internal validity by
isolating causes so that treatment or theoretical
mechanisms can be identified. In contrast, practical
effectiveness studies aim to identify widely applica-
ble, replicable programs that will work in a variety
of different contexts. For heuristic purposes, Table
1 presents efficacy and effectiveness studies as a
dichotomy, whereas in reality there is a continuum of research designs, and many studies have elements of both efficacy and effectiveness research. Because studies toward the efficacy end of the continuum predominate in the literature and form the basis for current practice recommendations, this paper focuses on how such designs might be changed. Not all studies need to be "complete" effectiveness studies, but movement in this direction would generally enhance the relevance of research data.

Submitted 12 July 2005; revised 24 August 2005; accepted 29 August 2005.
From Kaiser Permanente Colorado, Denver, CO.
This article is based on a presentation made at the 2005 Convocation of Practices, hosted by the American Academy of Family Physicians National Research Network and the Federation of Practice-based Research Networks, Colorado Springs, CO, March 2005.
Conflict of interest: none declared.
Corresponding author: Russell E. Glasgow, PhD, Kaiser Permanente Colorado, 335 Road Runner Lane, Penrose, CO 81240 (E-mail: email@example.com).
Design differences between efficacy and effec-
tiveness studies impact the inferences that can be
made from a given study. To maximize internal
validity and chances of finding a treatment effect,
efficacy studies tend to recruit homogeneous,
highly motivated patients to participate in highly
structured, intensive interventions that are con-
ducted in one or a few settings. In contrast, effec-
tiveness studies focus on heterogeneity and repre-
sentativeness of both patients and settings, and
emphasize interventions that are more flexible to
address unique issues.
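The effect of this design difference can be made concrete with a small simulation (a hypothetical sketch, not from the article; the panel fields, prevalence rates, and exclusion criteria are all invented for illustration). Applying typical efficacy-style exclusion criteria to a simulated primary care panel shows how small and unrepresentative the enrolled sample can become relative to the patients a practice actually sees.

```python
import random

random.seed(0)

# Hypothetical patient panel; fields and rates are illustrative only.
panel = [
    {
        "age": random.randint(18, 90),
        "comorbidities": random.randint(0, 4),
        "speaks_english": random.random() < 0.85,
    }
    for _ in range(10_000)
]

def eligible_efficacy(p):
    # Efficacy-style criteria: narrow age band, no comorbidity,
    # English-speaking (common exclusions, chosen here for illustration).
    return 40 <= p["age"] <= 65 and p["comorbidities"] == 0 and p["speaks_english"]

def eligible_effectiveness(p):
    # Practical-trial style: adults, minimal exclusions.
    return p["age"] >= 18

n_eff = sum(eligible_efficacy(p) for p in panel)
n_prac = sum(eligible_effectiveness(p) for p in panel)
print(f"Efficacy-style eligibility: {n_eff / len(panel):.0%} of the panel")
print(f"Practical-trial eligibility: {n_prac / len(panel):.0%} of the panel")
```

The point of the sketch is only that each added exclusion multiplies down the eligible fraction, so the complex, comorbid patients typical of family medicine are exactly the ones screened out.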
As Larry W. Green has said, "If we want more
evidence-based practice, then we need more practice-based evidence."10

Practical Clinical Trials
Tunis et al11 have proposed criteria for "practical
clinical trials" that should also be more relevant to
family medicine. There are 4 key characteristics of
practical trials. They study representative patients;
are conducted in multiple settings; use reasonable
alternative intervention choices as controls rather
than placebos or “usual care;” and report on out-
comes relevant to patients, clinicians, potential
adopters, and policy makers.11
Tunis et al11 recommend having diverse sam-
ples of both patients and clinical settings. In par-
ticular, they recommend using few exclusion crite-
ria so that the complex, comorbid cases seen in
primary care are included. Their recommendations
for inclusion of community practice settings are
very compatible with practice-based research net-
work research approaches within primary care.12
Inclusion of a variety of different settings also per-
mits investigation of variations in both processes
and outcomes of care. In particular, it is recommended that studies include practices in small, rural, mixed-payer, and safety-net settings as well as
those that are part of larger health systems.
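One payoff of enrolling multiple, varied settings is that between-practice variation in outcomes becomes observable at all. A minimal sketch (with invented practice names and outcome data, purely for illustration) of summarizing that variation:

```python
from statistics import mean, pstdev

# Hypothetical per-patient outcomes (1 = target outcome achieved) from
# three contrasting practice settings; all values are invented.
outcomes = {
    "rural_clinic": [1, 0, 1, 1, 0, 1, 0, 1],
    "safety_net": [0, 0, 1, 0, 1, 0, 0, 1],
    "large_system": [1, 1, 1, 0, 1, 1, 1, 0],
}

# Per-practice success rates, then the spread across practices.
rates = {site: mean(vals) for site, vals in outcomes.items()}
overall = mean(rates.values())
spread = pstdev(rates.values())

for site, rate in sorted(rates.items()):
    print(f"{site}: {rate:.0%} achieved the target outcome")
print(f"Mean across practices: {overall:.0%} (SD of practice rates: {spread:.2f})")
```

A single-site efficacy trial would report only one of these rates; the across-practice spread, which is what a potential adopter most needs to judge, simply cannot be estimated from it.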
The third characteristic of practical clinical trials
is that they use realistic alternative treatment com-
parisons, not just no treatment, placebo, or "usual
care."11,13 The rationale for this is that clinicians
and policy makers need to make decisions among
alternative interventions, and including direct com-
parisons provides more valuable information on
intervention strengths and weaknesses than does
just knowing that a number of alternative treat-
ments are each better than placebo or no treatment.
Tunis et al11 stress that it is important to collect
multiple outcomes. Family medicine investigations
could accelerate translation if more studies would
collect the types of measures discussed below. My
colleagues and I have proposed a comprehensive,
yet feasible, set of measures summarized in Table
2.14,15 The first 4 measures listed can be collected
without adding any burden to patients. Contextual
factors and moderating variables are important de-
Table 1. Purpose, Intent, and Elements of Traditional Efficacy Studies and Practical Effectiveness Trials

I. Purpose and intent
  Traditional efficacy study: isolate treatment mechanism; unique approach
  Practical effectiveness trial: understand program in context; inform practice and policy

II. Study elements
  Settings: homogeneous, single or few, under control of researchers (efficacy) versus heterogeneous, multiple, to evaluate generalization across settings (effectiveness)
  Intervention: intensive, highly structured, complex (efficacy) versus less intensive, moderate structure, and flexible to address unique issues (effectiveness)
  Intervention staff: research staff or highly trained experts (efficacy) versus regular staff in representative settings (effectiveness)
and other health care interventions. Thousand Oaks
(CA): Sage Publications; 1996.
25. Kaplan RM. The significance of quality of life in
health care. Qual Life Res 2003;12(Suppl 1):3–16.
26. Lorig KR, Sobel DS, Stewart AL, et al. Evidence sug-
gesting that a chronic disease self-management pro-
gram can improve health status while reducing hospi-
talization: a randomized trial. Med Care 1999;37:5–14.
27. Bech P, Olsen LR, Kjoller M, Rasmussen NK. Mea-
suring well-being rather than the absence of distress
symptoms: a comparison of the SF-36 mental health
subscale and the WHO–Five Well-Being Scale. Int J
Methods Psychiatr Res 2003;12:85–91.
28. Centers for Disease Control and Prevention. CDC
Healthy Days Website. Available at: http://www.cdc.
29. Leape LL, Berwick DM. Five years after To Err is
Human: What have we learned? JAMA 2005;18:
30. Stange KC, Woolf SH, Gjeltema K. One minute for
prevention: the power of leveraging to fulfill the
promise of health behavior counseling. Am J Prev
31. Goldstein MG, Whitlock EP, DePue J. Multiple
health risk behavior interventions in primary care:
summary of research evidence. Am J Prev Med 2004;
32. National Committee for Quality Assurance (NCQA). Health plan employer data and informa-
tion set 3.0. Washington (DC): National Committee
for Quality Assurance; 1996.
33. Green LW, Kreuter MW. Health promotion plan-
ning: an educational and ecological approach. 4th
Ed. Mountain View (CA): Mayfield Publishing
34. Rotheram-Borus MJ, Flannery ND. Interventions
that are CURRES: Cost-effective, useful, realistic,
robust, evolving, and sustainable. In: Rehmschmidt
H, Belfer M, Goodyear I, Editors. Facilitating path-
ways: care, treatment, and prevention in child and ad-
olescent health. New York: Springer; 2004. p. 235–44.
35. Rogers EM. Diffusion of innovations. 5th Ed. New
York: Free Press; 2003.
36. Glasgow RE, Vogt TM, Boles SM. Evaluating the
public health impact of health promotion interven-
tions: the RE-AIM framework. Am J Public Health
37. Glasgow RE. Evaluation of theory-based interven-
tions: the RE-AIM model. In: Glanz K, Lewis FM,
Rimer BK, editors. Health behavior and health ed-
ucation. 3rd Ed. San Francisco: John Wiley & Sons;
2002. p. 531–44.
38. Glasgow RE, McKay HG, Piette JD, Reynolds KD.
The RE-AIM framework for evaluating interventions:
what can it tell us about approaches to chronic illness
management? Patient Educ Couns 2001;44:119–27.
39. Prochaska JO, Velicer WF, Fava JL, Rossi JS, Tsoh
JY. Evaluating a population-based recruitment approach and a stage-based expert system intervention
for smoking cessation. Add Behav 2001;26:583–602.
40. Abrams DB, Orleans CT, Niaura RS, Goldstein
MG, Prochaska JO, Velicer W. Integrating individ-
ual and public health perspectives for treatment of
tobacco dependence under managed health care: a
combined stepped care and matching model. Ann
Behav Med 1996;18:290–304.
41. Glasgow RE, Klesges LM, Dzewaltowski DA, Es-
tabrooks PA, Vogt TM. Evaluating the overall im-
pact of health promotion programs: using the RE-
AIM framework for decision making and to consider
complex issues. Health Educ Res. In press, 2006.
42. Institute of Medicine. Unequal treatment: confront-
ing racial and ethnic disparities in health care. Wash-
ington (DC): National Academies Press; 2003.
43. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull
SS, Estabrooks P. The future of health behavior
change research: what is needed to improve transla-
tion of research into health promotion practice? Ann
Behav Med 2004;27:3–12.
44. Rothman KJ, Greenland S, editors. Modern epide-
miology. 2nd Ed. Philadelphia (PA): Lippincott Williams & Wilkins; 1998.
45. Glasgow RE, Whitlock EP, Eakin EG, Lichtenstein
E. A brief smoking cessation intervention for women
in low-income Planned Parenthood Clinics. Am J
Public Health 2000;90:786–9.
46. Glasgow RE, Nutting PA, King DK, et al. A prac-
tical randomized trial to improve diabetes care.
J Gen Intern Med 2004;19:1167–74.
47. Will JC, Farris RP, Sanders CG, Stockmyer CK, Finkel-
stein EA. Health promotion interventions for disadvan-
taged women: overview of the WISEWOMAN projects.
J Womens Health 2004;13:484–502.
48. Crabtree BF, Miller WL, Stange KC. Understand-
ing practice from the ground up. J Fam Pract 2001;
49. Stange KC, Goodwin MR, Zyzanski SJ, Dietrich AJ.
Sustainability of a practice-individualized preventive
service delivery intervention. Am J Prev Med 2003;
50. Miller WL, McDaniel RRJ, Crabtree BF, Stange
KC. Practice jazz: understanding variation in family
practices using complexity science. J Fam Pract
51. Glasgow RE, Bull SS, Piette JD, Steiner J. Interac-
tive behavior change technology: a partial solution to
the competing demands of primary care. Am J Prev
52. Glasgow RE, Toobert DJ, Barrera M, Jr., Strycker
LA. The Chronic Illness Resources Survey: cross-
validation and sensitivity to intervention. Health
Educ Res 2004;20:402–9.
53. Riley KM, Glasgow RE, Eakin EG. Resources for
health: a social-ecological intervention for support-
ing self-management of chronic conditions. J Health