Review Article

Implementation Science: A Brief Overview and a Look Ahead

Terje Ogden¹ and Dean L. Fixsen²

¹ The Norwegian Center for Child Behavioral Development, Oslo, Norway
² University of North Carolina at Chapel Hill, NC, USA

Zeitschrift für Psychologie 2014, Vol. 222(1), 4–11. DOI: 10.1027/2151-2604/a000160
Abstract. The field of implementation research is remarkable in many ways and, even as a young discipline, it has expanded well beyond the
expectations of even its most optimistic supporters and stakeholders. In this overview we provide a selective rather than systematic review to
serve as a relevant introduction to the field of implementation science. We highlight central concepts, strategies, frameworks, and research
outcomes. These highlights draw heavily on the seminal systematic reviews from Brownson, Colditz, and Proctor (2012), Fixsen, Naoom,
Blase, Friedman, and Wallace (2005), and Greenhalgh, Robert, MacFarlane, Bate, and Kyriakidou (2004) and on a thorough comparative
review of implementation frameworks conducted by Meyers, Durlak, and Wandersman (2012). Looking ahead to future implementation
research, we consider research challenges related to the scaling up of programs, striking a good balance between treatment integrity and local
adaptation, measuring implementation quality, and program sustainability.
Keywords: implementation research, implementation practice, implementation strategies, implementation frameworks
In a practical sense, the central issues in implementation
research are the ‘‘what,’’ ‘‘how,’’ and ‘‘who’’ of implemen-
tation. What shall be implemented, how will the task be car-
ried out, and who shall do the work of implementation? In
response to the question about ‘‘what,’’ we stress the impor-
tance of ‘‘effective interventions’’ which mostly refers to
evidence-based programs or practices across several disci-
plines and professions (Biglan & Ogden, 2008). A useful
distinction can be made between evidence-based practices
and evidence-based programs. Practices are often consid-
ered to be simple procedures that can be adopted for use
by individual practitioners. Programs on the other hand
are collections of practices which are standardized and
may integrate several intervention practices. Even if we
emphasize evidence-based programs in this introduction,
programs and practices share a number of challenges and
requirements when it comes to implementation (Kessler
& Glasgow, 2011). The question of ‘‘how’’ is not that easily
answered and good ideas have to be derived from imple-
mentation frameworks and from research on facilitators
and obstacles to effective transfer of knowledge. And
finally the ‘‘who’’ refers to competent change agents and
facilitators or, in our nomenclature, purveyors and imple-
mentation teams. It requires skilled people to do the work
of implementation effectively and efficiently in complex
human service environments.
Implementation Defined
Implementation is ‘‘a specified set of activities designed
to put into practice an activity or program of known
dimensions’’ (Fixsen, Naoom, Blase, Friedman, & Wallace,
2005, p. 5). Implementation activities help practitioners
become increasingly skillful, consistent, and committed in
their use of an innovation (Klein & Sorra, 1996), help orga-
nizations change to support the innovative and evidence-
based services (Klein, 2004), and help assure leadership
for changing practices and organizational supports
(Marzano, Waters, & McNulty, 2005). But it would be a
mistake to consider implementation as a onetime event; it
should rather be conceived of as an ongoing process from
exploration to full implementation. Kitson, Harvey, and
McCormack (1998) summarize that successful implementa-
tion in its simplest form requires that the evidence is high,
the context receptive to change, and the change supported
by appropriate facilitation.
The concept of implementation science has been
defined as ‘‘The scientific study of methods to promote
the systematic uptake of clinical research findings and other
evidence-based practices into routine practice ...’’ (ICE-
BeRG, 2006). From its beginnings (Fairweather, Sanders,
& Tornatzky, 1974; Pressman & Wildavsky, 1973), imple-
mentation science has grown out of experiences with the
‘‘science to service gap.’’ Proponents of implementation
science have been concerned with the limited success of
transferring research-based practices to ordinary service set-
tings (Palinkas & Soydan, 2012). Over the past decades,
diffusion and dissemination strategies (Brownson, Colditz,
& Proctor, 2012) have resulted in about 14% use of evi-
dence-based programs after about 17 years (Balas & Boren,
2000; Green, 2008). The poor outcomes from these
necessary but insufficient approaches have led some to call
for a moratorium on funding research to develop more
evidence-based programs until we learn to successfully use
the ones we already have available (Kessler & Glasgow,
2011).
The growing awareness of a science to service gap has
inspired research efforts and numerous papers on imple-
mentation facilitators and obstacles. Thus, the persistent
challenges of putting research knowledge into practice have
contributed in important ways to the emergence of the field
of implementation research (Institute of Medicine, 2001;
Rossi & Wright, 1984). The development of more orga-
nized approaches to implementation practice, science, and
policy is timely. As pointed out by Goldman et al.
(2001): ‘‘There is uncomfortable irony in moving forward
to implement evidence-based practices in the absence of
an evidence base to guide implementation practice’’
(p. 1593). Some of the research leading to a more evi-
dence-based approach to implementation is summarized
in the next section.
Implementation Research Outcomes
Implementation outcomes are conceptually and empirically
distinct from those of service and treatment effectiveness
outcomes. Of course, the ultimate outcome of evidence-
based interventions and evidence-based implementation is
socially significant improvements in consumer well-being.
Identifying implementation outcomes has required opening
the ‘‘black box’’ of implementation processes (Sullivan,
Blevins, & Kauth, 2008) to identify the necessary ingredi-
ents apparently related to supporting successful and sustain-
able uses of evidence-based interventions. Implementation
processes hidden within the ‘‘black box’’ also are referred
to as mediating or change mechanisms. Sullivan et al.
(2008) identified two main implementation components: facilitation of training (including participant selection, training content and process, consultation and coaching) and facilitation of implementation (including evaluation approach, administrative support, and systems intervention). Another attempt at analyzing mediating mechanisms is Berkel et al.'s (2011) theoretical model for the study of treatment program implementation and effectiveness. Berkel et al. (2011) emphasize how both practitioners and clients contribute to positive outcomes. First, treatment integrity indicates to what extent treatment components are delivered as intended, for instance how much of the program is delivered and how much time is spent on each component. Additionally, competence or quality of delivery is indicated by clinical process skills and a clear and enthusiastic presentation of the program (e.g., interactive teaching). The quality of delivery may influence client engagement and response, as indicated by clients showing up and actively participating in the sessions, doing homework, and expressing satisfaction with treatment. Another mediating mechanism is adaptation which, in contrast to program drift, consists of positive contributions to the program made by the therapist. These may include taking into account the participants' cultural or local distinctiveness, without violating the underlying theory and principles of the program. Still, there is insufficient empirical evidence for evidence-based implementation, and we need to know more about what the ‘‘black box’’ contains, that is, the processes or change mechanisms that bring about the successful implementation of evidence-based interventions and other innovations.
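To make the logic of such analyses concrete, the following is a minimal sketch using simulated data: it checks whether a hypothetical ‘‘quality of delivery’’ measure relates to outcomes partly through client engagement. The variable names, effect sizes, and regression-based mediation approach are illustrative assumptions, not Berkel et al.'s (2011) method or data.

```python
# Illustrative only: a simple regression-based mediation check on simulated data,
# loosely mirroring the idea that quality of delivery may influence outcomes
# partly through client engagement. All variables and effect sizes are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                             # hypothetical number of families

quality = rng.normal(0, 1, n)                       # rated quality of delivery
engagement = 0.5 * quality + rng.normal(0, 1, n)    # engagement partly driven by quality
outcome = 0.4 * engagement + 0.1 * quality + rng.normal(0, 1, n)

# Path a: quality -> engagement
a_model = sm.OLS(engagement, sm.add_constant(quality)).fit()
# Paths b and c': engagement and quality -> outcome
X = sm.add_constant(np.column_stack([engagement, quality]))
b_model = sm.OLS(outcome, X).fit()

a = a_model.params[1]      # effect of quality on engagement
b = b_model.params[1]      # effect of engagement on outcome, controlling for quality
print("indirect (mediated) effect a*b:", round(a * b, 3))
print("direct effect c':", round(b_model.params[2], 3))
```

In a real study such paths would be estimated on observed fidelity, engagement, and outcome measures, typically with multilevel models and bootstrapped indirect effects rather than this simplified two-step regression.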
Facilitators and Obstacles
A considerable proportion of the literature has focused on
facilitators and obstacles of change by asking: ‘‘what pro-
motes and what slows down implementation with high
fidelity and good outcomes?’’ Barriers and incentives to
practice change are associated with characteristics of the
innovation itself, the provider, the practitioner adopting
the practice, the client or consumer, and the inner and outer
context of the service delivery organization (Durlak &
DuPre, 2008; Greenhalgh, Robert, MacFarlane, Bate, &
Kyriakidou, 2004; Grol & Wensing, 2004; Rogers, 1995). In their review of effective implementation, Durlak and DuPre (2008) found the organizational capacity of the service delivery organization and the providers' external support through training and technical assistance to be of particular importance. Key attributes of a new practice may include relative advantage, compatibility with current norms and values, low complexity, trialability, observable benefits, and flexibility in the setting (Greenhalgh et al.,
2004; Rogers, 1995). Attributes of the practitioners may
include tolerance for ambiguity, openness to change, moti-
vation, confidence, skill, social values, and learning style
(Rogers, 1995). Attributes of the innovative systems
include decentralized decision-making, diverse profession-
als with specialized knowledge, lack of formality, good
internal communication, and technical support for change.
Managers and administrators are important because they
can address organizational barriers to change (Kauth,
Sullivan, Cully, & Blevins, 2011). In one perspective, stud-
ies seem to indicate that ‘‘everything matters’’ (Durlak &
DuPre, 2008), and that the collective and interactive influ-
ences of a range of implementation variables are prerequi-
sites of practice change. Other studies have tried to
determine the relative importance of specific implementation
components in relation to outcomes. One example is Mihalic
and Irwin's (2003) evaluation of the implementation of programs in the Blueprints series for violence prevention (Elliott, 1998), in which regression analysis showed that the quality of technical support, ideal program characteristics, limited staff turnover, and support from the local community were among the most important facilitators. Several strategies have been explored in efforts to bring innovations to a broader audience,
and next we present some of these.
Implementation Strategies
Implementation strategies are ways of dealing with the con-
tingencies of various service systems and practice settings,
T. Ogden & D. L. Fixsen: An Overview of Implementation Science 5
Author’s personal copy (e-offprint)
Ó2014 Hogrefe Publishing Zeitschrift fr Psychologie 2014; Vol. 222(1):4–11
and descriptions of concrete implementation activities
(Proctor et al., 2009). The well-known distinction between
diffusion and dissemination on the one hand and implemen-
tation on the other highlights the difference between pas-
sive and active approaches to knowledge transfer
(Greenhalgh et al., 2004). A review and synthesis of the
implementation evaluation literature (Fixsen et al., 2005)
concluded that diffusion and dissemination are not suffi-
cient to reliably produce and sustain positive benefits to
consumers. In addition to the active-passive dimension,
implementation strategies may be categorized as ‘‘top-
down’’ or ‘‘bottom up.’’ In a top-down linear model (Best
et al., 2008) new interventions are disseminated from a cen-
tral source (e.g., the government or program developers) to
the local level or sites. This strategy may be used by gov-
ernments and NGOs in order to promote better, more acces-
sible, and cost-effective services to clients or users
(Palinkas & Soydan, 2012). The possible downside of the
top-down strategy appears when it fails to address local
needs and concerns, and it may mobilize counterforces at
the local level if it is considered to be a threat to profes-
sional autonomy (Ferrer-Wreder, Stattin, Lorente, Tubman,
& Adamson, 2004; Palinkas & Soydan, 2012). A bottom-up
or decentralized approach signals that new interventions are
initiated by individuals and stakeholders at the community
level (Price & Lorion, 1989) and may therefore increase
their sense of ownership. While bottom-up approaches
may increase the likelihood of increased commitment
among practitioners (e.g., Sherman, 2009; Sullivan et al.,
2008), they also may reduce the chances of the intervention
being used as intended in practice (Ogden, Amlund-Hagen,
Askeland, & Christensen, 2009; U.S. Department of Educa-
tion, 2011; Vernez, Karam, Mariano, & DeMartini, 2006).
The message from research is clearly in favor of combining
the top-down and bottom-up approaches in such a way that
the ‘‘knowledge to action’’ process becomes a two-way
street in which ‘‘evidence-based practice’’ and ‘‘practice-
based evidence’’ are combined. Successful implementation
seems to depend on striking a good balance between the
two (Fixsen, Blase, Metz, & Van Dyke, 2013; Ogden
et al., 2009) with top-down leadership and systems support
for bottom-up practice and organization change. A more elaborate and refined approach to the analysis of implementation is found in implementation frameworks. In the following section, we show how implementation frameworks help make sense of the lists of variables found to be influential in closing the science-to-service gap.
Implementation Frameworks
A comprehensive conceptual model should by definition
summarize current empirical knowledge and include clearly
defined constructs, a measurement model for these key con-
structs, and an analytical model hypothesizing links among
measured constructs (ICEBeRG, 2006). Several conceptual
models or implementation frameworks have been presented
in the literature during the last decade. Each framework has
guided research in one or more human service domains and
has turned out to have some empirical support.
Meyers, Durlak, and Wandersman (2012) provided an
excellent review of extant implementation frameworks.
They reviewed the literature and found frequent references
to 25 frameworks. They conducted a systematic review of
each framework and found 14 dimensions that were com-
mon to many of the frameworks. The 14 common dimen-
sions were grouped into six areas: (1) assessment
strategies (conducting a needs and resources assessment;
conducting a fit assessment; conducting a capacity/readi-
ness assessment); (2) decisions about adaptation (possibility
for adaptation); (3) capacity-building strategies (obtaining
explicit buy-in from critical stakeholders and fostering a
supportive community/organizational climate; building
general/organizational capacity; staff recruitment/mainte-
nance; effective pre-innovation staff training); (4) creating
a structure for implementation (creating implementation
teams; developing an implementation plan); (5) ongoing
implementation support strategies (technical assistance/
coaching/supervision; process evaluation; supportive feed-
back mechanism); and (6) improving future applications
(learning from experience). The lists of potentially impor-
tant implementation variables begin to make sense when
they are grouped into frameworks and common
dimensions.
Active Implementation Framework
One of the frameworks in the Meyers, Durlak, & Wanders-
man review is the active implementation framework based
on the findings from a major review and synthesis of the
implementation evaluation literature (Fixsen et al., 2005).
The active implementation framework integrates the
multilevel approach to change. First, the active implemen-
tation framework summarizes the importance of knowing
WHAT the intervention is prior to attempting to use it in
practice. Researchers assess the rigor with which evalua-
tions of evidence-based programs have been conducted
(e.g., two or more rigorous randomized control trials; Elli-
ott, 1998). Implementers assess the clarity with which the
practice or program is described and operationalized so it
can be taught and used in practice (e.g., usable intervention
criteria; Fixsen et al., 2013). Second, the active implemen-
tation framework describes the common features of
successful attempts to make full and effective use of inter-
ventions in practice. These implementation drivers describe
HOW interventions are supported in practice. The imple-
mentation drivers include developing staff competencies
(best practices for recruiting and selecting practitioners
and for training, supervision/coaching, and performance/
fidelity assessment), making organization changes to sup-
port the intervention and implementation drivers (best prac-
tices for facilitative administration, decision support data
systems, and systems interventions), and leadership (best
practices for technical/managerial leadership and for
adaptive/transformational leadership).
The active implementation framework integrates the
multistage approach to change. The progression of the
implementation process is captured by the stages of imple-
mentation identified as: (1) exploration and adoption, (2)
program installation, (3) initial implementation, and (4) full
implementation. Rather than being a linear process, the
stages are assumed to interact and impact one another in
complex ways (e.g., after years of full implementation,
the exploration stage may be revisited after a major change
in leadership or system supports). Finally, the active imple-
mentation framework describes WHO does the work of
implementation, a capacity missing in human service sys-
tems. Purveyors are people who bring about significant
practice change and are referred to as change agents, facil-
itators, or implementation teams. They are individuals or
groups who know the implementation drivers and stages
well and actively work to implement programs or practices
to achieve the intended outcomes (Fixsen et al., 2005,
p. 14). The multilevel and multistage work of implementa-
tion is seen as integrated and purposeful; hence, the name
active implementation frameworks. Some have argued that
omitting one or more elements may weaken the overall
intervention and the outcome (Kauth et al., 2011).
Research on Implementation Stages
and Components
Fixsen et al. (2005) conclude their research synthesis by
stating that the best evidence points to what does not work
with respect to implementation (p. 70). By themselves, dif-
fusion of information, dissemination of information
(research literature, mailings, practice guidelines), and
training (no matter how well done) are ineffective imple-
mentation methods. In fact, the authors found that success-
ful implementation efforts designed to achieve beneficial
outcomes for consumers required a longer term multilevel
approach. The strongest research support was found for
skill-based training, coaching, and assessment of practi-
tioner performance or fidelity. There was good evidence
for the importance of practitioner selection, but the evi-
dence was sparse and unfocused with regard to program
evaluation, facilitative administrative practices, and system
intervention methods. The critical role of leadership was
universally acknowledged but seldom measured or modi-
fied. Even though their importance is indisputable, there
was little research evidence related to organizational and
system influences on implementation.
According to the research findings, programs should be
fully operational before they are tested in controlled out-
come studies (Durlak & DuPre, 2008). Evaluating programs before they mature may lead to poor results, underestimation of their effectiveness, and a disservice to the program. Also, programs should be fully imple-
mented with fidelity before modifications are made.
Panzano et al. (2004) found the ‘‘overall implementation
effectiveness was negatively related to the extent to which
the program had been modified from its prescribed form.’’
And, external system factors can facilitate or hinder the use
of evidence-based programs with fidelity and good out-
comes (Glisson et al., 2010). But the most noticeable gap
found in the research literature concerned the interaction
effects among implementation factors and their relative
influences over time (Greenhalgh et al., 2004).
In sum, the research literature indicates that an active,
long-term, multilevel implementation approach is far more
effective than passive forms of dissemination in order to
promote and sustain the use of evidence-based interven-
tions in real-world practice. There is far more research on
the implementation drivers pertaining to the individual
competency dimension than on the organization level or
leadership components. There is a need for more research
on the interactions among the implementation drivers,
and the extent to which program modifications impact out-
come evaluations that are carried out before evidence-based
interventions are fully operational.
An encouraging theme is that implementation principles
appear to be content neutral. Thus, a separate implementa-
tion science is not required for mental health, or child wel-
fare, or education, or health, or business. A quick look at
the references for this article supports the idea that there
are universal principles derived from research and practice
in all fields and that apply to all fields. Thus, all fields of
endeavor can contribute to and derive benefits from a com-
mon science of implementation. This is welcome given the
Durlak and DuPre (2008) review of implementation
research that found extensive and persuasive evidence con-
firming that implementation variables impact outcomes. In
the next section we discuss implementation research as it is
evolving.
Implementation Research
Implementation research ideally is based on conceptual
models and aims at supporting the movement of evi-
dence-based knowledge into routine use. In this sense
implementation research is applied research: it has a pur-
pose and the variables of interest are directly related to that
purpose. This is in contrast to basic research that values the
pursuit of knowledge guided by the intellectual curiosity of
scientists and may or may not lead to practical applications.
The purpose of applied implementation work is to accom-
plish the full and effective use of evidence-based innova-
tions in typical practice settings in order to produce
promised improvements in outcomes for children, families,
individuals, communities, and societies. The mission-driven
focus of implementation has been called ‘‘making it hap-
pen’’ compared to ‘‘letting it happen’’ (diffusion) or ‘‘help-
ing it happen’’ (dissemination) approaches (Greenhalgh
et al., 2004; Hall & Hord, 1987).
The difficulties inherent in implementation science and
practice have been recognized from the beginning. Imple-
mentation work and, therefore, implementation research is
done in ‘‘an environment full of personnel rules, social
stressors, union stewards, anxious administrators, political
pressures, interprofessional rivalry, staff turnover, and
T. Ogden & D. L. Fixsen: An Overview of Implementation Science 7
Author’s personal copy (e-offprint)
Ó2014 Hogrefe Publishing Zeitschrift fr Psychologie 2014; Vol. 222(1):4–11
diamond-hard inertia.’’ (Fisher, 1983; p. 249). Van Meter
and Van Horn (1975) concluded that these difficulties
‘‘discouraged detailed study of the process of policy imple-
mentation. The problems of implementation are over-
whelmingly complex and scholars have frequently been
deterred by methodological considerations. ... a compre-
hensive analysis of implementation requires that attention
be given to multiple actions over an extended period of
time.’’ (p. 450–451). Glasgow, Lichtenstein, and Marcus
(2003) acknowledge these difficulties and state, ‘‘We need
to embrace and study the complexity of the world, rather
than attempting to ignore or reduce it by studying only iso-
lated (and often unrepresentative) situations.’’ (p. 1264).
Embracing these complexities is difficult for implemen-
tation researchers. Accounting for simultaneous multilevel,
top-down, bottom-up, and multivariate influences presents
methodological challenges for researchers as well as practi-
cal challenges for purveyors and implementation teams.
Fortunately, over the past few decades implementation
researchers have identified variables that directly influence
the use of evidence-based programs in practice. The field
has been dominated by case studies, qualitative research,
and retrospective assessments of implementation factors.
Recently, quantitative, prospective projects are appearing
as researchers operationalize implementation variables
(Fixsen, Panzano, Naoom, & Blase, 2008), develop and
assess measures related to those variables (Ogden et al.,
2012), and deliberately manipulate complex implementa-
tion-specific variables as part of research designs (Glisson
et al., 2010).
The Way Ahead
Several suggestions have been made for the improvement
of future implementation research. Research challenges
are related to the scaling up of programs, striking a good balance between treatment integrity and local adaptation, measuring implementation quality, and program sustainability.
Implementation Theory
Given the scattered findings from implementation research
and evaluation, the need for theory-driven analytic proce-
dures in quantitative and qualitative studies is apparent.
The implementation frameworks summarized by Meyers
et al. (2012) are a step toward the kind of mid-range theory
described by the ICEBeRG (2006). A mid-range theory
summarizes what currently is known and helps to guide
future research and practice. Investigations based on theory
are more likely to advance understanding of factors and the
interactions among factors that determine implementation
outcomes. The implementation frameworks can be tested
in practice to contribute to the development of an empiri-
cally-based mid-range theory of implementation.
Measurement and Methods
Another important issue relates to the methodological chal-
lenges encountered when doing implementation research.
First, there is the need to operationalize implementation
components and develop new measures of implementation.
Even if the concept of implementation is not new, the idea
of developing ways of measuring it certainly is. Conse-
quently, there is a great need for the development of instru-
ments which operationalize and standardize the
measurement and analyses of implementation processes
and outcomes (Fixsen et al., 2008; Ogden et al., 2012).
As pointed out by Durlak and DuPre (2008): ‘‘science can-
not study what it cannot measure accurately and cannot
measure what it does not define’’ (p. 342). In their recom-
mendations for future research, Greenhalgh et al. (2004)
mention the importance of using common definitions, mea-
sures and tools, and standardized approaches to measuring
key variables and confounders.
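As a simplified illustration of what operationalizing implementation measurement can involve, the sketch below aggregates hypothetical observer ratings of a few implementation drivers into a single site-level score. The items, rating scale, weighting, and threshold are invented for the example and are not taken from the instruments cited above (Fixsen et al., 2008; Ogden et al., 2012).

```python
# Hypothetical example: turning observer ratings of implementation drivers into
# a simple site-level implementation score. Items, scale, and cut-off are invented.
from statistics import mean

# Ratings on a 1-5 scale for one service delivery site (invented data).
site_ratings = {
    "staff_selection": [4, 5, 4],
    "training": [3, 4, 4],
    "coaching": [2, 3, 3],
    "fidelity_assessment": [4, 4, 5],
    "facilitative_administration": [3, 3, 4],
}

def driver_score(ratings):
    """Average the item ratings for one implementation driver."""
    return mean(ratings)

def implementation_score(site):
    """Unweighted mean across drivers; real instruments may weight components differently."""
    return mean(driver_score(r) for r in site.values())

score = implementation_score(site_ratings)
print(f"Overall implementation score: {score:.2f} (out of 5)")
# An arbitrary example threshold for flagging sites that may need extra support.
if score < 3.5:
    print("Below example threshold - consider additional implementation support.")
```

The point is not the particular arithmetic but that, once components are defined and scored in a standardized way, implementation quality becomes something that can be compared across sites and related to outcomes.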
Second, research designs for implementation research
present challenges. Implementation research may involve
larger units of analysis (e.g., organizations, communities)
over longer periods of time (e.g., 5–10 years). The impor-
tance of readiness and availability of resources to support
implementation of evidence-based programs and other
innovations also complicates research designs. Attempting
to account for multilevel, multistage influences soon leads
to the problem of ‘‘too many variables and too few cases’’
(Goggin, 1986). Glasgow, Magid, Beck, Ritzwoller, and
Estabrooks (2005) and Speroff and O'Connor (2004) sum-
marized problems associated with using randomized group
designs in complex implementation research applications.
They recommended practical clinical trials that employ
within-subject designs such as multiple-baseline designs.
In these cases, the ‘‘subjects’’ may be units of practitioners,
organizations, systems, or even countries. These designs
require fewer ‘‘subjects,’’ provide evidence of functional
relationships between experimental variables and out-
comes, and, if the implementation intervention turns out
to be effective, every ‘‘subject’’ eventually receives the
intervention. The disadvantage is that interventions need
to be powerful enough to produce visible and consistent
changes in outcomes so effectiveness can be detected.
Some of the more subtle, statistically significant, interven-
tion outcomes may be lost when using within-subject
designs.
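As an illustration of the within-subject logic these authors recommend, the sketch below simulates a multiple-baseline design in which an implementation support is introduced to three organizational units at staggered time points. The units, start points, effect size, and noise level are assumptions made purely for the example.

```python
# Illustrative simulation of a multiple-baseline design across three organizations:
# the implementation support starts at staggered time points, and the outcome is
# expected to shift only after each unit's own start point. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n_months = 24
start_month = {"org_A": 6, "org_B": 12, "org_C": 18}    # staggered introduction
baseline_level, effect = 50.0, 8.0                      # hypothetical fidelity scores

for org, start in start_month.items():
    months = np.arange(n_months)
    outcome = baseline_level + effect * (months >= start) + rng.normal(0, 2, n_months)
    pre = outcome[months < start].mean()
    post = outcome[months >= start].mean()
    # A consistent pre/post shift that follows each unit's own start point is the
    # functional relationship such designs are meant to demonstrate.
    print(f"{org}: pre-introduction mean = {pre:.1f}, post-introduction mean = {post:.1f}")
```

Because each unit serves as its own control and the change is replicated at different points in time, fewer units are needed than in a group-randomized trial, which is the trade-off noted above: strong evidence for visible effects, at the cost of missing more subtle ones.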
Adaptation and Fidelity
There is substantial disagreement among researchers about
how much adaptation is allowed without compromising the
intervention. On the one hand, some researchers strongly
advocate the need for local adaptation in order to match
interventions to local conditions (Castro, Barrera, & Marti-
nez, 2004; Dodge, 2001), and even reinvention of programs
(Price, 2002), while warning against ‘‘entering the zone of
drastic mutation’’ (Ferrer-Wreder et al., 2004). Elliott and
Mihalic (2004) on the other hand assert ‘‘the need for and
effectiveness of local adaptation of prevention programs
is greatly overstated...’’ (p. 51) and that modifying a pro-
gram too much may actually decrease program effective-
ness. They question the need for adaptation of most
violence prevention programs and indicate that lack of
adherence might lead to program drift, characterized by simplification of the intervention and the development of private strategies, and consequently to a dilution of the program.
Some recent evaluations contradict the assumption that
local adaptations are necessary for the successful imple-
mentation of interventions at different sites (Panzano
et al., 2004). Finding the right mix of fidelity and adaptation is discussed at length by Durlak and DuPre (2008), who concluded that the debate is inappropriately framed in either-or terms. The prime focus should be on find-
ing the right mix of fidelity and adaptation to reliably
produce the intended outcomes.
Program Sustainability
A major threat to most evidence-based programs and prac-
tices is program drift or program dilution, which occurs when services are delivered with lower dosage, less intensity, and inferior quality compared with the original model. Sus-
taining program integrity and effectiveness over time and
across intervention sites and generations of practitioners
challenges program developers, purveyors, and practitio-
ners alike (Forgatch & DeGarmo, 2011). As mentioned
by Fixsen et al. (2005), programs and service delivery sites
may experience shifts in the inner and outer context as
when staff and leaders leave and are replaced, and when
funding streams and program requirements change. New
social problems arise and partners come and go. Political
alliances may only be temporary, and champions move
on to other causes. One solution to the challenge of pro-
gram maintenance is the development of implementation
capacity in the form of implementation teams. ‘‘Self-sus-
taining implementation sites’’ or ‘‘multi-allegiance centers’’
can help assure ongoing implementation supports (e.g.,
training, coaching, facilitative administration, fidelity
assessments) for several programs and also add new pro-
grams and practices to those already in place (Fixsen
et al., 2013; Ogden et al., 2009). One study showed that
the absence of organizational support and staff turnover
were the most commonly reported implementation chal-
lenges to the sustained implementation of Dialectical Behavior Therapy in routine health care settings in the UK
(Swales, Taylor, & Hibbs, 2012). The survival curve dem-
onstrated that DBT programs ran an increased risk of fail-
ure in the second and the fifth years after training in the
UK. Given the research findings that implementation can
deteriorate over time, Fixsen and colleagues (2005, 2013)
have called for continuous systematic monitoring and feed-
back systems in order to capture the variability that has
been observed in levels of implementation supports and
intervention outcomes over time (see also Durlak & DuPre,
2008). Implementation teams can promote sustainability by
responding constructively to variability and clearing the
way for continuous improvement.
Going to Scale
The purpose of scaling is to realize the goal of the evidence-
based movement: to provide socially significant benefits
for whole populations. Given the complex relationships
between evidence-based interventions and evidence-based
approaches to implementation, scaling up interventions
requires scaling up implementation capacity. Contrary to
expectations, large-scale implementation can occur with
a high degree of fidelity and good outcomes (Elliott &
Mihalic, 2004; Glennan, Bodilly, Galegher, & Kerr, 2004;
Ogden et al., 2009). However, the process of moving to
broad scale use is fraught with challenges (Kellam &
Langevin, 2003). At scale, interventions need to serve a more heterogeneous population, employing service providers with various backgrounds who work within highly variable and sometimes insufficient service infrastructures, with variable resources and attention to the implementation supports needed to assure program fidelity. Welsh, Sullivan, and Olds (2010) expected attenuation of effects to occur when early prevention trials were scaled up and, across three studies, they found a fair amount of variability in scale-up discounts, ranging from a low of 25% to a high of
50%. Implementation capacity in the form of purveyors and
implementation teams can help deal with the many prob-
lems that arise and help to assure continued access to com-
munity resources to fund the large scale training,
supervision, and other expenditures related to the imple-
mentation and running of the intervention. In sum, the chal-
lenges of scaling up are many and persistent. Yet, the needs
of children, families, individuals, communities, and society
demand attention to scaling up the promised benefits of evi-
dence-based programs.
Interaction of Implementation Components
The most striking gap in the research literature concerns
interaction effects among implementation factors and their
relative influence over time (Fixsen et al., 2005). Most stud-
ies seem to focus on a limited number of implementation
components and fail to address their interactions and con-
textual and contingent features. Greenhalgh et al. (2004)
state, ‘‘The paradox is that context and confounders lie at
the very heart of diffusion, dissemination and implementa-
tion of complex interventions.’’ In fact, they question if the
impact of implementation components can be isolated and
independently measured. Thus, interaction effects should be
expected and accounted for in studies of implementation.
There also is a great need to take into account the wider
context of service delivery organizations and their staff as
well as the incentives or sanctions for changing practice
(Goldman et al., 2001; Kitson et al., 1998). Today we know
more about ‘‘what works’’ than how to motivate practitio-
ners to apply interventions and practices in a systematic
T. Ogden & D. L. Fixsen: An Overview of Implementation Science 9
Author’s personal copy (e-offprint)
Ó2014 Hogrefe Publishing Zeitschrift fr Psychologie 2014; Vol. 222(1):4–11
and accountable way. We also know more about character-
istics of individuals who are willing to adopt new practices
than we know about what characterizes practice organizations or agencies that are open to change. Summarizing
the research on obstacles, it is not difficult to understand
why the process of implementing evidence-based programs
and practices has been so slow. Great demands are placed on sustained funding, organizational adaptation, extensive training, coaching, and practice evaluation, as well as on the challenge of implementing new ways of working alongside regular practice. The general time pressure and competition
from other prioritized tasks may also slow down the imple-
mentation process. In the practice field, the debate on evidence-based programs has also had an impact on attitudes toward EBPs, with arguments that they are too rigid, reduce the professional autonomy of practitioners, occupy too much of the available resources, and devalue other approaches.
References
Balas, E. A., & Boren, S. A. (2000). Managing clinical knowl-
edge for health care improvement. In J. Bemmel & A. T.
McCray (Eds.), Yearbook of Medical Informatics 2000:
Patient-Centered Systems (pp. 65–70). Stuttgart, Germany:
Schattauer.
Berkel, C., Mauricio, A. M., Schoenfelder, E., & Sandler, I. N.
(2011). Putting the pieces together: An integrated model of
program implementation. Prevention Science, 12, 23–33.
Best, A., Terpstra, J., Moor, G., Riley, B., Norman, C. D., &
Glasgow, R. (2008, March). Building knowledge integration
systems for evidence-informed decisions. Paper presented at
the Organizational Behaviour in Health Care conference,
Sydney, Australia.
Biglan, T., & Ogden, T. (2008). The evolution of the evidence
based movement. European Journal of Behavior Analysis, 9,
81–95.
Brownson, R. C., Colditz, G. A., & Proctor, E. K. (Eds.).
(2012). Dissemination and implementation research in
health. New York, NY: Oxford University Press.
Castro, F. G., Barrera, M., & Martinez, C. R. (2004). The cultural
adaptation of prevention interventions: Resolving tensions
between fidelity and fit. Prevention Science, 5, 41–45.
Dodge, K. (2001). Progressing from developmental epidemiol-
ogy to efficacy to effectiveness to public policy. American
Journal of Preventive Medicine, 20, 63–70.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A
review of research on the influence of implementation on
program outcomes and the factors affecting implementation.
American Journal of Community Psychology, 41, 327–350.
Elliott, D. S. (1998). Blueprints for violence prevention. Boul-
der, CO: University of Colorado, Center for the Study and
Prevention of Violence.
Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and
replicating effective prevention programs. Prevention
Science, 5, 47–53.
Fairweather, G. W., Sanders, D. H., & Tornatzky, L. G. (1974).
Creating change in mental health organizations. Elmsford,
NY: Pergamon.
Ferrer-Wreder, L., Stattin, H., Lorente, C. C., Tubman, J. G., &
Adamson, L. (2004). Successful prevention and youth
development programs across borders. New York, NY:
Kluwer.
Fisher, D. (1983). The going gets tough when we descend from
the ivory tower. Analysis and Intervention in Developmental
Disabilities, 3, 249–255.
Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013).
Statewide implementation of evidence-based programs.
Exceptional Children [Special Issue], 79, 213–230.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., &
Wallace, F. (2005). Implementation research: A synthesis of
the literature. Tampa, FL: University of South Florida,
Louis de la Parte Florida Mental Health Institute, National
Implementation Research Network (FMHI Publication
No. 231). Retrieved from http://www.fpg.unc.edu/nirn/
resources/publications/Monograph/
Fixsen, D. L., Panzano, P., Naoom, S., & Blase, K. (2008).
Measures of implementation components of the national
implementation research network frameworks. Chapel Hill,
NC: National Implementation Research Network.
Forgatch, M., & DeGarmo, D. (2011). Sustaining fidelity
following the nationwide PMTO implementation in Norway.
Prevention Science, 12, 235–246.
Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why
don't we see more translation of health promotion research
to practice? Rethinking the efficacy-to-effectiveness transi-
tion. American Journal of Public Health, 93, 1261–1267.
Glasgow, R. E., Magid, D. J., Beck, A., Ritzwoller, D., &
Estabrooks, P. A. (2005). Practical clinical trials for trans-
lating research to practice: Design and measurement
recommendations. Medical Care, 43, 551–557.
Glennan, T. K. Jr., Bodilly, S. J., Galegher, J. R., & Kerr, K. A.
(2004). Expanding the reach of education reforms. Santa
Monica, CA: RAND.
Glisson, C., Schoenwald, S. K., Hemmelgarn, A., Green, P.,
Dukes, D., Armstrong, K. S., & Chapman, J. E. (2010).
Randomized trial of MST and ARC in a two-level evidence-
based treatment implementation strategy. Journal of Con-
sulting and Clinical Psychology, 78, 537–550.
Goggin, M. L. (1986). The ‘‘too few cases/too many variables’’
problem in implementation research. The Western Political
Quarterly, 39, 328–347.
Goldman, H. H., Ganju, V., Drake, R. E., Gorman, P., Hogan,
H., Hyde, P. S., & Morgan, O. (2001). Policy implications
for implementing evidence-based practices. Psychiatric
Services, 52, 1591–1597.
Green, L. W. (2008). Making research relevant: If it is an
evidence-based practice, where's the practice-based evi-
dence? Family Practice, 25, 20–24.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., &
Kyriakidou, O. (2004). Diffusion of innovations in service
organizations: Systematic review and recommendations. The
Milbank Quarterly, 82, 581–629.
Grohl, R., & Wensing, M. (2004). What drives change? Barriers
to and incentives for achieving evidence-based practice. The
Medical Journal of Australia, 180, 57–60.
Hall, G., & Hord, S. M. (1987). Change in schools: Facilitating
the process. Albany, NY: SUNY Press.
ICEBeRG. (2006). Designing theoretically-informed implemen-
tation interventions. Implementation Science, 1, 4.
Institute of Medicine. (2001). Crossing the quality chasm: A new
health system for the 21st century. Washington, DC:
National Academy Press.
Kauth, M. R., Sullivan, G., Cully, J., & Blevins, D. (2011).
Facilitating practice changes in mental health clinics. A
guide for implementation development in health care
systems. Psychological Services, 8, 36–47.
Kellam, S. G., & Langevin, D. J. (2003). A framework for
understanding evidence in prevention research and pro-
grams. Prevention Science, 4, 137–153.
Kessler, R. C., & Glasgow, R. E. (2011). A proposal to speed
translation of healthcare research into practice: Dramatic
change is needed. American Journal of Preventive Medicine,
40, 637–644.
Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the
implementation of evidence-based practice: A conceptual
framework. Quality in Health Care, 7, 149–158.
Klein, J. A. (2004). True change: How outsiders on the inside get
things done in organizations. New York, NY: Jossey-Bass.
Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation
implementation. Academy of Management Review, 21,
1055–1080.
Marzano, R., Waters, T., & McNulty, B. (2005). School
leadership that works: From research to results. Alexandria,
VA: Association for Supervision and Curriculum Develop-
ment (ASCD).
Meyers, D. C., Durlak, J. A., & Wandersman, A. (2012). The
quality implementation framework: A synthesis of critical
steps in the implementation process. American Journal of
Community Psychology. doi: 10.1007/s10464-012-9522-x
Mihalic, S. F., & Irwin, K. (2003). Blueprints for violence
prevention: From research to real world settings. Factors
influencing the successful replication of model programs.
Youth Violence and Juvenile Justice, 1, 1–23.
Ogden, T., Amlund-Hagen, K., Askeland, E., & Christensen, B.
(2009). Implementing and evaluating evidence-based treat-
ments of conduct problems in children and youth in Norway.
Research on Social Work Practice, 19, 582–591.
Ogden, T., Bjørnebekk, G., Kjøbli, J., Patras, J., Christiansen,
T., Taraldsen, K., & Tollefsen, N. (2012). Measurement of
implementation components ten years after a nationwide
introduction of empirically supported programs – A pilot
study. Implementation Science, 7, 49.
Palinkas, L. A., & Soydan, H. (2012). Translation and imple-
mentation of evidence-based practice. Oxford, UK: Oxford
University Press.
Panzano, P. C., Seffrin, B., Chaney-Jones, S., Roth, D., Crane-
Ross, D., Massatti, R., & Carstens, C. (2004). The innova-
tion diffusion and adoption research project (IDARP). In D.
Roth & W. Lutz (Eds.), New Research in Mental Health
(Vol. 16, pp. 78–89). Columbus, OH: Ohio Department of
Mental Health Office of Program Evaluation and Research.
Pressman, J. L., & Wildavsky, A. (1973). Implementation.
Berkeley, CA: University of California Press.
Price, R. H. (2002). The four faces of community readiness:
Social capital, problem awareness, social innovation and
collective efficacy. Unpublished manuscript, University of
Michigan, Institute of Social Research.
Price, R. H., & Lorion, R. P. (1989). Prevention programming
as organizational reinvention: From research to implemen-
tation. In D. Shaffer, I. Philips, & N. B. Enzer (Eds.),
Prevention of mental disorders, alcohol and other drug use
in children and adolescents. Rockville, MD: American
Academy of Child Adolescent Psychiatry. [Prevention
monographs no. 3].
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D.,
Glisson, C., & Mittman, B. (2009). Implementation research
in mental health services: An emerging science with
conceptual, methodological, and training challenges. Admin-
istration and Policy in Mental Health and Mental Health
Services Research, 36, 24–34. doi: 10.1007/s10488-008-
0197-4
Rogers, E. (1995). Diffusion of innovations (4th ed.). New York,
NY: Free Press.
Rossi, P. H., & Wright, J. D. (1984). Evaluation research: An
assessment. Annual Review of Sociology, 10, 331–352.
Sherman, L. W. (2009). Evidence and liberty: The promise of
experimental criminology. Criminology and Criminal Jus-
tice, 9, 5–28.
Speroff, T., & OConnor, G. T. (2004). Study designs for PDSA
quality improvement research. Quality Management in
Health Care, 13, 17–32.
Sullivan, G., Blevins, D., & Kauth, M. R. (2008). Translating
clinical training into practice in complex mental health
systems: Toward opening the ‘‘Black Box’’ of implementa-
tion. Implementation Science, 3, 33.
Swales, M. A., Taylor, B., & Hibbs, R. A. (2012). Implementing
dialectical behaviour therapy: Programme survival in rou-
tine healthcare settings. Journal of Mental Health, 21, 548–
555. doi: 10.3109/09638237.2012.689435
U.S. Department of Education. (2011). Prevalence and imple-
mentation fidelity of research-based prevention programs in
public schools: Final report. Washington, DC: U.S. Depart-
ment of Education.
Van Meter, D. S., & Van Horn, C. E. (1975). The policy
implementation process: A conceptual framework. Admin-
istration & Society, 6, 445–488.
Vernez, G., Karam, R., Mariano, L. T., & DeMartini, C. (2006).
Evaluating comprehensive school reform models at scale:
Focus on implementation. Santa Monica, CA: RAND.
Welsh, B. C., Sullivan, C. J., & Olds, D. L. (2010). When early
crime prevention goes to scale: A new look at the evidence.
Prevention Science, 11, 115–125.
Terje Ogden
Atferdssenteret – The Norwegian Center for Child Behavioral
Development
Klingenberggt. 4, 6 etasje
Postboks 1565 Vika
0118 Oslo
Norway
E-mail terje.ogden@atferdssenteret.no
T. Ogden & D. L. Fixsen: An Overview of Implementation Science 11
Author’s personal copy (e-offprint)
Ó2014 Hogrefe Publishing Zeitschrift fr Psychologie 2014; Vol. 222(1):4–11
... As implementation researchers, we conduct pilot studies using DMHI products from technology companies and test them in real-world settings. The goal of our work is to implement DMHIs in a meaningful way; the DMHI must be evidence-based and receptive to change, and the change should be supported by appropriate facilitation [13]. If a pilot study is indicated, the first step to ensure a successful systemic process toward implementation is to conduct a pilot study that adequately addresses acceptability and feasibility. ...
... Such iterative processes allow us to answer whether the DMHI is possible through developed measures. For example, we may obtain quantitative data on engagement rates (e.g., adherence) and qualitative data on how likely the individual, organization, community, or clinician will use it in their practice [13]. While our research and knowledge on the DMHI are valuable, implementation of the DMHI may differ according to the specific needs of the community partner. ...
... Implementation frameworks have been developed for guidance around effectively hitting the balance between treatment adherence (i.e., matching implementation process to methods in published research demonstrating efficacy of a particular tool) and adaption of a specific DMHI to a particular setting. This allows for the implementation and sustained engagement with the DMHI to be feasible [2,13]. Further, much guidance around the development of qualitative data collection methods for further implementation has been published, including the importance of maximizing the use of established qualitative methods while also prioritizing situationspecific or organizational needs [25]. ...
Article
Digital mental health interventions (DMHIs) have the potential to serve a significantly wider portion of the population in need of mental health services. The coronavirus disease 2019 (COVID-19) pandemic has especially highlighted the exacerbation of mental health disparities among minoritized populations. Innovations and research on DMHIs continue to expand, reinforcing the need for a more systemic process of DMHI implementation. In practice, DMHI implementation often skips the fundamental steps of conducting acceptability and feasibility studies. We propose a DMHI implementation framework that identifies an acceptability and feasibility study as an essential first step, simultaneously centering equitable processes that address populations disproportionately affected by mental illness.
... In general, three important aspects can be identified when performing implementation focused process evaluations: (a) The willingness to adopt the programme, (b) the actual implementation process and (c) the programme's ability to be maintained over several years [24][25][26]. Further-more, the knowledge gained from such process evaluations can potentially accommodate the continuing need for strategies on how to translate and disseminate MS and PA programmes into everyday preschool practice [27][28][29][30]. The Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) framework [31] provides a stepwise approach to process evaluations and allows for a comprehensive evaluation of health-promoting programmes in complex settings [32]. ...
... It is recognized that the staff's self-assessment might not be an accurate measure for any actual change in the children's MS development. Yet, in the aim of the current study, the staff's solid belief in the effectiveness of MiPS is evident and constitutes an important marker for their dedication and motivation in implementing the programme [30,[41][42][43]. ...
... Furthermore, when looking at the maximum variation between preschools, it is clear that some are struggling. This might be due to low prioritization by local preschool leaders or a general lack of organizational support (e.g., from the municipal programme mangers) to follow through on implementation decisions [26,30,33,43]. ...
Good motor skills (MS) are considered important for children's social, psychological and physical development and general physical activity (PA) levels. The Motor skill in Preschool study (MiPS) aimed to optimize children's MS through weekly PA sessions. The aim of this study is to use the RE-AIM framework to report the two-year implementation process of MiPS since the programme's initiation. Data were collected through a staff questionnaire based on the RE-AIM framework. Data were collected at three months, one year and two years after initiation. Results show that the pedagogical staff believes that the programme promotes MS in children. Implementation measures only showed medium to low fidelity concerning the core element of performing adult-initiated PA sessions with a duration of at least 45 min 4 days a week. The largest barrier was finding the time to plan these PA sessions. Still, the content of the PA sessions achieved high fidelity scores and the programme was deemed suitable for staff's everyday practice and in alignment with the stated pedagogical goals. The mandatory competence development course was highly valued as strong implementation support. It is notable that there is a large variation in the implementation among the preschools with some struggling more than others.
... Partly, this issue may be explained by the lack of instruments with evidence of validity and reliability. In turn, this issue reflects a more basic problem for researchers: the lack of guidelines on how to measure quality of delivery (Buckley et al., 2017; Desimone & Hill, 2017; Domitrovich et al., 2008; Ogden & Fixsen, 2014). Specifically, it is difficult to translate existing definitions into measurable indicators of quality of delivery that are representative of the process by which students change and develop skills in the classroom. ...
... A strong theory of change should distinguish between proximal and distal outcomes. Proximal outcomes are the thoughts, feelings, or behaviors that are the direct targets of the intervention, whereas distal outcomes are impacted in the long term or in other domains of students' functioning through changes to proximal outcomes (Ogden & Fixsen, 2014). For example, the 4Rs Program integrates social and emotional learning into the language arts curriculum with the goal of improving students' conflict resolution skills, such as handling anger, listening, and assertiveness (i.e., proximal outcomes), which are expected to cascade into changes to students' social competence and academic engagement (i.e., distal outcomes). ...
Article
Recent studies have suggested that quality of delivery matters for achieving better student outcomes in the context of school interventions. However, studies rarely measure quality of delivery and test its association with students' outcomes, perhaps due to a lack of clarity regarding how to measure it. Here, we offer recommendations on how to select or design measures of quality of delivery. These recommendations focus on identifying teaching practices that help students to develop proximal outcomes during the delivery of an intervention. Additionally, we illustrate an application of these recommendations to the study of quality of delivery in a cluster-randomized efficacy study of Brainology, a program that promotes students' motivation and learning. We found that, although teachers fluctuated in their quality of delivery across lessons, students who received the intervention with higher average quality of delivery increased more in targeted proximal outcomes (effort beliefs and learning goals) than students exposed to lower-quality delivery. We discuss these results in terms of their implications for measuring quality of delivery, supporting teachers, and studying the conditions that make school interventions successful.
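As a rough illustration of how such an association can be tested (this is not the authors' actual analysis), the Python sketch below fits a two-level model with students nested in classrooms, regressing a post-test proximal outcome on the pre-test score and classroom-average quality of delivery. All variable names and the simulated data are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, n_students = 20, 25

# Simulated data: each classroom has one average quality-of-delivery score;
# students' post-test scores depend on pre-test, quality, and a classroom effect.
classroom = np.repeat(np.arange(n_classrooms), n_students)
quality = np.repeat(rng.normal(0, 1, n_classrooms), n_students)
pretest = rng.normal(0, 1, n_classrooms * n_students)
room_effect = np.repeat(rng.normal(0, 0.5, n_classrooms), n_students)
posttest = 0.5 * pretest + 0.3 * quality + room_effect + rng.normal(0, 1, len(pretest))

df = pd.DataFrame({"classroom": classroom, "quality": quality,
                   "pretest": pretest, "posttest": posttest})

# Random intercept for classroom; fixed effects for pre-test and delivery quality.
model = smf.mixedlm("posttest ~ pretest + quality", df, groups=df["classroom"]).fit()
print(model.summary())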
... A major challenge to all variants of intervention and transformative research is to disentangle the differential influence of the research project, the intervention, the researchers' presence, the involvement of the participants, the naturalness of the site, etc. In the experimental tradition, the design protocol, implementation designs (Ogden and Fixsen 2014) and fidelity data are meant to be effective instruments that facilitate an 'isolation' of field effects attributable to the research activities. In qualitative studies, various techniques for generating an analytical distance to the transformation are advocated (Gilbert 2002). ...
Article
Full-text available
The growing interest in video research and new technologies for recording human interaction has stirred debates about intrusiveness and 'reactivity', understood as researcher-derived changes in subjects. In addition to a plethora of concepts referring to such effects in the extant literature, different ontological and epistemological positions provide contrasting frameworks for interpreting and deciding on methodological guidelines. In this article we discuss these elements, which we have called 'meta-methodological', from the standpoints of experimental research, social constructivism and scientific realism. We combine conceptual analysis and a literature review of video studies in teaching in order both to identify possible traces of contesting beliefs and to provide a glance at the different aspects of 'reactivity' that need to be systematized in the ongoing debates. Whereas the methodological literature underlines the importance of such effects, these are rarely reported in the reviewed video studies. Moreover, reactivity is seen as a minor problem in the latter, and we found few instances that validated its effects on the field and on the empirical conclusions. Our article asks for more transparency in field researchers' judgments about reactivity and mitigating measures.
... Much has been written about the research-to-practice gap between early intervention clinical research in university settings and community settings (Odom et al., 2020), that is, the challenge of translating clinical knowledge gained in research trials into clinical practice. Recent efforts to address this gap have been led through an implementation science framework to increase the likelihood that community providers will implement interventions as expected and that interventions are relevant to the context where they are expected to be implemented (Ogden & Fixsen, 2014). One important step in the implementation science framework is the identification of barriers and facilitators to intervention uptake and participation (Nilsen & Bernhardsson, 2019). ...
Article
Full-text available
Addressing factors that make it more likely for families to drop out of early intervention trials will allow researchers to ensure that families reap the full benefits of participation. This study was an analysis of 78 children (mean age = 18.38 months, SD = 5.78) at risk for autism participating in a university-based randomized controlled trial of two 8-week early intervention programs. Overall, attrition through 8 weeks was low, approximately 13%; however, by the one-year follow-up, attrition rates were approximately 50%. The most consistent predictor of attrition was the distance that families had to travel to the university. These data highlight the importance of providing services and support (e.g., financial and logistic) to families during follow-up to maximize their participation. ClinicalTrials.gov Identifier: NCT01874327, 6/11/2013.
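A minimal sketch of the kind of analysis described, using simulated data rather than the trial's records: a logistic regression of one-year attrition on travel distance, reporting the odds ratio per additional kilometre. Variable names and values are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 78

# Simulated travel distances and dropout status (farther -> more likely to drop out).
distance_km = rng.gamma(shape=2.0, scale=20.0, size=n)
p_dropout = 1 / (1 + np.exp(-(-1.5 + 0.03 * distance_km)))
dropped_out = rng.binomial(1, p_dropout)

df = pd.DataFrame({"dropped_out": dropped_out, "distance_km": distance_km})
fit = smf.logit("dropped_out ~ distance_km", data=df).fit(disp=False)

# Odds ratio for each additional kilometre of travel.
print(np.exp(fit.params["distance_km"]))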
... When we consider implementation, it is helpful to understand some of the underlying theories that make a systems-wide approach effective. From the field of implementation science (Ogden & Fixsen, 2014), we can consider two concepts relevant to the current study: (a) implementation stages and (b) implementation drivers. ...
Article
Positive Behavioral Interventions and Supports (PBIS) is a prevention-oriented multitiered system of support. In this article, we discuss how PBIS implementation might differ for schools in rural settings. We used two subsamples of an extant data set of 11,561 schools in 44 U.S. states reporting on PBIS implementation fidelity during the 2018-19 school year. We examined PBIS implementation in rural and nonrural settings using a subsample of 6,631 schools during their first five years of PBIS implementation (2014-15 to 2018-19 school years). Further, we used a subsample of 2,266 schools to examine differences in implementation for rural schools specifically, comparing rural schools (n = 1,215) in their first five years of PBIS implementation (2014-15 to 2018-19) with rural schools (n = 1,051) implementing for six or more years (2000-01 to 2013-14). Rural schools differ from other school locales in the implementation of Tier 2 and Tier 3 systems during initial implementation. When comparing rural schools implementing PBIS for five or fewer years with those implementing for six years or more, those implementing longer had higher scores at Tiers 2 and 3. Practical implications across all three tiers, special education, and rural locales are presented.
... Program implementation is influenced by contextual factors, which affect an organization's capacity to implement with high fidelity [12][13][14][15][16]. Hasson operationalized context as factors related to the levels of policies, finances, organizations, and groups of participants [17]. Contextual factors such as providers' training and competency [18][19][20], a supportive context, skilled providers, and receptive participants [21] are some of the determinants of program implementation with fidelity. Other studies identified pre-implementation training, the presence of detailed delivery manuals or guidelines, and ongoing support or supervision as main determinants of program implementation [12-14, 19, 22, 23]. ...
Article
Full-text available
Background The evaluation of all potential determinants of implementation fidelity of Youth-Friendly Services (YFS) is crucial for Ethiopia. Previous studies overlooked the determinants at different levels. Therefore, this study aimed to assess the determinants of implementation fidelity of YFS, considering both individual and contextual levels. Methods This study was conducted among 1,029 youths from 11 health centers implementing the YFS in Central Gondar Zone. Data were collected by face-to-face interviews and facility observation using a semi-structured questionnaire. Bivariable multi-level mixed-effects modelling was employed to assess the main determinants. Four separate models were fitted to reach the full model. Model fit was assessed using the Akaike Information Criterion (AIC), and the level of significance was set at p-values < 0.05. The results of the fixed effects are presented as adjusted odds ratios (AOR) with their 95% CIs. Results Four hundred and one (39.0%) of the respondents received the YFS with a high level of fidelity. Having a high level of involvement in YFS provision (AOR = 1.35, 95% CI: 1.15, 1.57), knowing a peer educator trained in YFS (AOR = 1.60, 95% CI: 1.36, 1.86), and being involved as a peer educator (AOR = 1.46, 95% CI: 1.24, 1.71) were the individual-level determinants. Receiving capacity-building training (AOR = 1.93, 95% CI: 1.12, 3.48), receiving supportive supervision (AOR = 2.85, 95% CI: 1.99, 6.37), having a separate waiting room (AOR = 9.84, 95% CI: 2.14, 17.79), and having a system in place to provide continuous support to staff (AOR = 2.81, 95% CI: 1.25, 6.34) were the contextual-level determinants. Conclusions The level of implementation fidelity remains low. Both individual- and contextual-level determinants affect the implementation fidelity of YFS. Therefore, policy makers, planners, managers and YFS providers should consider both individual and contextual factors to improve implementation fidelity.
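The study reports adjusted odds ratios from multi-level mixed-effects models; the single-level Python sketch below only illustrates, on simulated data, how AORs with 95% CIs and an AIC-based model comparison are typically derived. The variable names (training, supervision) and all values are assumptions for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1029

# Simulated determinants and fidelity outcome.
training = rng.binomial(1, 0.4, n)       # provider got capacity-building training
supervision = rng.binomial(1, 0.5, n)    # facility received supportive supervision
logit_p = -1.0 + 0.66 * training + 1.05 * supervision
high_fidelity = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"high_fidelity": high_fidelity,
                   "training": training, "supervision": supervision})

null_model = smf.logit("high_fidelity ~ 1", data=df).fit(disp=False)
full_model = smf.logit("high_fidelity ~ training + supervision", data=df).fit(disp=False)

aor = np.exp(full_model.params)      # adjusted odds ratios
ci = np.exp(full_model.conf_int())   # 95% confidence intervals
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
print("AIC, intercept-only vs. full model:", null_model.aic, full_model.aic)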
Article
Schools have been identified as a promising setting for promoting physical activity (PA). Yet, to realize changes at the population level, successful school-based PA programs need to go to scale. The Svendborg Project is an effective school-based program promoting additional physical education (PE) lessons. The aim of this study is to determine program fidelity across different school groups, representing early and late adopters of the Svendborg Project, and how these are adapting the intervention. Three school groups were identified, covering the original intervention schools and two groups of late adopters consisting of four former control schools and five schools without any previous connection to the program. A PE teacher questionnaire (n = 122) was used to determine school fidelity. The results show that, while the original intervention schools have implemented the program with the highest fidelity, all schools have implemented the program with medium to high fidelity. It is suggested that having front-runner schools achieve early success with the program both strengthens political support for the project and provides strategies to support late adopters' implementation of the program. Furthermore, results from the current study suggest that continual promotion of the program by school heads is less important if support is established at the structural and organizational macro level. Finally, we highlight the importance of scaling up organizational capacity when scaling up program reach, to ensure a workable balance between fidelity and improving the fit to specific contexts.
Article
Full-text available
Worksites are important settings for implementing health promotion programs. Evidence for the sustainable upscaling of physical activity (PA) programs and critical evaluation of the implementation process are scarce. In this article, we address the following research questions: (i) To what extent is the implementation process of PA programs theoretically informed? (ii) What characterizes the implementation process of PA programs in theory-driven studies? (iii) Which facilitators and barriers are identified in the implementation process, and at what level? We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The databases Medline (Ovid) and SPORTDiscus (EBSCO) were searched for peer-reviewed original articles published in English (2000-2020) from a European, North American, New Zealand and Australian context. Reported implementation components and facilitators/barriers (F/Bs) were detected, interpreted and analyzed according to implementation theory. Appraisal of the methodological quality of the included studies was conducted. Of 767 eligible studies, 17 were included, 11 of which conducted a theory-based process evaluation of the implementation. They implemented composite PA programs, at two or more levels, with internal or mixed implementation teams. F/Bs were most frequently related to the implementation component 'fidelity', corresponding to the organizational and implementer levels, and the component 'reach', corresponding to the program and participant levels. Notably, only one study reported F/Bs at the socio-political level. Despite more frequent use of theory-based implementation in recent years, few studies reported implementation effectiveness. Major challenges regarding the incoherent use of theoretical concepts and the scarcity of empirically tested frameworks are discussed.
Article
While technology-based interventions enhance instruction and improve outcomes for students with disabilities, implementing and integrating technology in authentic learning environments continues to be a challenge. Based on the experiences of a variety of Stepping-Up Technology Implementation projects funded by the U.S. Department of Education, Office of Special Education Programs, this mixed-methods study explored the essential factors for the successful implementation of technology-based interventions in K-12 schools and early childhood programs. Based on the qualitative analysis of projects’ implementation reports and responses to the follow-up questionnaire, four major themes emerged. The barriers and facilitators to technology implementation were reported across such areas as (a) developing and sustaining buy-in, (b) ensuring implementation fidelity to support the intervention, (c) research-to-practice dilemmas, and (d) data serving multiple purposes. The discussion and practical implications for supporting technology implementation are provided.
Book
Full-text available
Available for download at http://nirn.fpg.unc.edu/resources/implementation-research-synthesis-literature
Article
Full-text available
Clinical and health services research is continually producing new findings that may contribute to effective and efficient patient care. However, the transfer of research findings into practice is unpredictable and can be a slow and haphazard process. Ideally, the choice of implementation strategies would be based upon evidence from randomised controlled trials or systematic reviews of a given implementation strategy. Unfortunately, reviews of implementation strategies consistently report effectiveness some, but not all, of the time; possible causes of this variation are seldom reported or measured by the investigators in the original studies. Thus, any attempt to extrapolate from study settings to the real world is hampered by a lack of understanding of the effects of key elements of the individuals, interventions, and settings in which they were trialled. The explicit use of theory offers a way of addressing these issues and has a number of advantages: it provides a generalisable framework within which to represent the dimensions that implementation studies address, a process by which to inform the development and delivery of interventions, a guide for evaluation, and a way to explore potential causal mechanisms. However, the use of theory in designing implementation interventions is methodologically challenging for a number of reasons, including choosing between theories and faithfully translating theoretical constructs into interventions. The explicit use of theory offers potential advantages in terms of facilitating a better understanding of the generalisability and replicability of implementation interventions. However, this is a relatively unexplored methodological area.
Book
From a European Perspective. This book charts territory that is profoundly important, and yet rarely fully understood. The authors have attempted a task that has relevance to the widest possible range of professionals working with children and adolescents. In describing and assessing the fields of prevention and promotion, they have performed an immense service to researchers in this field, but also to practitioners across the spectrum, from mental health nurses and doctors to teachers and psychologists, from social work professionals to psychiatrists and youth counselors. There are two other key elements that should be emphasized from the outset. The first is that the approach in this book is truly multi-disciplinary, with the authors making a genuine attempt to draw upon knowledge and practice derived from all the relevant disciplines. The second element which makes this book so important is that the authors have worked across countries to ensure that work in the field of intervention from both North America and Europe is included. This is as welcome as it is refreshing. There appear to be so many barriers to true collaboration between the two continents, and so many examples of either North American or European social scientists appearing blind to what is going on "across the border", that the approach taken here should be wholeheartedly commended. This book is essentially a review, but a rather special review.
Article
Evidence-based programs will be useful to the extent they produce benefits to individuals on a socially significant scale. It appears the combination of effective programs and effective implementation methods is required to assure consistent uses of programs and reliable benefits to children and families. To date, focus has been placed primarily on generating evidence and determining degrees of rigor required to qualify practices and programs as "evidence-based." To be useful to society, the focus needs to shift to defining "programs" and to developing state-level infrastructures for statewide implementation of evidence-based programs and other innovations in human services. In this article, the authors explicate a framework for accomplishing these goals and discuss examples of the framework in use.
Article
This book is about conducting research on the process and outcomes of the translation and implementation of evidence-based practices in social work. Its aims are to outline a strategy for conducting such research and to identify the infrastructure and resources necessary to support such research within the field of social work. Using the National Institutes of Health (NIH) Roadmap as a guide, the book describes the challenges of investigating the process and outcomes of efforts to translate and implement evidence-based social work practice. It begins with a general introduction to the topic of translation and implementation of evidence-based practice and its importance to the field of social work. It then moves to an examination of the methods for studying the effectiveness, dissemination, and implementation of evidence-based practices and the organizational context in which these activities occur in social work practice. It also describes the use of mixed-methods designs and community-based participatory research (CBPR) methods to address these challenges. It is unique in that it provides case studies of research on the translation and implementation in social work practice, identifies potential barriers to conducting such research, and offers recommendations and guidelines for addressing these barriers. The proposed strategy is founded on the principle and practice of cultural exchange between members of social worker-led interdisciplinary research teams and between researchers and practitioners. The outcome of such exchanges is the transformation of social work research and practice through the linkage between translational research and research translation.
Chapter
This chapter highlights several research topics that are the most pressing areas for future inquiry. In large part, this list was developed from the recommendations contained within the previous twenty-three chapters. The issues covered are not meant to be exhaustive; rather, the aim is to identify the most promising areas that will move dissemination and implementation (D&I) science forward most quickly given the current state of the research and funding opportunities. Topics covered include terminology and theory, methods and measurement, strategies and populations, partnerships, and training and support.