ORIGINAL PAPER
The Quality Implementation Framework: A Synthesis of Critical
Steps in the Implementation Process
Duncan C. Meyers · Joseph A. Durlak · Abraham Wandersman
© Society for Community Research and Action 2012
Abstract Implementation science is growing in impor-
tance among funders, researchers, and practitioners as an
approach to bridging the gap between science and practice.
We addressed three goals to contribute to the understand-
ing of the complex and dynamic nature of implementation.
Our first goal was to provide a conceptual overview of the
process of implementation by synthesizing information
from 25 implementation frameworks. The synthesis
extends prior work by focusing on specific actions (i.e., the
“how to”) that can be employed to foster high quality
implementation. The synthesis identified 14 critical steps
that were used to construct the Quality Implementation
Framework (QIF). These steps comprise four QIF phases:
Initial Considerations Regarding the Host Setting, Creating
a Structure for Implementation, Ongoing Structure Once
Implementation Begins, and Improving Future Applica-
tions. Our second goal was to summarize research support
for each of the 14 QIF steps and to offer suggestions to
direct future research efforts. Our third goal was to outline
practical implications of our findings for improving future
implementation efforts in the world of practice. The QIF’s
critical steps can serve as a useful blueprint for future
research and practice. Applying the collective guidance
synthesized by the QIF to the Interactive Systems Frame-
work for Dissemination and Implementation (ISF)
emphasizes that accountability for quality implementation
does not rest with the practitioner Delivery System alone.
Instead, all three ISF systems are mutually accountable for
quality implementation.
Keywords Implementation · Knowledge utilization · Implementation framework · Implementation science
Introduction
Numerous reviews have investigated the process of imple-
mentation and have advanced our understanding of how it
unfolds (e.g., Fixsen et al. 2005; Greenhalgh et al. 2004; Hall
and Hord 2006; Rogers 2003). We now have a growing body
of: (1) evidence which clearly indicates that implementation
influences desired outcomes (e.g., Aarons et al. 2009;
DuBois et al. 2002; Durlak and DuPre 2008; Smith et al.
2004; Tobler 1986; Wilson et al. 2003) and (2) several
frameworks that provide an overview of ideas and practices
that shape the complex implementation process (e.g.,
Damschroder et al. 2009; Greenberg et al. 2005). In recog-
nition of its critical importance, various professional groups
have determined that one of the criteria related to identifying
evidence-based interventions should involve documentation
of effective implementation (e.g., Society for Prevention
Research, Division 16 of the American Psychological
Association). In addition, various funders are emphasizing
implementation research and making more funds available
to address implementation in research proposals (e.g., The
William T. Grant Foundation, National Cancer Institute,
National Institute of Mental Health).
Prominent research agencies have intensified their role
in the advancement of implementation science. For
example, the National Institutes of Health (NIH) has an
initiative that involves 13 of its 27 Institutes and the Office
of Behavioral and Social Sciences Research in funding
D. C. Meyers (corresponding author) · A. Wandersman
University of South Carolina, Columbia, SC, USA
e-mail: meyersd@mailbox.sc.edu

J. A. Durlak
Loyola University Chicago, Chicago, IL, USA

Am J Community Psychol. DOI 10.1007/s10464-012-9522-x
research to identify, develop, and refine effective methods
for disseminating and implementing effective treatments
(NIH 2011). The Centers for Disease Control and Pre-
vention (CDC) is currently playing a key role in improving
the quality and efficiency of a global public health initia-
tive through addressing operational questions related to
program implementation within existing and developing
health systems infrastructures (CDC 2010). In the United
Kingdom, the National Health Service has established the
National Institute for Health Research (NIHR), which aims
to use research to improve national health outcomes. The
NIHR has built infrastructure through the creation of
Collaborations for Leadership in Applied Health Research
and Care (CLAHRC), which investigate methods of trans-
lating implementation research evidence to practice (Baker
et al. 2009).
These recent developments have been described as
“stepping stones” that reflect the beginnings of an orga-
nized and resourced approach to bridging research and
practice (Proctor et al. 2009). New developments bring
new ideas, and these ideas have found their way into recent
dissemination- and implementation-related frameworks.
For example, the Interactive Systems Framework for Dis-
semination and Implementation (ISF) recognized that
quality implementation is a critical aspect of widespread
successful innovation (Wandersman et al. 2008). While the
original special issue on the ISF (American Journal of
Community Psychology 2008) recognized the importance
of implementation, it provided relatively little detail on
implementation frameworks per se (with the notable
exception of the review on implementation performed by
Durlak and DuPre 2008). In this article, we were motivated
to incorporate implementation research and related con-
cepts into the ISF to a greater degree, which, in turn, can
contribute to the field of implementation science. Given the
growing recognition of the importance of implementation,
its quickly expanding evidence base, and the numerous
implementation frameworks that are emerging, we sought
to increase understanding of the critical steps of the
implementation process by undertaking a conceptual syn-
thesis of relevant literature.
Implementation and the Interactive Systems
Framework
The ISF (Wandersman et al. 2008) is a framework that
describes the systems and processes involved in moving
from research development and testing of innovations to
their widespread use. It has a practical focus on infra-
structure, innovation capacities, and three systems needed
to carry out the functions necessary for dissemination and
implementation (Synthesis and Translation System,
Support System, Delivery System). The role of the Syn-
thesis and Translation System is to distill theory and evi-
dence and translate this knowledge into user-friendly
innovations (an idea, practice, or object that is perceived as
new by an individual or an organization/community
(Rogers 2003)). To increase the user-friendliness of these
innovations, this system may create manuals, guides,
worksheets, or other tools to aid in the dissemination of the
innovation. This system may strive to develop evidence-
based strategies for implementing a given innovation in
diverse contexts (e.g., Mazzucchelli and Sanders 2010;
Schoenwald 2008). Worthwhile innovations developed by
the Synthesis and Translation System need to be put into
practice, and actual use of these innovations is accom-
plished primarily by the Delivery System.
The Delivery System comprises the individuals,
organizations, and communities that can carry out activities
that use the innovations that the Synthesis and Translation
System develops. Implementation in the Delivery System is sup-
ported by the Support System. To increase the likelihood that
innovation use will lead to desired outcomes, the Support
System works directly with the members of the Delivery
System to help them implement with quality. The Support
System does this by building two types of capacities through
training, technical assistance, and/or monitoring progress:
(1) innovation-specific capacity—the necessary knowledge,
skills, and motivation that are required for effective use of
the innovation; and (2) general capacity—effective struc-
tural and functional factors (e.g., infrastructure, aspects of
overall organizational functioning such as effective com-
munication and establishing relationships with key com-
munity partners) (Flaspohler et al. 2008b).
The three systems in the ISF are linked by
bi-directional relationships. The stakeholders in each sys-
tem (e.g., funders, practitioners, trainers, and researchers)
should communicate and collaborate to achieve desired
outcomes. In the original ISF special issue, there was an
emphasis on building capacity for quality implementation
(e.g., Chinman et al. 2008; Fagan et al. 2008). This article
seeks to enhance the ISF’s emphasis on implementation
using a synthesis of implementation frameworks to further
inform the types of structures and functions that are
important for quality implementation per se. More specif-
ically, this collective guidance can be applied to the ISF
systems by creating more explicit links (both within and
between systems) that detail specific actions that can be
used to collaboratively foster high quality implementation.
Overview of the Article
This article has conceptual, empirical research, and prac-
tical goals. Our first goal was to provide a conceptual
overview of the implementation process through a syn-
thesis of the literature. The literature synthesis was
designed to develop a new implementation meta-frame-
work which we call the Quality Implementation Frame-
work (QIF). The QIF identifies the critical steps in the
implementation process along with specific actions related
to these steps that can be utilized to achieve quality
implementation.
Our research goal was to summarize the research sup-
port that exists for the different steps in the newly-devel-
oped QIF and to offer some suggestions for future research
efforts. Our practical goal was to outline the practical
implications of our findings in terms of improving future
implementation efforts in the world of practice.
Progress toward these goals will enhance theory related
to implementation research and practice. Theoretical con-
tributions will also be applied to the ISF, since the
framework synthesis will identify actions and strategies
that the three “mutually accountable” ISF systems can
employ to collaboratively foster quality implementation.
Wandersman and Florin (2003) discussed the importance
of interactive accountability in which funders, researchers/
evaluators, and practitioners are mutually accountable and
work together to help each other achieve results. The ISF
helps operationalize how these stakeholders can work
together. When collaborating for quality implementation,
these systems should strive to increase the likelihood that
the necessary standards of the innovation (e.g., active
ingredients, core components, critical features, essential
elements) are met and that the innovation’s desired out-
comes are achieved.
We hypothesized that our literature synthesis would
yield convergent evidence regarding many of the important
steps associated with quality implementation. Our frame-
work review differs from other recent framework reviews,
since we focus on literature relating specifically to the
“how-to” of implementation (i.e., specific procedures and
strategies). Systematically identifying these action-oriented
steps can serve as practical guidance related to specific
tasks to include in the planning and/or execution of
implementation efforts. Another difference is that we
sought to develop a framework that spans multiple research
and practice areas as opposed to focusing on a specific field
such as healthcare (e.g., Damschroder et al. 2009; Green-
halgh et al. 2004). We believed our explicit focus on spe-
cific steps and strategies that can be used to operationalize
‘how to’’ implement would make a useful contribution to
the literature.
In the following section, we provide a brief overview of
prior implementation research that places implementation
in context, discuss issues related to terminology, and
describe prior work depicting the implementation process.
We then describe our literature synthesis and apply its
results to the advancement of the ISF and implementation
theory and practice.
Brief Overview of Implementation Research
In many fields, such as education, health care, mental
health treatment, and prevention and promotion, program
evaluations did not historically include any mention or
systematic study of implementation (Durlak and DuPre
2008). However, beginning in the 1980s, many empirical
studies began appearing that indicated how important
quality implementation was to intended outcomes (e.g.,
Abbott et al. 1998; Basch et al. 1985; Gottfredson et al.
1993; Grimshaw and Russell 1993; Tobler 1986).
As research on implementation evolved, so did our
understanding of its complexity. For example, authors have
identified eight different aspects of implementation, such as
fidelity, dosage, and program differentiation, and at least 23
personal, organizational, or community factors that affect
one or more aspects of implementation (Dane and
Schneider 1998; Durlak and DuPre 2008). Because
implementation often involves studying innovations in real
world contexts, rigorous experimental designs encom-
passing all of the possible influential variables are impos-
sible to execute. Individual or multiple case studies have
been the primary vehicle for learning about factors that
affect the implementation process, yet the methodological
rigor and generalizability of these reports vary. Never-
theless, there has been a steady improvement in the number
and quality of studies investigating implementation, and
there are now more carefully done quantitative and quali-
tative reports that shed light on the implementation process
(e.g., Domitrovich et al. 2010; Fagan et al. 2008; Saunders
et al. 2006; Walker and Koroloff 2007).
Although there is extensive empirical evidence on the
importance of implementation and a growing literature on
the multiple contextual factors that can influence imple-
mentation (e.g., Aarons et al. 2011; Domitrovich et al.
2008), there is a need to know how to increase the
likelihood of quality implementation. Can a systematic,
comprehensive overview of implementation be developed?
If so, what would be its major elements? Could specific
steps be identified to aid future research and practice on
implementation? Our review helps to address these ques-
tions and focuses on issues related to high quality
implementation.
Context
Using Rogers’ (2003) classic model, implementation is one
of five crucial stages in the wide-scale diffusion of inno-
vations: (1) dissemination (conveying information about
the existence of an innovation to potentially interested
parties), (2) adoption (an explicit decision by a local unit or
organization to try the innovation), (3) implementation
(executing the innovation effectively when it is put in
place), (4) evaluation (assessing how well the innovation
achieved its intended goals), and (5) institutionalization
(the unit incorporates the innovation into its continuing
practices). While there can be overlap among Rogers’
stages, our discussion of implementation assumes that the
first two stages (dissemination of information and explicit
adoption) have already occurred.
Terminology
There has yet to be a standardized language for describing
and assessing implementation. For example, the extent to
which an innovation that is put into practice corresponds to
the originally intended innovation has been called fidelity,
compliance, integrity, or faithful replication. Our focus is
on quality implementation—which we define as putting an
innovation into practice in such a way that it meets the
necessary standards to achieve the innovation’s desired
outcomes (Meyers et al. 2012). This definition is consistent
with how the International Organization for Standardiza-
tion (ISO) views quality as a set of features and charac-
teristics of a product or service that bear on its ability to
satisfy stated or implied needs (ISO/IEC 1998). Imple-
mentation is not an all-or-none construct, but exists in
degrees. For example, one may eventually judge that the
execution of some innovations was of low quality, medium
quality, or high quality (e.g., Saunders et al. 2006). This
article focuses on issues related to high quality
implementation.
Implementation Frameworks
Implementation scholars have made gains in describing the
process of implementation. These efforts have taken dif-
ferent forms. Sometimes, they are descriptions of the major
steps involved in implementation and at other times they
are more refined conceptual frameworks based on research
literature and practical experiences (e.g., theoretical
frameworks, conceptual models). Miles and Huberman
(1994) define a conceptual framework as a representation
of a given phenomenon that “explains, either graphically or
in narrative form, the main things to be studied—the key
factors, concepts, or variables” (p. 18) that comprise the
phenomenon. Conceptual frameworks organize a set of
coherent ideas or concepts in a manner that makes them
easy to communicate to others. Often, the structure and
overall coherence of frameworks are “built” and borrow
elements from elsewhere (Maxwell 2005).
Implementation frameworks have been described as
windows into the key attributes, facilitators, and challenges
related to promoting implementation (Flaspohler et al.
2008a). They provide an overview of ideas and practices
that shape the complex implementation process and can
help researchers and practitioners use the ideas of others
who have implemented similar projects. Some frameworks
are able to provide practical guidance by describing spe-
cific steps to include in the planning and/or execution of
implementation efforts, as well as mistakes that should be
avoided.
Toward a Synthesis of Implementation Frameworks:
A Review of Implementation Frameworks
In this section, we describe our work on our conceptual
goal. We use the term implementation framework to
describe reports that focus on the “how-to” of implemen-
tation; that is, sources that offer details on the specific
procedures and strategies that various authors believe are
important for quality implementation. By synthesizing
these frameworks, we are able to cross-walk the critical
themes from the available literature to suggest actions that
practitioners and those who work with them can employ to
ensure quality implementation.
Inclusion Criteria and Literature Search Procedures
To be included in our review of implementation frame-
works, a document about implementation had to meet two
main criteria: (1) contain a framework that describes the
main actions and strategies believed to constitute an
effective implementation process related to using innova-
tions in new settings, and (2) be a published or unpublished
report that appeared in English by the end of June 2011.
The framework could be based on empirical research or be
a theoretical or conceptual analysis of what is important in
implementation based on experience or a literature review.
We placed no restrictions on the content area, population of
interest, or type of innovation being considered; however,
to be retained, the framework needed to focus on specific
details of the implementation process.
Three strategies were used to locate relevant reports: (1)
computer searches of six databases (Business Source Premier,
Dissertation Abstracts, Google Scholar, MEDLINE,
PsycINFO, and Web of Science) using variants of multiple
search terms in various configurations (e.g., “implementation,”
“framework,” “model,” “approach,” and “strategy”),
(2) hand searches over the last 5 years of four
journals that we judged were likely to contain relevant
publications (American Journal of Community Psychology,
American Journal of Evaluation, Implementation Science,
Prevention Science), and (3) inspection of the reference
lists of each relevant report and review of implementation
research (e.g., Durlak and DuPre 2008; Fixsen et al. 2005;
Greenhalgh et al. 2004).
We did not include reports about implementation based
on a single implementation trial (e.g., Chakravorty 2009),
articles with implementation frameworks that have not
been cited more than once in the literature (e.g., Chinow-
sky 2008; Spence and Henderson-Smart 2011), articles that
focus on contextual factors that can influence implemen-
tation (e.g., Aarons et al. 2011; Domitrovich et al. 2008),
articles that focus more on fidelity (i.e., adherence, integ-
rity) and less on the implementation process as a whole
(e.g., Bellg et al. 2004), articles that do not contain an
implementation framework (e.g., Chorpita et al. 2002),
articles that focus on a framework that is redundant with
another source, or articles that do not put enough focus on
the process of implementation and instead focus on a more
expansive process (e.g., Simpson 2002). Instead, we only
included reports in which authors attempted to offer a
framework for implementation that was intended to be
applied generally across one or more areas of research or
practice, that has been utilized over extended periods of time,
and that has been cited more than once in the literature (e.g.,
Kilbourne et al. 2007; Klein and Sorra 1996). Figure 1 is a
flow diagram depicting our study selection for the imple-
mentation framework synthesis. The diagram was created
in light of reporting guidance from the Preferred Reporting
Items for Systematic Reviews and Meta-Analyses (PRISMA;
Liberati et al. 2009).
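To make the screening logic concrete, the exclusion rules above can be read as an ordered filter. The following sketch is purely illustrative; the record fields and function are our own hypothetical encoding, not part of the review protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateReport:
    """Hypothetical record for one screened source (illustrative only)."""
    focuses_on_implementation_process: bool
    contains_framework: bool
    focuses_on_contextual_factors: bool
    based_on_single_case_study: bool
    redundant_with_sampled_framework: bool
    focuses_on_fidelity: bool
    times_cited: int

def exclusion_reason(r: CandidateReport) -> Optional[str]:
    """Return the first applicable exclusion reason, or None if retained."""
    if not r.focuses_on_implementation_process:
        return "did not focus on the process of implementation"
    if not r.contains_framework:
        return "did not contain a framework"
    if r.focuses_on_contextual_factors:
        return "focused on contextual factors that impact implementation"
    if r.based_on_single_case_study:
        return "framework based on a single case study"
    if r.redundant_with_sampled_framework:
        return "redundant with a framework already in the sample"
    if r.focuses_on_fidelity:
        return "focused on fidelity rather than the implementation process"
    if r.times_cited <= 1:
        return "not cited more than once"
    return None  # retained for the synthesis
```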
Once the sample of frameworks was established, we
examined each one and distilled what appeared to be dis-
tinct critical steps for quality implementation, and we
identified specific actions and strategies associated with
each step. We then created broad categories to group
similar steps and actions from the different frameworks to
depict what appears to constitute quality implementation
from beginning to end. Although authors used different
terminology in many cases, the activities they described
greatly assisted the categorization process. Few issues
arose in placing elements in categories, and these were
resolved through discussion among the authors.
Results
A total of 25 frameworks contained in 27 different sources
were retained for the current synthesis. Two sources each
were used for the Communities That Care and the PROS-
PER frameworks, since combining these sources provided
a more elaborate description of the main steps and actions
of each framework. All the sources are listed in Table 1,
which also describes how each framework was based on a
particular literature area, target population, and type of
innovation.
Most of the 25 frameworks were based on the imple-
mentation of evidence-based programs via community-
based planning approaches (n=6) or health care delivery
(n=5), while others are specifically related to prevention/
promotion (n=4), evidence-based programs and/or
treatments (n=3), specific to school-based innovations
(n=3), implementing non-specific innovations in organizations
(n=2), or are related to management (n=2).
Most of the evidence-based programs/treatments targeted
children and adolescents. Many of the health care
innovations were related to integrating different aspects of
evidence-based medicine into routine practice.

Fig. 1 Flow diagram of selected sources for the implementation framework synthesis: reports initially screened (n = 1945); excluded as not applicable (n = 1807); inspected in detail for inclusion (n = 152); excluded on detailed inspection (n = 125), for the following reasons: source did not focus on the process of implementation (n = 49), source did not contain a framework (n = 43), source focused on contextual factors that impact implementation (n = 11), framework based on a single case study (n = 8), framework redundant with one already in our sample (n = 6), source focused on fidelity of implementation (n = 6), source not cited more than once (n = 2); included (n = 27 sources). While there were a total of 27 sources used to comprise our sample, only 25 frameworks were described in these sources (two additional sources were retained to allow for a greater level of detail for the Communities That Care framework and the PROSPER framework)
The synthesis of the critical steps associated with quality
implementation is summarized in Table 2. Table 3 contains
important questions to answer at each step and the overall
frequency with which each step was included in the sampled frameworks.
Table 1 Sources for implementation frameworks included in the review (source | primary literature areas examined as basis for framework | target population)

CASEL (2011) | School-based social and emotional learning | Children and adolescents
Chinman et al. (2004)—GTO | Community-based substance abuse prevention planning | Children and adolescents
Damschroder et al. (2009)—CFIR | Evidence-based health care | Not specified
Durlak and DuPre (2008) | Prevention and health promotion programs | Children and adolescents
Feldstein and Glasgow (2008)—PRISM | Evidence-based health care | Not specified
Fixsen et al. (2005) | Implementation of evidence-based practices including human services (e.g., mental health, social services, juvenile justice, education, employment services, substance abuse prevention and treatment), agriculture, business, engineering, medicine, manufacturing, and marketing | Not specified
Glisson and Schoenwald (2005)—ARC | Evidence-based treatments | Children, adolescents, and their families
Greenberg et al. (2005) | School-based preventive and mental health promotion interventions | Children and adolescents
Greenhalgh et al. (2004) | Health care | Not specified
Guldbrandsson (2008) | Health promotion and disease prevention | Not specified
Hall and Hord (2006) | School-based innovations | Children and adolescents
Hawkins et al. (2002)—CTC; Mihalic et al. (2004)—Blueprints | Evidence-based violence and drug prevention programs | Children and adolescents
Kilbourne et al. (2007)—REP | Community-based behavioral and treatment interventions for HIV | Not specified
Klein and Sorra (1996) | Management | Organizational managers
Okumus (2003) | Management | Organizational managers
PfS (2003) | Community-based prevention planning | Children and adolescents
Rogers (2003) | Diffusion of innovations in organizations | Not specified
Rycroft-Malone (2004)—PARIHS | Evidence-based health care | Not specified
Spoth et al. (2004); Spoth and Greenberg (2005)—PROSPER | Population-based youth development and reduction of youth problem behaviors (e.g., substance use, violence, and other conduct problems) | Children and adolescents
Sandler et al. (2005) | Community-based prevention services | Children and adolescents
Stetler et al. (2008)—QUERI | Evidence-based health care | United States Veterans
Stith et al. (2006) | Community-based programs for violence prevention and substance abuse prevention | Children and adolescents
Van de Ven et al. (1989) | Technological innovations | Organizational managers and stakeholders
Walker and Koroloff (2007) | Comprehensive, individualized, family-driven mental health services | Children, adolescents, and their families
Wandersman et al. (2008)—ISF | Injury and violence prevention | Children and adolescents

Abbreviations: ARC Availability, Responsiveness, Continuity community intervention model; Blueprints Blueprints for Violence Prevention; CASEL Collaborative for Academic, Social, and Emotional Learning; CFIR Consolidated Framework for Implementation Research; CTC Communities That Care; GTO Getting To Outcomes; PfS Partnerships for Success; ISF Interactive Systems Framework; PARIHS Promoting Action on Research Implementation in Health Services; PRISM Practical, Robust Implementation and Sustainability Model; PROSPER PROmoting School/Community-University Partnerships to Enhance Resilience; QUERI Quality Enhancement Research Initiative; REP Replicating Effective Programs
We call the results of our synthesis the Quality
Implementation Framework (QIF) because it focuses on
important elements (critical steps and actions) believed to
constitute quality implementation. Four important findings
emerged from our synthesis: (1) it was possible to identify 14
distinct steps comprising quality implementation; (2) these
steps could be logically divided into four temporal phases;
(3) there was considerable agreement among the various
sources on many of these steps; and (4) the overall con-
ceptualization of implementation that emerged suggests that
quality implementation is a systematic process that involves
a coordinated series of related elements. These findings offer
a useful blueprint for future research and practice.
For example, the information in Table 3 indicates that
quality implementation can be viewed conceptually as a
systematic, step-by-step, four-phase sequence that contains
over one dozen steps. Most of these steps (10 of the 14)
should be addressed before implementation begins, and
they suggest that quality implementation is best achieved
through a combination of multiple activities that include
assessment, negotiation and collaboration, organized
planning and structuring, and, finally, personal reflection
and critical analysis.
The four-phase conceptualization that appears in Table 3
suggests when and where to focus one’s attention in order
to achieve quality implementation. The first phase, Initial
Considerations Regarding the Host Setting, contains eight
critical steps and focuses on the host setting. Activities in
this phase involve various assessment strategies related to
organizational needs, innovation-organizational fit, and a
capacity or readiness assessment. Each implementation
effort also raises the critical question regarding if and how
the innovation should be adapted to fit the host setting. In
other words, work in the first phase of implementation
focuses primarily on the ecological fit between the inno-
vation and the host setting.
Although it is not noted in Table 3, a clear explanation
and definition of the specified standards for implementation
(e.g., active ingredients, core components, critical features,
or essential elements) should be agreed on by all involved
parties. Therefore, decisions about whether any adaptations
are to be made should occur before explicit buy-in for the
innovation is obtained so all stakeholders understand what
the innovation consists of and what using it entails. If the
core components of the innovation are clearly known,
many of the framework authors emphasized that any
adaptations should preserve these components to maintain
the integrity of the innovation.
An emerging strategy for adaptation calls upon inno-
vation developers and researchers to identify which com-
ponents of innovations can be adapted. Unless practitioners
have a deep understanding of effective implementation and
program theory, they need support and guidance when
adapting innovations to new contexts and populations.
Such support must rely on the local knowledge that these
practitioners have about the setting that hosts the innova-
tion. Multiple frameworks in this review state that inno-
vation developers should provide a foundation for
adaptations by identifying what can be modified (e.g.,
surface structure modifications that are intended to boost
engagement and retention) and what should never be
modified (e.g., an innovation’s core components) as part of
their dissemination strategy. Approaches have been
developed to help resolve the tension between the need for
fidelity and adaptation (e.g., Lee et al. 2008), and such
guidance can foster adherence to an innovation’s protocol
for use while also enhancing its fit and relevance to the
organization/community (Forehand et al. 2010).
Table 2 Summary of the four implementation phases and 14 critical
steps in the Quality Implementation Framework that are associated
with quality implementation
Phase One: Initial considerations regarding the host setting
Assessment strategies
1. Conducting a needs and resources assessment
2. Conducting a fit assessment
3. Conducting a capacity/readiness assessment
Decisions about adaptation
4. Possibility for adaptation
Capacity-building strategies
5. Obtaining explicit buy-in from critical stakeholders and
fostering a supportive community/organizational climate
6. Building general/organizational capacity
7. Staff recruitment/maintenance
8. Effective pre-innovation staff training
Phase Two: Creating a structure for implementation
Structural features for implementation
9. Creating implementation teams
10. Developing an implementation plan
Phase Three: Ongoing structure once implementation begins
Ongoing implementation support strategies
11. Technical assistance/coaching/supervision
12. Process evaluation
13. Supportive feedback mechanism
Phase Four: Improving future applications
14. Learning from experience
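As a minimal sketch of how the QIF could be operationalized in practice, for example as a planning checklist in a simple software tool, the phases and steps above can be encoded as an ordered structure. The encoding below is our own hypothetical illustration, not a tool described by any reviewed framework:

```python
# A minimal sketch: the QIF's four phases and 14 critical steps encoded as
# an ordered checklist (illustrative only; not a published tool).
QIF = {
    "Phase One: Initial considerations regarding the host setting": [
        "1. Conducting a needs and resources assessment",
        "2. Conducting a fit assessment",
        "3. Conducting a capacity/readiness assessment",
        "4. Possibility for adaptation",
        "5. Obtaining explicit buy-in and fostering a supportive climate",
        "6. Building general/organizational capacity",
        "7. Staff recruitment/maintenance",
        "8. Effective pre-innovation staff training",
    ],
    "Phase Two: Creating a structure for implementation": [
        "9. Creating implementation teams",
        "10. Developing an implementation plan",
    ],
    "Phase Three: Ongoing structure once implementation begins": [
        "11. Technical assistance/coaching/supervision",
        "12. Process evaluation",
        "13. Supportive feedback mechanism",
    ],
    "Phase Four: Improving future applications": [
        "14. Learning from experience",
    ],
}

def print_checklist(qif: dict) -> None:
    """Render the phases and steps as a simple planning checklist."""
    for phase, steps in qif.items():
        print(phase)
        for step in steps:
            print(f"  [ ] {step}")

if __name__ == "__main__":
    print_checklist(QIF)
```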
Table 3 Critical steps in implementation, important questions to answer at each step in the Quality Implementation Framework, and the
frequency with which each step was included in the 25 reviewed frameworks
Phases and steps of the quality implementation framework Frequency
Phase one: Initial considerations regarding the host setting
Assessment strategies
1. Conducting a needs and resources assessment:
Why are we doing this?
What problems or conditions will the innovation address (i.e., the need for the innovation)?
What part(s) of the organization and who in the organization will benefit from improvement efforts?
14 (56 %)
2. Conducting a fit assessment:
Does the innovation fit the setting?
How well does the innovation match the:
Identified needs of the organization/community?
Organization’s mission, priorities, values, and strategy for growth?
Cultural preferences of groups/consumers who participate in activities/services provided by the organization/community?
14 (56 %)
3. Conducting a capacity/readiness assessment:
Are we ready for this?
To what degree does the organization/community have the will and the means (i.e., adequate resources, skills and motivation) to
implement the innovation?
Is the organization/community ready for change?
11 (44 %)
Decisions about adaptation
4. Possibility for adaptation
Should the planned innovation be modified in any way to fit the host setting and target group?
What feedback can the host staff offer regarding how the proposed innovation needs to be changed to make it successful in a new
setting and for its intended audience?
How will changes to the innovation be documented and monitored during implementation?
19 (76 %)
Capacity Building Strategies (may be optional depending on the results of previous elements)
5. Obtaining explicit buy-in from critical stakeholders and fostering a supportive community/organizational climate:
Do we have genuine and explicit buy-in for this innovation from:
Leadership with decision-making power in the organization/community?
From front-line staff who will deliver the innovation?
The local community (if applicable)?
Have we effectively dealt with important concerns, questions, or resistance to this innovation? What possible barriers to
implementation need to be lessened or removed?
Can we identify and recruit an innovation champion(s)?
Are there one or more individuals who can inspire and lead others to implement the innovation and its associated practices?
How can the organization/community assist the champion in the effort to foster and maintain buy-in for change?
23 (92 %)
Note. Fostering a supportive climate is also important after implementation begins and can be maintained or enhanced through such strategies as
organizational policies favoring the innovation and providing incentives for use and disincentives for non-use of the innovation
6. Building general/organizational capacity:
What infrastructure, skills, and motivation of the organization/community need enhancement in order to ensure the innovation will
be implemented with quality?
Of note is that this type of capacity does not directly assist with the implementation of the innovation, but instead enables the
organization to function better in a number of its activities (e.g., improved communication within the organization and/or with
other agencies; enhanced partnerships and linkages with other agencies and/or community stakeholders).
15 (60 %)
7. Staff recruitment/maintenance:
Who will implement the innovation?
Initially, those recruited do not necessarily need to have knowledge or expertise related to use of the innovation; however, they
will ultimately need to build their capacity to use the innovation through training and on-going support
Who will support the practitioners who implement the innovation?
These individuals need expertise related to (a) the innovation, (b) its use, (c) implementation science, and (d) process evaluation
so they can support the implementation effort effectively
Might roles of some existing staff need realignment to ensure that adequate person-power is put towards implementation?
13 (52 %)
In addition, all but two frameworks indicated that steps
should be taken to foster a supportive climate for implementation
and secure buy-in from key leaders and front-line staff in the
organization/community. Some of the specific strategies suggested
in this critical step include: (1) assuring key opinion leaders and
decision-makers are engaged in the implementation process and
perceive that the innovation is needed and will benefit organizational
functioning; (2) aligning the innovation with the setting’s
broader mission and values; (3) identifying policies that
create incentives for innovation use, disincentives for non-
use, and/or reduce barriers to innovation use; and (4)
identifying champions for the innovation who will advo-
cate for its use and support others in using it properly.
Advocates for the innovation should be able to answer
the following questions before proceeding further: How
well does the innovation (either as originally intended or in
a modified format) fit this setting? To what extent does
staff understand what the innovation entails? In what ways
will the innovation address important perceived needs of
the organization? Does staff have a realistic view of what
the innovation may accomplish, and are they ready and
able to sponsor, support, and use the innovation with
quality?
Table 3 continued
Phases and steps of the quality implementation framework Frequency
8. Effective pre-innovation staff training
Can we provide sufficient training to teach the why, what, when, where, and how regarding the intended innovation?
How can we ensure that the training covers the theory, philosophy, values of the innovation, and the skill-based competencies
needed for practitioners to achieve self-efficacy, proficiency, and correct application of the innovation?
22 (88 %)
Phase two: Creating a structure for implementation
Structural features for implementation
9. Creating implementation teams:
Who will have organizational responsibility for implementation?
Can we develop a support team of qualified staff to work with front-line workers who are delivering the innovation?
Can we specify the roles, processes, and responsibilities of these team members?
17 (68 %)
10. Developing an implementation plan:
Can we create a clear plan that includes specific tasks and timelines to enhance accountability during implementation?
What challenges to effective implementation can we foresee that we can address proactively?
13 (52 %)
Phase three: Ongoing structure once implementation begins
Ongoing implementation support strategies
11. Technical assistance/coaching/supervision:
Can we provide the necessary technical assistance to help the organization/community and practitioners deal with the inevitable
practical problems that will develop once the innovation begins?
These problems might involve a need for further training and practice in administering more challenging parts of the innovation,
resolving administrative or scheduling conflicts that arise, acquiring more support or resources, or making some required
changes in the application of the innovation
20 (80 %)
12. Process evaluation
Do we have a plan to evaluate the relative strengths and limitations in the innovation’s implementation as it unfolds over time?
Data are needed on how well different aspects of the innovation are being conducted as well as the performance of different
individuals implementing the innovation
24 (96 %)
13. Supportive feedback mechanism
Is there an effective process through which key findings from process data related to implementation are communicated, discussed,
and acted upon?
How will process data on implementation be shared with all those involved in the innovation (e.g., stakeholders, administrators,
implementation support staff, and front-line practitioners)?
This feedback should be offered in the spirit of providing opportunities for further personal learning and skill development and
organizational growth that leads to quality improvement in implementation
18 (72 %)
Phase four: Improving future applications
14. Learning from experience
What lessons have been learned about implementing this innovation that we can share with others who have an interest in its use?
Researchers and innovation developers can learn how to improve future implementation efforts if they critically reflect on their
experiences and create genuine collaborative relationships with those in the host setting
Collaborative relationships appreciate the perspectives and insights of those in the host setting and create open avenues for
constructive feedback from practitioners on such potentially important matters as: (a) the use, modification, or application of
the innovation; and (b) factors that may have affected the quality of its implementation
7 (28 %)
The second phase of quality implementation, Creating a
Structure for Implementation, suggests that an organized
structure should be developed to oversee the process. At a
minimum, this structure includes having a clear plan for
implementing the innovation and identifying a team of
qualified individuals who will take responsibility for these
issues. Two important questions to answer before this
phase concludes are: (1) Is there a clear plan for what will
happen, and when it should occur; and (2) who will
accomplish the different tasks related to delivering the
innovation and overseeing its implementation?
The work involved in the first two phases is in prepa-
ration for beginning implementation (i.e., planning imple-
mentation). Implementation actually begins in phase three
of our framework: Ongoing Structure Once Implementa-
tion Begins. There are three important tasks in this phase:
(1) providing needed on-going technical assistance to
front-line providers; (2) monitoring on-going implementa-
tion; and (3) creating feedback mechanisms so involved
parties understand how the implementation process is
progressing. Therefore, the corresponding questions that
require answers involve: (1) Do we have a sound plan in
place to provide needed technical assistance? (2) Will we
be able to assess the strengths and limitations that occur
during implementation? (3) Will the feedback system be
rapid, accurate, and specific enough so that successes in
implementation can be recognized and changes to improve
implementation can be made quickly?
The fourth phase, Improving Future Applications, indi-
cates that retrospective analysis and self-reflection coupled
with feedback from the host setting can identify particular
strengths and weaknesses that occurred during implemen-
tation. The primary question is: “What has this effort
taught us about quality implementation?” This phase only
includes one critical step—learning from experience—
which appears because it was implicit in many of the
frameworks and explicit in a few of them. For example,
many authors implied that they learned about implemen-
tation from practical experience and from the feedback
received from host staff. This is understandable because in
the absence of systematic theory and research on imple-
mentation in many fields of inquiry, learning by doing was
the primary initial vehicle for developing knowledge about
implementation. Several authors revised their frameworks
over time by adding elements or modifying earlier notions
about implementation. While there have been instances of
researchers empirically testing their implementation
framework and modifying it based on data (Klein et al.
2001), modifications were often shaped by feedback
received from a host setting about ineffective and effective
strategies, by what others were beginning to report in the
literature, and/or by critical self-reflection on one’s own
efforts. In sum, over time, based on their own or
others’ experiences, both mistakes and successes in the
field coalesced to shape various conceptualizations of what
quality implementation should look like (e.g., Grol and
Jones 2000; Van de Ven et al. 1989).
Convergent Evidence for Specific Elements
Table 4 indicates how many of the 25 reviewed frame-
works included each of the 14 steps. As we hypothesized,
there was substantial agreement about many of the steps.
We did not expect perfect agreement on each critical step
because the individual frameworks appeared at different
times in the history of implementation research, and the
frameworks came from different content areas (health care,
prevention and promotion, mental health treatment, education,
and industry), served different populations (adults or
children), and had different goals (e.g., promotion, treatment,
or increased organizational effectiveness). Never-
theless, there was near universal agreement on the
importance of monitoring implementation (critical step 12;
present in 96 % of the reviewed reports) and strong
agreement on the value of developing buy-in and a sup-
portive organizational climate (critical step 5; 92 %),
training (critical step 8; 88 %), technical assistance (critical
step 11; 80 %), feedback mechanisms (critical step 13;
72 %), the creation of implementation teams (critical step
9; 68 %), and the importance of building organizational
capacity (critical step 6; 60 %). Several other steps were
present in more than half of the frameworks (e.g., critical
steps 1 and 2; assessing the need for the innovation and the
fit of the innovation, respectively).
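The percentages reported here and in Table 3 are simple proportions over the 25 reviewed frameworks. As a minimal illustration of the tally (the counts come from Table 3; the code itself is our own sketch):

```python
# Step-inclusion frequencies are simple proportions over the 25 frameworks.
# Counts below are taken from Table 3; the code is illustrative only.
N_FRAMEWORKS = 25

step_counts = {
    "5. Buy-in; supportive climate": 23,
    "8. Pre-innovation training": 22,
    "11. TA/coaching/supervision": 20,
    "12. Process evaluation": 24,
    "13. Feedback mechanism": 18,
}

for step, count in step_counts.items():
    pct = 100 * count / N_FRAMEWORKS
    print(f"{step}: {count}/{N_FRAMEWORKS} = {pct:.0f} %")
# e.g., "12. Process evaluation: 24/25 = 96 %", matching the text above.
```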
Research Support for Different Elements
Which elements in our framework have received research
support? It is difficult to make exact comparisons between
our synthesis and the findings from specific research
investigations. Some critical steps represent a combination
of behaviors and actions that may address multiple targets
and constructs and that can be applied somewhat differ-
ently across different contexts. Most research on imple-
mentation has not focused on critical steps for quality
implementation as we define them here, but instead on
specific factors that influence the overall success of
implementation such as challenges inherent in the imple-
mentation process (e.g., Aarons et al. 2011) or contextual
factors that influence quality of implementation (e.g.,
Domitrovich et al. 2008). However, several research stud-
ies have examined issues that relate to one or more activ-
ities within the scope of different critical steps.
Given these considerations, with one exception, there is
some support for each of the QIF critical steps. This sup-
port varies in strength and character depending on the step,
and is discussed in several sources (Durlak and DuPre 2008;
Fixsen et al. 2005; Greenhalgh et al. 2004).

Table 4 Steps included in each reviewed framework [a matrix marking with an X, for each of the 25 frameworks listed in Table 1, which of the 14 QIF steps it includes; row totals correspond to the frequencies reported in Table 3]

The strongest support, in terms of the quantity and quality of
empirical studies, exists for the importance of training and
on-going technical assistance (critical steps 8 and 11,
respectively); the evidence indicates that it is the combi-
nation of training and on-going support that enhances
learning outcomes (Miller et al. 2004; Sholomskas et al.
2005). Historically, work on implementation focused only
on training, and it was only later as a result of both research
findings and experiences from the field that the necessary
added value of supportive technical assistance was noted
(e.g., Fixsen et al. 2005; Joyce and Showers 2002).
Using an approach similar to Durlak and DuPre (2008),
we interpreted research support to mean the existence of at
least five reports that generally agree on the importance of
the step. Using this metric indicates that there is research
support for the importance of studying the needs of the host
setting (critical step 1), determining the degree of fit
between the innovation and the setting and target popula-
tion (critical step 2), taking steps to foster a supportive
organizational climate for implementation and having
champions on hand to advocate for the program (critical
step 5), the importance of capacity building (critical step
6), and for monitoring the process of implementation
(critical step 12). There is also both quantitative and
qualitative support for the value of adaptation (critical
step 4).
Support for other elements rests upon conclusions from
the field based mainly on a few individual qualitative case
studies rather than quantitative studies. This refers to the
importance of developing an implementation team and plan
(critical steps 9 and 10), and instituting a feedback system
regarding how well the implementation process is pro-
ceeding (critical step 13). These qualitative investigations
are important because it would be difficult to arrange an
experimental or quasi-experimental study in which these
elements were missing in one program condition but
present in another. Nevertheless, empirical studies have
documented how early monitoring of implementation can
identify those having difficulties, and that subsequent
retraining and assistance can lead to dramatic improve-
ments in implementation (DuFrene et al. 2005; Greenwood
et al. 2003).
Step 7, which involves recruiting staff to deliver the
intervention, does not require research confirmation per se,
but rests on the obvious consideration that someone must
provide the innovation. Most support for the importance of
learning from experience (step 14) is largely implicit and
was inferred from several reports. For example, data from
multi-year interventions indicated how implementation
improves over time (Cook et al. 1999; Elder et al. 1996;
Riley et al. 2001), presumably because authors have seen
the need for and have acted to enhance implementation in
one fashion or another. In other cases, authors recognized
strengths or weaknesses in their implementation efforts—
either in retrospect or as the innovation was being deliv-
ered—that offered important lessons for improving future
trials. There are reports suggesting that better subsequent
implementation might occur through improving
communication among stakeholders (Sobo et al. 2008),
changing aspects of training or technical assistance
(Wandersman et al. 2012), or modifying the innovation
itself to fit the host setting (Blakely et al. 1987; Kerr et al.
1985; McGraw et al. 1996; Mihalic et al. 2004).
Temporal Ordering of Elements
Our synthesis suggests there is a temporal order to the
critical steps of quality implementation. Some steps need
attention prior to the beginning of any innovation (namely,
critical steps 1–10), some are ascendant as implementation
unfolds (critical steps 11–13), and the last element offers
opportunities for learning once the first innovation trial is
complete (critical step 14).
The temporal ordering of implementation steps helps explain
why some innovations may have failed to achieve their
intended effects because of poor implementation. In some
cases, researchers realized only after the fact that they had
not sufficiently addressed one or more steps in the imple-
mentation process. The need to be proactive about possible
implementation barriers is reported by Mihalic et al. (2004)
in their description of the Blueprints for Violence Pre-
vention initiative. They found that lack of staff buy-in
usually resulted in generalized low morale and eventually
led to staff turnover. Moreover, lack of administrative
support was present in every case of failed implementation.
Proactive monitoring systems can be developed to identify
such challenges as they arise during implementation and
provide feedback to stakeholders so they can take action.
An example of a proactive monitoring system’s benefit is
described in Fagan et al. (2008). The proactive system was
developed to ensure high-fidelity prevention program
implementation in the Community Youth Development
Study. In this study, local input was sought on how to
modify the implementation procedures to increase owner-
ship and buy-in. Together, actively fostering this buy-in
and administrative support, providing training and techni-
cal assistance, and developing a proactive monitoring
system helped support 12 communities in replicating pre-
vention programs with high rates of adherence to the pro-
grams’ core components. Therefore, the sequence offered
in Table 2 may assist other practitioners and researchers in preventing future implementation problems if they attend to its critical steps.
The temporal order suggested in Table 2 is not invariant
because implementation is a dynamic process. Quality
implementation does not always occur in the exact
sequence of steps illustrated in Table 2. In some cases,
individuals must revisit some of the steps at a later time
(e.g., if necessary, to gather more support and resources, to
re-train some staff, to re-secure genuine buy-in from crit-
ical stakeholders). In other cases, some steps might be
skipped, for example, if evidence exists that the organiza-
tion already has sufficient capacity to conduct the innova-
tion, or if champions are already apparent and have
advocated for the innovation. Furthermore, some steps may
need to be addressed simultaneously because of time,
financial, or administrative pressures. In addition, it may be
more efficient to conduct some steps simultaneously (e.g.,
the self-assessment strategies in Phase 1).
The dynamic nature of the implementation process is
such that some of the phases in Table 2 overlap. For
example, step 5 relates to gaining buy-in and fostering a
climate that is supportive of appropriate use of the inno-
vation. We have included this critical step as part of our
first phase of the QIF, yet our literature review indicated
that this element could also be viewed as part of creating a
supportive structure in the second phase (e.g., enacting
policies that remove barriers to implementation and enable
practitioners to implement an innovation with greater
ease), or in the third phase related to maintaining ongoing
support (e.g., monitoring the enforcement of policies and
evaluating their benefit). We had to make a final decision to
place each step into one of the four phases. To convey the dynamic nature of the phases and critical steps of the QIF, we have provided a figure that illustrates their interplay (see Fig. 2).
Modifications in implementation might be necessary
because of the complexities of the host setting. Context is
always important. Innovations are introduced into settings
for many reasons and via different routes. Organizations/
communities might become involved because of true per-
ceived needs, because of administrative fiat, or as a result
of political or financial pressures. Such entities also have
varied histories in terms of their ability to promote change
and work effectively together. If the above circumstances
are not clarified, it is likely that their importance will not
emerge until after contact with the host organization or
community has been established. As a result, some critical
steps in implementation might have to be prioritized and
periodically revisited to confirm the process is on a suc-
cessful track. Nevertheless, the QIF can serve as a crosswalk, offering guidance in the form of an ordered sequence of activities that should be considered and accomplished to increase the odds of successful implementation.
Discussion
Our findings reflected success in achieving our main con-
ceptual, research, and practical goals. Based on our liter-
ature synthesis, we developed the QIF, which provides a
conceptual overview of the critical steps that comprise the
process of quality implementation. The QIF contains four
temporal phases and 14 distinct steps and offers a useful
blueprint for future research and practice. For example, the
QIF indicates that quality implementation is best achieved
by thinking about the implementation process systemati-
cally as a series of coordinated steps, and that multiple activities, including assessment, collaboration and negotiation, monitoring, and self-reflection, are required to enhance the likelihood that the desired goals of the innovation will be achieved.
Fig. 2 Dynamic interplay among the critical steps of the QIF. The arrows from one phase to the next suggest that the steps in each of the phases should continue to be addressed throughout the implementation process. Steps in each of the phases may need to be strengthened, revisited, or adapted throughout the use of an innovation in an organization/community. While a logical order in which the critical steps unfold was needed to develop a coherent framework, we believe the manner in which they are implemented in practice will depend on many factors (e.g., context, resources, logistical concerns).

Our review of existing frameworks, upon which the QIF is based, differs from previous reviews because its sample of frameworks (1) came from multiple domains (e.g., school-based prevention programs, health care innovations, management) and (2) focused on the "how to"
of implementation (i.e., details on the specific actions and
strategies that authors believe are important). There was
considerable convergence on many elements in the QIF,
which is an important finding. Science frequently advances
through the identification of principles with broad appli-
cability. Our findings suggest that there are similar steps in
the implementation process regardless of the type of
innovation, target population, or desired outcomes, and thus offer guidance to others working in many different
fields. The QIF can assist those interested in incorporating
more evidence-based innovations into everyday practice by
offering guidance on how to approach implementation in a systematic fashion.
Our second goal was to summarize the research support
that exists for the QIF’s critical steps for quality imple-
mentation. While support exists in varying degrees for each
of the synthesized elements of implementation presented
here, there are still many unknowns. The strongest empir-
ical support is for the critical steps related to training and
on-going technical assistance (Wandersman et al. 2012).
These support strategies are often essential to quality
implementation and using both is recommended. Other
steps which have empirical support include assessing the
needs and resources of the host setting when planning for
implementation, assessing how the innovation aligns and
fits with this setting, fostering and maintaining buy-in, and
building organizational capacity. Also, it is apparent that
implementation should always be monitored.
Our findings also suggest implementation-related
research questions that require careful study. Research
questions about the host setting where implementation will
take place (Phase One of the QIF) include: How compre-
hensively should we conduct assessments of organizational
needs and the degree of fit between the innovation and each
setting? Who should provide this information and how can it
be obtained most reliably, validly, and efficiently? Which
dimensions of ‘‘innovation fit’’ (e.g., cultural preferences,
organizational mission and values) are most important?
How do we know whether an innovation fits sufficiently with
the host setting? Questions related to capacity are also rel-
evant, including: How can we best capture the current and
future capacity of host organizations? What criteria should
be used to assess when this capacity is sufficient to mount an
innovation? How can we assess the relative effectiveness of
different training strategies, and how do we measure staff
mastery of required skills before we launch the innovation?
In the first phase of the QIF, we need to better under-
stand the conditions under which adaptations are necessary and
which criteria should be used to make this determination. If
adaptations are planned, they need to be operationalized
and carefully assessed during implementation, or else the
nature of the new innovation is unclear. What are the most
effective methods to ensure we have clear data on adap-
tation and its effects? How do we judge if the adaptation
improved the innovation or lessened its impact? Is it pos-
sible to conduct an experiment in which the relative
influence of the originally intended and adapted forms of
an innovation can be compared?
In Phase Two, we need more information on what forms
of on-going technical assistance are most successful for
different purposes and how we can accurately measure the
impact of this support. In past research, it seems many
authors have assumed that training or on-going technical
assistance leads to uniform mastery among front-line staff;
yet the empirical literature is now clear that substantial
variability in implementation usually occurs among pro-
gram providers (Durlak and DuPre 2008). There is a need
to develop the evidence base for effective training and
technical assistance (Wandersman et al. 2012).
Additional questions about the QIF include: How can it
be applied to learn more about the degree to which its use
improves implementation, the value and specifics of each
critical step, and the connections and interactions among
these steps? Are there important steps missing from the current framework? Should some steps in the framework be revised?
Our third goal was to discuss the practical implications
of our findings. We address these implications by applying the elements of quality implementation from the QIF to the three ISF systems. First, we specify the roles that the systems of the ISF have in ensuring quality implementation. Second, we apply the collective
guidance synthesized via the QIF by making explicit links
between and within these systems, and detail specific
actions that can be used to collaboratively foster high
quality implementation.
In the ISF, innovations are processed by the Synthesis
and Translation System. This system promotes innovations
that can achieve their intended outcomes. The Delivery
System is composed of the end-implementers (practitio-
ners) of innovations; therefore, quality implementation by
the Delivery System is crucial since this is where innova-
tions are used in real-world settings. In order to ensure
quality implementation by the Delivery System, the Sup-
port System provides ongoing assistance to build and
strengthen the necessary capacities for effective innovation
use. In other words, the Support System aims to build and
help maintain an adequate level of capacity in the Delivery
System, and the Delivery System utilizes its capacities to
put the innovation into practice so that outcomes are likely
to be achieved. In this way, the three systems in the ISF are
mutually accountable for quality implementation and need
to work together to make sure it happens.
The QIF can facilitate how these systems work together,
and the Support System can use this framework to help
plan for how it will provide support to the Delivery System
during implementation. For example, in Phase One, the
Support System can facilitate the assessment of key aspects
of the Delivery System’s environment (e.g., needs and
resources, how the innovation ‘‘fits’’ with the setting, and
whether the organization/community is ready to imple-
ment), help identify appropriate adaptations to the inno-
vation (e.g., cultural or other modifications required by
local circumstances, changes in the manner or intensity of
delivery of program components), ensure adequate buy-in
from key leaders and staff members, and provide necessary
training so the innovation is used properly. Given the
interactive nature of this process, there is a need to foster
and maintain positive relationships among these systems
and the QIF can help identify key issues that require
collaboration.
In regard to adaptation, our review indicated that the
Synthesis and Translation System plays a critical role in
deciding whether and how to modify an innovation. Given
that this system is charged with developing user-friendly
evidence-based innovations, several frameworks in our
review indicated that this system is accountable for pro-
viding information relevant to adaptation as a critical
aspect of its dissemination strategy. Such information
guides practitioners in the process of adapting programs to
new contexts; this may include consultation during the initial stages, when planning for implementation is taking place.
Such consultation could be considered part of the innova-
tion itself—an innovation that can be tailored to better fit
within the host setting. This is a much more involved
process than disseminating packaged program materials
(e.g., manuals and other tools) that lack guidance on what
can be adapted and what should never be adapted.
In Phase Two, the QIF indicates that the Delivery and
Support systems should work together to develop a struc-
ture that can support implementation. A key component of
this structure is a team that is accountable for implemen-
tation. An implementation plan should also be created to guide implementation and to anticipate challenges that may be encountered. This plan can be strengthened by
incorporating the Delivery System’s local knowledge of
the host setting with the Support System’s knowledge of
effective support strategies (e.g., effective methods for
technical assistance) and of the innovation.
During Phase Three (when actual implementation tran-
spires), the Support System helps ensure that the Delivery System's implementation is adequately supported. It is fundamental
that sufficient funding be in place during this phase to
ensure that adequate resources are available for innovation
use and support, and this has important policy implications for implementation support. A major mechanism for support is technical assistance, which is
intended to maintain the self-efficacy and skill proficiency
that were developed through training (Durlak and DuPre
2008). The key notion here is that support is on-going,
including monitoring and evaluating the implementation
process: Durlak and DuPre (2008) argue that this is nec-
essary for implementing innovations. If appropriate adap-
tations were identified during Phase One, then the Support
System may assure that monitoring and evaluation activi-
ties are tailored to these adaptations. Then, the Support
System may assess the extent to which the adaptations
impact the implementation process and resulting outcomes.
Other aspects of the process that should be monitored
include the extent to which tasks in the implementation
plan are accomplished in a timely manner, whether prac-
titioners are actually using the innovation (adherence), and performance data related to the quality of innova-
tion delivery. This information can be used by the Support
System to enhance quality assurance and should be fed
back to the Delivery System.
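
As one illustration of how such process data might be recorded and flagged for feedback, consider the minimal sketch below. The field names, cutoffs, and flagging rule mirror the three dimensions just named (timeliness, adherence, delivery quality) but are hypothetical assumptions, not a prescribed monitoring instrument.

```python
# Minimal sketch of a monitoring record covering the three dimensions named
# above. All field names and numeric cutoffs are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MonitoringRecord:
    site: str
    tasks_on_schedule: bool  # implementation-plan tasks completed on time?
    adherence: float         # share of core components delivered (0-1)
    delivery_quality: float  # observer-rated quality of delivery (0-1)

    def needs_followup(self) -> bool:
        """Flag records the Support System should feed back for action."""
        return (not self.tasks_on_schedule
                or self.adherence < 0.80
                or self.delivery_quality < 0.60)


record = MonitoringRecord("community_3", tasks_on_schedule=True,
                          adherence=0.72, delivery_quality=0.81)
print(record.needs_followup())  # True: adherence is below the assumed cutoff
```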
Some researchers are beginning to develop more spe-
cific guidelines on how to monitor the implementation
process. The Collaborative for Academic, Social, and
Emotional Learning (CASEL 2011) has categorized each
of the elements in their implementation framework into one
of five ascending levels. For example, with respect to
availability of human resources, the CASEL guidelines ask change agents to rate whether there is no staff for the program (level one), some staff are present (level two), and so on up through level five, at which formal organizational structures institutionalize adequate human resources, including leadership positions. Such delinea-
tions can help determine where more work is needed for
quality implementation to occur.
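
A rough sketch of how such an ascending-level rubric could be represented for one element appears below. Only levels one, two, and five are described in the text, so the intermediate levels are left as explicit placeholders rather than invented; the helper function is likewise our hypothetical convenience, not part of the CASEL guidelines.

```python
# Illustrative representation of CASEL's (2011) five ascending levels for one
# element (availability of human resources). Levels 3 and 4 are placeholders
# because they are not described in the text excerpted here.

HUMAN_RESOURCES_LEVELS = {
    1: "No staff for the program",
    2: "Some staff are present",
    3: "(intermediate level; not described here)",
    4: "(intermediate level; not described here)",
    5: ("Formal organizational structures institutionalize adequate "
        "human resources, including leadership positions"),
}


def levels_remaining(current_level: int, target_level: int = 5) -> int:
    """How many levels remain before this element is fully in place."""
    return max(0, target_level - current_level)


print(levels_remaining(2))  # 3: substantial capacity-building still needed
```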
During Phase Four, the Support System engages with
the Delivery System to reflect on the implementation pro-
cess. Reflection can illuminate what lessons have been
learned about implementing this innovation that can be
used to improve future applications and can be shared with
others who have similar interests. Researchers and program
developers are encouraged to form genuine collaborative
relationships that appreciate the perspectives and insights
of those in the Delivery System. Constructive feedback from practitioners can offer important insights into the use, modification, or application of the innovation, as well as into factors that may have affected the quality of implementation.
A practical application of our findings was the synthesis
and translation of QIF concepts into a tool that can be used
to guide the implementation process. The tool, called the
Quality Implementation Tool, is described in Meyers et al.
(2012); the article also discusses how this instrument was
applied to foster implementation in two different projects.
Limitations
Although we searched carefully for relevant articles, it is
likely that some reports were overlooked. The different
terminology used among reviewed authors led us to focus
more on the activities they were describing rather than
what the activities were called. For example, some authors' notions of obtaining critical support were used in the same way that others discussed the importance of having local champions, and terminology related to
capacity and capacity-building has yet to achieve universal
acceptance. As a result, we had to make judgments about
how best to categorize the features of different frameworks.
Although our synthesis identified 14 steps related to quality
implementation, it is possible that others might construe
the literature differently and derive fewer or more steps. As
already noted, some steps consist of multiple actions that
might be broken down further into separate, related steps.
The frameworks we reviewed were based on innova-
tions for adults or children—with or without adjustment or
medical problems—in diverse fields such as health care,
mental health, industry, and primary education. Although
there was convergent evidence for many QIF critical steps, it remains to be tested explicitly whether our findings generalize to these diverse fields of study and whether the QIF can be used effectively in all of them to achieve diverse goals. Such investigation can
identify which conditions might affect its application and
whether its critical steps require modifications to suit par-
ticular circumstances.
Another issue is that we included both peer-reviewed
and non-peer-reviewed sources. It could be argued that
peer-reviewed sources have a higher level of rigor when
compared to those which have not been subject to such a
process. In addition, we limited our sample by excluding sources that had not been cited more than once. This introduces a possible time effect, since more recently published sources have had less opportunity to be cited.
Conclusion
Our findings suggest that the implementation process can be
viewed systematically in terms of a temporal series of linked
steps that should be effectively addressed to enhance the
likelihood of quality implementation. Past research indi-
cated that quality implementation is an important element of
any effective innovation, and that many factors may affect
the ultimate level of implementation attained. The current
synthesis and resulting QIF offer a conceptual overview
of the critical steps of quality implementation that can be
used as a guide for future research and practice.
References
Aarons, G. A., Hurlburt, M., & Horwitz, S. M. (2011). Advancing a
conceptual model of evidence-based practice implementation in
public service sectors. Administration and Policy in Mental
Health and Mental Health Services Research, 38, 4–23.
Aarons, G. A., Sommerfeld, D., Hecht, D. B., Silovsky, J. F., &
Chaffin, M. J. (2009). The impact of evidence-based practice
implementation and fidelity monitoring on staff turnover:
Evidence for a protective effect. Journal of Consulting and
Clinical Psychology, 77, 270–280.
Abbott, R. D., O’Donnell, J., Hawkins, J. D., Hill, K. G., Kosterman,
R., & Catalano, R. F. (1998). Changing teaching practices to
promote achievement and bonding to school. American Journal
of Orthopsychiatry, 68, 542–552.
Baker, R., Robertson, N., Rogers, S., Davies, M., Brunskill, N., &
Sinfield, P. (2009). The National Institute of Health Research
(NIHR) Collaboration for Leadership in Applied Health
Research and Care (CLAHRC) for Leicestershire, Northampt-
onshire and Rutland (LNR): A programme protocol. Implemen-
tation Science, 4, 72.
Basch, C. E., Sliepcevich, E. M., Gold, R. S., Duncan, D. F., & Kolbe,
L. J. (1985). Avoiding type III errors in health education
program evaluations: A case study. Health Education Quarterly,
12, 315–331.
Bellg, A. J., Borrelli, B., Resnick, B., Hecht. J., Minicucci, D. S., Ory,
M., et al. (2004). Enhancing treatment fidelity in health behavior
change studies: best practices and recommendations from the
NIH Behavior Change Consortium. Health Psychology, 23,
443–451.
Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson,
W. S., Roitman, D. B., et al. (1987). The fidelity-adaptation
debate: Implications for the implementation of public sector
social programs. American Journal of Community Psychology,
15, 253–268.
Centers for Disease Control and Prevention Global AIDS Program.
(2010, August 9). CDC’s role in PEPFAR and the U.S. Global
Health Initiative. Retrieved from http://www.cdc.gov/globalaids/
support-evidence-based-programming/implementation-science.
html.
Chakravorty, S. S. (2009). Six sigma programs: An implementation
model. International Journal of Production Economics, 119,
1–16.
Chinman, M., Hunter, S. B., Ebener, P., Paddock, S. M., Stillman, L.,
Imm, P., et al. (2008). The Getting To Outcomes demonstration
and evaluation: An illustration of the prevention support system.
American Journal of Community Psychology, 41, 206–224.
Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to
Outcomes 2004: Promoting accountability through methods and
tools for planning, implementation, and evaluation. (No. TR-
TR101). Santa Monica, CA: RAND.
Chinowsky, P. S. (2008). Staircase model for new practice imple-
mentation. Journal of Management in Engineering, 24, 187–195.
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A.,
Amundsen, M. J., McGee, C., et al. (2002). Toward large-scale
implementation of empirically supported treatments for children: A
review and observations by the Hawaii empirical basis to services
task force. Clinical Psychology: Science and Practice, 9, 165–190.
Collaborative for Academic, Social, and Emotional Learning,
National Center for Mental Health Promotion and Youth
Violence Prevention. (2011). Leading an SEL school: Steps to
implement social and emotional learning for all students. Retrieved May 20, 2011, from Education Development Center.
Cook, T. D., Habib, F. N., Phillips, M., Settersten, R. A., Shagle, S.
C., & Degirmencioglu, S. M. (1999). Comer’s school
development program in Prince George’s county, Maryland: A
theory-based evaluation. American Educational Research Jour-
nal, 36, 543–597.
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander,
J. A., & Lowery, J. C. (2009). Fostering implementation of
health services research findings into practice: A consolidated
framework for advancing implementation science. Implementa-
tion Science, 4, 50.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary
and early secondary prevention: Are implementation effects out
of control? Clinical Psychology Review, 18, 23–45.
Domitrovich, C. E., Bradshaw, C. P., Poduska, J., Hoagwood, K.,
Buckley, J., Olin, S., et al. (2008). Maximizing the implemen-
tation quality of evidence-based preventive interventions in
schools: A conceptual framework. Advances in School Mental
Health Promotion, 1, 6–28.
Domitrovich, C. E., Gest, S. D., Jones, D., Gill, S., & DeRousie, R.
M. S. (2010). Implementation quality: Lessons learned in the
context of the Head Start REDI trial. Early Childhood Research
Quarterly, 25, 284–298.
DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H.
(2002). Effectiveness of mentoring programs for youth: A meta-
analytic review. American Journal of Community Psychology,
30, 157–198.
DuFrene, B. A., Noell, G. H., Gilbertson, D. N., & Duhon, G. J.
(2005). Monitoring implementation of reciprocal peer tutoring:
Identifying and intervening with students who do not main-
tain accurate implementation. School Psychology Review, 34,
74–86.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A
review of research on the influence of implementation on
program outcomes and the factors affecting implementation.
American Journal of Community Psychology, 41, 327–350.
Elder, J. P., Perry, C. L., Stone, E. J., Johnson, C. C., Yang, M.,
Edmundson, E. W., et al. (1996). Tobacco use measurement,
prediction, and intervention in elementary schools in four states:
The CATCH study. Preventive Medicine, 25, 489–494.
Fagan, A. A., Hanson, K., Hawkins, J. D., & Arthur, M. W. (2008).
Bridging science to practice: Achieving prevention program
implementation fidelity in the Community Youth Development
Study. American Journal of Community Psychology, 41, 235–249.
Feldstein, A. C., & Glasgow, R. E. (2008). A practical, robust
implementation and sustainability model (PRISM) for integrat-
ing research findings into practice. Joint Commission Journal on
Quality and Patient Safety/Joint Commission Resources, 34,
228–243.
Fixsen, D. L., Naoom, S. F., Blasé, K. A., Friedman, R. M., &
Wallace, F. (2005). Implementation research: A synthesis of the
literature. Tampa, FL: University of South Florida, Louis de la
Parte Florida Mental Health Institute, The National Implemen-
tation Research Network (FMHI Publication #231). Retrieved
November 1, 2006, from http://nirn.fmhi.usf.edu/resources/
publications/Monograph/pdf/monograph_full.pdf.
Flaspohler, P. D., Anderson-Butcher, D., & Wandersman, A. (2008a).
Supporting implementation of expanded school mental health
services: Application of the Interactive Systems Framework in
Ohio. Advances in School Mental Health Promotion, 1, 38–48.
Flaspohler, P., Duffy, J., Wandersman, A., Stillman, L., & Maras, M.
A. (2008b). Unpacking prevention capacity: An intersection of
research-to-practice models and community-centered models.
American Journal of Community Psychology, 41, 182–196.
Forehand, R., Dorsey, S., Jones, D. J., Long, N., & McMahon, R.
(2010). Adherence and flexibility: They can (and do) coexist!
Clinical Psychology: Science and Practice, 17, 258–264.
Glisson, C., & Schoenwald, S. K. (2005). The ARC organizational
and community intervention strategy for implementing
evidence-based children’s mental health treatments. Mental
Health Services Research, 7, 243–259.
Gottfredson, D. C., Gottfredson, G. D., & Hybl, L. G. (1993).
Managing adolescent behavior: A multiyear, multischool study.
American Educational Research Journal, 30, 179–215.
Greenberg, M. T., Domitrovich, C. E., Graczyk, P. A., & Zins, J. E.
(2005). The study of implementation in school-based preventive
interventions: Theory, research, and practice (Vol. 3). DHHS Pub. No. (SMA). Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou,
O. (2004). Diffusion of innovations in service organizations:
Systematic review and recommendations. Milbank Quarterly,
82, 581–629.
Greenwood, C. R., Tapia, Y., Abbott, M., & Walton, C. (2003). A
building-based case study of evidence-based literacy practices:
Implementation, reading behavior, and growth in reading
fluency, K-4. The Journal of Special Education, 37, 95–110.
Grimshaw, J. M., & Russell, I. T. (1993). Effect of clinical guidelines
on medical practice: A systematic review of rigorous evalua-
tions. The Lancet, 342, 1317–1322.
Grol, R., & Jones, R. (2000). Twenty years of implementation
research. Family Practice, 17, S32–S35.
Guldbrandsson, K. (2008). From news to everyday use: The difficult
art of implementation. Ostersund, Sweden: Swedish National
Institute of Public Health. Retrieved from www.fhi.se/Page
Files/3396/R200809_implementering_eng0805.pdf.
Hall, G. E., & Hord, S. M. (2006). Implementing change: Patterns,
principles and potholes (2nd ed.). Boston, MA: Allyn and Bacon.
Hawkins, J. D., Catalano, R. F., & Arthur, M. W. (2002). Promoting
science-based prevention in communities. Addictive Behaviors,
27, 951–976.
International Organization for Standardization. (1998). ISO/IEC international standard 13236: Information technology – Quality of service: Framework (1st ed.).
Joyce, R. B., & Showers, B. (2002). Student achievement through
staff development (3rd ed.). Alexandria, VA: Association for
Supervision and Curriculum Development.
Kerr, D. M., Kent, L., & Lam, T. C. M. (1985). Measuring program
implementation with a classroom observation instrument: The
interactive teaching map. Evaluation Review, 9, 461–482.
Kilbourne, A. M., Neuman, M. S., Pincus, H. A., Bauer, M. S., &
Stall, R. (2007). Implementing evidence-based interventions in
health care: Applications of the replicating effective programs
framework. Implementation Science, 2, 42.
Klein, K. J., Conn, A., & Sorra, J. (2001). Implementing computer-
ized technology: An organizational analysis. Journal of Applied
Psychology, 86(5), 811–824.
Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation
implementation. Academy of Management Review, 21, 1055–1080.
Lee, S. J., Altschul, I., & Mowbray, C. T. (2008). Using planned
adaptation to implement evidence-based programs with new
populations. American Journal of Community Psychology, 41,
290–303.
Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C.,
Ioannidis, J. P. A., et al. (2009). The PRISMA statement for
reporting systematic reviews and meta-analyses of studies that
evaluate health care interventions: explanation and elaboration.
BMJ, 339, b2700.
Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage.
Mazzucchelli, T. G., & Sanders, M. R. (2010). Facilitating practi-
tioner flexibility within an empirically supported intervention:
Lessons from a system of parenting support. Clinical Psychol-
ogy: Science and Practice, 17, 238–252.
McGraw, S. A., Sellers, D. E., Stone, E. J., Bebchuk, J., Edmundson,
E. W., Johnson, C. C., et al. (1996). Using process data to
explain outcomes: An illustration from the child and adolescent
trial for cardiovascular health (CATCH). Evaluation Review, 20,
291–312.
Meyers, D. C., Katz, J., Chien, V., Wandersman, A., Scaccia, J. P., &
Wright, A. (2012). Practical implementation science: Develop-
ing and piloting the Quality Implementation Tool. American
Journal of Community Psychology. doi:10.1007/s10464-012-
9521-y.
Mihalic, S., Fagan, A. A., Irwin, K., Ballard, D., & Elliott, D. (2004).
Blueprints for violence prevention. Washington, DC: Office of
Juvenile Justice and Delinquency Prevention.
Miles, M., & Huberman, M. (1994). Qualitative data analysis: An
expanded sourcebook (2nd ed.). London: Sage.
Miller, W. R., Yahne, C. E., Moyers, T. B., Martinez, J., & Pirritano,
M. (2004). A randomized trial of methods to help clinicians learn
motivational interviewing. Journal of Consulting and Clinical
Psychology, 72, 1050–1062.
National Institutes of Health. (2011, October 25). Dissemination and
implementation. Retrieved from http://obssr.od.nih.gov/scien
tific_areas/translation/dissemination_and_implementation/index.
aspx.
Okumus, F. (2003). A framework to implement strategies in
organizations. Management Decision, 41(9), 871–882.
Partnerships for Success Community Planning and Implementation
Guide. (2003). www.pfsacademy.org.
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C.,
& Mittman, B. (2009). Implementation research in mental health
services: An emerging science with conceptual, methodological,
and training challenges. Administration and Policy in Mental
Health and Mental Health Services Research, 36, 24–34.
Riley, B. L., Taylor, S. M., & Elliott, S. J. (2001). Determinants of
implementing heart healthy promotion activities in Ontario
public health units: A social ecological perspective. Health
Education Research, 16, 425–441.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York:
Free Press.
Rycroft-Malone, J. (2004). The PARIHS framework: A framework
for guiding the implementation of evidence-based practice.
Journal of Nursing Care Quality, 19, 297–304.
Sandler, I., Ostrom, A., Bitner, M. J., Ayers, T. S., Wolchik, S., &
Daniels, V. S. (2005). Developing effective prevention services
for the real world: A prevention service development model.
American Journal of Community Psychology, 35, 127–142.
Saunders, R. P., Ward, D., Felton, G. M., Dowda, M., & Pate, R. R.
(2006). Examining the link between program implementation
and behavior outcomes in the lifestyle education for activity
program (LEAP). Evaluation and Program Planning, 29,
352–364.
Schoenwald, S. K. (2008). Toward evidence-based transport of
evidence-based treatments: MST as an example. Journal of
Child & Adolescent Substance Abuse, 17, 69–91.
Sholomskas, D. E., Syracuse-Siewert, G., Rounsaville, B. J., Ball, S.
A., Nuro, K. F., & Carroll, K. M. (2005). We don't train in vain:
A dissemination trial of three strategies of training clinicians in
cognitive-behavioral therapy. Journal of Consulting and Clinical
Psychology, 73, 106–115.
Simpson, D. D. (2002). A conceptual framework for transferring
research to practice. Journal of Substance Abuse Treatment, 22,
171–182.
Smith, J. D., Schneider, B. H., Smith, P. K., & Ananiadou, K. (2004).
The effectiveness of whole-school antibullying programs: A
synthesis of evaluation research. School Psychology Review, 33,
547–560.
Sobo, E. J., Bowman, C., Halloran, J., Aarons, G. A., Asch, S., &
Gifford, A. L. (2008). Enhancing organizational change and
improvement prospects: Lessons from an HIV testing interven-
tion for veterans. Human Organization, 67, 443–453.
Spence, K., & Henderson-Smart, D. (2011). Closing the evidence-
practice gap for newborn pain using clinical networks. Journal of
Paediatrics and Child Health, 47, 92–98.
Spoth, R. L., & Greenberg, M. T. (2005). Toward a comprehensive
strategy for effective practitioner-scientist partnerships and
larger-scale community benefits. American Journal of Commu-
nity Psychology, 35, 107–126.
Spoth, R., Greenberg, M., Bierman, K., & Redmond, C. (2004).
PROSPER community-university partnership model for public
education systems: Capacity-building for evidence-based, com-
petence-building prevention. Prevention Science, 5, 31–39.
Stetler, C. B., McQueen, L., Demakis, J., & Mittman, B. S. (2008). An
organizational framework and strategic implementation for
system-level change to enhance research-based practice: QUERI
Series. Implementation Science, 3, 30.
Stith, S., Pruitt, I., Dees, J., Fronce, M., Green, N., Som, A., et al.
(2006). Implementing community-based prevention program-
ming: A review of the literature. The Journal of Primary
Prevention, 27, 599–617.
Tobler, N. S. (1986). Meta-analysis of 143 adolescent drug prevention
programs: Quantitative outcome results of program participants
compared to a control or comparison group. Journal of Drug
Issues, 16, 537–567.
Van de Ven, A. H., Angle, H. L., & Poole, M. S. (1989). Research on
the management of innovation: The Minnesota studies. New
York: Harper and Row.
Walker, J. S., & Koroloff, N. (2007). Grounded theory and backward
mapping: Exploring the implementation context for Wrap-
around. Journal of Behavioral Health Services & Research,
34, 443–458.
Wandersman, A., Chien, V., & Katz. J. (2012). Toward an evidence-
based system for innovation support for implementing innova-
tions with quality: Tools, training, technical assistance, and
quality assurance/quality improvement. American Journal of
Community Psychology. doi:10.1007/s10464-012-9509-7.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K.,
Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008).
Bridging the gap between prevention research and practice: The
Interactive Systems Framework for dissemination and imple-
mentation. American Journal of Community Psychology, 41,
171–181.
Wandersman, A., & Florin, P. (2003). Community interventions and
effective prevention. American Psychologist, 58, 441–448.
Wilson, S. J., Lipsey, M. W., & Derzon, J. H. (2003). The effects of
school-based intervention programs on aggressive behavior: A
meta-analysis. Journal of Consulting and Clinical Psychology,
71, 136–149.
... However, in this case, since the Clayton HOPE model of care was already established, we developed an implementation model of care a posteriori, which we termed the pragmatic framework of implementation [PFI]. To examine the transferability of the PFI, we compared it with two existing implementation frameworks with which it best aligned [23]-the Quality Implementation Framework (QIF) [24] and the Levels of Change [LoC] model [21]. Accordingly, we first categorized the activities undertaken by the project champion into six steps (henceforth referred to as PFI). ...
... Accordingly, we first categorized the activities undertaken by the project champion into six steps (henceforth referred to as PFI). We then aligned the PFI with the Quality Implementation Framework (QIF) [24] which describes or guides the process of translating research into practice [23]. Subsequently, we describe how the factors that lead to successful implementation of Clayton HOPE correspond with the LoC model proposed by Proctor and colleagues [21], who adapted Shortell's framework [25] to propose the different levels at which change is necessary when implementing mental health services [21]. ...
... The main factors that influenced the steps in the implementation were that, it was inspired by consumer needs, that funding was already available, and that it was driven by a clinician manager from the parent mental health service with the backing of the executive. Despite the fact that the implementation framework developed around this service was pragmatic, it appears to align with both the QIF [24] and LoC [21]. There are currently so many theories, models and frameworks applied to implementation science and research that it has become quite a challenge to choose one that fits [23]. ...
Article
Full-text available
Background Suicide prevention strategies are targeted at three levels: the general population (Universal), persons at risk (Selected), and persons who have attempted suicide or have suicidal ideation (Indicated). This study describes the implementation of an innovative indicated suicide prevention service that prioritizes peer and psychosocial support at one of Australia’s largest mental health services. The purpose of this paper is threefold. (1) To describe the process of designing and implementing an innovative indicated suicide prevention service in Melbourne (2) To compare the implementation framework developed around it with other relevant frameworks and (3) To describe its stages of care. Results Based on the activities undertaken by the ‘project champion’ in designing and implementing Clayton HOPE, a pragmatic framework of implementation (PFI) was developed. The PFI included six steps. 1: Determine client needs; 2: Plan the model of care; 3: Determine the workforce and other resource requirements to achieve client needs; 4: Establish the workforce and finalize the team; 5: Facilitate stakeholder buy-in and 6: Regular monitoring and evaluation. The steps of the PFI, fit within the Quality Implementation Framework, albeit in a different sequence, owing to variations in settings, organizational circumstances, and readiness for change. The PFI also enhances the Levels of Change model by including additional requirements. A five-stage model of care was developed and implemented. They are 1: Early engagement and empathetic support (within 24 h of referral); 2: Assessment of psychosocial needs and suicidal risk (within 72 h of referral) 3: Construction of a personal safety plan (within 7 days of referral) 4: Implementation of the personal safety plan and risk management (week 2 - week12) and 5: Discharge and handover to ongoing supports (12 weeks from enrollment). Conclusions The main implications of this work are twofold: (1) The implementation of innovative models of care can be achieved by a ‘project champion’ with the relevant experience, authority and determination when funding is available and (2) Indicated suicide prevention models of care can strike a balance between clinical and non-clinical interventions that are tailored to client needs.
... To succeed with improvements in healthcare, it is important to use strategies to identify barriers and enabling factors within the organization before the implementation process begins [2,3]. Moreover, a systematic approach to implementation that includes the participants' view is recommended to achieve success [3,4]. The conceptual Promoting Action on Research Implementation in Health Services (PARIHS) framework was developed based on the idea that research results are rarely applied clinically, which is often due to barriers in the local setting [5]. ...
... The QIF was used as a guide for organizing the implementation process [4], and a total of 24 FLMs and hygiene representatives were assigned the role as facilitators. To increase the facilitators' knowledge of managing improvement work and implementing the required improvements, five meetings were planned and spread out during the prearranged time for conducting the improvement work. ...
... 3) At my workplace, we always act based on the risks we see. 4) At my workplace, improvements are always made after negative events (a negative event entail something undesirable). The questionnaire was handed out three times, two times before the pandemic occurred and once during the pandemic ( Figure 1). ...
Article
Full-text available
Background and Aims More knowledge about perceptions of implementing new ways of working to prevent organism transmission and create safety engagement in health care are needed. This study aimed to explore managers and hygiene representatives', in the role as facilitators, perceptions of safety engagement and factors of importance when implementing measures to reduce healthcare‐associated infections. Methods Data were collected using both a quantitative and qualitative approach. A total of 24 facilitators were involved in the implementation process (12 managers, and 12 hygiene representatives, all female). The facilitators responded to the Sustainable Safety Engagement Index at three occasions, and 13 of the facilitators participated in open‐ended semi‐structured interviews. Results The results displayed that both internal and external organizational factors affected the implementation process as well as the interactions between individuals within the organization. The Sustainable Safety Engagement Index did not indicate any deviations before and during the implementation process. Conclusion To create a patient safety culture and get healthcare personnel engaged, it is important for healthcare managers to be aware of the complexity of healthcare and adapt organizational factors and specific elements in the caring chain. A systematic implementation approach, and reliable measurements along with use of single or multiple strategies is recommended. Furthermore, dedicated facilitators who creates an environment of support and cooperation between different professions and provides inspiration is crucial to maintain the improvement work. Prevailing behaviors should also be considered when planning and implementing patient safety interventions.
... We piloted a proof-of-concept study [42] and feasibility-initial impact randomized trial [8,10,19,[43][44][45] to examine the impact of the Healthy Mom Zone (HMZ) intervention on GWG. The social cognitive theory-based components [15][16][17] noted above were designed with the Multiphase Optimization Strategy [46] translational science framework [47][48][49][50], and control systems methodology [51][52][53][54][55], with the long-term goal to scale-up use by clinicians as an adjunct treatment to prenatal care in order to regulate GWG. This multiphase approach [46] builds an intervention in a principled manner whereby key constraints expected to impact scalability (eg, implementation feasibility and subject or staff burden) are considered from the start so that the end goal is an optimized (effective and efficient) and scalable intervention that delivers the best possible outcome [46]. ...
... This multiphase approach [46] builds an intervention in a principled manner whereby key constraints expected to impact scalability (eg, implementation feasibility and subject or staff burden) are considered from the start so that the end goal is an optimized (effective and efficient) and scalable intervention that delivers the best possible outcome [46]. Our translational science framework [47][48][49][50] guided by the Quality Implementation Framework [49] and Quality Implementation Tool [50] aligns with the paradigm shift in the literature to prospectively examine implementation markers (eg, subject acceptability, dosage exposure, and staff burden) from the start of an intervention to identify and resolve challenges during delivery that impact efficacy and scalability [49,50]. ...
... This multiphase approach [46] builds an intervention in a principled manner whereby key constraints expected to impact scalability (eg, implementation feasibility and subject or staff burden) are considered from the start so that the end goal is an optimized (effective and efficient) and scalable intervention that delivers the best possible outcome [46]. Our translational science framework [47][48][49][50] guided by the Quality Implementation Framework [49] and Quality Implementation Tool [50] aligns with the paradigm shift in the literature to prospectively examine implementation markers (eg, subject acceptability, dosage exposure, and staff burden) from the start of an intervention to identify and resolve challenges during delivery that impact efficacy and scalability [49,50]. ...
Article
Background Regulating gestational weight gain (GWG) in pregnant women with overweight or obesity is difficult, particularly because of the narrow range of recommended GWG for optimal health outcomes. Given that many pregnant women show excessive GWG and considering the lack of a “gold standard” intervention to manage GWG, there is a timely need for effective and efficient approaches to regulate GWG. We have enhanced the Healthy Mom Zone (HMZ) 2.0 intervention with a novel digital platform, automated dosage changes, and personalized strategies to regulate GWG, and our pilot study demonstrated successful recruitment, compliance, and utility of our new control system and digital platform. Objective The goal of this paper is to describe the study protocol for a randomized controlled optimization trial to examine the efficacy of the enhanced HMZ 2.0 intervention with the new automated control system and digital platform to regulate GWG and influence secondary maternal and infant outcomes while collecting implementation data to inform future scalability. Methods This is an efficacy study using a randomized controlled trial design. HMZ 2.0 is a multidosage, theoretically based, and individually tailored adaptive intervention that is delivered through a novel digital platform with an automated link of participant data to a new model-based predictive control algorithm to predict GWG. Our new control system computes individual dosage changes and produces personalized physical activity (PA) and energy intake (EI) strategies to deliver just-in-time dosage change recommendations to regulate GWG. Participants are 144 pregnant women with overweight or obesity randomized to an intervention (n=72) or attention control (n=72) group, stratified by prepregnancy BMI (<29.9 vs ≥30 kg/m2), and they will participate from approximately 8 to 36 weeks of gestation. The sample size is based on GWG (primary outcome) and informed by our feasibility trial showing a 21% reduction in GWG in the intervention group compared to the control group, with 3% dropout. Secondary outcomes include PA, EI, sedentary and sleep behaviors, social cognitive determinants, adverse pregnancy and delivery outcomes, infant birth weight, and implementation outcomes. Analyses will include descriptive statistics, time series and fixed effects meta-analytic approaches, and mixed effects models. Results Recruitment started in April 2024, and enrollment will continue through May 2027. The primary (GWG) and secondary (eg, maternal and infant health) outcome results will be analyzed, posted on ClinicalTrials.gov, and published after January 2028. Conclusions Examining the efficacy of the novel HMZ 2.0 intervention in terms of GWG and secondary outcomes expands the boundaries of current GWG interventions and has high clinical and public health impact. There is excellent potential to further refine HMZ 2.0 to scale-up use of the novel digital platform by clinicians as an adjunct treatment in prenatal care to regulate GWG in all pregnant women. International Registered Report Identifier (IRRID) DERR1-10.2196/66637
... There is expanding interest in leveraging implementation science to enhance the scale-up of EBIs in schools (Bradshaw, 2023;Lyon & Bruns, 2019). Similar to implementation approaches in healthcare and public health (e.g., Durlak & DuPre, 2008;Wandersman et al., 2012), strategies to increase schoollevel capacity for the implementation of EBIs include both pre-implementation training and other supports during the implementation process, such as technical assistance (TA; Domitrovich et al., 2008;Pas et al., 2023). TA is an individualized approach that builds organizational capacity to implement EBIs (Chinman et al., 2005). ...
... TA is an individualized approach that builds organizational capacity to implement EBIs (Chinman et al., 2005). Although TA is highly effective for supporting organizations during the implementation of complex interventions (Wandersman et al., 2012), its core features, including timing, dose, and types of implementation strategies used during TA meetings, are still under investigation (Katz & Wandersman, 2016;Le et al., 2016). Learning collaboratives, which allow practitioners to engage (virtually or in-person) with one another as a mutual capacity-building exercise, may also help improve implementation outcomes in school mental health systems (Orenstein et al., 2023;Zubkoff et al., 2019). ...
... The research group served as (1) the synthesis and translation system (distilling evidence into actionable strategies for the deliverers) and (2) the prevention support system (providing deliverers with technical assistance throughout implementation), and schools served as (3) the delivery system. The Quality Implementation Framework (Meyers et al., 2012) details actions to be taken by each of the three parts of the ISF system during four phases of implementation: (1) assess context and build capacity; (2) create a structure for implementation, such as teams; (3) provide ongoing structure as implementation occurs; and (4) improve future applications. We used both frameworks to conceptualize the design of the trial, the RS3 intervention, the process measures, and the philosophy to guide how our team interacted with educators (i.e., the delivery systems). ...
Article
Full-text available
The need for well-implemented evidence-based interventions (EBIs) for the prevention of behavioral issues among children and adolescents is substantial. In rural areas, the need often matches or surpasses that of urban areas. Schools have a wide reach for prevention-focused EBIs. However, implementation in rural schools is often hindered by limited resources and capacity. Rural School Support Strategies (RS3) are a bundle of implementation supports that address implementation challenges in rural settings. They include providing additional leadership and coaching training, individualized technical assistance (mostly virtual), and monthly meetings of a virtual learning collaborative. A cluster-randomized Hybrid Type 3 implementation-effectiveness trial tested RS3 for implementing school-wide positive behavioral interventions and supports (PBIS), a universal prevention approach to improving student behavior, academic outcomes, and school climate. Forty rural schools received a multi-day training on PBIS each summer for 3 years. Half were randomized to also receive RS3 support. Linear and logistic regression models examined the effect of treatment condition and dosage of support on implementation fidelity for Tier 1 (universal) PBIS. Condition and dosage (number of hours) of support increased the odds of schools achieving the 70% threshold for adequate implementation fidelity. In the first year, the higher dosage of technical assistance events increased the likelihood of schools reaching fidelity, whereas later in the trial, the higher dosage of attendance at the virtual learning collaborative sessions yielded significant improvements in fidelity. Implications for accelerating the implementation of universal prevention initiatives in schools—particularly in rural settings—are discussed. This study was prospectively registered on ClinicalTrials.gov (NCT03736395), on November 9, 2018.
... They were preparing to send out surveys to grade ten students (i.e., step four of the IPM). The interview guide included questions about the context and processes of implementation, with a subset of questions that were adapted from the Quality Implementation Framework [48]. Questions were designed to capture Steering Committee member perspectives of key successes and challenges related to the early stages of adoption of the IPM (see [49] for the interview guide). ...
... Yet, participants felt that this aspect helped stakeholders to engage in strategic planning and adaptation to existing needs and assets. Researchers examining the implementation of other community collaboratives have highlighted the importance for initiatives to be adaptable to context in order to be successful [48,52,53]. This is likewise the case with the IPM [54]. ...
Article
Full-text available
The Icelandic Prevention Model (IPM) is a sequential 10-step community-driven collaborative intervention that is designed to support the prevention of substance use in youth by establishing healthy developmental contexts. The IPM has been implemented across Iceland for over 20 years and is now being implemented in other countries. Recognizing the need to explore how to adapt the IPM to new contexts and document the implementation of the model, this paper describes a process evaluation of the first three steps of the IPM within a Canadian rural community to capture experiences during the early development. Specifically, this study addresses the following research questions: (1) What are the processes of development and contextual features that influence the implementation of the IPM within Lanark County, Ontario? and (2) What adaptations are needed to successfully implement the IPM in Canada? Semi-structured interviews were conducted to examine experiences and lessons learned through the implementation of the model. Thematic analyses were completed using QSR NVivo. A deductive and inductive approach was applied, whereby some interview guide questions were derived from the IPM implementation steps and others were more exploratory, examining context and processes of development. Nine interviews were conducted with key partners who were leading the implementation of the IPM. Themes highlighting cultural factors that influence implementation, processes of development related to community engagement, and themes relating to youth participation, fidelity issues, fundraising, health equity and challenges related to the COVID-19 pandemic were identified. This paper contributes new scientific knowledge related to implementation processes within upstream prevention of substance use and practical information that is useful for communities interested in implementing the IPM.
... The NIP guides CAMHS staff and leaders in developing local models based on THRIVE principles and creating detailed plans for implementation over four phases, using six components (Moore et al., 2023). The implementation strategies in the NIP are drawn from the Quality Implementation Framework (Meyers et al., 2012) and Normalisation Process Theory (May, 2006). The embodiment of THRIVE principles depends not only on efforts by CAMHS but also on broader community involvement (Wolpert et al., 2019), requiring effective working relationships among local systems. ...
Article
Full-text available
Aims Developing integrated mental health services focused on the needs of children and young people is a key policy goal in England. The THRIVE Framework and its implementation programme, i-THRIVE, are widely used in England. This study examines experiences of staff using i-THRIVE, estimates its effectiveness, and assesses how local system working relationships influence programme success. Methods This evaluation uses a quasi-experimental design (10 implementation and 10 comparison sites). Measurements included staff surveys and assessment of ‘THRIVE-like’ features of each site. Additional site-level characteristics were collected from health system reports. The effect of i-THRIVE was evaluated using a four-group propensity-score-weighted difference-in-differences model; the moderating effect of system working relationships was evaluated with a difference-in-difference-in-differences model. Results Implementation site staff were more likely to report using THRIVE and more knowledgeable of THRIVE principles than comparison site staff. The mean improvement of fidelity scores was 16.7 among i-THRIVE sites and 8.8 among comparison sites; the weighted model did not find a statistically significant difference. However, results show that strong working relationships in the local system significantly enhance the effectiveness of i-THRIVE. Sites with highly effective working relationships showed a notable improvement in ‘THRIVE-like’ features, with an average increase of 16.41 points (95% confidence interval: 1.69 to 31.13, P-value: 0.031) over comparison sites. Sites with ineffective working relationships did not benefit from i-THRIVE (−2.76, 95% confidence interval: −18.25 to 12.73, P-value: 0.708). Conclusions The findings underscore the importance of working relationship effectiveness in the successful adoption and implementation of multi-agency health policies like i-THRIVE.
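The propensity-score-weighted difference-in-differences estimator named in the Methods can be sketched as below. This is a basic two-group illustration under assumed variable names (the evaluation itself used a four-group weighted model), not the study's code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical site-by-period data: fidelity scores before and after rollout.
sites = pd.read_csv("thrive_sites.csv")

# Step 1: propensity of being an i-THRIVE implementation site, modeled
# from assumed baseline site characteristics.
ps = smf.logit("implementation ~ baseline_fidelity + site_size",
               data=sites).fit().predict(sites)

# Step 2: ATT-style inverse-probability weights (treated sites weight 1,
# comparison sites weight ps / (1 - ps)); other weighting schemes exist.
sites["w"] = sites["implementation"] + (1 - sites["implementation"]) * ps / (1 - ps)

# Step 3: weighted difference-in-differences; the group-by-period
# interaction estimates the effect of i-THRIVE on fidelity scores.
did = smf.wls("fidelity ~ implementation * post", data=sites,
              weights=sites["w"]).fit()
print(did.params["implementation:post"])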
... that support translational research. Our study highlights key publications of interest in implementation science that could provide opportunities for integration within translational science, including work that outlines common implementation theories, models, and frameworks [25, 36, 54–57] and assists in the selection and use of these in practice [38, 58]. Publications providing information on implementation strategies and how to select them are also available for integration into CTSA hubs [39, 64–66]. This literature can assist CTSA hubs to find and utilize tools from implementation science more easily. ...
Article
This paper describes the collaborative development of national guidance in the Republic of Ireland, specifically designed to support the upholding of the rights of disabled children and young people to meaningful participation in decision-making, as outlined in Article 12 of the UN Convention on the Rights of the Child (CRC), General Comments Nos. 5, 9 and 12, as well as the UN Convention on the Rights of Persons with Disabilities (UNCRPD). We conceptualise this process within the Interactive Systems Framework for Dissemination and Implementation (Wandersman et al., 2008), which highlights the need for a shared planning process and multi-system support to ensure effective implementation. We propose that strategies from Implementation Science are essential to translate this guidance into practice, enabling the monitoring and evaluation of its effectiveness in safeguarding the participation rights of disabled children and young people. Our paper advocates for the application of Implementation Science as a critical tool to further the rights enshrined in both the CRC and UNCRPD, ensuring that disabled children and young people are actively involved in decision-making processes that affect their lives.
Article
Despite the growing prevalence of mindfulness in schools, the empirical landscape describing best practices for implementation is incomplete. The purpose of this project was to develop a framework that explicated the steps and considerations for implementing mindfulness at the whole‐school level, rather than just considering individual classroom programs or practices. The Whole School Mindfulness (WSM) conceptual framework was developed using a consensus‐building approach with 39 expert educators, researchers, program developers, and practitioners with unique perspectives on what is needed to implement and sustain mindfulness successfully in schools. Information was gathered across three initial meetings and a 2.5‐day conference using a process inspired by Appreciative Inquiry; notes were used to create a consensus document that was reviewed and augmented by attendees. Expert contributions were aligned and expanded upon using existing organizational change and implementation science frameworks across various disciplines. The WSM framework, with four phases of development, is structurally similar to other frameworks, but accounts for unique needs related to mindfulness including professional development of personal mindfulness for leaders and school staff, the importance of voluntary engagement in practices, and consideration of contextual and cultural issues.
Article
Full-text available
Research on the effectiveness of school-based programs for preventing or reducing aggressive behavior was synthesized with a meta-analysis. Changes in aggressive behavior between pretest and posttest were analyzed for developmental patterns and characteristics associated with differential effects. Control groups showed little change in aggressive behavior, but there were significant reductions among intervention groups. Most studies were conducted on demonstration programs; the few studies of routine practice programs showed much smaller effects. Among demonstration programs, positive outcomes were associated with a variety of study, subject, and intervention characteristics. Most notably, higher risk youth showed greater reductions in aggressive behavior, poorly implemented programs produced smaller effects, and different types of programs were generally similar in their effectiveness, other things equal.
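The review's core quantity, standardized pre-post change in aggression computed separately for intervention and control groups, can be sketched as follows; the file and column names are invented for illustration.

import pandas as pd

# Hypothetical study-level data: pre/post means and pretest SDs by condition.
studies = pd.read_csv("aggression_studies.csv")

# Standardized pre-post change: positive values mean aggression decreased.
studies["change_es"] = (studies["pre_mean"] - studies["post_mean"]) / studies["pre_sd"]

# Mean change by condition; the review found little change among controls
# and significant reductions among intervention groups.
print(studies.groupby("condition")["change_es"].mean())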
Article
Full-text available
Why do some organizations succeed and others fail in implementing the innovations they adopt? To begin to answer this question, the authors studied the implementation of manufacturing resource planning, an advanced computerized manufacturing technology, in 39 manufacturing plants (number of individual respondents = 1,219). The results of the plant-level analyses suggest that financial resource availability and management support for technology implementation engender high-quality implementation policies and practices and a strong climate for implementation, which in turn foster implementation effectiveness, that is, consistent and skilled technology use. Further research is needed to replicate and extend the findings.
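The path the abstract reports (resources and management support fostering implementation policies and climate, which in turn foster consistent and skilled use) can be illustrated with a two-step regression sketch. All file and variable names below are assumptions, not the study's measures.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plant-level data (the study analyzed 39 plants).
plants = pd.read_csv("plants.csv")

# Step 1: do financial resources and management support predict the
# strength of the implementation climate?
climate = smf.ols("climate ~ resources + mgmt_support", data=plants).fit()

# Step 2: does implementation climate, in turn, predict consistent and
# skilled technology use (implementation effectiveness)?
effectiveness = smf.ols("skilled_use ~ climate", data=plants).fit()

print(climate.params)
print(effectiveness.params)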
Article
Full-text available
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
Article
We used meta-analysis to review 55 evaluations of the effects of mentoring programs on youth. Overall, findings provide evidence of only a modest or small benefit of program participation for the average youth. Program effects are enhanced significantly, however, when greater numbers of both theory-based and empirically based “best practices” are utilized and when strong relationships are formed between mentors and youth. Youth from backgrounds of environmental risk and disadvantage appear most likely to benefit from participation in mentoring programs. Outcomes for youth at risk due to personal vulnerabilities have varied substantially in relation to program characteristics, with a noteworthy potential evident for poorly implemented programs to actually have an adverse effect on such youth. Recommendations include greater adherence to guidelines for the design and implementation of effective mentoring programs as well as more in-depth assessment of relationship and contextual factors in the evaluation of programs.
Article
Although numerous studies address the efficacy and effectiveness of health interventions, less research addresses successfully implementing and sustaining interventions. As long as efficacy and effectiveness trials are considered complete without considering implementation in nonresearch settings, the public health potential of the original investments will not be realized. A barrier to progress is the absence of a practical, robust model to help identify the factors that need to be considered and addressed and how to measure success. A conceptual framework for improving practice is needed to integrate the key features for successful program design, predictors of implementation and diffusion, and appropriate outcome measures.
Article
Presented is a meta-analysis of the outcome results for 143 adolescent drug prevention programs, conducted to identify the most effective program modalities for reducing teenage drug use. The meta-analysis techniques of Glass et al. (1981) provided a systematic approach for the accumulation, quantification, and integration of the numerous research findings. Five major modalities were identified and their effect sizes computed for five distinctly different outcomes: Knowledge, Attitudes, Use, Skills, and Behavior measures. The magnitude of the effect size was found to depend on the outcome measure employed and the rigor of the experimental design; these factors were controlled for through use of a standard regression analysis. Peer Programs showed a definite superiority in the magnitude of the effect size obtained on all outcome measures. On the ultimate criterion of drug use, Peer Programs were significantly different from the combined results of all the remaining programs (p < .0005). Peer Programs maintained high effect sizes for alcohol, soft drugs, and hard drugs, as well as for cigarette use. Recommendations are made concerning the effectiveness of the underlying theoretical assumptions for the different program modalities. Future programming implications are discussed: Peer Programs were identified as effective for the average school-based adolescent population, while Alternatives programs were shown to be highly successful for “at risk” adolescents such as drug abusers, juvenile delinquents, or students having school problems.
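The abstract's two analytic moves, computing an effect size per outcome and controlling for outcome measure and design rigor with a standard regression, can be sketched as follows; the data file and variable names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical study-level data with group means and a pooled SD.
studies = pd.read_csv("prevention_studies.csv")

# Glass-style standardized mean difference per study outcome.
studies["es"] = (studies["mean_t"] - studies["mean_c"]) / studies["sd_pooled"]

# Regressing effect size on program modality while adjusting for design
# rigor and outcome type approximates the 'standard regression analysis'
# the abstract describes.
fit = smf.ols("es ~ C(modality) + rigor + C(outcome_type)", data=studies).fit()
print(fit.summary())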
Article
Most research on why health care quality improvement implementation succeeds or fails focuses on front-line or provider-based factors. However, background factors related to the structures and processes of projects themselves also pose challenges. Using a focused ethnographic assessment approach, we undertook a case study to characterize particularly challenging background factors in an ongoing implementation effort. We found that the organizational structure of the project under study sustained several key "cultural" differences in stakeholder agendas. Moreover, it fostered the emergence of strategic communication processes that, despite their immediate utility, sometimes undermined progress and threatened long-term relations by distorting information flow in particularly patterned ways. These included a "focus on the local" and "information reconfigurations" or "partiality" that sometimes led to miscommunication or interpretive disjunctions between various stakeholders. Successful cross-organizational communication is in certain ways a cross-cultural achievement, and several guidelines were devised to facilitate this. Our experience with other health care systems and with health services research in general suggests that our findings and recommendations are broadly applicable. Because the main barriers identified were generated by complex organizational arrangements, lessons learned may also be transferable to other complex organizational contexts.