Article

Putting Evidence to Work: A School's Guide to Implementation

Authors: Jonathan Sharples, Bianca Albers and Stephen Fraser

Abstract

There are legitimate barriers to implementing effectively in schools – the bombardment of new ideas and initiatives, limited time and resources, and the pressure to yield quick results, to name just a few. Nevertheless, this guidance report shows that a lot can be achieved with careful thought, planning, and delivery using existing resources and structures. It is about making the implicit explicit, providing clarity and purpose to existing processes, and reframing what you are already doing, rather than bolting on a whole new set of procedures. The guide can be used to help implement any school improvement decision, whether a programme or practice, a whole-school or targeted approach, or an internally or externally generated idea. Over the last few years, the Education Endowment Foundation (EEF) has developed an approach to evidence-informed school improvement, which treats the school as a continuously improving system. The model aims to frame research evidence in a school’s context, rather than the other way around, integrating the best available external evidence with professional expertise and internal data. The cycle has five steps:
1. Decide what you want to achieve.
2. Identify possible solutions and strategies.
3. Give the idea the best chance of success.
4. Did it work?
5. Secure and spread change.
We suggest schools use this implementation guide as part of an overall advance towards evidence-informed school improvement. The guide covers all of the steps briefly, but focuses mainly on Step 3, ‘Giving an idea the best chance of success’. The EEF has a range of additional resources to support schools across the other steps of this process, for example, the Families of Schools database (Step 1), the Teaching and Learning Toolkit (Step 2), and the DIY Evaluation Guide (Step 4).


... 378) and that data collection be undertaken by practitioners from multiple disciplines without formal academic credentials. The education sector has noted the long-standing tensions regarding the gap between research and practice, highlighting the need for more practitioner participation in knowledge generation processes (Brown and Greany, 2018; Farley-Ripple et al., 2017; Nelson and Campbell, 2019; Sharples et al., 2019). ...
... These included individual measures of evidence use in practice based on level of expertise, from novice through to expert (e.g., Brown and Rogers, 2015), as well as continuous self-assessment of individual and school engagement in research use (e.g., Stoll et al., 2018a, 2018b). The notion of continuously improving school systems relied on ongoing evidence-informed reflective practices (e.g., Brown and Greany, 2018; Coldwell et al., 2017; EEF, 2019), as well as intentional links to whole-school improvement initiatives (e.g., Creaby et al., 2017; Sharples et al., 2019). Additionally, the policy sector focused on oversight of governance practices. ...
... In education, Stoll and colleagues (2018a, b) developed an empirically informed self-assessment framework to assist teachers and schools with developing and evaluating key aspects of evidence-informed practice. Further, both the education and health literature emphasised the need for ongoing feedback to support continuous improvement at both the individual and organisational levels (e.g., Brown and Greany, 2018; Chambers et al., 2013; Sharples et al., 2019). ...
Article
Full-text available
Recent decades have seen widespread efforts to improve the generation and use of evidence across a number of sectors. Such efforts can be seen to raise important questions about how we understand not only the quality of evidence, but also the quality of its use. To date, though, there has been wide-ranging debate about the former, but very little dialogue about the latter. This paper focuses on the question of how to conceptualise the quality of research evidence use. Drawing on a systematic review and narrative synthesis of 112 papers from health, social care, education and policy, it presents six initial principles for conceptualising quality use of research evidence. These concern taking account of: the role of practice-based expertise and evidence in context; the sector-specific conditions that support evidence use; how quality use develops and can be evaluated over time; the salient stages of the research use process; whether to focus on processes and/or outcomes of evidence use; and the scale or level of the use within a system. It is hoped that this paper will act as a stimulus for future conceptual and empirical work on this important, but under-researched, topic of quality of use.
... Other research highlights similar factors including the importance of school staff having a shared vision (Durlak & DuPre, 2008), staff being committed and engaged in the intervention (Greenberg et al., 2005) and the importance of leadership support (Humphrey, 2013). In the Education Endowment Foundation guidance on intervention implementation, the importance of support from leadership and the school climate is also emphasised (Sharples, Albers & Fraser, 2018). Carroll and Hurry's (2008) work indicates the importance of these implementation factors in the efficacy of SEMH interventions. ...
... Innovation involves adapting the intervention to overcome intervention challenges. Researchers support this model, explaining that implementation is a process, not an event, and should be treated as such within the research base (Sharples et al., 2018). ...
Conference Paper
This research examines the structure of the Emotional Literacy Support Assistant (ELSA) project in two schools at different points in the implementation process. Implementation research is essential because the way interventions are implemented links to intervention outcomes. A mixed methods comparative multiple case study design was adopted, involving two mainstream primary schools at different stages of implementation. Semi-structured one-to-one interviews were conducted with key stakeholders in School 1 (n = 8) and School 2 (n = 7), including the ELSA, Special Educational Needs Coordinator, Senior Leadership, teachers and Educational Psychologists (EP). Interview data were thematically analysed. A questionnaire was also circulated to wider school staff. Analyses were conducted separately for each school in order to retain the integrity of each case. Following this, themes were examined across the two cases. The schools were found to implement the project in different ways, and some practices did not adhere to ELSA guidance. Intervention length was longer in School 1 than the guidance outlines. In School 2, there was an absence of intervention endings, the ELSA supported pupils with complex behavioural needs as opposed to a wider range of needs, and ELSA support often operated in a reactive way. Factors found to facilitate implementation consistent with the espoused approach included: a mental health ethos, Link EP support, shared responsibility for mental health across school staff, and practices in endings which acknowledge the ELSA-pupil attachment. Barriers to implementation included: lack of school-wide understanding and support of the ELSA project’s primary task, difficulties incorporating and distinguishing between the intervention and other SEMH provision, blurring of ELSA role boundaries, emotional impact on ELSAs, lack of equality of access in referrals, and pupil overdependence on the ELSA. Results indicate that implementation barriers are more prominent in the earlier stages of implementation. Implications for research and practice are discussed.
... Within the subtheme Working With and Through School Staff, interview participants reported varying degrees of success when implementing and maintaining attachment-aware or trauma-informed approaches, expressing a desire for increased staff awareness and understanding about the needs of careexperienced children. Research indicates that successful implementation is a staged process that develops over a period of time (Aarons et al., 2011;Sharples et al., 2018). ...
Conference Paper
The Children and Young Persons Act 2008 (amended by the Children and Social Work Act 2017) placed statutory duties on all state-maintained schools in England to allocate a designated teacher responsible for promoting the educational achievement of care-experienced children in schools. Despite their integral role, there is little research exploring how designated teachers perceive, experience and enact their role. The current research aimed to explore the relationship between statutory regulations about the designated teacher role and practice. This included an exploration of key roles and responsibilities, barriers and facilitating factors that impact the role, perceptions around personal effectiveness, and an exploration into how designated teachers work with virtual schools, social care, educational psychologists (EPs) and wider professionals. This mixed-methods study used surveys with a sample of virtual schools (n=44) and designated teachers (n=142), and semi-structured interviews with designated teachers (n=16). Quantitative data were analysed using a statistical analysis programme, providing descriptive statistics and exploring trends, while qualitative data were analysed using thematic analysis. Survey findings provided an exploration into support from virtual schools, EPs, and wider agencies; key challenges faced by designated teachers; time spent enacting key duties; designated teachers’ sense of effectiveness; and multiagency working. Thematic analysis from interviews elicited three themes: complexities of the designated teacher role; building relationships and making contacts; and negotiating challenges in the wider system. Implications include raising the profile of designated teachers by increasing awareness and recognition of the role in schools and among professionals, including social care and EPs. It is hoped that by supporting and strengthening the designated teacher role, holistic outcomes for care-experienced children can be improved.
... A further limitation is that the policy context is changing, and that the studies included in this review describe program functioning in schools 10 years ago or even longer. In the meantime, implementing evidence-based programs has become more mainstream and, moreover, toolkits for schools to implement these programs properly have become available (e.g., Sharples et al., 2019). ...
Article
Full-text available
Purpose The effectiveness of the 14-component evidence-based whole-school reform Success for All (SfA) has been well established, but research on its implementation is limited although fidelity of implementation is vital for the effectiveness of such a program. This review sheds light on this issue. Design/Approach/Methods A systematic literature review was conducted to acquire an overview of qualitative and quantitative aspects of SfA’s implementation in primary schools as well as to identify stimulating and hindering factors when implementing SfA. Sixteen studies, conducted in the United States and the United Kingdom, were included in this review. Findings Results indicate that 48% of schools implemented SfA at the minimal level, 45% at a more advanced level, and 7% at an insufficient level. Information on the implementation for each of the 14 components was rare. Most of the factors that affected implementation were hindering factors rather than stimulating ones. For successful implementation of this evidence-based program, the crucial factors appear to be leadership, fulfilling organizational conditions, staff development, and relentlessly implementing all 14 components together. Originality/Value This review shows that for an evidence-based program to be effective, implementation fidelity is a very serious concern, which needs to be addressed systematically.
... Implementation also needs to be revisited as other components of the model shift (new partners, succession in leadership roles, change in governments or policy environments). Sharples et al.’s (2018) Putting Evidence to Work: A School's Guide to Implementation focuses on implementation in schools. Sharples advocates for attention to six areas that RPPs could consider in relation to implementation: ...
Article
Purpose This scoping review utilizes findings from 80 articles to build a research model to study research-practice-policy networks in K-12 education systems. The purpose of this study was to generate a broad understanding of the variation in conceptualizations of research-practice-policy partnerships, rather than dominant conceptualizations. Design/methodology/approach Arksey and O'Malley's (2005) five-stage scoping review process was utilized, including: (1) a consultative process with partners to identify research questions, (2) identifying relevant studies, (3) study selection based on double-blind peer review, (4) charting the data and (5) collating, summarizing and reporting the results in a research model identifying key dimensions and components of research-practice partnerships (RPPs). Findings Coburn et al.'s (2013) definition of RPPs arose as an anchoring definition within the emerging field. This article proposes a model for understanding the organization and work of RPPs arising from the review. At the core lie shared goals, coproduction and multistakeholder collaboration, organized around three dimensions: (1) Systems and structures: funding, governance, strategic roles, policy environment, system alignment; (2) Collaborative processes: improvement planning and data use, communication, trusting relationships, brokering activities, capacity building; (3) Continuous learning cycles: social innovation, implementation, evaluation and adaptation. Research limitations/implications By using a common framework, data across RPPs and from different studies can be compared. Research foci might test links between elements such as capacity building and impacts, or test links between systems and structures and how those elements influence collaborative processes and the impact of the RPPs. Research could test the generalizability of the framework across contexts. Through the application and use of the research model, various elements might be refuted, confirmed or refined. More work is needed to use this framework to study RPPs, and to develop accompanying data collection methods and instruments for each dimension and element. Practical implications The framework can be used by RPPs as a learning framework for strategic planning, iterative learning cycles and evaluation. Many of the elements of the framework could be used to check in with partners on how things are going – such as exploring how communication is working and whether these structures move beyond mere updates and reporting toward joint problem-solving. The framework could also be used prior to setting up an RPP as an organizing approach to making decisions about how that RPP might best operate. Originality/value Despite increased attention on multistakeholder networks in education, the conceptual understanding is still limited. This article analyzed theoretical and empirical work to build a systematic model to study RPPs in education. This research model can be used to identify RPP configurations, analyze the impact of RPPs, and compare similarities and differences across configurations.
... Indeed, previous research suggests that a major factor in the successful transferability of interventions is their adaptability (Castro et al., 2004). Thus, future research should seek to identify the cultural adaptations of both the GBG and other school-based interventions that ensure that they are suitable to the English context, while also ensuring the program's critical components are still in place (Sharples, Albers, & Fraser, 2018). ...
Article
This study aimed to examine the impact of a universal, school-based intervention, the Good Behavior Game (GBG), on children’s behavior, and to explore any subgroup moderator effects among children at varying levels of cumulative risk (CR) exposure. A 2-year cluster-randomized controlled trial was conducted comprising 77 primary schools in England. Teachers in intervention schools delivered the GBG, whereas their counterparts in control schools continued their usual provision. Behavior (specifically disruptive behavior, concentration problems, and pro-social behavior) was assessed via the checklist version of the Teacher Observation of Classroom Adaptation. A CR index was calculated by summing the number of risk factors to which each child was exposed. Multilevel models indicated that no main or subgroup effects were evident. These findings were largely insensitive to the modeling of CR although a small intervention effect on disruptive behavior was found when the curvilinear trend was used. Further sensitivity analyses revealed no apparent influence of the level of program differentiation. In sum, our findings indicate that the GBG does not improve behavior when implemented in this sample of English schools.
Technical Report
Full-text available
Phase one of research into school years work and wellbeing during the pandemic
Presentation
Full-text available
The 2-year (11/2018-10/2020) MAS project opens up museum-multiliteracies, virtual museums, and novel interdisciplinary pedagogical scenarios as part of sustainable and inclusive museum-school partnerships. This presentation provides an overview of the project and delves into narratives of students' experiences of participation and interaction.
Article
This chapter calls for researchers to reconceptualize research quality from the perspective of its expected use, attending to power dynamics that influence how knowledge is defined, constructed, and validated through the research enterprise. Addressing these concerns when designing and conducting education research can yield more useful research evidence for building more equitable education systems. Anchored in scholarship on research utilization and methodological critiques, the chapter introduces a research quality framework that integrates relevance and rigor through five key dimensions of Research Worth Using: (1) relevance of question: alignment of research topics to practical priorities; (2) theoretical credibility: explanatory strength and coherence of principles investigated; (3) methodological credibility: internal and external credibility of study design and execution; (4) evidentiary credibility: robustness and consistency of cumulative evidence; and (5) relevance of answers: justification for practical application. This framework simultaneously uplifts the voices and needs of policymakers, practitioners, and community members, while elevating standards for excellence in education research. We call attention to the myriad ways in which the quality of evidence generated can be strengthened, before describing implications for curating and using research. We conclude by offering suggestions for applying and further developing the framework.
Book
Full-text available
This book highlights decisions governments have to make about their public education systems, the options they have before them and the consequences of their decisions. As well as covering issues such as values, curriculum, teacher training, structures and so on, the book addresses education planning for epidemics, pandemics and disasters. Education systems provide the foundations for the future wellbeing of every society, yet existing systems are a point of global concern. Education System Design is a response to debates in developing and developed countries about the characteristics of a high-quality national education service. It questions what makes a successful system of education. With chapters that draw on experience in education systems around the world, each one considers an element of a national education service and its role in providing a coherent and connected set of structures to ensure good education for all members of society. Key topics include:
* Existing education systems and what a future system might look like
* Inclusion and social justice
* Leadership and teacher education
* Policy options, and the consequences of policy changes
This book suggests an education system be viewed as an ecosystem with interdependencies between many different components needing to be considered when change is contemplated. It is a vital book for any stakeholders in educational systems including students, teachers and senior leaders. It would be particularly useful to policy makers and those implementing policy changes.
Article
Designs combining different types of data are increasingly used in educational evaluation, to provide both evidence of impact and an explanation of the processes by which impacts are created. Logic models are visual representations of how an intervention leads, via a set of steps, from resources and inputs to outputs and then to sets of outcomes. Their use to underpin evaluations has become widespread, and they have attracted growing interest in education as they have been promoted by policy makers and funders, including the Education Endowment Foundation (EEF) in England. This paper addresses the question: how can logic models be used to frame and implement educational evaluations using combinations of methods? To do so, the paper draws on theory-based evaluation literature to identify a set of issues to be considered: the role of implementation logic; causal mechanisms; the context of interventions; and the importance of considering and addressing issues around complexity. Using detailed examples from two study designs for EEF evaluations, the paper presents an evidence-informed logic model approach to deal with these issues. The paper concludes by reflecting on the practical and theoretical implications of this approach, laying out a set of key issues to address in future evaluations for which a design framed by an evidence-informed logic model may be appropriate.
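To make the inputs-to-outcomes chain concrete, here is a minimal sketch of a logic model as a simple data structure. Everything in it (the stage names, the tutoring example) is invented for illustration and is not drawn from the paper.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    # Illustrative stages only; real logic models often also record
    # assumptions, moderators, and short- vs long-term outcomes.
    inputs: list = field(default_factory=list)      # resources committed
    activities: list = field(default_factory=list)  # what the intervention does
    outputs: list = field(default_factory=list)     # direct, countable products
    outcomes: list = field(default_factory=list)    # intended changes

# Hypothetical tutoring intervention.
model = LogicModel(
    inputs=["funding", "trained tutors"],
    activities=["weekly small-group tutoring"],
    outputs=["30 sessions delivered per pupil"],
    outcomes=["improved reading attainment"],
)
print(" -> ".join([model.inputs[0], model.activities[0],
                   model.outputs[0], model.outcomes[0]]))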
Article
Full-text available
Teacher coaching has emerged as a promising alternative to traditional models of professional development. We review the empirical literature on teacher coaching and conduct meta-analyses to estimate the mean effect of coaching programs on teachers’ instructional practice and students’ academic achievement. Combining results across 60 studies that employ causal research designs, we find pooled effect sizes of 0.49 standard deviations (SD) on instruction and 0.18 SD on achievement. Much of this evidence comes from literacy coaching programs for prekindergarten and elementary school teachers in the United States. Although these findings affirm the potential of coaching as a development tool, further analyses illustrate the challenges of taking coaching programs to scale while maintaining effectiveness. Average effects from effectiveness trials of larger programs are only a fraction of the effects found in efficacy trials of smaller programs. We conclude by discussing ways to address scale-up implementation challenges and providing guidance for future causal studies.
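As a rough illustration of what a "pooled effect size" is: meta-analyses typically average study-level effects weighted by their precision. The sketch below uses simple fixed-effect inverse-variance weights and invented numbers; the review's actual estimator (for example, a random-effects model) will differ.

# Inverse-variance pooling of study effect sizes (fixed-effect sketch).
effects = [0.62, 0.41, 0.35, 0.55]  # hypothetical per-study effects (SD units)
ses = [0.20, 0.15, 0.10, 0.25]      # hypothetical standard errors

weights = [1 / se ** 2 for se in ses]  # more precise studies weigh more
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect = {pooled:.2f} SD (SE = {pooled_se:.2f})")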
Article
Full-text available
Professional development programs are based on different theories of how students learn and different theories of how teachers learn. Reviewers often sort programs according to design features such as program duration, intensity, or the use of specific techniques such as coaches or online lessons, but these categories do not illuminate the programs’ underlying purpose or premises about teaching and teacher learning. This review sorts programs according to their underlying theories of action, which include (a) a main idea that teachers should learn and (b) a strategy for helping teachers enact that idea within their own ongoing systems of practice. Using rigorous research design standards, the review identifies 28 studies. Because studies differ in multiple ways, the review presents program effects graphically rather than statistically. Visual patterns suggest that many popular design features are not associated with program effectiveness. Furthermore, different main ideas are not differentially effective. However, the pedagogies used to facilitate enactment differ in their effectiveness. Finally, the review addresses the question of research design for studies of professional development and suggests that some widely favored research designs might adversely affect study outcomes.
Article
Full-text available
Implementing behavioral health interventions is a complicated process. It has been suggested that implementation strategies should be selected and tailored to address the contextual needs of a given change effort; however, there is limited guidance as to how to do this. This article proposes four methods (concept mapping, group model building, conjoint analysis, and intervention mapping) that could be used to match implementation strategies to identified barriers and facilitators for a particular evidence-based practice or process change being implemented in a given setting. Each method is reviewed, examples of their use are provided, and their strengths and weaknesses are discussed. The discussion includes suggestions for future research pertaining to implementation strategies and highlights these methods’ relevance to behavioral health services and research.
Article
Full-text available
During the last decade, positive behavior interventions have resulted in improvement of school behavior and academic gains in a range of school settings worldwide. Recent studies identify sustainability of current positive behavior intervention programs as a major concern. The purpose of this article is to identify future directions for effective implementation of positive behavior interventions, based on a comprehensive review of the current status of positive behavior interventions in terms of sustainability. The review also examines implementation fidelity as a factor that impacts upon sustainability. Literature reviewed in this study demonstrates that administrator support and professional development were the most frequently cited influential factors in previous research on sustainability of positive behavior interventions. In particular, the review highlights the significance of implementation fidelity at the classroom level for sustaining positive outcomes of positive behavior interventions over time. It is argued that in order to sustain positive effects of positive behavior interventions, future implementation efforts need to emphasize administrator support for the school team, ongoing high-quality professional development, and technical assistance. Moreover, a focus on coaching classroom-level implementation fidelity is of significant importance, as is the development and validation of evaluation tools for sustainability based on large-scale longitudinal international studies and more in-depth qualitative investigations.
Article
Full-text available
There are many challenges when an innovation (i.e., a program, process, or policy that is new to an organization) is actively introduced into an organization. One critical component for successful implementation is the organization's readiness for the innovation. In this article, we propose a practical implementation science heuristic, abbreviated as R = MC2. We propose that organizational readiness involves (a) the motivation to implement an innovation, (b) the general capacities of an organization, and (c) the innovation-specific capacities needed for a particular innovation. Each of these components can be assessed independently and be used formatively. The heuristic can be used by organizations to assess readiness to implement and by training and technical assistance providers to help build organizational readiness. We present an illustration of the heuristic by showing how behavioral health organizations differ in readiness to implement a peer specialist initiative. Implications for research and practice of organizational readiness are discussed.
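Read literally (the authors offer R = MC2 as a heuristic, not a scoring formula), readiness is a multiplicative function of motivation and the two kinds of capacity, so a near-zero component leaves an organization unready regardless of the others. A toy sketch with invented 0-to-1 ratings:

def readiness(motivation, general_capacity, innovation_specific_capacity):
    # Toy reading of R = MC2: readiness as the product of motivation and
    # the two capacities. The 0-to-1 scale is an invented convenience.
    return motivation * general_capacity * innovation_specific_capacity

# Hypothetical organization: motivated and generally capable, but weak
# on the capacities specific to this innovation.
print(readiness(0.9, 0.8, 0.2))  # 0.144 -> low overall readiness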
Article
Full-text available
Background: Identifying, developing, and testing implementation strategies is an important goal of implementation science. However, these efforts have been complicated by the use of inconsistent language and inadequate descriptions of implementation strategies in the literature. The Expert Recommendations for Implementing Change (ERIC) study aimed to refine a published compilation of implementation strategy terms and definitions by systematically gathering input from a wide range of stakeholders with expertise in implementation science and clinical practice. Methods: Purposive sampling was used to recruit a panel of experts in implementation and clinical practice who engaged in three rounds of a modified Delphi process to generate consensus on implementation strategies and definitions. The first and second rounds involved web-based surveys soliciting comments on implementation strategy terms and definitions. After each round, iterative refinements were made based upon participant feedback. The third round involved a live polling and consensus process via a web-based platform and conference call. Results: Participants identified substantial concerns with 31% of the terms and/or definitions and suggested five additional strategies. Seventy-five percent of definitions from the originally published compilation of strategies were retained after voting. Ultimately, the expert panel reached consensus on a final compilation of 73 implementation strategies. Conclusions: This research advances the field by improving the conceptual clarity, relevance, and comprehensiveness of implementation strategies that can be used in isolation or combination in implementation research and practice. Future phases of ERIC will focus on developing conceptually distinct categories of strategies as well as ratings for each strategy’s importance and feasibility. Next, the expert panel will recommend multifaceted strategies for concrete real-world scenarios that differ in terms of the evidence-based programs and practices being implemented and the strength of contextual supports that surround the effort.
Article
Full-text available
Traditional approaches to disseminating research based programs and innovations for children and families, which rely on practitioners and policy makers to make sense of research on their own, have been found insufficient. There is growing interest in strategies that “make it happen” by actively building the capacity of service providers to implement innovations with high fidelity and good effect. This article provides an overview of the Active Implementation Frameworks (AIF), a science-based implementation framework, and describes a case study in child welfare, where the AIF was used to facilitate the implementation of research-based and research-informed practices to improve the well-being of children exiting out of home placement to permanency. In this article, we provide descriptive data that suggest the AIF is a promising framework for promoting high-fidelity implementation of both research-based models and innovations through the development of active implementation teams.
Article
Full-text available
Implementation refers to the process by which an intervention is put into practice. Research studies across multiple disciplines, including education, have consistently demonstrated that interventions are rarely implemented as designed and, crucially, that variability in implementation is related to variability in the achievement of expected outcomes. Put simply, implementation matters (Durlak & DuPre, 2008). This paper reviews several key issues in the study of implementation and calls for an increasing emphasis on this often neglected aspect of evaluation research in UK journals. These issues include programme-specific reasons for studying implementation as an intervention passes through the various stages of development, advancing knowledge and understanding about the processes of implementation (including the balance required between fidelity and adaptation, and the range of factors that may facilitate or impede implementation), and improving measurement and assessment of implementation. Through discussion of these issues the case is made for more research that focuses specifically on the examination of implementation in school settings.
Article
Full-text available
This systematic review examines experimental studies that test the effectiveness of strategies intended to integrate empirically supported mental health interventions into routine care settings. Our goal was to characterize the state of the literature and to provide direction for future implementation studies. A literature search was conducted using electronic databases and a manual search. Eleven studies were identified that tested implementation strategies with a randomized (n = 10) or controlled clinical trial design (n = 1). The wide range of clinical interventions, implementation strategies, and outcomes evaluated precluded meta-analysis. However, the majority of studies (n = 7; 64%) found a statistically significant effect in the hypothesized direction for at least one implementation or clinical outcome. There is a clear need for more rigorous research on the effectiveness of implementation strategies, and we provide several suggestions that could improve this research area.
Article
Full-text available
Despite growth in implementation research, limited scientific attention has focused on understanding and improving sustainability of health interventions. Models of sustainability have been evolving to reflect challenges in the fit between intervention and context. We examine the development of concepts of sustainability, and respond to two frequent assumptions: 'voltage drop,' whereby interventions are expected to yield lower benefits as they move from efficacy to effectiveness to implementation and sustainability, and 'program drift,' whereby deviation from manualized protocols is assumed to decrease benefit. We posit that these assumptions limit opportunities to improve care, and instead argue for understanding the changing context of healthcare to continuously refine and improve interventions as they are sustained. Sustainability has evolved from being considered as the endgame of a translational research process to a suggested 'adaptation phase' that integrates and institutionalizes interventions within local organizational and cultural contexts. These recent approaches locate sustainability in the implementation phase of knowledge transfer, but still do not address intervention improvement as a central theme. We propose a Dynamic Sustainability Framework that involves: continued learning and problem solving, ongoing adaptation of interventions with a primary focus on fit between interventions and multi-level contexts, and expectations for ongoing improvement as opposed to diminishing outcomes over time. A Dynamic Sustainability Framework provides a foundation for research, policy and practice that supports development and testing of falsifiable hypotheses and continued learning to advance the implementation, transportability and impact of health services research.
Article
Full-text available
Increased availability of research-supported, school-based prevention programs, coupled with the growing national policy emphasis on use of evidence-based practices, has contributed to a shift in research priorities from efficacy to implementation and dissemination. A critical issue in moving research to practice is ensuring high-quality implementation of both the intervention model and the support system for sustaining it. The paper describes a three-level framework for considering the implementation quality of school-based interventions. Future directions for research on implementation are discussed.
Article
Full-text available
Implementation science is a quickly growing discipline. Lessons learned from business and medical settings are being applied, but it is unclear how well they translate to settings with different historical origins and customs (e.g., public mental health, social service, alcohol/drug sectors). The purpose of this paper is to propose a multi-level, four-phase model of the implementation process (i.e., Exploration, Adoption/Preparation, Implementation, Sustainment), derived from extant literature, and apply it to public sector services. We highlight features of the model likely to be particularly important in each phase, while considering the outer and inner contexts (i.e., levels) of public sector service systems.
Article
Full-text available
An unresolved issue in the field of implementation research is how to conceptualize and evaluate successful implementation. This paper advances the concept of "implementation outcomes" distinct from service system and clinical treatment outcomes. This paper proposes a heuristic, working "taxonomy" of eight conceptually distinct implementation outcomes: acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability, along with their nominal definitions. We propose a two-pronged agenda for research on implementation outcomes. Conceptualizing and measuring implementation outcomes will advance understanding of implementation processes, enhance efficiency in implementation research, and pave the way for studies of the comparative effectiveness of implementation strategies.
Article
Full-text available
Leadership in organizations is important in shaping workers' perceptions, responses to organizational change, and acceptance of innovations, such as evidence-based practices. Transformational leadership inspires and motivates followers, whereas transactional leadership is based more on reinforcement and exchanges. Studies have shown that in youth and family service organizations, mental health providers' attitudes toward adopting an evidence-based practice are associated with organizational context and individual provider differences. The purpose of this study was to expand these findings by examining the association between leadership and mental health providers' attitudes toward adopting evidence-based practice. Participants were 303 public-sector mental health service clinicians and case managers from 49 programs who were providing mental health services to children, adolescents, and their families. Data were gathered on providers' characteristics, attitudes toward evidence-based practices, and perceptions of their supervisors' leadership behaviors. Zero-order correlations and multilevel regression analyses were conducted that controlled for effects of service providers' characteristics. Both transformational and transactional leadership were positively associated with providers' having more positive attitudes toward adoption of evidence-based practice, and transformational leadership was negatively associated with providers' perception of difference between the providers' current practice and evidence-based practice. Mental health service organizations may benefit from improving transformational and transactional supervisory leadership skills in preparation for implementing evidence-based practices.
Article
A mixed methods study was conducted to examine the implementation process of 26 urban school-based mental health clinics that took part in a training and implementation support program for an evidence-based school trauma intervention. Implementation process was observed using the Stages of Implementation Completion (SIC) measure. Qualitative interviews were conducted with clinic leaders in order to gain insight into clinic processes related to the SIC. Results showed that almost all of the clinics engaged in some activities related to pre-implementation (engagement, feasibility, and readiness), but only 31% of the sites formally started delivering the program to youth. Completing more pre-implementation activities, particularly those related to readiness, predicted program start-up. Qualitative analysis comparing those that implemented the program to those that did not revealed critical differences in decision-making processes, leadership strategies, and the presence of local champions for the program. This study documented the patterns of clinic behavior that occurs as part of large-scale training efforts, suggests some unique challenges that occur in schools, and highlights the importance of engaging in particular implementation activities (i.e., readiness planning, stakeholder consensus and planning meetings) as part of program start-up. Findings indicate that pre-implementation and readiness-related consultation should be employed as part of broad-scale implementation and training efforts.
Article
Implementation is posited as a multiphasic process, influenced by a range of factors, within a multilevel context. While there appears to be a general consensus that every implementation initiative will have a unique combination of influences that vary in importance across the implementation phases, leadership is an essential tenet throughout implementation frameworks, models, and theories. The Exploration, Preparation, Implementation, Sustainment (EPIS) framework is used to explore leadership, at both the inner organizational level and the outer system level, and to guide a discussion regarding three critical issues involved in implementation (organizational climate/culture, collaborative relationships, and contracting). Finally, three implementation strategies focused on improving leadership are described in order to provide examples of these issues.
Chapter
Within the context of child mental health across the UK, a practice research network, the Child Outcomes Research Consortium (CORC), has been formed and is committed to collating and using outcome data to inform clinical and service decision making. CORC began with 4 subscribing organizations in 2004 and had grown to approximately 65 by 2015. Based on this experience, lessons about how such data can be used safely and effectively to support performance management and service improvement are discussed. One challenge is that any attempt to measure the “impact” of a service using a given “outcome” is complex. Within psychotherapy in general, and in work with children and families where there are multiple perspectives in particular, this question raises a number of dilemmas. For some professionals, the whole notion of quantitatively measuring such human qualities is questioned. Despite these challenges, CORC has been pioneering in supporting its members to collect and cautiously make use of information, with a particular emphasis on outcomes from the service user perspective. Such challenges may be recognized in other countries as well, and CORC aims to share learning to help mitigate these barriers.
Article
There is increasing emphasis on the use of evidence-based practices (EBPs) in child welfare settings and growing recognition of the importance of the organizational environment, and the organization's climate in particular, for how employees perceive and support EBP implementation. Recently, Ehrhart, Aarons, and Farahnak (2014) reported on the development and validation of a measure of EBP implementation climate, the Implementation Climate Scale (ICS), in a sample of mental health clinicians. The ICS consists of 18 items and measures six critical dimensions of implementation climate: focus on EBP, educational support for EBP, recognition for EBP, rewards for EBP, selection for EBP, and selection for openness. The goal of the current study is to extend this work by providing evidence for the factor structure, reliability, and validity of the ICS in a sample of child welfare service providers. Survey data were collected from 215 child welfare providers across three states, 12 organizations, and 43 teams. Confirmatory factor analysis demonstrated good fit to the six-factor model, and the alpha reliabilities for the overall measure and its subscales were acceptable. In addition, there was general support for the invariance of the factor structure across the child welfare and mental health sectors. In conclusion, this study provides evidence for the factor structure, reliability, and validity of the ICS measure for use in child welfare service organizations.
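For readers unfamiliar with the statistic, here is a minimal sketch of Cronbach's alpha, the internal-consistency coefficient behind the "alpha reliabilities" reported above. The ratings below are invented, and the study's own computations may differ in detail.

def cronbach_alpha(scores):
    # scores: one list of k item ratings per respondent (invented data below).
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    # Alpha rises as items covary strongly relative to their separate variances.
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Four hypothetical respondents rating three hypothetical items.
print(round(cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]]), 2))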
Article
Treatment fidelity, or the application of an intervention as it is designed, is a critical issue for the successful implementation of evidence-based practices. Typically it is assumed that evidence-based practices implemented with high fidelity will result in improved outcomes, whereas low fidelity will lead to poorer outcomes. These assumptions presume agreement across researchers and practitioners on what fidelity is, how to measure it, and what level of fidelity optimizes outcomes; however, there is no widespread agreement on any of these issues. This article discusses the dimensions and nuances of treatment fidelity as well as the implications for measuring and analyzing it in relation to student outcomes. The authors review research demonstrating the differential relationship of fidelity across schools, program type, and impact on student outcomes that special educators should consider when designing intervention studies and implementing evidence-based practices. Special educators should prioritize practices and programs with clearly identified components that are empirically validated yet designed flexibly to match various contexts and student populations. Suggestions to support schools in implementing and sustaining evidence-based practices are provided.
Article
There has been a growing impetus to bridge the gap between basic science discovery, development of evidence-based practices (EBPs), and the availability and delivery of EBPs in order to improve the public health impact of such practices. To capitalize on factors that support implementation and sustainment of EBPs, it is important to consider that health care is delivered within the outer context of public health systems and the inner context of health care organizations and work groups. Leaders play a key role in determining the nature of system and organizational contexts. This article addresses the role of leadership and actions that leaders can take at and across levels in developing a strategic climate for EBP implementation within the outer (i.e., system) and inner (i.e., organization, work group) contexts of health care. Within the framework of Edgar Schein's "climate embedding mechanisms," we describe strategies that leaders at the system, organization, and work group levels can consider and apply to develop strategic climates that support the implementation and sustainment of EBP in health care and allied health care settings.
Article
Background System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children's service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. Methods We describe the ICT model and present preliminary qualitative results from the use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Results Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Conclusions Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit.
Article
Evidence-based programs (EBPs) are increasingly being implemented in children's services agencies in developed countries. However, this trend is meeting resistance from some researchers, policy makers and practitioners. In this article we appraise the main critiques, focusing on scientific, ideological, cultural, organizational and professional arguments. We contend that some of the resistance stems from misconceptions or an oversimplification of issues, while others represent valid concerns that need to be addressed by proponents of EBPs. We set out implications for the development and evaluation of programs and how they are introduced into service systems, and conclude with broader recommendations for children's services.
Article
Pragmatic measures are important to facilitate implementation and dissemination, address stakeholder issues, and drive quality improvement. This paper proposes necessary and recommended criteria for pragmatic measures, provides examples of projects to develop and identify such measures, addresses potential concerns about these recommendations, and identifies areas for future research and application. Key criteria for pragmatic measures include importance to stakeholders in addition to researchers, low burden, broad applicability, sensitivity to change, and being actionable. Examples of pragmatic measures are provided, including ones for different settings (e.g., primary care, hospital) and levels (e.g., individual, practitioner, setting) that illustrate approaches to produce broad-scale dissemination and the development of brief, standardized measures for use in pragmatic studies. There is an important need for pragmatic measures to facilitate pragmatic research, guide quality improvement, and inform progress on public health goals, but few examples are currently available. Development and evaluation of pragmatic measures and metrics would provide useful resources to advance science, policy, and practice.