Summary statistics for instrument ratings.

Source publication
Article
Background Systematic measure reviews can facilitate advances in implementation research and practice by locating reliable, valid, pragmatic measures; identifying promising measures needing refinement and testing; and highlighting measurement gaps. This review identifies and evaluates the psychometric and pragmatic properties of measures of readine...

Contexts in source publication

Context 1
... Table 5 summarizes the availability of psychometric and pragmatic information for each PAPERS property for the measures of each construct. Table 6 displays, for each focal construct, the median and range of ratings for each PAPERS property, summarizing the psychometric and pragmatic strength of the measures of each construct. Tables 7 and 8 report the median and range of ratings for each PAPERS property for each measure, by construct. ...
Context 2
... to knowledge (n = 6) Table 6 describes the median ratings and range of ratings for psychometric and pragmatic properties for those measures for which evidence or information was available (i.e., those with non-zero ratings on PAPERS criteria). For measures of readiness for implementation used in mental or behavioral health care, the median rating for internal consistency was "2-adequate," for convergent validity "2-adequate," for known-groups validity "1-minimal," for predictive validity "1-minimal," for structural validity "-1-poor," for responsiveness "2-adequate," and for norms "2-adequate." ...
Context 3
... measures of leadership engagement used in mental or behavioral health care (Table 6), the median rating for internal consistency was "3-good," for convergent validity "4-excellent," for discriminant validity "2-adequate," for known-groups validity "1-minimal," for predictive validity "1-minimal," for concurrent validity "1-minimal," for structural validity "2-adequate," and for norms "2-adequate." Note the median rating of "2-adequate" for discriminant validity was based on the rating of just one measure: the Implementation Leadership Scale (Aarons et al., 2014). ...
Context 4
... for the pragmatic property of cost was available for 14 measures, language readability for 13 measures, assessor burden (training) for 1 measure, assessor burden (interpretation) for 6 measures, and brevity for 15 measures. For measures of available resources used in mental or behavioral health care (Table 6), the median rating for internal consistency was "2-adequate," for convergent validity "1-minimal," for known-groups validity "-1-poor," for predictive validity "1-minimal," for structural validity "2-adequate," for responsiveness "4-excellent," and for norms "2-adequate." Note the median rating of "1-minimal" for structural validity was based on the rating of just one measure: the Educational Support scale of the Implementation Climate Scale. ...
Context 5
... for the pragmatic property of cost was available for two measures, language readability for two measures, assessor burden (training) for no measures, assessor burden (interpretation) for no measures, and brevity for two measures. For measures of access to knowledge and information used in mental or behavioral health care (Table 6), the median rating for internal consistency was "2-adequate," for convergent validity "1-minimal," for discriminant validity "2-adequate," for known-groups validity "-1-poor," for predictive validity "-1-poor," for structural validity "1-poor," and for norms "2-adequate." Note that the median ratings for discriminant validity, known-groups validity, predictive validity, and structural validity are based on the ratings of a single measure, although not the same measure in all instances. ...

Similar publications

Preprint
Background: Private retail pharmacies in developing countries present a unique channel for COVID-19 prevention. We assessed the response to the COVID-19 pandemic by pharmacies in Kenya, aiming to identify strategies for maximising their contribution to the national response. Methods: We conducted a prospective mixed-methods study, consisting of a q...

Citations

... With accumulating evidence in recent years, cancer prevention and control research has increasingly incorporated a focus on dissemination and implementation of evidence-based interventions (e.g., cancer screenings, HPV vaccination, and lifestyle behavior change interventions) [1]. Such progress has been informed by approaches that include community-based participatory research [2], adaptation of interventions to different populations and settings [3], and work with clinical and community partners to improve implementation within healthcare and community settings [4]. Implementation science as a field benefits from these cross-disciplinary perspectives and multi-sector collaborations to ensure that research findings lead to population-level health outcomes [5]. ...
Article
The Cancer Prevention and Control Research Network (CPCRN) is a national network of academic, public health, and community organizational partners across multiple geographic sites who collaborate to reduce the cancer burden in diverse communities. Given key recommendations that suggest the need for cross-disciplinary collaboration in cancer prevention and control, we sought to explore the historical and contemporary evolution of health equity and disparities research as an area of focus within the CPCRN over time. We conducted 22 in-depth interviews with former and current leaders, co-investigators, and other members of the network. Several key themes emerged from data that were analyzed and interpreted using a constructivist, reflexive, thematic analysis approach. Nearly all participants reported a strong focus on studying health disparities since the inception of the CPCRN, which offered the network a distinct advantage in recent years for incorporating an intentional focus on health equity. Recent law enforcement injustices and the inequities observed during the COVID-19 pandemic have further invigorated network activities around health equity, such as development of a health equity-focused workgroup toolkit, among other cross-center activities. Several participants noted that, in terms of deep, meaningful, and impactful health equity-oriented research, there are still great strides for the network to make, while also acknowledging CPCRN as well-aligned with the national dialogue led by federal agency partners around health equity. Finally, several future directions were mentioned by the participants, including a focus on supporting a diverse workforce and engaging organizational partners and community members in equity-focused research. Findings from these interviews provide direction for the network in advancing the science in cancer prevention and control, with a strengthened focus on health equity.
... Measurement instruments seek to elicit quantitative assessments of barriers and facilitators because this can be a more efficient way to assess context. However, these instruments are often exceedingly long or require expertise and training to use [6–11]. Frontline clinicians and staff who do the work of implementation may misunderstand or misapply questions designed to elicit potential barriers and facilitators; they are often more familiar with quality improvement language [12–16]. ...
Article
Background The Consolidated Framework for Implementation Research (CFIR) is a determinant framework that can be used to guide context assessment prior to implementing change. Though a few quantitative measurement instruments have been developed based on the CFIR, most assessments using the CFIR have relied on qualitative methods. One challenge to measurement is to translate conceptual constructs, which are often described using highly abstract, technical language, into lay language that is clear, concise, and meaningful. The purpose of this paper is to document methods to develop a freely available pragmatic context assessment tool (pCAT). The pCAT is based on the CFIR and designed for frontline quality improvement teams as an abbreviated assessment of local facilitators and barriers in a clinical setting. Methods Twenty-seven interviews using the Think Aloud method (asking participants to verbalize thoughts as they respond to assessment questions) were conducted with frontline employees to improve a pilot version of the pCAT. Interviews were recorded and transcribed verbatim; the CFIR guided coding and analyses. Results Participants identified several areas where language in the pCAT needed to be modified, clarified, or allow more nuance to increase usefulness for frontline employees. Participants found it easier to respond to questions when they had a recent, specific project in mind. Potential barriers and facilitators tend to be unique to each specific improvement. Participants also identified concepts that were missing or conflated, leading to refinements that made the pCAT more understandable, accurate, and useful. Conclusions The pCAT is designed to be practical, using everyday language familiar to frontline employees. The pCAT is short (14 items), freely available, and does not require research expertise or experience to use. It is designed to draw on the knowledge of individuals most familiar with their own clinical context.
The pCAT has been available online for approximately two years and has generated a relatively high level of interest indicating potential usefulness of the tool.
... Assessing organizational readiness is part of an overall strategy to ensure not only that the site will engage in the research so that findings are valid, but also that these data will help explain differences between sites on resident and staff outcomes (Damschroder et al., 2009). There are several models investigators can use to measure organizational readiness (Weiner et al., 2020). Some elements of readiness, however, may be unique to the NH setting, such as the role of the family as a member of the care delivery team (Gaugler & Mitchell, 2022). ...
Article
The current State of the Science Commentary focuses on workforce challenges in the nursing home (NH) setting that lie within the purview of professional nursing-what professional nurses can do to promote high-quality person-centered care within a context of existing resources-individually and broadly across the collective profession. Historically, three models of care delivery have characterized the way in which nursing care is organized and delivered in different settings: primary nursing, functional nursing, and team nursing. Based on the existing evidence, we call for scientific leadership in the redesign, testing, and implementation of a nursing care delivery model that operationalizes relationship-centered team nursing. This integrative model incorporates successful evidence-based approaches that have the potential to improve quality of care, resident quality of life, and staff quality of work life: clear communication, staff empowerment, coaching styles of supervision, and family/care partner involvement in care processes. In addition to the needed evidence base for NH care delivery models, it is imperative that educational programs incorporate content and clinical experiences that will enable the future nursing workforce to fill the leadership gap in NH care delivery. [Research in Gerontological Nursing, 16(1), 5-13.].
... This systematic review is reported according to the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols checklist (PRISMA) [21] (see Additional file 1) and followed established procedures used by other systematic reviews of measures of implementation outcomes [18,20,22,23]. It was registered prospectively with Research Registry (reviewregistry1097) prior to the final database search being conducted. ...
... For measures that had multiple reports of the same pragmatic or psychometric property, for instance when multiple studies assessed responsiveness, the median score was used. If the median value was a non-integer, the score was rounded down [18,23,27]. Data were only assessed against the PAPERS psychometric criteria if they were being explicitly used to evaluate the psychometric properties of that measure. ...
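The scoring rule quoted above (take the median across repeated reports of a property and round a non-integer median down) can be sketched as follows; the function name and list-of-integers input format are illustrative, not taken from the published protocol:

```python
import math
import statistics

# PAPERS rating labels referenced in the review, for orientation:
# -1 = poor, 1 = minimal, 2 = adequate, 3 = good, 4 = excellent

def summary_rating(scores):
    """Collapse repeated ratings of one property of one measure.

    Takes the median of the reported ratings; a non-integer median
    (possible with an even number of reports) is rounded down, as the
    review describes.
    """
    return math.floor(statistics.median(scores))
```

For example, two reports rated 1 and 2 give a median of 1.5, which this rule collapses to 1 ("minimal").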
... Items from measures of sustainability determinants were first mapped to lower-level constructs that define five higher-level domains proposed by the Integrated Sustainability Framework (i.e., outer context, inner context, intervention characteristics, processes, and implementer and population characteristics) [2] (see [14] for a more detailed description of the Integrated Sustainability Framework domains and constructs). Item mapping followed similar procedures undertaken in previous reviews [23,76], whereby two research team members proficient in the content area of sustainability (AH & AS), independently extracted and mapped the items from each measure to the domains of the relevant frameworks outlined above. We classified a measure as incorporating components of a specific construct if at least one item was mapped to that construct. ...
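The classification rule stated above (a measure covers a construct if at least one of its items was mapped to that construct) can be sketched as below; the data structure is a hypothetical stand-in for the reviewers' extraction sheets:

```python
def construct_coverage(item_mappings, constructs):
    """Mark which constructs a measure covers.

    A measure is classified as incorporating a construct if at least
    one of its items was mapped to that construct. `item_mappings`
    maps item label -> set of construct names (illustrative layout).
    """
    mapped = set()
    for construct_names in item_mappings.values():
        mapped |= construct_names
    return {c: c in mapped for c in constructs}
```

With two items mapped to "inner context" and "processes", only those two of the five Integrated Sustainability Framework domains would be marked as covered.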
Article
Background Sustainability is concerned with the long-term delivery and subsequent benefits of evidence-based interventions. To further this field, we require a strong understanding and thus measurement of sustainability and what impacts sustainability (i.e., sustainability determinants). This systematic review aimed to evaluate the quality and empirical application of measures of sustainability and sustainability determinants for use in clinical, public health, and community settings. Methods Seven electronic databases, reference lists of relevant reviews, online repositories of implementation measures, and the grey literature were searched. Publications were included if they reported on the development, psychometric evaluation, or empirical use of a multi-item, quantitative measure of sustainability, or sustainability determinants. Eligibility was not restricted by language or date. Eligibility screening and data extraction were conducted independently by two members of the research team. Content coverage of each measure was assessed by mapping measure items to relevant constructs of sustainability and sustainability determinants. The pragmatic and psychometric properties of included measures were assessed using the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). The empirical use of each measure was descriptively analyzed. Results A total of 32,782 articles were screened from the database search, of which 37 were eligible. An additional 186 publications were identified from the grey literature search. The 223 included articles represented 28 individual measures, of which two assessed sustainability as an outcome, 25 covered sustainability determinants and one explicitly assessed both. The psychometric and pragmatic quality was variable, with PAPERS scores ranging from 14 to 35, out of a possible 56 points. The Provider Report of Sustainment Scale had the highest PAPERS score and measured sustainability as an outcome.
The School-wide Universal Behaviour Sustainability Index-School Teams had the highest PAPERS score (score = 29) among the measures of sustainability determinants. Conclusions This review can be used to guide selection of the most psychometrically robust, pragmatic, and relevant measure of sustainability and sustainability determinants. It also highlights that future research is needed to improve the psychometric and pragmatic quality of current measures in this field. Trial registration This review was prospectively registered with Research Registry (reviewregistry1097), March 2021.
... Others were underrepresented (e.g., political will for policy implementation) or absent (e.g., sustainability) from measures, despite the known significance within implementation science (Glasgow & Chambers, 2012). Outer setting implementation determinants were assessed less frequently than inner setting determinants, which has been reported in previous studies (Chaudoir et al., 2013; Chor et al., 2015; Clinton-McHarg et al., 2016; McHugh et al., 2020; Powell et al., 2021; Weiner et al., 2020). The limited attention to outer setting implementation determinants is problematic, as factors like political will can impact mental health outcomes through policy development and resource allocation (Corrigan & Watson, 2003; Shera & Ramon, 2013; Weinberg et al., 2012). ...
... Psychometric and pragmatic qualities of measures were also assessed in this study. With the exception of internal consistency and sample norms, psychometric properties were frequently unreported, as found in previous implementation measures reviews (Allen et al., 2020; Clinton-McHarg et al., 2016; McHugh et al., 2020; Powell et al., 2021; Weiner et al., 2020). For example, information about structural validity was available for seven measures, and convergent and discriminant validity were reported for two measures. ...
... These findings suggest that there is significant room for improvement in the development or refining of existing measures to make them more psychometrically sound. Overall, the observed pragmatic qualities aligned with previous findings (McLoughlin et al., 2021; Powell et al., 2021; Weiner et al., 2020). Most included measures were freely available (i.e., accessible through supplemental materials with the article or by contacting authors directly), though one was proprietary. ...
Article
Background Mental health is a critical component of wellness. Public policies present an opportunity for large-scale mental health impact, but policy implementation is complex and can vary significantly across contexts, making it crucial to evaluate implementation. The objective of this study was to (1) identify quantitative measurement tools used to evaluate the implementation of public mental health policies; (2) describe implementation determinants and outcomes assessed in the measures; and (3) assess the pragmatic and psychometric quality of identified measures. Method Guided by the Consolidated Framework for Implementation Research, Policy Implementation Determinants Framework, and Implementation Outcomes Framework, we conducted a systematic review of peer-reviewed journal articles published in 1995–2020. Data extracted included study characteristics, measure development and testing, implementation determinants and outcomes, and measure quality using the Psychometric and Pragmatic Evidence Rating Scale. Results We identified 34 tools from 25 articles, which were designed for mental health policies or used to evaluate constructs that impact implementation. Many measures lacked information regarding measurement development and testing. The most assessed implementation determinants were readiness for implementation, which encompassed training (n = 20, 57%) and other resources (n = 12, 34%), actor relationships/networks (n = 15, 43%), and organizational culture and climate (n = 11, 31%). Fidelity was the most prevalent implementation outcome (n = 9, 26%), followed by penetration (n = 8, 23%) and acceptability (n = 7, 20%). Apart from internal consistency and sample norms, psychometric properties were frequently unreported. Most measures were accessible and brief, though minimal information was provided regarding interpreting scores, handling missing data, or training needed to administer tools.
Conclusions This work contributes to the nascent field of policy-focused implementation science by providing an overview of existing measurement tools used to evaluate mental health policy implementation and recommendations for measure development and refinement. To advance this field, more valid, reliable, and pragmatic measures are needed to evaluate policy implementation and close the policy-to-practice gap. Plain Language Summary Mental health is a critical component of wellness, and public policies present an opportunity to improve mental health on a large scale. Policy implementation is complex because it involves action by multiple entities at several levels of society. Policy implementation is also challenging because it can be impacted by many factors, such as political will, stakeholder relationships, and resources available for implementation. Because of these factors, implementation can vary between locations, such as states or countries. It is crucial to evaluate policy implementation, thus we conducted a systematic review to identify and evaluate the quality of measurement tools used in mental health policy implementation studies. Our search and screening procedures resulted in 34 measurement tools. We rated their quality to determine if these tools were practical to use and would yield consistent (i.e., reliable) and accurate (i.e., valid) data. These tools most frequently assessed whether implementing organizations complied with policy mandates and whether organizations had the training and other resources required to implement a policy. Though many were relatively brief and available at little-to-no cost, these findings highlight that more reliable, valid, and practical measurement tools are needed to assess and inform mental health policy implementation. Findings from this review can guide future efforts to select or develop policy implementation measures.
... Our goal is to build from existing measures in the implementation science, public health, and health equity literature, creating new items when existing sources do not evaluate constructs of interest. We anticipate our study will result in one survey and one interview guide for each participant type with supplemental item banks, but will remain flexible to feedback we will receive in Aim 2. Similar to the steps taken by Lewis et al. (34), the research team will review example measurement tools, published measures reviews, and online measures repositories to determine whether existing instruments contain relevant items or scales that align with the conceptual content of the selected constructs and can be used verbatim or adapted for use in the new measures under development (19, 39–46). For each chosen construct, we will draw upon relevant literature and examples from existing instruments to draft items. ...
Article
Background School-based policies that ensure provision of nutrition, physical activity, and other health-promoting resources and opportunities are essential in mitigating health disparities among underserved populations. Measuring the implementation of such policies is imperative to bridge the gap between policy and practice. Unfortunately, few practical, psychometrically strong measures of school policy implementation exist. Few of those available explicitly focus on the issues of equity and social justice as a key component of implementation, which may result in underassessment of the equity implications of policy implementation. The purpose of this study is to develop equity-focused measures in collaboration with practitioners, researchers, and other key implementation partners that will facilitate evaluation of policy implementation determinants (i.e., barriers and facilitators), processes, and outcomes. Methods We will actively seek engagement from practitioners, researchers, and advocacy partners (i.e., stakeholders) who have expertise in school health policy throughout each phase of this project. We propose a multi-phase, 1-year project comprising the following steps: (1) selection of relevant constructs from guiding frameworks related to health equity and implementation science; (2) initial measure development, including expert feedback on draft items; (3) pilot cognitive testing with representatives from key target populations (i.e., school administrators, teachers, food service staff, and students and parents/guardians); and (4) measure refinement based on testing and assessment of pragmatic properties. These steps will allow us to establish initial face and content validity of a set of instruments that can undergo psychometric testing in future studies to assess their reliability and validity.
Discussion Completion of this project will result in several school policy implementation measurement tools which can be readily used by practitioners and researchers to evaluate policy implementation through a health equity lens. This will provide opportunities for better assessment and accountability of policies that aim to advance health equity among school-aged children and their families. Trial registration Open Science Framework Registration doi: 10.17605/OSF.IO/736ZU .
... Measuring CFIR constructs quantitatively is a growing area that has great potential to assist with understanding the relationship between implementation factors and outcomes [46,47]. Implementation science researchers have started testing quantitative measures for CFIR constructs; however, more work is needed in this area to fully understand the validity and reliability of these constructs, how they are operationalized in practice, and their associations with implementation outcomes [48–52]. CFIR quantitative measures have typically examined relationships between constructs and shorter-term implementation (e.g., adoption) rather than later-term outcomes like sustainability [21]. ...
Article
Background Scaling evidence-based interventions is key to impacting population health. The National DPP lifestyle change program is one such intervention that has been scaled across the USA over the past 20 years; however, enrollment is an ongoing challenge. Furthermore, little is known about which organizations are most successful with program delivery, enrollment, and scaling. This study aims to understand more about the internal and external organizational factors that impact program implementation and reach. Methods Between August 2020 and January 2021, data were collected through semi-structured key informant interviews with 30 National DPP delivery organization implementers. This study uses a qualitative cross-case construct rating methodology to assess which Consolidated Framework for Implementation Research (CFIR) inner and outer setting constructs contributed (both in valence and magnitude) to the organization’s current level of implementation reach (measured by average participant enrollment per year). A construct by case matrix was created with ratings for each CFIR construct by interviewee and grouped by implementation reach level. Results Across the 16 inner and outer setting constructs and subconstructs, the interviewees with greater enrollment per year provided stronger and more positive examples related to implementation and enrollment of the program, while the lower reach groups reported stronger and more negative examples across rated constructs. Four inner setting constructs/subconstructs (structural characteristics, compatibility, goals and feedback, and leadership engagement) were identified as “distinguishing” between enrollment reach levels based on the difference between groups by average rating, the examination of the number of extreme ratings within levels, and the thematic analysis of the content discussed.
Within these constructs, factors such as organization size and administrative processes; program fit with existing organization services and programs; the presence of enrollment goals; and active leadership involvement in implementation were identified as influencing program reach. Conclusions Our study identified a number of influential CFIR constructs and their impact on National DPP implementation reach. These findings can be leveraged to improve efforts in recruiting and assisting delivery organizations to increase the reach and scale of the National DPP as well as other evidence-based interventions.
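One part of the cross-case rating logic described above, comparing average construct ratings between higher- and lower-reach groups to flag "distinguishing" constructs, can be roughly sketched as follows. The data layout, names, and cutoff are illustrative only; the published analysis also weighed extreme ratings and thematic content:

```python
from statistics import mean

def distinguishing_constructs(ratings, high_ids, low_ids, cutoff=1.0):
    """Flag constructs whose average rating separates the reach groups.

    `ratings` maps construct -> {interviewee_id: rating}; interviewees
    are grouped by implementation reach. A construct is flagged when
    the high- and low-reach group means differ by at least `cutoff`.
    """
    flagged = {}
    for construct, by_id in ratings.items():
        high = mean(by_id[i] for i in high_ids if i in by_id)
        low = mean(by_id[i] for i in low_ids if i in by_id)
        if abs(high - low) >= cutoff:
            flagged[construct] = high - low
    return flagged
```

A construct rated consistently positive by high-reach interviewees and negative by low-reach interviewees would be flagged; one rated similarly by both groups would not.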
... Quantitatively assessed contextual factors include, e.g., implementation climate, organizational culture and climate, available resources, and readiness for change. Several reviews provide overviews of current measurement tools and their psychometric properties [14, 15, 25, 83–88]. Furthermore, measurement and data extraction tools to assess aspects of context mentioned in the frameworks are available, for instance, on the CFIR [89] and EPIS framework [90] project websites. ...
Article
Background Designing intervention and implementation strategies with careful consideration of context is essential for successful implementation science projects. Although the importance of context has been emphasized and methodology for its analysis is emerging, researchers have little guidance on how to plan, perform, and report contextual analysis. Therefore, our aim was to describe the Basel Approach for coNtextual ANAlysis (BANANA) and to demonstrate its application on an ongoing multi-site, multiphase implementation science project to develop/adapt, implement, and evaluate an integrated care model in allogeneic SteM cell transplantatIon facILitated by eHealth (the SMILe project). Methods BANANA builds on guidance for assessing context by Stange and Glasgow (Contextual factors: the importance of considering and reporting on context in research on the patient-centered medical home, 2013). Based on a literature review, BANANA was developed in ten discussion sessions with implementation science experts and a medical anthropologist to guide the SMILe project’s contextual analysis. BANANA’s theoretical basis is the Context and Implementation of Complex Interventions (CICI) framework. Working from an ecological perspective, CICI acknowledges contextual dynamics and distinguishes between context and setting (the implementation’s physical location). Results BANANA entails six components: (1) choose a theory, model, or framework (TMF) to guide the contextual analysis; (2) use empirical evidence derived from primary and/or secondary data to identify relevant contextual factors; (3) involve stakeholders throughout contextual analysis; (4) choose a study design to assess context; (5) determine contextual factors’ relevance to implementation strategies/outcomes and intervention co-design; and (6) report findings of contextual analysis following appropriate reporting guidelines. 
Partly run simultaneously, the first three components form a basis both for the identification of relevant contextual factors and for the next components of the BANANA approach. Discussion Understanding of context is indispensable for a successful implementation science project. BANANA provides much-needed methodological guidance for contextual analysis. In subsequent phases, it helps researchers apply the results to intervention development/adaption and choices of contextually tailored implementation strategies. For future implementation science projects, BANANA’s principles will guide researchers first to gather relevant information on their target context, then to inform all subsequent phases of their implementation science project to strengthen every part of their work and fulfill their implementation goals.
... Nonetheless, they believed that structural and cultural characteristics of the settings (e.g., small team size, nurses' interest in new projects, organizational support) created favourable conditions for implementing the competency framework, despite an unfavourable context exacerbated by the COVID-19 pandemic. This importance of finding the 'right unit' was a significant theme in participants' accounts and has not been discussed to the same extent as leadership or implementation strategies in the literature. Although the goal is to eventually implement the competency framework in all units of the participating organizations, this finding invites particular attention to the characteristics of the settings and groups of individuals affected by such change when planning for implementation. Although current formal measures of readiness for implementation appear to lack psychometric and pragmatic qualities [35], participants still described three main features of what they perceived to be the 'right unit' for implementation: the nursing team's motivation and interest, small size, and stability. These features and the various contextual issues that participants discussed regarding nurses' workload and fatigue can be part of a heuristic assessment of whether the context is conducive to implementing a similar innovation or whether additional upstream measures are needed to prepare the ground for such a project. As for the implementation process, changes that affect CPD in healthcare systems can be challenging and complicated, and educators must consider how to implement them effectively. ...
Rationale: Nurses are responsible for engaging in continuing professional development throughout their careers. This implies that they use tools such as competency frameworks to assess their level of development, identify their learning needs, and plan actions to achieve their learning goals. Although multiple competency frameworks and guidelines for their development have been proposed, the literature on their implementation in clinical settings is sparser. While the complexity of practice creates a need for context-sensitive competency frameworks, their implementation may also be subject to various facilitators and barriers. Aims and objectives: To document the facilitators and barriers to implementing a nursing competency framework on a provincial scale. Methods: This multicentre study was part of a provincial project to implement a nursing competency framework in Quebec, Canada, using a three-step process based on evidence from implementation science. Nurses' participation consisted of the self-assessment of their competencies using the framework. For this qualitative descriptive study, 58 stakeholders from 12 organizations involved in the first wave of implementation participated in group interviews to discuss their experience with the implementation process and their perceptions of facilitators and barriers. Data were subjected to thematic analysis. Results: Analysis of the data yielded five themes: finding the 'right unit' despite an unfavourable context; taking and protecting time for self-assessment; creating value around competency assessment; bringing the project as close to the nurses as possible; and making the framework accessible. Conclusion: This study was one of the first to document the large-scale, multi-site implementation of a nursing competency framework in clinical settings.
This project represented a unique challenge because it involved two crucial changes: adopting a competency-based approach focused on educational outcomes and accountability to the public and valorizing a learning culture where nurses become active stakeholders in their continuing professional development.
... The field of implementation science has introduced multiple theories, methods, and frameworks to assess implementation readiness and to help organizations consider and prepare for changing demands, priorities, and contexts (19). For example, more than 50 measurement tools described in 3 seminal systematic reviews assess some dimension of organizational implementation readiness (20–22). However, to our knowledge no existing tools apply specifically to readiness to implement cancer screening interventions, and only 1 has been developed for use specifically in primary care settings (23). ...
... We also aimed to identify best practices for conducting readiness assessments in clinic settings and applying findings to decision making (e.g., guidance on selecting which EBIs to adopt or determining gaps that clinics may need to address before implementation). We characterized potentially relevant readiness instruments and triangulated data from 1) a review of instruments described in 3 seminal systematic reviews of readiness assessment instruments in diverse health care settings (20–22), 2) a document review of CRCCP recipients' existing practices for determining readiness, and 3) semi-structured key informant interviews with a diverse subset of CRCCP recipients. We evaluated tools on the basis of their 1) ability to meet the 6 required assessment domains, 2) length, 3) applicability to primary care settings, 4) adaptability across clinic contexts, and 5) accessibility. ...
Evidence-based interventions, including provider assessment and feedback, provider reminders, patient reminders, and reduction of structural barriers, improve colorectal cancer screening rates. Assessing primary care clinics' readiness to implement these interventions can help clinics use strengths, identify barriers, and plan for success. However, clinics may lack tools to assess readiness and use findings to plan for successful implementation. To address this need, we developed the Field Guide for Assessing Readiness to Implement Evidence-Based Cancer Screening Interventions (Field Guide) for the Centers for Disease Control and Prevention's (CDC's) Colorectal Cancer Control Program (CRCCP). We conducted a literature review of evidence and existing tools to measure implementation readiness, reviewed readiness tools from selected CRCCP award recipients (n = 35), and conducted semi-structured interviews with key informants (n = 8). We sought feedback from CDC staff and recipients to inform the final document. The Field Guide, which is publicly available online, outlines 4 assessment phases: 1) convene team members and determine assessment activities, 2) design and administer the readiness assessment, 3) evaluate assessment data, and 4) develop an implementation plan. Assessment activities and tools are included to facilitate completion of each phase. The Field Guide integrates implementation science and practical experience into a relevant tool to bolster clinic capacity for implementation, increase potential for intervention sustainability, and improve colorectal cancer screening rates, with a focus on patients served in safety net clinic settings. Although this tool was developed for use in primary care clinics for cancer screening, the Field Guide may have broader application for clinics and their partners for other chronic diseases.