Article

Incorporating Multiple Data Sources to Assess Changes in Organizational Capacity in Child Welfare Systems


Abstract

Three federally funded Child Welfare Capacity Building Centers provide services to build the organizational capacity of public child welfare agencies to help meet federal requirements, improve practice, and improve outcomes for children and families. The aim of this study was to explore capacity outcomes in five dimensions (resources, infrastructure, knowledge and skills, culture and climate, and engagement and partnership) achieved by child welfare jurisdictions that received Center services. Analyses describe the capacities targeted for improvement and the amount and type of services provided by Centers; assess the relationship between services and increases in capacity; and explore whether that relationship differs depending on the jurisdiction's level of foundational capacity. Data collected through surveys and a service delivery tracking system reflect the perspectives of service recipients and service providers. Results reveal that jurisdictions typically targeted capacity outcomes in the dimensions of knowledge and skills and infrastructure, and received an average of 28 hours of direct services to support their capacity-building efforts. Service dosage was positively associated with achievement of capacity outcomes, though no interaction was found between service dosage and foundational capacity in their effect on outcomes. Methodological lessons learned and implications for future evaluations of organizational capacity building efforts are offered.
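For illustration only (this is not the authors' analysis), the dosage question described in the abstract, including the test for a dosage-by-foundational-capacity interaction, could be examined with a regression that contains an interaction term. The variable names and data below are hypothetical.

```python
# Minimal sketch (hypothetical data; not the study's actual model):
# does service dosage predict capacity gain, and does that effect
# depend on a jurisdiction's foundational capacity?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "capacity_gain": [0.2, 1.1, 0.9, 0.3, 1.4, 1.0, 0.1, 0.8],
    "dosage_hours":  [10, 35, 28, 12, 40, 30, 8, 25],
    "foundational":  [2.1, 3.4, 2.8, 2.0, 3.9, 3.1, 1.8, 2.6],
})

# "dosage_hours * foundational" expands to both main effects plus
# their interaction, so the interaction coefficient can be inspected directly.
model = smf.ols("capacity_gain ~ dosage_hours * foundational", data=df).fit()
print(model.summary())
```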

Article
This study aims to understand the role implementation support practitioners can have in supporting the use of research-supported practices, policies, and programs in human service sectors. Through a survey design, the authors: 1) confirm and refine principles and competencies used by professionals to provide implementation support in human service systems; 2) increase understanding of the conditions under which implementation support practitioners can be more or less effective; and 3) describe the usefulness of competencies for professionals providing implementation support. Additional findings are presented on the role of context and trusting relationships in implementation support practice. Areas for further research are discussed.
Article
Knowledge transfer (KT) from the consultant to the client is an important area that needs to be repeatedly addressed and thoroughly understood. The aim of this research was to examine the assumption that client characteristics and consultant competencies play a defining role in the effective transfer of knowledge to the client party. Through unstructured in-depth interviews and concise questionnaires, the authors examined the critical aspects and competencies required of the consultant, and the characteristics and attitudes required of the client, that would contribute to a successful transfer of knowledge. Eighty consulting assignments were studied from both the client side and the consultant side. A conceptual model is presented; factor analysis was used to validate the constructs, and partial least squares was used to test the model. The findings showed that the consultants' professionalism, skills, and behavior were significant contributors to KT to the client. Surprisingly, neither the consultant's knowledge nor the client's characteristics had a significant effect on KT to the client.
Article
Findings from a synthesis of technical assistance models and frameworks were used to code the use of 25 core elements of technical assistance in studies and evaluations of implementation interventions intended to affect program, organization, and systems change. The 25 core elements were grouped into five components: preparation for the provision of technical assistance, development of a technical assistance plan, implementation of technical assistance, evaluation of the effects of technical assistance, and sustainability of technical assistance-facilitated changes. Results indicated that a subset of 11 core elements was related to between-group and between-condition differences in effect sizes for program, organization, and systems changes. Results also showed that more intensive technical assistance was associated with larger effect sizes than less intensive technical assistance, and that particular combinations of practices were associated with the largest effect sizes.
Article
Background: There is limited research on capacity building interventions that include theoretical foundations. The purpose of this systematic review is to identify underlying theories, models and frameworks used to support capacity building interventions relevant to public health practice. The aim is to inform and improve capacity building practices and services offered by public health organizations. Methods: Four search strategies were used: 1) electronic database searching; 2) reference lists of included papers; 3) key informant consultation; and 4) grey literature searching. Inclusion and exclusion criteria are outlined; included papers focus on capacity building, learning plans, or professional development plans in combination with tools, resources, processes, procedures, steps, models, frameworks, or guidelines; are set in a public health or healthcare setting, or in non-government, government, or community organizations as they relate to healthcare; and explicitly or implicitly mention a theory, model and/or framework that grounds the capacity building approach developed. Quality assessments were performed on all included articles. Data analysis included a process for synthesizing, analyzing and presenting descriptive summaries, and for categorizing theoretical foundations according to which theory, model and/or framework was used and whether it was implied or explicitly identified. Results: Nineteen articles were included in this review. A total of 28 theories, models and frameworks were identified. Of these, two theories (Diffusion of Innovations and Transformational Learning), two models (Ecological and Interactive Systems Framework for Dissemination and Implementation) and one framework (Bloom's Taxonomy of Learning) were the most frequently cited. Conclusions: This review identifies specific theories, models and frameworks that support capacity building interventions relevant to public health organizations. It provides public health practitioners with a menu of potentially usable theories, models and frameworks to support capacity building efforts. The findings also indicate that the use of theories, models or frameworks should be intentional, explicitly identified and referenced, and that how they were applied to the capacity building intervention should be clearly outlined. The online version of this article (10.1186/s12889-017-4919-y) contains supplementary material, which is available to authorized users.
Article
This special issue of Training and Development in Human Services, Supporting Change in Child Welfare: An Evaluation of Training and Technical Assistance, primarily focuses on findings from a series of studies conducted as part of a cross-site evaluation of National Resource Centers and Implementation Centers funded by the Children's Bureau from 2008-2014. Brian Deakins and Jane Morgan, from the Child Welfare Capacity Building Division of the Children's Bureau, helped pull together the papers from that cross-site evaluation and describe each one in their introduction, which follows this one. The model of providing both training and technical assistance to courts, tribes, and states is one that local child welfare systems should consider as they approach building capacity in their own jurisdictions. To help tribal, state, and county child welfare systems make the leap from the national example to the local level, the final paper in this special issue, by Helen Cahalane, Cindy Parry, and Wendy Unger, shows how one child welfare training system incorporates training, coaching, and organizational enhancement activities in its partnership with local child welfare agencies in order to build effective child welfare organizations.
Article
Key Points: Increased accountability from foundations has created a culture in which nonprofits, with limited resources and a range of reporting protocols from multiple funders, struggle to meet data-reporting expectations. Responding to this, the Robert R. McCormick Foundation in partnership with the Chicago Tribune launched the Unified Outcomes Project, an 18-month evaluation capacity-building project. The project focused on increasing grantees' capacity to report outcome measures and utilize this evidence for program improvement, while streamlining the number of tools being used to collect data among cohort members. It utilized a model that emphasized communities of practice, evaluation coaching, and collaboration between the foundation and 29 grantees to affect evaluation outcomes across grantee contexts. This article highlights the project's background, activities, and outcomes, and its findings suggest that the majority of participating grantees benefited from their participation, in particular those that received evaluation coaching. This article also discusses obstacles encountered by the grantees and lessons learned.
Article
The need for multiple respondents per organization in organizational survey research is supported. Leadership teams' ratings of their implementations of market orientation are examined, along with learning orientation, entrepreneurial management, and organizational flexibility. Sixty diverse organizations, including not-for-profit organizations in education and healthcare as well as manufacturing and service businesses, were included. The major finding was the large rating variance within the leadership teams of each organization. The results are enlightening and have definite implications for improved design of survey research on organizations.
Book
Available for download at http://nirn.fpg.unc.edu/resources/implementation-research-synthesis-literature
Article
Numerous agencies are providing training, technical assistance, and other support to build community-based practitioners' capacity to adopt and implement evidence-based prevention interventions. Yet, little is known about how best to design capacity-building interventions to optimize their effectiveness. Wandersman et al. (Am J Community Psychol. 50:445-59, 2012) proposed the Evidence-Based System for Innovation Support (EBSIS) as a framework to guide research and thereby strengthen the evidence base for building practitioners' capacity. The purpose of this review was to contribute to further development of the EBSIS by systematically reviewing empirical studies of capacity-building interventions to identify (1) the range of strategies used, (2) variations in the way they were structured, and (3) evidence for their effectiveness at increasing practitioners' capacity to use evidence-based prevention interventions. PubMed, EMBASE, and CINAHL were searched for English-language articles reporting findings of empirical studies of capacity-building interventions that were published between January 2000 and January 2014 and were intended to increase use of evidence-based prevention interventions in non-clinical settings. To maximize review data, studies were not excluded a priori based on design or methodological quality. Using the EBSIS as a guide, two researchers independently extracted data from included studies. Vote counting and meta-summary methods were used to summarize findings. The review included 42 publications reporting findings from 29 studies. In addition to confirming the strategies and structures described in the EBSIS, the review identified two new strategies and two variations in structure. Capacity-building interventions were found to be effective at increasing practitioners' adoption (n = 10 of 12 studies) and implementation (n = 9 of 10 studies) of evidence-based interventions. Findings were mixed for interventions' effects on practitioners' capacity or intervention planning behaviors. Both the type and structure of capacity-building strategies may have influenced effectiveness. The review also identified contextual factors that may require variations in the ways capacity-building interventions are designed. Based on review findings, refinements are suggested to the EBSIS. The refined framework moves the field towards a more comprehensive and standardized approach to conceptualizing the types and structures of capacity-building strategies. This standardization will assist with synthesizing findings across studies and guide capacity-building practice and research.
Article
Studies have shown that communities have not always been able to implement evidence-based prevention programs with quality and achieve outcomes demonstrated by prevention science. Implementation support interventions are needed to bridge this gap between science and practice. The purpose of this article is to present two-year outcomes from an evaluation of the Assets Getting To Outcomes (AGTO) intervention in 12 Maine communities engaged in promoting Developmental Assets, a positive youth development approach to prevention. AGTO is an implementation support intervention that consists of a manual of text and tools, face-to-face training, and onsite technical assistance, focused on activities shown to be associated with obtaining positive results across any prevention program. This study uses a nested and cross-sectional, cluster randomized controlled design. Participants were coalition members and program staff from 12 communities in Maine. Each coalition nominated up to five prevention programs to participate. At random, six coalitions and their respective 30 programs received the two-year AGTO intervention and the other six maintained routine operations. The study assessed prevention practitioner capacity (efficacy and behaviors), practitioner exposure to and use of AGTO, practitioner perceptions of AGTO, and prevention program performance. Capacity of coalition members and performance of their programs were compared between the two groups across the baseline, one-, and two-year time points. We found no significant differences in prevention capacity between the AGTO and control groups. However, within the AGTO group, significant differences were found between those with greater and those with lesser exposure to and use of AGTO. Programs that received the highest number of technical assistance hours showed the most program improvement. This study is the first of its kind to show that use of an implementation support intervention (AGTO) yielded improvements in practitioner capacity and consequently in program performance on a large sample of practitioners and programs using a randomized controlled design. ClinicalTrials.gov identifier: NCT00780338.
Article
Organizational research frequently involves seeking judgmental response data from informants within organizations. This article discusses why using multiple informants improves the quality of response data and thereby the validity of research findings. The authors show that when there are multiple informants who disagree, responses aggregated with confidence- or competence-based weights outperform those with response data-based weights, which in turn provide significant gains in estimation accuracy over simply averaging informant reports. The proposed methods are effective, inexpensive, and easy to use in organizational marketing research.
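As a rough illustration of the contrast this article draws, a confidence-weighted aggregate of informant reports can be compared with a simple average. The data and weights below are hypothetical, not the authors' method or data.

```python
# Minimal sketch (illustrative only): combining multiple informants' ratings
# of one organizational property, comparing a simple average with a
# confidence-weighted average.
import numpy as np

ratings    = np.array([4.0, 2.5, 3.5])   # three informants' ratings of one organization
confidence = np.array([0.9, 0.3, 0.7])   # each informant's self-rated confidence (hypothetical)

simple_average   = ratings.mean()
weighted_average = np.average(ratings, weights=confidence)

print(f"simple average:   {simple_average:.2f}")
print(f"weighted average: {weighted_average:.2f}")
```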
Article
Researchers use multiple informants’ reports to assess and examine behavior. However, informants’ reports commonly disagree. Informants’ reports often disagree in their perceived levels of a behavior (“low” versus “elevated” mood), and examining multiple reports in a single study often results in inconsistent findings. Although researchers often espouse taking a multi-informant assessment approach, they frequently address informant discrepancies using techniques that treat discrepancies as measurement error. Yet, recent work indicates that researchers in a variety of fields often may be unable to justify treating informant discrepancies as measurement error. In this review, the authors advance a framework (Operations Triad Model) outlining general principles for using and interpreting informants’ reports. Using the framework, researchers can test whether or not they can extract meaningful information about behavior from discrepancies among multiple informants’ reports. The authors provide supportive evidence for this framework and discuss its implications for hypothesis testing, study design, and quantitative review.
Article
Traditionally, the measure used to evaluate the impact of an educational programme on student outcomes, and the extent to which students change, is a comparison of the student's pre-test scores with his/her post-test scores. However, this method of evaluating change may be problematic due to the confounding factor of response shift bias when student self-reports of change are used. Response shift bias occurs when the student's internal frame of reference for the construct being measured, for example research ability or critical thinking, changes between the pre-test and the post-test due to the influence of the educational programme. To control for response shift bias, the retrospective pre-test method was used to evaluate the outcomes achieved by students completing a research module at master's level. The retrospective pre-test method differs from the traditional pre-test–post-test design in that both post-test and pre-test perceptions of respondents are collected at the same time. The findings indicated that response shift bias was evident: the programme had a significantly greater impact on outcomes than was identified using the traditional pre-test–post-test design, leading to the conclusion that students may overestimate their ability at the commencement of an educational programme. The retrospective pre-test design is not a replacement for traditional pre-test–post-test measures but may be a useful adjunct in the evaluation of the impact of educational programmes on student outcomes.
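A minimal sketch of the two designs described above, using hypothetical scores rather than the study's data: the traditional comparison pairs post-test scores with pre-test scores gathered before the programme, while the retrospective design pairs them with "then" ratings collected at post-test time.

```python
# Minimal sketch (hypothetical data): traditional pre/post comparison versus
# a retrospective ("then") pre-test collected at post-test time.
import numpy as np
from scipy import stats

post      = np.array([7.5, 8.0, 6.5, 7.0, 8.5, 7.8])
pre       = np.array([6.8, 7.5, 6.0, 6.9, 8.0, 7.6])  # traditional pre-test
retro_pre = np.array([5.5, 6.0, 5.0, 5.8, 6.5, 6.2])  # retrospective pre-test

t_trad,  p_trad  = stats.ttest_rel(post, pre)
t_retro, p_retro = stats.ttest_rel(post, retro_pre)

print(f"traditional change:   {np.mean(post - pre):.2f} (p = {p_trad:.3f})")
print(f"retrospective change: {np.mean(post - retro_pre):.2f} (p = {p_retro:.3f})")
```

If response shift bias is present, the retrospective comparison shows the larger change, as the abstract describes.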
Chapter
Utilization-focused evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. This is consistent with standards developed by the Joint Committee on Standards for Evaluation and adopted by the American Evaluation Association that evaluations should be judged by their utility, feasibility, propriety, and accuracy. (See chapter on standards and principles for evaluations.)
Article
An individual or organization that sets out to implement an innovation (e.g., a new technology, program, or policy) generally requires support. In the Interactive Systems Framework for Dissemination and Implementation, a Support System should work with Delivery Systems (national, state and/or local entities such as health and human service organizations, community-based organizations, schools) to enhance their capacity for quality implementation of innovations. The literature on the Support System has been under-researched and under-developed. This article begins to conceptualize theory, research, and action for an evidence-based system for innovation support (EBSIS). EBSIS describes key priorities for strengthening the science and practice of support. The major goal of EBSIS is to enhance the research and practice of support in order to build capacity in the Delivery System for implementing innovations with quality, and thereby, help the Delivery System achieve outcomes. EBSIS is guided by a logic model that includes four key support components: tools, training, technical assistance, and quality assurance/quality improvement. EBSIS uses the Getting To Outcomes approach to accountability to aid the identification and synthesis of concepts, tools, and evidence for support. We conclude with some discussion of the current status of EBSIS and possible next steps, including the development of collaborative researcher-practitioner-funder-consumer partnerships to accelerate accumulation of knowledge on the Support System.
Article
Enterprise resource planning (ERP) systems and other complex information systems represent critical organizational resources. For such systems, firms typically use consultants to aid in the implementation process. Client firms expect consultants to transfer their implementation knowledge to their employees so that they can contribute to successful implementations and learn to maintain the systems independent of the consultants. This study examines the antecedents of knowledge transfer in the context of such an interfirm complex information systems implementation environment. Drawing from the knowledge transfer, information systems, and communication literatures, an integrated theoretical model is developed that posits that knowledge transfer is influenced by knowledge-related, motivational, and communication-related factors. Data were collected from consultant-and-client matched-pair samples from 96 ERP implementation projects. Unlike most prior studies, a behavioral measure of knowledge transfer that incorporates the application of knowledge was used. The analysis suggests that all three groups of factors influence knowledge transfer, and provides support for 9 of the 13 hypotheses. The analysis also confirms two mediating relationships. These results (1) adapt prior research, primarily done in non-IS contexts, to the ERP implementation context, (2) enhance prior findings by confirming the significance of an antecedent that has previously shown mixed results, and (3) incorporate new IS-related constructs and measures in developing an integrated model that should be broadly applicable to the interfirm IS implementation context and other IS situations. Managerial and research implications are discussed.
Article
A comprehensive assessment of organizational functioning and readiness for change (ORC) was developed based on a conceptual model and previous findings on transferring research to practice. It focuses on motivation and personality attributes of program leaders and staff, institutional resources, and organizational climate as an important first step in understanding organizational factors related to implementing new technologies into a program. This article describes the rationale and structure of the ORC and shows it has acceptable psychometric properties. Results of surveys of over 500 treatment personnel from more than 100 treatment units support its construct validity on the basis of agreement between management and staff on several ORC dimensions, relationships between staff organizational climate dimensions and patient engagement in treatment, and associations of agency resources and climate with organizational stability. Overall, these results indicate the ORC can contribute to the study of organizational change and technology transfer by identifying functional barriers involved.
Article
Research has shown that prevention programming can improve community health when implemented well. There are examples of successful prevention in local communities; however, many communities continue to face significant challenges, demonstrating a gap between science and practice. Common strategies within the United States to address this gap are available (e.g., trainings) but lack evidence of their outcomes. Building community capacity to implement high quality prevention can help communities achieve positive health outcomes, thereby narrowing the gap. While there is ample research on the efficacy of evidence-based programs, there is little on how to improve community capacity to improve prevention quality. In order to narrow the gap, a new model of research, one based in Community Science, is suggested that improves the latest theoretical understanding of community capacity and evaluates technologies designed to enhance it. In this article, we describe this model and suggest a research agenda that can lead to improved outcomes at the local level.
Article
Within the past 2 decades, community capacity building and community empowerment have emerged as key strategies for reducing health disparities and promoting public health. As with other strategies and best practices, these concepts have been brought to indigenous (American Indian and Alaska Native) communities primarily by mainstream researchers and practitioners. Mainstream models and their resultant programs, however, often have limited application in meeting the needs and realities of indigenous populations. Tribes are increasingly taking control of their local health care services. It is time for indigenous people not only to develop tribal programs but also to define and integrate the underlying theoretical and cultural frameworks for public health application.
Article
Measuring progress toward systems change, sustainable efforts that address root causes of an issue by changing policies and practices, is a difficult task for communities, evaluators, and foundations. Tracking and documenting changes in resources, power, policy, sustainable funding, structured relationships and roles, and underlying values require multi-level analyses. Systems change analysts must consider at least four "strata" at once: (1) events and trends, (2) patterns of interaction, (3) context and cultural or social models, and (4) the systems themselves. In this paper we provide a brief overview of systems change; a discussion of collaboratives as one "engine" of social change; a discussion of benchmarks and indicators of collaboratives focused on systems change; and suggestions for further research. The analysis draws upon several analytic frameworks described in the literature. We illustrate these concepts with examples from six systems change initiatives funded by The California Endowment. The need for further research is outlined.
Article
A major challenge in the dissemination of evidence-based family interventions (EBFIs) designed to reduce youth substance use and other problem behaviors is effective and sustainable community-based recruitment. This understudied topic is addressed by a preliminary study of 14 community-university partnership teams randomly assigned to an intervention condition in which teams attempted sustained implementation of EBFIs with two cohorts of middle school families. This report describes attendance rates of recruited families maintained over time and across both cohorts, along with exploratory analyses of factors associated with those rates. When compared with community-based recruitment rates in the literature, particularly for multisession interventions, relatively high rates were observed; they averaged 17% across cohorts. Community team functioning (e.g., production of quality team promotional materials) and technical assistance (TA) variables (e.g., effective collaboration with TA, frequency of TA requests) were associated with higher recruitment rates, even after controlling for community and school district contextual influences. Results support the community-university partnership model for recruitment that was implemented in the study.
Article
Communities are increasingly being required by state and federal funders to achieve outcomes and be accountable, yet are often not provided the guidance or the tools needed to successfully meet this challenge. To improve the likelihood of achieving positive outcomes, the Getting To Outcomes (GTO) intervention (manual, training, technical assistance) is designed to provide the necessary guidance and tools, tailored to community needs, in order to build individual capacity and program performance. GTO is an example of a Prevention Support System intervention, which as conceptualized by the Interactive Systems Framework, plays a key role in bridging the gap between prevention science (Prevention Synthesis and Translation System) and prevention practice (Prevention Delivery System). We evaluated the impact of GTO on individual capacity and program performance using survey- and interview-based methods. We tracked the implementation of GTO and gathered user feedback about its utility and acceptability. The evaluation of GTO suggests that it can build individual capacity and program performance and as such demonstrates that the Prevention Support System can successfully fulfill its intended role. Lessons learned from the implementation of GTO relevant to illuminating the framework are discussed.
Article
Capacity is a complex construct that lacks definitional clarity. Little has been done to define capacity, explicate components of capacity, or explore the development of capacity in prevention. This article represents an attempt to operationalize capacity and distinguish among types and levels of capacity as they relate to dissemination and implementation through the use of a taxonomy of capacity. The development of the taxonomy was informed by the capacity literature from two divergent models in the field: research-to-practice (RTP) models and community-centered (CC) models. While these models differ in perspective and focus, both emphasize the importance of capacity to the dissemination and sustainability of prevention innovations. Based on the review of the literature, the taxonomy differentiates the concepts of capacity among two dimensions: level (individual, organizational, and community levels) and type (general capacity and innovation-specific capacity). The proposed taxonomy can aid in understanding the concept of capacity and developing methods to support the implementation and sustainability of prevention efforts in novel settings.
Article
This study examined the impact of on-site and off-site technical assistance (TA) dosage on the functioning of Communities That Care prevention boards in Pennsylvania. Data on board functioning were collected over three years from board members and TA providers. Results of path models indicated little overall impact of TA dosage on board functioning in the subsequent year. However, on-site TA dosage did appear to influence board functioning for younger boards and for boards that were relatively better functioning. In addition, the stability of board functioning and off-site TA was moderate to strong, the stability of on-site TA dosage was low, and poorly functioning sites did not receive more TA in the following year. EDITORS' STRATEGIC IMPLICATIONS: This paper is one of the first quantitative examinations of the impact of TA on community-based prevention or health promotion coalitions. The authors provide a number of implications for further study with respect to TA. Thus, it should be valuable to researchers and practitioners involved in the development and implementation of such community-based efforts.
Chapter
Chapter 9 discusses organizational change and development, the procedures and methods intended to change the character of an organization and improve its performance, and how change efforts may be directed at selected groups, such as executive teams, certain units, locations, or the entire organization. It covers Organizational Development (OD), Process Consultation (PC), teams and team building, survey feedback and action research, externally imposed change, mergers and acquisitions, and planning and managing change.
Article
This study used longitudinal data from 307 mothers with firstborn infants participating in a home-visitation, child-abuse prevention program. A self-report measure of specific constructs the program hoped to affect showed that the retrospective pretest methodology produced a more legitimate assessment of program outcomes than did the traditional pretest-posttest methodology. Results showed that when response shift bias was present, traditional pretest-posttest comparisons resulted in an underestimation of program effects that could easily be avoided by the retrospective pretest methodology. With demands for documenting program outcomes increasing, retrospective pretest designs are shown to be a simple, convenient, and expeditious method for assessing program effects in responsive interventions. The limits of retrospective pretests, and methods for strengthening their use, are discussed.
Article
The preparation of this paper was supported, in part, by the Southeast Regional Resource Center (SERRC) and the Data Accountability Center (DAC) and through a grant from the U.S. Department of Education, Office of Special Education Programs, to Westat (No. PR#H373Y070002). The contents of this paper do not necessarily represent the policy of the Department of Education or any other federal agency, and you should not assume endorsement by the Federal Government (EDGAR 75.620(b)). The authors are solely responsible for all views expressed.
Article
Although community coalitions are an increasingly popular mechanism for attempting to change community-wide health, the empirical evidence has been mixed at best. Technical Assistance (TA) efforts have emerged in greater scale in hopes of improving both programming quality as well as the coalition structures supporting such programs. However, this commitment to TA interventions has outstripped our knowledge of optimal ways to deliver such assistance, and its limitations. This study takes advantage of results from a state-wide technical assistance project that generated longitudinal data on 41 health-oriented coalitions. The following questions were addressed: What are the circumstances under which coalitions will utilize available assistance? What are the effects of technical assistance on intermediate community outcomes? The results suggested that coalitions with greater initial “capacity” used more TA. Coalitions with low utilization mentioned difficulty in identifying their TA needs as the salient reason for not pursuing these resources. Over time, there were significant positive changes in coalition effectiveness as perceived by key informants, but these were not influenced by amount of TA.
Article
We tested for inflationary bias introduced through retrospective pretests by analyzing traditional pretest, retrospective pretest, and posttest evaluation data collected on a first-line supervisory leadership training program, involving 196 supervisors and their subordinates, across 17 organizational settings. Retrospective pretest ratings by both trained (supervisors) and untrained (subordinates) respondents were significantly lower than traditional pretest ratings, resulting in substantially inflated training effect sizes when posttests were compared with retrospective pretests rather than with traditional pretests. Further analysis revealed evidence of both respondents' application of an implicit theory of change (i.e., assumption that posttraining scores should generally be higher than pretraining scores) and a tendency to rate their own improvement as greater than that of others. Implications for program evaluation are discussed.
Article
More than a decade has passed since a conceptual framework was introduced to guide public health services and systems research (PHSSR) and elucidate the relationships associated with system performance. Since then, research has primarily focused on performance, standards, and key processes, with less emphasis on identification of measures or methods. Capacity lies at one end of the conceptual framework, although little emphasis has been placed on measuring and defining "capacity" of the public health system. This is striking, given organizational capacity is a critical determinant of performance and is necessary for understanding systematic effectiveness, sustainability, or generalizability. As a nascent field, PHSSR needs to develop a definition of organizational capacity and elucidate its relationship within a research framework. Evidence must be developed on the temporal and causal relationships between capacity, process/performance, and outcomes. The purpose of this article was to review research frameworks and capacity measures in various disciplines to expand the existing PHSSR conceptual framework.
Article
The value of multiple informant methodology for improving the validity in determining organizational properties has been increasingly recognized. However, the majority of empirical research still relies on single (key) informants. This is partly due to the lack of comprehensive methodological narratives and precise recommendations on the application of this important methodology. Therefore, the authors have developed a critical review and derived clear recommendations for the key challenges that researchers face in using multiple informants: (1) Which and how many informants should be considered? (2) How should the consensus among the informants be judged? (3) How are multiple responses combined into a single, organizational response to conduct further data analyses?
Article
Federal, state, and local governments spend substantial resources on training child welfare staff. Moreover, enhanced training is often proposed as a core solution to many problems facing public child welfare and other human service agencies. In this paper we conceptualize training as an element of the policy implementation process. We use data from a multiple case study evaluation of nine federally-funded training projects to examine training activity within a policy implementation framework. Findings indicate federal, state, county and organizational contexts were important in successful implementation; the projects were, for the most part, successfully implemented; training projects lacked explicit causal theory to link training activities to training outcomes; and elements of both top-down and bottom-up implementation frameworks were identified. Conclusions focus on the utility of training for enhancing policy implementation, as well as the need for greater theory development in this area.
Article
'Capacity building' is the objective of many development programmes and a component of most others. However, satisfactory definitions continue to elude us, and it is widely suspected of being too broad a concept to be useful. Too often it becomes merely a euphemism referring to little more than training. This paper argues that it is more important to address systemic capacity building, identifying a pyramid of nine separate but interdependent components. These form a four-tier hierarchy of capacity building needs: (1) structures, systems and roles, (2) staff and facilities, (3) skills, and (4) tools. Emphasizing systemic capacity building would improve diagnosis of sectoral shortcomings in specific locations, improve project/programme design and monitoring, and lead to more effective use of resources. Based on extensive action research in 25 States, experience from India is presented to illustrate how the concept of the capacity building pyramid has been put to practical use.
Article
To use multivariate regression methods to analyze simultaneously data obtained from multiple respondents or data sources (informants) at health centers. Surveys of executive directors, medical directors, and providers from 65 community health centers (176 informants) who participated in an evaluation of the Health Disparities Collaboratives. Cross-sectional survey of staff at the health centers during 2003-2004. In order to illustrate this method, we analyze the association between informants' assessments of the culture of the center and participation in the collaborative, and the association between computer availability and the effort made by management to improve the quality of the care and services at their center. Multivariate regression models are used to pool information across informants while accounting for informant-specific effects and retaining informants in the analysis even if the data from some of them are missing. The results are compared with those obtained by traditional methods that use data from a single informant or average over informants' ratings. In both the Collaborative participation and quality improvement efforts analyses, the multivariate regression multiple informants' analysis found significant effects and differences between informants that traditional methods failed to find. Participating centers emphasized developmental (entrepreneurship, innovation, risk-taking) and rational culture. The effect of hierarchical culture (stability and bureaucracy) on participation depended on the informant; executive directors and medical providers were the most discrepant. In centers that participated in the Collaborative, the availability of computers was positively associated with the effort that management made toward improving quality. The multiple informants model provided the most precise estimates and alerts users to differential effects across informants. Because different informants may have different insights or experiences, it is important that differences among informants be measured and ultimately understood by health services researchers.
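One rough way to implement the kind of pooling described above is a mixed-effects regression with a random intercept per center and informant role entered as a fixed effect, so informant differences are estimated rather than averaged away. This is a sketch under assumed variable names and invented data, not the authors' exact model.

```python
# Rough sketch (hypothetical data; not the study's model): pooling reports from
# multiple informants per center while retaining informant-specific effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "center_id":      [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "informant_role": ["exec", "medical", "provider"] * 6,
    "culture_rating": [3.2, 2.8, 3.0, 4.1, 3.9, 3.7, 2.5, 2.9, 2.7,
                       3.6, 3.3, 3.4, 2.2, 2.6, 2.4, 3.8, 3.5, 3.9],
    "participation":  [1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

# Random intercept for each center; informant role as a fixed effect so that
# systematic differences between informant types can be examined.
model = smf.mixedlm("culture_rating ~ participation + C(informant_role)",
                    data=df, groups=df["center_id"]).fit()
print(model.summary())
```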
Article
Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.
Barbee, A. (2013). The evolution of resource and training centers. In A. Lieberman & K. Nelson (Eds.), Women and Children First: The Contribution of the Children's Bureau to Social Work Education (pp. 135-154). Council on Social Work Education.
Cahalane, H. Organizational enhancement and capacity building: The evolution of Pennsylvania's child welfare resource center. Training and Development in Human Services.
Century, J. R. (1999, April 23). Determining capacity within systemic educational reform [Paper presented as part of the symposium, Beyond Tradition: The Realm of Systemic Educational Reform Evaluation]. Annual Meeting of the American Educational Research Association, Montreal, Canada. http://eric.ed.gov/?id=ED434162
Harsh, S. (2010). Gaining perspective on a complex task: A multidimensional approach to capacity building. Appalachia Regional Comprehensive Center at Edvantia.
Harsh, S. (2013). Capacity-building series: Innovation: The new capacity for continuous improvement. ICF International.
Motamed, C., Beadle de Palomo, F., Pritchett, J., & Wahlstrom, J. (2005). HIV prevention capacity building: A framework for strengthening and sustaining HIV prevention programs. Center on AIDS & Community Health.
Barnes, E. (2010). Model Court Protocol: Leadership, Innovation, and Accountability. Washington, DC: National Council of Juvenile and Family Court Judges.