Because no-difference research has been relatively unexplored by evaluators, a series of empirical studies was conducted. In this chapter distinguishing characteristics of no-difference research are examined, its acceptance by the research community is probed, and possible strategies for conducting no-difference studies are discussed. The centrality of no-difference findings within the experimental and non-experimental paradigms is also discussed.
Historical positions on the nature of evaluation are briefly characterized. A systematic approach then addresses the questions that clients need answered but which could not be answered by most of these models. The answers add up to a more general account of program evaluation and a more general view of evaluation across all disciplines.
The planning process used for developing an evaluation plan can clarify the questions to be asked, suggest the appropriate methodologies to use, and increase the value and use of the evaluation findings.
A model is developed for assessing the temporary and permanent impact of the Family Law Act, and the model's application and construct validity are examined.
The procedures used to collect information for the Directory of Evaluation Training Programs are described, and the results of the survey are discussed. Tables list the programs identified in the United States, Canada, and Australia.
The procedures used in a successful mailed evaluation questionnaire effort are described in this chapter and guidance is given to those who are unfamiliar with the methodology.
Although internal evaluation has many advantages, it also has disadvantages, the most serious of which is potential misuse by managers who want evaluations to lend apparently objective support to decisions that have already been made.
The authors discuss four principles for taking uncertainty into account when estimating effects. They discuss ways to implement the principles, ways the principles are violated in practice, and implications for the use of multiple methods.
Evaluation specialists and auditors are working side by side to improve program performance. Auditors concentrate on comparing a condition against a criterion; evaluators concentrate on what has occurred, estimating what would have occurred without the program and comparing the two situations to determine program effects.
Two types of program theory tools—conceptual and action heuristics—can help evaluators improve programs by expanding conceptions of problems and solutions and by narrowing the focus of decision alternatives for action.
There are numerous micro-level methods decisions associated with planning an impact evaluation. Quantitative synthesis methods can be used to construct an actuarial data base for establishing the likelihood of achieving desired sample sizes, statistical power, and measurement characteristics. Improvements in both primary and meta-analysis studies will be necessary, however, to realize the full potential of this approach.
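One of the micro-level planning decisions the chapter refers to is choosing a sample size large enough to achieve adequate statistical power. The sketch below is a minimal, hypothetical illustration of that calculation for a two-group comparison of means, using the standard normal approximation; the effect size of d = 0.4 is an assumed meta-analytic estimate, not a figure from the chapter.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Inverse of the standard normal CDF, by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Required sample size per arm for detecting a standardized
    mean difference (effect size d) with a two-sided test."""
    z_alpha = norm_ppf(1.0 - alpha / 2.0)
    z_beta = norm_ppf(power)
    return math.ceil(2.0 * ((z_alpha + z_beta) / effect_size) ** 2)

# With an assumed synthesis-based effect size of d = 0.4,
# 80% power at alpha = .05 requires about 99 participants per group.
print(n_per_group(0.4))  # → 99
```

In practice, an actuarial database built from prior studies would supply the distribution of plausible effect sizes, attrition rates, and measurement reliabilities that feed such a calculation.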
State legislatures began recognizing evaluation as an important aspect of their oversight responsibilities in the late sixties. Since that time, evaluation has become an integral part of the legislative process. This chapter examines how the Illinois Office of the Auditor General has addressed legislative evaluation, as well as the challenges and dilemmas evaluators must confront to enhance their future effectiveness.
State regulatory agencies responsible for water quality typically have done little to develop policy or even provide information about privatization as a means for local authorities to develop wastewater treatment projects. Absent federal mandates, privatization takes place “in the shadow of positive law,” and state agencies engage in it only informally and reactively, leaving private firms and local officials to carry out the privatization option.
Effective administrators function as applied theorists by developing and using generalizable knowledge about programs related to their responsibilities as managers and leaders in organizational settings.
Currently, no method of classifying mental health services is accepted by all service sectors. Because classification is central to understanding the service delivery system for children and families, standardization of methods must be achieved.
Recent developments in the field of children's mental health, particularly the development of community-based systems of care, are being used to improve the availability and effectiveness of services for children. This chapter highlights unique problems and challenges to evaluators studying these systems of care.
A client-level focus is needed when examining the effectiveness of mental health interventions. Three broad domains of outcomes (psychopathology, social competence, and satisfaction) are explored from the vantage points of client, professional, and societal stakeholders.
Decisions about the appropriate balance between centralized and decentralized staffing and responsibilities in multisite evaluations should be based on scientific, administrative, and political considerations.
Reflection on experiences in Brazil and elsewhere provides insight into the advisory process in international technical cooperation projects and alternative roles for evaluation advisers.
The American Evaluation Association's succinct Guiding Principles for Evaluators quietly establish new boundaries for addressing ethical problems of the profession.
Politicization and a fiscal control emphasis in a state government agency paralyzed the operation of a previously functional and well-staffed evaluation division, as described in this case study.
In complex organizations there often are a number of programs that could benefit from evaluation. But which should be selected for study when evaluation resources are limited? This chapter presents a framework for identifying the programs whose evaluation offers the highest potential payoff to management. Criteria for assessing evaluation need are presented, along with standards for prioritizing programs to be evaluated.
To prevent HIV infection we must influence risky sexual and drug-using practices, some of the most basic yet complex of human behaviors. No single discipline or one-shot intervention is going to solve this problem; rather, a sustained, multidimensional, interdisciplinary effort with repeat exposures to a variety of well-planned and targeted strategies is essential.
This chapter presents strategies for developing, implementing, and evaluating effective AIDS prevention programs within the confines of a partnership between an evaluator and a target community.
Three theoretical approaches to disease prevention and health promotion are discussed and an example of an AIDS prevention curriculum that used theories to formulate the intervention and to guide the evaluation research is presented.
In this chapter, the authors question current estimates of the AIDS incubation period drawn from convenience samples, citing potential difficulties with generalizability and selection bias.
Ethnographic methods complement standard treatment or control group studies by providing contextual and culturally sensitive information to administrators and service providers in AIDS prevention programs.
A variety of motivations exist for constructing mathematical models of the AIDS epidemic, including forecasting, scientific estimation of disease parameters, and policy analysis. This chapter discusses several different modeling approaches and their attendant uses, stressing the results most relevant to policy analysts and program evaluators.
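The compartmental models surveyed in the chapter share a common skeleton. As a minimal sketch (not a model of HIV specifically, whose long incubation period requires more elaborate structures), the classic SIR model below illustrates the forecasting use the chapter mentions; the parameter values are purely illustrative assumptions.

```python
def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Euler integration of the classic SIR compartmental model.

    s0, i0, r0 -- initial susceptible, infected, recovered counts
    beta       -- transmission rate per day
    gamma      -- recovery/removal rate per day
    Returns final (S, I, R) and the peak infected count.
    """
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    peak_i = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        new_removals = gamma * i * dt
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# Illustrative run: population of 10,000, basic reproduction
# number R0 = beta/gamma = 3, simulated for 200 days.
s, i, r, peak = simulate_sir(9990, 10, 0, beta=0.3, gamma=0.1, days=200)
```

Forecasting, parameter estimation, and policy analysis all reduce to interrogating such a model in different ways: projecting the trajectory forward, fitting beta and gamma to surveillance data, or comparing trajectories under alternative intervention assumptions.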
Designing an evaluation of a federally funded, multisite demonstration program for homeless persons with alcohol and other drug problems presents a number of methodological and practical challenges. Despite these challenges, a great deal can be learned about human service programs for homeless persons from multisite demonstration programs.
New evaluation methods are being developed by adapting techniques from geography, philosophy, journalism, economics, film criticism, photography, and other areas.
The author views the main purpose of evaluation training programs as preparing practitioners. She suggests that evaluation training programs could consider the model provided by professional schools.
The sociotechnical framework of organizational activity and relations should be explored as a preferable alternative to the rational framework as a context from which to design and conduct evaluations in business and industry.
The dilemma inherent in all formal standards results from the tension between maintaining quality control, on the one hand, and constraining creativity, on the other.
Evaluation is an information-gathering process, and it can unfold in much the same way as a detective story. Just as Sherlock Holmes redirects his crime-solving activities as new clues arise, evaluators should be committed to adapting their evaluation activities as new information is obtained.
As the traditional keeper of a society's records, the data archivist has the difficult responsibility of protecting the individual's right to privacy and at the same time valuing society's need for knowledge.
A representative sample of studies drawn from the published program evaluation literature is systematically examined. Weak designs, low statistical power, ad hoc measurement, and neglect of treatment implementation and program theory characterize the state of the art in program evaluation. Program evaluation research under the experimental paradigm requires better planning and execution.
Limited understanding of the purposes of program evaluation can lead to misuse; evaluators must educate clients and stakeholders so that evaluation designs are refined to produce useful information.