Article

Prose and Cons about Goal-Free Evaluation

Abstract

TRACES are what evaluators left behind—discoveries, records, tracks—which made marks on the profession of program evaluation. Published here are excerpts from our past (e.g., articles, passages from books, speeches) that show where we have been or affected where we are g(r)o(w)ing. Suggestions for inclusion should be sent to the Editor, along with rationale for their import. Photocopies of the original printed versions are preferred with full bibliographic information. Copying rights will be secured by the Editor.

... Nevertheless, proposals have emerged from other theorists that would help identify other ways of conceiving policy evaluation, ways that would be useful for state interventions framed within the decolonial project. To begin with, there are theorists who, from a responsive approach (see Stake, 1975) or a goal-free approach (see Scriven, 1991), do not share this results-based orientation of evaluation. Stake (1975) emphasizes each evaluation as a case involving actors with different interests, opinions and values. ...
... For his part, Scriven (1991) proposes that evaluation should not be limited to verifying whether the objectives of public interventions have been met, since it is a process that involves different interests and actors. ...
... In this regard, Youker (2013) acknowledges that the goal-free evaluation proposed by Scriven (1991) has the potential to be used in evaluations of social work programs. Empirically, however, this does not happen: studies addressing this approach are still incipient, and the state shows little interest in incorporating alternatives to the dominant objectivist evaluation methodologies. ...
Book
Full-text available
This work starts from the need to seek new forms of knowledge in public policy, specifically in the evaluation of policies and programs. From the outset, the Eurocentric developmentalist approach imposed itself as the only path for societies to achieve development and thereby complete a Westernized civilizing project. However, sociocultural differences around the world showed that the model imported from the United States and Europe was insufficient for Asia, Africa, Latin America, and the Caribbean.
Chapter
Full-text available
Decolonial thought is theoretically and politically invested in the idea of confronting and delinking societies from the colonial matrix of power, understood as heterarchies of multiple racial, ethnic, sexual, epistemic, economic and gender relations that the processes of independence left intact (Castro and Grosfoguel, 2007, p. 17). In this text, the State, understood as a political project that seeks to organize and order human life, has been identified as historically central to the generation of the practices of modernity (Dube and Banerjee, 2019, p. 23, citing Castro). Coloniality has defined these practices as a support for the colonial matrix of power, which can be observed in three concrete spheres in which coloniality operates: power, being and knowledge. Rather than resolving the ambitious question of identifying the structure of the state practices that sustain each dimension, this essay starts from the general question of the role the State plays in the coloniality of knowledge as mediated by the public policy approach.
... Program evaluation offers many models; in line with the purpose of this study, the goal-free evaluation (GFE) model developed by Scriven (1991) is used, which is interpreted as a reality-based or independent evaluation (Youker, Ingraham, & Bayer, 2014). Scriven further specifies that in the GFE model the stated goals are not taken as the essential starting point of the evaluation: objectives do not have to be adopted, but are themselves examined and evaluated (Scriven, 1991). GFE models are also often referred to as effects-model evaluations, or effects models, because they involve a wider scope. ...
... Whether the process is overt or unobtrusive, the GFE model seeks to collect data in order to build program descriptions and identify the processes actually at work. Again, Scriven specifies that in the GFE model the stated goal is not the essential starting place for evaluating: the goal does not have to be adopted, but is itself examined and evaluated (Scriven, 1991). The model can be seen in Figure 1. ...
Article
Full-text available
This study aims to determine the impact of the beauty training program at PPKD of East Jakarta on its students. The impact is examined against three criteria: positive impacts consistent with the program objectives, positive impacts outside the program objectives (side effects), and negative impacts outside the program objectives. The research method used is an evaluative study applying the Goal Free Evaluation model. The research subjects consisted of managers, instructors, and beauty training participants at PPKD of East Jakarta. Data were collected using observation, interviews, and documentation. The results showed that (1) the positive impact in line with the program objectives was felt by almost all training participants, who increased their knowledge and skills in the field of beauty and were ready to enter the workforce; (2) positive impacts outside the program objectives included freelance work as a makeup artist, greater confidence at work, and the courage to open a business in the field of beauty; and (3) negative impacts outside the program objectives were felt by a small number of participants who were less serious and less focused during training, could not absorb the material taught, and were therefore not ready to enter the workforce.
... Several evaluation models exist for investigating unintended consequences and feedback loops, including goal-free and systems evaluation (Jabeen, 2016; Scriven, 1991; Williams & Hummelbrunner, 2010). However, further methodological guidance is still needed for assessing unintended program outcomes (Jabeen, 2016; Morrell, 2005). ...
... Although this paper makes a distinction between intended and unintended outcomes, a GT approach to REM within a goal-free evaluation framework can avoid making a distinction between intended outcomes (goals) and side-effects. In the task of evaluating merit, worth and significance, "what counts is not the specific intentions, but the results" (Irvine, 1979, p. 97). Therefore, if the evaluator is not biased by what was supposed to be achieved, s/he is free to make judgements about whether what was actually achieved was worthwhile (Irvine, 1979; Scriven, 1991; Youker, 2005). This puts evaluators in the role of evaluating program effects, whether they were intended or not, and avoids the pejorative language of "unintended" or "side-effects" that may contribute to a fear of investigating such outcomes (Jabeen, 2016; Scriven, 1991). ...
Article
Several evaluation models exist for investigating unintended outcomes, including goal-free and systems evaluation. Yet methods for collecting and analyzing data on unintended outcomes remain under-utilized. Ripple Effects Mapping (REM) is a promising qualitative evaluation method with a wide range of program planning and evaluation applications. In situations where program results are likely to occur over time within complex settings, this method is useful for uncovering both intended and unintended outcomes. REM applies an Appreciative Inquiry facilitation technique to engage stakeholders in visually mapping sequences of program outcomes. Although it has been used to evaluate community development and health promotion initiatives, further methodological guidance for applying REM is still needed. The purpose of this paper is to contribute to the methodological development of evaluating unintended outcomes and extend the foundations of REM by describing steps for integrating it with grounded theory.
... The safety actions of frontline supervisors can be evaluated by the project safety personnel or their immediate supervisor. Evaluation is the systematic determination of the quality of performance through the determination of merit, worth, and significance [42]. The evaluation's primary goal is to improve organizational learning, eventually creating a learning organization that acquires, creates, evaluates, and disseminates knowledge [43]. ...
... Step 4: Synthesis. Synthesis combines a set of ratings on several components into an overall rating [42]. Within this study context, the collected data can be synthesized for each action, each categorization group (i.e., C, K.L., and P.E.), and the overall rating. ...
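The synthesis step described above can be illustrated with a short, hypothetical Python sketch: per-action rubric ratings are rolled up into scores for each categorization group (C, KL, PE) and an overall rating. The action names, example scores, and the simple weighted-average rule are assumptions made for illustration, not the rubric published in the cited study.

from collections import defaultdict

def synthesize(ratings, weights=None):
    """Combine per-action ratings (0-100) into group and overall scores.

    ratings: dict mapping action name -> (group label, rating)
    weights: optional dict mapping action name -> relative importance
    """
    weights = weights or {action: 1.0 for action in ratings}

    group_totals = defaultdict(float)
    group_weights = defaultdict(float)
    for action, (group, rating) in ratings.items():
        w = weights[action]
        group_totals[group] += w * rating
        group_weights[group] += w

    # Weighted average per categorization group, then across all actions.
    group_scores = {g: group_totals[g] / group_weights[g] for g in group_totals}
    overall = sum(group_totals.values()) / sum(group_weights.values())
    return group_scores, overall

# Example with made-up data for three hypothetical supervisor actions:
example = {
    "toolbox talk held": ("C", 80),
    "hazard walkdown": ("KL", 65),
    "near-miss follow-up": ("PE", 90),
}
groups, overall = synthesize(example)
print(groups, round(overall, 1))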
Article
Full-text available
Construction safety measurement metrics indirectly assess safety performance. Safety performance issues may be overlooked without a direct assessment technique, leading to occupational incidents and near misses being the only way to identify safety shortcomings. Managing occupational safety in this manner is unwise. This study proposed a new approach to integrate construction safety management into the responsibilities of frontline supervisors. Thus, the effectiveness of safety management efforts can be directly evaluated by monitoring the safety-related actions of frontline supervisors. A robust research methodology was employed to develop a first-of-its-kind safety performance evaluation system. By employing literature and the Delphi method, the research team identified 19 safety actions that could be undertaken by frontline supervisors to improve overall safety performance. Furthermore, two national surveys were used to assign a significant level to each action based on its impact and importance. Additionally, a rubric was created to objectively evaluate the implementation of the 19 identified actions. As a result, an evaluation system was developed to empower frontline supervisors to contribute to site safety. In addition, the presented framework would help improve overall construction safety management by swiftly addressing safety performance issues and safety effort deficiencies. The study provides a roadmap to directly assess safety performance by focusing on the safety-related actions of frontline supervisors. It significantly contributes to the body of knowledge by illustrating the possibility of integrating safety into the daily routines of project stakeholders and promoting collaboration among them. This represents a departure from traditional safety management techniques that primarily rely on safety personnel.
... Goal-free evaluation rejects the notion that assuring programme quality improvement should be the role of evaluators. 30,31 Whereas goal-free evaluators are encouraged to produce actionable recommendations in evaluation reports so that there is the 'potential' for instrumental use, they are not required to design evaluation plans with a view to optimising instrumental utilisation. 30 Furthermore, goal-free evaluators are discouraged from relying upon programme administrators' conceptions of programme goals or programme functioning in determining the scope of an evaluation. ...
... 30 Furthermore, goal-free evaluators are discouraged from relying upon programme administrators' conceptions of programme goals or programme functioning in determining the scope of an evaluation. 30,31 Rather, the evaluator assumes responsibility for determining the scope and boundaries of the evaluation in relation to specified consumer or societal needs. 16,31 may not yet be politically acceptable or actionable. ...
Article
Context Program evaluation is perpetually mandated in health professions education. Correspondingly, there has been an expansion of prescriptive methodological guides about how to engage in various best practices in evaluation. However, what has gained less attention is an examination of the value that different stakeholders seek to gain from program evaluation. Evaluation utilization theory and research can help us understand the diversity in both the driving forces for and the impact of program evaluation. Awareness of the heterogeneity of evaluation utilization priorities has implications for evaluation practices, including both methodological choices and understanding of the impact of program evaluation in our field. Methods In this article, we expound on the concept of evaluation utilization by drawing on evaluation theory and research. Evaluation utilization refers to the application of program evaluation processes and findings to influence thinking and action. Herein, we discuss four different forms of evaluation utilization (instrumental, conceptual, process, and persuasive utilization) as well as the related concept of evaluation misuse. Furthermore, we discuss how the prioritization of different forms of evaluation utilization can influence the scope and impact of evaluation scholarship. Conclusion Program evaluation is a form of inquiry that requires more than the exercise of robust methodological techniques. Rather, it necessitates attention to the, sometimes divergent, priorities of different stakeholder groups. Though there is scant research on evaluation practices in health professions education, evaluation utilization theory can inform critical examination of evaluation practices and impact in our field. Critically, understanding this body of work can help inform those engaged in evaluation about what they are (or should be) prioritizing when they conduct program evaluation, and better align evaluation methodologies with their scholarly, curricular, and administrative intentions. Implications for future research and high-quality, transparent evaluation scholarship are presented.
... The goal-free evaluation model was put forward by Michael Scriven [3]. This model evaluates the actual influence that the program has achieved. ...
... Michael Scriven also developed the formative and summative evaluation model; in this model, evaluation is carried out while the program is being implemented (formative evaluation) and again at the end of program implementation (summative evaluation) [3]. In this model, the evaluator cannot escape from the program's goals. ...
... Furthermore, the outcome-based models for academic program evaluations may only focus on the intended outcome(s) of a program [2,5]. Running concurrently with this, however, is the identification of non-intended outcomes, emphasizing the need to explore "what else is happening" (process) and "what else are the outcomes" in a program, alongside the intended or documented "happenings" and "outcomes". ...
... The behavior criteria are measured by observing learner performance during the process, while the result criteria are based on the end result of the program, observing the benefits to the individual, the institute, and society [29,30]. Most of the outcome-based models for academic program evaluations only focus on the intended outcome(s) of a program [2,5]. However, running concurrently with this is the identification of non-intended outcomes, emphasizing the need to explore "what else is happening" and "what else are the outcomes" in a program, alongside the intended or documented happenings and outcomes. ...
Article
Full-text available
Objectives: This evaluation was undertaken to gauge the alignment between learning outcomes (LOs), activities, and assessments at the course level and, simultaneously, to identify outcomes, activities, and assessments that were not specified in program documents but are nonetheless part of the program; these programmatic processes and outcomes are termed "emergent" in this evaluation. Methods: The evaluation was accomplished by thematic analysis of the collected data using a case study approach in subsequent steps: first, identifying and involving the key stakeholders to enhance utility and credibility; second, engaging the participants, who were the instructors of the program; third, collecting data through program documents and participant interviews; and finally, analyzing the data in six steps using qualitative data analysis software. Results: The course LOs were well aligned with the activities and assessments, and the progressive nature of the program had resulted in emergent components. Conclusions: The evaluation generated a summary report detailing the findings and recommendations, such as that emergent components should be continuously evaluated to maintain alignment among course components, and that formal documentation of emergent LOs, activities, and assessments can further indicate the achievements of the courses. Furthermore, this evaluation has made the program design explicit, which will provide the foundation for future monitoring and impact evaluations of this program.
... On the other hand, the Goal-Free evaluation model, developed by Scriven in the 1970s, differs from the CIPP evaluation model and the Countenance model of evaluation (Stufflebeam, 1983). Goal-free evaluation is an approach whereby an independent evaluator evaluates the outcomes of a program without actually knowing its intended goals (Scriven, 1991). One of its major applications is in consumer-oriented evaluation, whereby the goals are derived from a consumer rather than a producer orientation (House, 2003). ...
... It uses a formative-summative approach but places great emphasis on the summative role of evaluation, in which the judgement of an object is based on the accumulated evidence. In fact, Scriven (1991) holds that summative evaluation is more important than formative evaluation. The Goal-Free model is incorporated in the CIPP model in that the CIPP model considers the attainment of intended goals, which is goal-based (Tyler, 1949), as well as of unintended outcomes, which is goal-free. ...
Conference Paper
Full-text available
The purpose of this paper is to discuss the application of the CIPP (Context, Input, Process and Product) evaluation model as a framework to guide the planning, development and implementation of an integrated STEM (Science, Technology, Engineering, Mathematics) module. A review of the literature is employed to compare the context-relevance of three closely related evaluation models. The findings disclose that the CIPP evaluation model can be utilized to assist the on-going improvement process through its formative role in evaluation and its summative role in judging the outcome of the implementation of the program in order to inform decisions. Besides, this model can be conducted before, during or after the implementation of a program and allows the possibility of conducting a single evaluation or some combination, subject to the needs of the stakeholders or audiences. Various previous studies reveal that the CIPP evaluation model is useful in making decisions about the worth of an existing program as well as in the conception and development process of a program. The application of the CIPP evaluation model as a framework in the planning, development and implementation of the integrated STEM module provides a systematic step-by-step guide to data collection, preventing failure in any phase and thus ensuring the effectiveness of the module. Therefore, the CIPP evaluation model is able to guide needs assessment, planning and development of the module, monitor the process of implementation, and provide feedback and judgement of the module's effectiveness for continuous improvement.
... This study was guided by Michael Scriven's Goal-Free Evaluation Model and the Principle of Selective Retention (Scriven, 1991). Goal-free evaluations can be adapted for use with other evaluation approaches, models, and methods and can also be used for both quantitative and qualitative methods (Youker & Ingraham, 2013). ...
... Scriven's goal-free evaluation model looks at all the possible effects of a programme on its audience in order to further improve the programme (Scriven, 1991). Thus, the suggestions elicited from the students in the focus group could serve as a guide for further improvement of SOA programmes such as Hanep Gulay. ...
Article
Full-text available
This study determined the effect that the school-on-the-air programme, "Hanep Gulay", had on the participants through their retention and adoption of the information. A self-administered retention test and a researcher-administered adoption test were handed out to 27 students in Majayjay, Laguna, eight of whom participated in the focus group discussion. Results of the study showed that the retention and adoption of the information from "Hanep Gulay" were low to moderate. Many students recalled information from the 'Atsarang' Papaya (pickled papaya) episode, as one participant said her own knowledge may have helped her remember the information; a minority recalled information from the Garlic Flakes episode. Many students had a low level of recall, as some participants missed the discussion of the lessons when they arrived late to the watching area. More than half of the students adopted the information from the 'Atsarang' Papaya episode, while a minority adopted the information from the Garlic Flakes and Candied Squash episodes, as the focus group participants considered them useful and worth passing on to others. To improve their retention and adoption of the information, the students suggested the conduct of follow-up visits, continuation of the programme on television, involvement of more popular hosts, speaking at a slower pace, and improving the audio quality of the productions.
... Nor is this to say that the objectives of public policies should no longer serve as a basis for measuring results, or that public policies should do without objectives (Scriven, 1991), since there is a correlation between the implementation and the evaluation of public policies within the policy analysis process. ...
Thesis
Full-text available
This thesis springs from the following research question: how could the diplomatic performance of the Brazilian Foreign Affairs Ministry in the area of technological promotion be evaluated? Based on the assumption that there is no specific method for measuring these types of actions, which are essentially derived from public diplomacy, the main objective of the thesis was to bridge this gap by developing an original evaluation model. The main elements for understanding what should be taken into consideration when assessing the performance of public servants as regards technological promotion abroad were extracted from the Brazilian Ministry of Foreign Affairs' Innovation Diplomacy Program (PDI), which brings together actions to disseminate the image of Brazil as an innovative country abroad. Using the instrumental case study methodology, the annual reports of the program and the diplomatic cables launching the 2017-2022 PDI calls were coded to identify elements of the concept of innovation diplomacy and the program's logical framework approach. Once the purpose of the PDI actions had been understood, the framework for evaluating their performance was developed by investigating the challenges and methods set forth in the reviewed literature, together with international experiences from Australia, Canada, Denmark, the United States, the Netherlands, the United Kingdom, Sweden and Spain's Autonomous Community of Catalonia, with the aim of identifying good practices that could guide evaluations of public diplomacy initiatives. The hierarchy of evaluation questions model was used to verify the feasibility of evaluating the specific PDI case. The result was the development of a model whose data must be collected over a certain period of time ("Δ time") in a set of countries or an individual country ("∧{country(s)}") and which is derived from analyzing the combined differences ("Δ") in the following aspects: (i) opinion polls among qualified audiences; (ii) exports of the technology sector; (iii) investments in the technology sector; (iv) establishment of institutional partnerships; (v) attraction of labor force to the technology sector; and (vi) number of scientific articles and joint patents with counterparts from the country(ies) analyzed. The model is a tool to help the public manager select the data, which must still be subjected to a process of qualitative analysis, as reading quantitative information and metrics in isolation can lead to wrong conclusions.
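Read one way, the thesis model amounts to comparing each of the six indicator families at the start and end of the chosen Δ-time window for the country (or set of countries) under analysis. The Python sketch below is a minimal, hypothetical rendering of that idea; the field names and figures are invented, and the qualitative reading of the resulting deltas is left to the analyst, as the thesis itself stresses.

from dataclasses import dataclass

@dataclass
class IndicatorSnapshot:
    opinion_poll: float          # (i) qualified-audience opinion score
    tech_exports: float          # (ii) exports of the technology sector
    tech_investment: float       # (iii) investments in the technology sector
    partnerships: int            # (iv) institutional partnerships established
    talent_attracted: int        # (v) labor force attracted to the sector
    joint_publications: int      # (vi) joint articles and patents

def deltas(start: IndicatorSnapshot, end: IndicatorSnapshot) -> dict:
    """Return the Δ of each indicator over the chosen Δ-time window."""
    return {
        field: getattr(end, field) - getattr(start, field)
        for field in start.__dataclass_fields__
    }

# Example: changes for one country between two reporting years (made-up data).
t0 = IndicatorSnapshot(3.1, 120.0, 45.0, 4, 200, 18)
t1 = IndicatorSnapshot(3.6, 150.0, 60.0, 7, 260, 25)
print(deltas(t0, t1))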
... Goal-free evaluation has been conducted in program evaluation both by design and by default in the more than 40 years since Scriven (1972) introduced it. (See Table 1.) ...
... This study analyzed the self-perception of undergraduate students from 12 Health Science degrees on how a congress activity aimed at them and tutored by professors helped them to acquire TCs in relation to research. According to Scriven [41], in order to carry out an impartial evaluation based on the observed results, the evaluation must be completed by external experts who have no knowledge of the goal of a program. For this reason, we used the TCs established, prior to the event, by ANECA experts for undergraduate degrees in Health Sciences. ...
Article
Full-text available
Context Several curricular initiatives have been developed to improve the acquisition of research competencies by Health Science students. Objectives To determine how students perceived whether their participation in the XIV National Research Congress for Undergraduate Students of Health Sciences had helped them acquire 36 research-related transferable competencies (TCs) common to Health Science degrees. Methods A survey design (Cronbach's alpha = 0.924), using a self-administered questionnaire, was conducted among undergraduate students who voluntarily participated in the Congress. Data analysis was performed using SPSS 25 and Statgraphics 19. Statistical significance was considered for P < 0.05. Results Eighty-one students from 12 Health Science degree programs responded. Key findings are presented in a structured manner, using a 5-point Likert scale. Twenty-five of the competencies surveyed obtained an average ≥ 4, most notably: "Critically evaluate and know how to use sources of clinical and biomedical information to obtain, organize, interpret, and communicate scientific and health information"; "To be able to formulate hypotheses, collect and critically evaluate information for problem solving, following the scientific method"; "Critical analysis and research"; and "Communicate effectively and clearly, orally and in writing, with other professionals". Significance was found in 15 competencies. The development of the competencies "Teamwork", "Critical reasoning" and "Analysis and synthesis abilities" was considered to be of greater "personal utility" by the respondents. Conclusion Participation in this event contributed to the development of research-related TCs in critical analysis, information management and communication, especially in relation to learning the sources of clinical and biomedical information and to knowing how to formulate hypotheses, following the scientific method, that allow students to solve problems in their professional activity. The experience was significantly influenced by the respondents' year of study, the type of participation in the event and the gender of the students. Limitations and suggestions regarding future research are discussed to encourage further exploration of the topic.
... When analysing the interview data, the analyst (typically not blindfolded) reviews what the different sources are saying, and bit by bit (inductively, iteratively) tries to identify the common elements in their narratives using causal factor labels, such as 'health' and 'amount of exercise'. While respondents are invited to share their experiences and perceptions of change across predetermined well-being domains relevant to the programme's Theory of Change, other important outcomes may emerge from their narratives which were not envisaged by the commissioner; to this extent, the QuIP can be considered a form of goal-free evaluation (Scriven, 1991). Different respondents will, of course, have different experiences and perceptions, and will not always use exactly the same phrases to express their causal narrative. ...
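As an illustration of the analysis step described in the excerpt above, the short Python sketch below aggregates coded cause-effect links from several respondent narratives into simple edge counts, the kind of "common elements" an analyst might look for across differently phrased stories. The factor labels and links are invented examples; this is not the Causal Map application itself nor data from the Ghana study.

from collections import Counter

# Each respondent narrative is coded as (cause label, effect label) pairs.
coded_narratives = [
    [("training", "amount of exercise"), ("amount of exercise", "health")],
    [("training", "income"), ("income", "health")],
    [("peer support", "income"), ("income", "health")],
]

edge_counts = Counter(link for narrative in coded_narratives for link in narrative)

# Print the links cited by the most respondents first: the common causal
# elements the analyst would look for across different phrasings.
for (cause, effect), n in edge_counts.most_common():
    print(f"{cause} -> {effect}: cited {n} time(s)")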
Chapter
Full-text available
What do the intended beneficiaries of international development programmes think about the causal drivers of change in their livelihoods and lives? Do their perceptions match up with the theories of change constructed by organizations trying to support them? This case study looks at an entrepreneurship programme aiming to economically empower rural women smallholders in Ghana. The programme provided a combination of financial services, training and peer support to improve the women's productivity, and purchase and sale options. It was implemented by two Ghanaian savings and credit organizations, Opportunity International Savings and Loans, and Sinapi Aba Savings and Loans, with support from the development organization Opportunity International UK (OIUK). We report on a mid-term qualitative evaluation of the programme that used the Qualitative Impact Protocol (QuIP) to gather stories of change directly from the programme participants. These stories were coded, analysed and visualized using a web application called Causal Map.
... conducted in April 2021. 194 The cited authors thus presuppose, at least implicitly, clear goals, even if these are not the starting point of the evaluation (see also Scriven 1991). On the relationship between the terms impact analysis and evaluation, see also the following section, Different perspectives on the relationship between impact analyses and evaluations. ...
Thesis
Full-text available
Participatory processes have been part of everyday urban development practice for decades, in Germany and internationally. They have been subject to research for just as long. Questions about the effects or success of participation are asked frequently. Effective participation is variously named as a goal of action. Nevertheless, empirical knowledge on the effects of participatory processes in urban development remains rare. Several handbooks, guidelines and quality criteria have been published in recent years; however, relevant impact research is persistently assumed to be in its infancy. Against this background, the work explores obstacles, levers and perspectives for the implementation of impact analyses on participatory processes in urban development. Impact analyses are not clearly defined; the theoretical approach of this work is based on evaluations. Of particular interest here are possibilities of examining the effects of informal and inviting participatory processes in urban development, where those responsible are free in process design. The guiding question is to what extent impact analyses can contribute to a better understanding of the effects and impact mechanisms of participatory processes in urban development. The research approach and method of investigation can be described as groundwork-oriented, exploratory, learning, multi-method and – as far as possible in the context of a dissertation – as inter- and transdisciplinary. First, possible obstacles for the implementation of impact analyses were collected by using a creativity technique and literature work and discussed with experts in a workshop. Based on this, 15 theses on controllable obstacles were developed, which were given to 90 people for weighting in an online survey. The results were compared in groups, differentiated according to the focus of the respondents' professional activities and their expertise. In a further step, suggestions from the survey were evaluated and the 15 identified obstacles were systemically examined for interactions. Research on existing concepts for the implementation of impact analyses, relevant studies and causation models complements the approach. The findings indicate that relevant preparatory work is widely scattered – across scientific disciplines, publication types and decades of dissemination. The online survey and the systemic investigation show that significant obstacles are diverse and intertwined in many ways. Setting incentives and specifications was identified as the only lever to promote impact analyses. In addition, insights on the requirements and limitations of impact analyses could be gained. As a result, their use in participatory processes in urban development appears possible, yet extremely challenging, since specific problems of planning theory and the question of how to succeed in balancing the expense and benefit of impact analyses remain unresolved, even if perspectives could be worked out.
... By engaging external evaluators, the engineering professor hoped to gain critical and impartial opinions from outsiders, especially those with educational research experiences. To avoid conflicts of interest, they agreed upon a goal-free evaluation model (Scriven, 1991); that is, the first and second authors would observe, document, and measure the actual process and outcome of the curriculum project, intended or unintended, without being constrained by the project aims. The first author, who had over 5 years' experience working as an education ethnographer in STEM education by the time the project was initiated, conducted participant observation and led the data collection and analysis. ...
Article
Full-text available
Background The recent discussion of introducing artificial intelligence (AI) knowledge to K–12 students, like many engineering and technology education topics, has attracted a wide range of stakeholders and resources for school curriculum development. While teachers often have to directly interact with external stakeholders out of the public schooling system, few studies have scrutinized their negotiation process, especially teachers' responses to external influences, in such complex environments. Purpose Guided by an integrated theoretical framework of social constructionism, this research examined the process of how a teacher‐initiated AI curriculum was constructed with external influences. The research focused on teachers' perspectives and responses in mediating external influences into local schools and classrooms. Methods A 3‐year ethnographic study was conducted in relation to an AI curriculum project among 23 Computer Science (CS) teachers from primary schools. Data collected from ethnographic observation, teacher interviews, and artifacts, were analyzed using open coding and triangulation rooted in the ethnographic, interpretivist approach. Results Three sets of external influences were found salient for teachers' curriculum decisions, including the orientation of state‐level educational policies, AI faculty at a partner university, and students' media and technology environments. The teachers' situational logics and strategic actions were reconstructed with thick descriptions to uncover how they navigated and negotiated the external influences to fulfill local challenges and expectations in classrooms and schools. Conclusions The ethnographic study uncovered the dynamic and multifaceted negotiation involved in the collaborative curriculum development, and offers insights to inform policymaking, teacher education, and student support in engineering education.
... need to pay attention to the program's purpose (Scriven, 1991). In contrast, the evaluation in this model investigates the process of running the program by identifying the realities that reveal both positive and negative aspects. 3. Formative-Summative Evaluation Model: in principle, formative evaluation is carried out while the implementation of a program is in progress, or when the program is still in the initial stages of activity. ...
Conference Paper
The conference discussed and shared practical and theoretical issues in the fields of Economics, Finance and related social sciences. Seventy-seven papers were submitted from 20 different countries in addition to Turkey.
... For example, scholars point to the problems that result from using intended program outcomes as a criterion. Intended outcomes tend to reflect the perspectives and interests of decision makers rather than participants and often fail to account for unintended consequences of programs (Abma et al., 2020; Kushner, 2000; Madison, 1992; Mathison, 2005; Scriven, 1991). Some scholars suggest the need to focus on the experience of program stakeholders to understand quality, arguing that looking at intended outcomes tells us little about what is actually going on in a program (Stake, 2004, p. 89). ...
Article
This article proposes a research program with two goals: (a) to support nonprofit leaders to productively engage evaluation and (b) to advance a meso-level theory of nonprofit evaluation that recognizes the diverse ways nonprofits contribute to social change. Such a research program is timely, as evaluation becomes increasingly institutionalized in the sector in ways that constrain nonprofit leaders from engaging productively with evaluation to advance their social impact. This research program brings existing nonprofit scholarship into conversation with evaluation scholarship and puts forward a research agenda organized around the practical dilemmas facing nonprofit leaders as they answer four key evaluation questions: what to evaluate, for what purpose, using which criteria, and with what evidence and methods. By anchoring a research program around these four questions, we seek to reopen the possibilities for how scholars can support nonprofit leaders in engaging evaluation to enhance their social impact.
... Simultaneously, evaluation sites began to diversify dramatically, with the federal government playing a smaller role (Fitzpatrick et al.). Several scholars developed new evaluation approaches urging evaluators to look beyond the rote application of objectives-based evaluation, and proposed goal-free evaluation which prompts evaluators to look at the program's processes and context to find unintended outcomes (Scriven, 1991). Stufflebeam (2003) developed the CIPP model in response to the need for more informative evaluations for decision makers. ...
Article
Evaluation refers to the systematic collection of data and the utilization of that data to improve a program or project. It provides data to judge whether the program attained its goals and to help plan future training. This study aimed to implement an exploratory sequential research framework to evaluate the Master Trainer-Faculty Professional Development Program (MT-FPDP). MT-FPDP was a teacher training program designed for higher education faculty to develop their teaching skills. The current study was divided into two phases. In phase I, qualitative data were collected through document analysis and semi-structured interviews with coordinators of MT-FPDP. A self-developed questionnaire was used for data collection from Master Trainers (MTs) in phase II. Thematic analysis, percentages, and mean scores were used to analyze the data. The findings indicated that the exploratory sequential framework was a good fit for evaluating MT-FPDP. MT-FPDP was found to be a successful university-level teacher training program. MTs encountered several challenges during the cascade training process at Higher Education Institutions (HEIs). This work contributes to the literature from both an academic and a practical standpoint. First, this study contributes to the literature by implementing an exploratory sequential framework as an evaluation design and by providing detailed information regarding the training program's objectives, design, process, implementation, and challenges. Secondly, policy-makers and decision-makers will benefit from the results of the current study. In addition, the exploratory sequential framework can aid future evaluators and researchers in systematic evaluations of teacher training and other programs both inside and outside the education profession.
... Goal-free evaluation according to Scriven (1972, 1983, 1991) rests on the assumption that evaluations must not orient themselves towards the goals of a measure, but must instead focus on the question of how goals are achieved. "It seemed to me, in short, that consideration and evaluation of goals was an unnecessary but also a possibly contaminating step. ...
Thesis
Full-text available
Continuing academic education, as part of the education policy concept of lifelong learning, has steadily grown in importance in recent years. The relevant discourse has responded with corresponding contributions, but the empirical assessment of such offerings frequently focuses on participants' satisfaction judgements. Questions of effectiveness, in particular the transfer from theory to practice, and broader perspectives remain largely unaddressed. This dissertation therefore pursues, with evaluative intent, the following guiding question: how is continuing academic education assessed by its actors? To answer it, the findings of three empirical studies are drawn upon. Study 1 validates an evaluation model and instrument developed for assessing the effectiveness of a part-time master's programme. It provides the methodological underpinning for the findings of Study 2, in which that same continuing academic education offering is subjected to an effectiveness assessment by its participants. The third study identifies the images of the university held by organizational addressees and their significance for continuing academic education. The assessment of continuing academic education depends on its timing and on the perspective taken. The organizational addressees portray the university ambivalently: as a renowned institution endowed with great expertise, yet at the same time as a distant institution removed from practice. In the absence of knowledge of a corresponding offering, these images are transferred to continuing academic education, where they simultaneously encourage and inhibit participation. When continuing academic education is assessed after participation, the assessment in the case examined is positive. The part-time master's programme proves effective at all evaluation levels. Implications and limitations accompanying the findings are identified before a concluding outlook is given.
... Scriven provides a transdisciplinary model of evaluation in which one draws from an objectivist view of evaluation (Michael Scriven, 1991a, 1991b). Scriven defined three characteristics of this model: epistemological, political, and disciplinary. ...
Chapter
Evaluation sits at the center of the instructional design model. It provides feedback to all other stages of the design process to continually inform and improve our instructional designs. In this chapter we will discuss the Why, What, When, and How of evaluation. We will explore several of the most cited evaluation models and frameworks for conducting formative, summative, and confirmative evaluations. It is important to note that instruction can occur in formal instructional settings or through the development of instructional products such as digital learning tools. Throughout this chapter we will discuss interchangeably instructional programs and/or products. Effective evaluation applies to all of these forms of instructional design. https://edtechbooks.org/id/instructional_design_evaluation
... However, actual impacts may not mirror these intentions. Furthermore, expressed goals can be distinct from 'real' goals; real goals themselves are bound to be but a subset of anticipated effects on educational systems; and anticipated effects are at best a subset of actual intervention effects (Scriven, 1972). Furthermore, there can be disconnects between the logic and evidence that underpin the intervention's theories of change and the prescribed theories of action (Montague, 2019). ...
Article
Full-text available
As curricular reforms are implemented, there is often urgency among scholars to swiftly evaluate curricular outcomes and establish whether desired impacts have been realized. Consequently, many evaluative studies focus on summative program outcomes without accompanying evaluations of implementation. This runs the risk of Type III errors, whereby outcome evaluations rest on unverified assumptions about the appropriate implementation of prescribed curricular activities. Such errors challenge the usefulness of the evaluative studies, casting doubt on accumulated knowledge about curricular innovations, and posing problems for educational systems working to mobilize scarce resources. Unfortunately, however, there is long-standing inattention to the evaluation of implementation in health professions education (HPE). To address this, we propose an accessible framework that provides substantive guidance for evaluative research on the implementation of curricular innovations. The Prescribed-Intended-Enacted-Sustainable (PIES) framework articulated in this paper introduces new concepts to HPE, with a view to facilitating more nuanced examination of the evolution of curricula as they are implemented. Critically, the framework is theoretically grounded, integrating evaluation and implementation science as well as education theory. It outlines when, how, and why evaluators need to direct attention to curricular implementation, providing guidance on how programs can map out meaningful evaluative research agendas. Ultimately, this work is intended to support evaluators and educators seeking to design evaluation studies that provide more faithful, useful representations of the intricacies of curricular change implementation.
... An intervention may also have multiple or conflicting goals (Davidson, 2005; Mark et al., 2000; Shipman, 2012). In addition, an exclusive focus on program goals can result in evaluators overlooking unintended outcomes (positive or negative), as well as the extent to which intervention aims are aligned with the needs of intended beneficiaries (Davidson, 2005; Deutscher, 1977; Scriven, 1972). ...
Article
Full-text available
Evaluative criteria define a “high quality” or “successful” evaluand and provide the basis for judgment of merit and worth, yet they are often assumed and implicit in the evaluation process. This article presents an empirically supported model that describes and integrates two aspects of criteria: domain and source. Domain identifies the focus or substance of a criterion, while source describes the individual, group, or document from which it is drawn. Developed from a synthesis of evaluation literature and empirical analysis of evaluation reports, the model defines 11 criteria domains and 10 sources and reveals the relationships among them. In this integrated model, the two dimensions can be used together as a thinking tool to guide evaluators in specifying criteria, in empirical research on the valuing process, and as a conceptual framework and language for theorists prescribing criteria selection.
... Schalock (2001) has discussed outcome-based evaluation as an effective approach, as he believes that the quality revolution, customer focus, accountability demands and practical program evaluation patterns require the adoption of an evaluative approach that responds to the needs of all stakeholders. Scriven (1991) asserts that evaluating goals/intended outcomes can be important for a proposal but cannot help to evaluate a product. ...
Article
Full-text available
Critics of the value of the Executive MBA program have not adequately considered the perceptions of Executive MBA students. This paper evaluates performance of an Executive MBA program by exploring students’ preferred developmental outcomes and perceptions about the effectiveness of their Executive MBA program towards delivering the targeted outcomes. Interviews, focus groups and survey were conducted with program’s directors, staff, and current and graduated students in a large privately run university in Punjab province of Pakistan. As a result of a rigorous process, the study identified twenty-seven critical outcomes under two categories namely “personal outcomes” and “professional outcomes” which students consider important and urge their Executive MBA program to deliver. On the whole students appear to be satisfied with their Executive MBA program; however, the effectiveness of their program is below their expectations. Identification of the exact outcomes in this study provide directions for Executive MBA administrators to make their curriculum and pedagogical/andragogical techniques more relevant and value-oriented for their students. Based on these findings, it is inferred that Executive MBA programs’ planning should consider students as the protagonist of their programs’ planning process.
... There are numerous curriculum design and evaluation models, such as the CIPP model (Stufflebeam, 2003), the Kirkpatrick and Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2009), Scriven's Goal-Free model (Scriven, 1991), Stake's model (Stake, 1967), and Tyler's model (Tyler, 1949). In particular, Tyler's model had been adopted in our department's curriculum review of the mathematics content in the undergraduate degree programme at the National Institute of Education in Singapore (Tay & Ho, 2015). ...
Article
Full-text available
In this work, we report the emergence of the Hands, Head and Heart framework that arose within the curriculum review for subject knowledge courses for primary school pre-service teachers in the National Institute of Education, Singapore. Through an initial grounded analysis of a survey of pre-service teachers and faculty focus group meeting data, the responses were broadly categorised into hands, head and heart domains and these formed an initial framework for discussions in the review committee meetings. By revisiting the data from the survey, an analysis through a complexity lens revealed the emergence of a characteristic nested self-similarity of the framework. Over the course of several committee meetings, further self-similarity was discovered. We conjecture that the Hands, Head and Heart framework and its self-similarity property provide a potential basis for a holistic approach to curriculum review. We used this framework to revise the learning objectives of the subject knowledge curriculum by resolving perspectives which previously seemed contradictory.
... Specifically, a goal-free evaluator collects data without any particular knowledge of, or reference to, stated or predetermined goals and objectives, and then compares the observed outcomes with the actual needs of the programme participants (Scriven 1991a) with a view to making a judgement of the merit or worth of that programme. Scriven (1991b) believes that the task of evaluation should be to determine exactly what effects a programme actually produced, and not to be too concerned with whether or not those effects were intended. Thus, without being cued up to what a programme is actually trying to do, a goal-free evaluator looks for what the programme is actually doing. ...
Article
Many leadership development studies consider developing leadership as a dynamic process that takes time. However, few evaluative inquiries examine the effects of time on leadership development outcomes. As the concept of time has begun to receive the attention it deserves in leadership research, we present a case for including temporal dimensions in leadership development outcomes research. We review conceptual evaluation frameworks and published empirical evaluations in order to highlight the fact that scholars have paid scant attention to time-related considerations in programme evaluation. Using a goal-free evaluation of a healthcare leadership development programme as a case example, we illustrate six types of outcomes (symbol, rejuvenation, discovery, change, engagement, and transformation) and reveal their different temporal dimensions. Based on the findings, we argue that, for evaluations to be rigorous and more meaningful to key stakeholders, adopting a time-sensitive approach may be critical.
... In the years before the 1970s, most evaluators focused on the extent to which a programme achieved its stated objectives, a model that had been promoted by Tyler (1942) in the context of the evaluation of school programmes. However, this tendency changed in the early 1970s as new models were advocated that emphasized the need to extend evaluations to other programme outcomes, intended and unintended, as well as the intervening processes that lead to outcomes (Scriven, 1972). The so-called "theory-driven evaluation" model (Weiss, 1972) was developed as a result of these new insights. ...
Article
Full-text available
WORKING PAPER: As knowledge and capacity development (KCD) increasingly gets acknowledged as a strategy for water sector development, the need to evaluate its development impact and cost effectiveness increases, too. However, evaluating KCD in practice remains a challenge, notably because of the difficulty of defining capacity operationally (due to its complex nature) and the resulting lack of reliable indicators to measure its impact. This paper aims at synthesizing the current wisdom on the topic of KCD evaluation in the water sector. We discuss the leading approaches to KCD in the water sector (and beyond), i.e. positivist and complex adaptive systems, two associated KCD evaluation paradigms, the major challenges facing KCD evaluation, as well as the methodological progress made in that area. Overall, the paper offers a sense of direction on where the debate on KCD evaluation is heading, and provides insights that can help KCD practitioners improve their evaluation practice.
... Its disadvantage may lie in a certain infeasibility, under particular circumstances, of the very characteristic that distinguishes it from all other approaches, above all for budgetary reasons and because of the breadth of vision it demands: uncovering all effects. One of the most fervent defenders of this approach is Scriven (1974). ...
... Scholars of evaluation theory will see the similarities between this approach and Michael Scriven's seminal work on goal-free evaluation (e.g. Scriven 1991). Patton and Patrizi refer favourably to Wehipeihana and Davidson's (2010) recent work on strategic policy evaluation, in which they argue that what distinguishes strategic evaluation from mere policy evaluation is its contribution to painting a big picture and answering macro-level and cross-project questions (p. 22). ...
... The methodological approach of Body et al.'s research used financial analysis and survey data to provide statistical trend-level data as an exploratory study on this topic. Case study analysis provides the 'polar opposite' (Scriven, 1991), facilitating the close examination of particular events or activities in order to provide analytical understanding, alongside descriptive and detailed data (Dyer, 1995). With only 2% of primary schools successfully securing over £50k per annum ( ), closer examination of the practices, challenges and experiences of these schools as 'instrumental case studies' (Stake, 1994) provides valuable insight into fundraising in primary schools that has not been captured elsewhere. ...
Article
In response to depleting budgets and intensified performance pressures, primary schools are increasingly turning to fundraising as one mechanism for combatting ongoing challenges. Although research identifies that two thirds of primary schools are actively trying to increase their fundraised income, some primary schools are significantly more successful in attracting additional funds than others, whereas many struggle to focus fundraising efforts “beyond the school gates.” This article focuses on three case study schools, and the individuals tasked with fundraising within them, each of which has adopted a different approach in a successful attempt to increase fundraised income. The findings propose that when primary schools proactively focus on their fundraising, invest in people in terms of both time and skills, and create a positive fundraising narrative that embraces the needs of both the school and the local community, they can succeed in attracting significant philanthropic support, which can be transformative for the school community.
Article
Full-text available
This research undertakes a thematic analysis of discussion threads on social media forums to determine women’s perceptions of the quality of obstetric care under TRICARE Prime and TRICARE Select. Following an open coding process and thematic analysis, themes arose regarding obstetric care as experienced by active-duty women as well as female spouses of active-duty members. Themes surrounding active-duty perceptions of quality of obstetric care concerned proximity to care, deferred deliveries, lack of care options, deficient on-base care, better civilian care experiences, and lamenting having fewer options than spouses. Spouses generated themes of positive connotation with TRICARE Select, including low costs, freedom of choice, and proximity to care, and of negative connotation with TRICARE Prime, such as difficulty getting care, process bureaucracy, and deficient care. Ample evidence pointed to a strong spouse preference for TRICARE Select over TRICARE Prime, but there was not enough evidence to indicate whether active-duty women were pursuing out-of-pocket care to circumvent TRICARE Prime restrictions. Overall, women’s discussions point to a need to improve the Military Health System and, concerningly, suggest active-duty women are confined to care within a system plagued with issues that affect obstetric care.
Article
The aim of this article is partly to show how evaluations in schools are more than techniques and tools. Rather, the point is, first, that different evaluation techniques and tools become part of a practice that is already both shared and differentiating in various ways, and that evaluations come to mean different things to different participants. Second, evaluations are much more than techniques and practices; they are ways of being together, of creating conditions for one another, and of taking part in shared processes of self-understanding. This is shown through the article's analyses. Finally, an alternative proposal for an evaluation practice is presented, namely expansive evaluation, which takes seriously that evaluations can be engagements in shared learning processes.
Article
The author identifies a need for greater understanding of the alternative evaluation approaches available in higher education. Five basic definitions of evaluation are identified: (1) evaluation as measurement, (2) evaluation as professional judgment, (3) evaluation as the assessment of congruence between performance and objectives (or standards of performance), (4) decision-oriented evaluation, and (5) goal-free/responsive evaluation. Their basic assumptions, distinguishing characteristics, principal advantages, and disadvantages are presented. Criteria for selecting an evaluation methodology appropriate to specific circumstances are summarized in the concluding paragraphs.
Thesis
Full-text available
This thesis analyses the institutionalization of evaluation in Portuguese development cooperation between 1994 (the year in which evaluation was integrated into the cooperation agency) and 2012 (the year in which IPAD merged with the Camões Institute). The key research question is: how did the institutionalization of evaluation take place in Portuguese development cooperation? To answer this question, a literature review and a desk analysis were conducted, along with a survey and several semi-structured interviews with Portuguese cooperation actors. Based on policy transfer theory, the study starts from the hypothesis of an incomplete transfer driven by external actors. Although there is broad research on evaluation use, there is a research gap regarding the Portuguese reality, and no research based on policy transfer theory. This research seeks to help fill these gaps by developing a model that identifies the factors that influence the institutionalization of evaluation in Portuguese development cooperation. The results show that, despite the internal and, above all, the external determinants, the nature of the policy, the organizational/institutional context, and the evaluation system adopted influenced the institutionalization of evaluation, in particular its use in the decision-making process. The results of this research contribute to the understanding of the institutionalization of evaluation in development cooperation organizations and provide a basis for future research. The conclusions can provide evidence to cooperation professionals, guide evaluation practice, and promote its use in the decision-making process.
Article
Evaluative conclusions are grounded in implicit and explicit criteria that describe a successful or high-quality intervention. Most often, evaluative criteria are drawn from program objectives that reflect the values and priorities of program designers and funders. Yet, an exclusive focus on program goals risks overlooking the values of program participants, the extent to which their actual needs and priorities are addressed, and, in certain types of programs, the choices participants make and agency they exercise. This article presents concepts and methods to guide evaluators in drawing some of the criteria used in an evaluation from program participants. The article outlines a typology of evaluative criteria and seven methods for drawing outcomes-focused criteria from program participants. The article concludes with a discussion of implications and future directions for research and practice.
Article
Full-text available
Research shows that almost half of children with intellectual disabilities (ID) experience mental ill-health at any given time point. However, traditional cognitive behaviour therapy (CBT) may not be appropriate for children with ID due to the cognitive deficits associated with their diagnosis. The Fearless Me!© CBT program for anxiety is adapted to accommodate the cognitive abilities of children with ID. The aim of the current study was to provide the first qualitative evaluation of the Fearless Me!© program by exploring participant experiences. Eight mother–child dyads were interviewed using a semi-structured protocol. The responses were transcribed and analysed using thematic analysis. Identified codes and themes were cross-checked with an independent researcher and discrepancies were resolved. Parents found the program to be positive and useful for acquiring knowledge. They commented on features of the program, the significance of inter- and intra-personal factors and whether the program suited the capability of their child. They also discussed features of treatment outcomes. The qualitative results highlighted that experiences of the program varied. Themes identified included those relating to barriers and facilitators to participation and treatment-related change. The themes provide guidance for program revisions and can inform future delivery of the Fearless Me!© program.
Article
Full-text available
The study's aim was to consider the motivations of workers in the UK and Thailand who work part-time, work in the gig economy, work informally, and may have entrepreneurial aims. The notion of a side hustle alongside main work included cases where that main work was seen as, for instance, being a parent or other carer. Main work was not classified by level of earnings but by participant perception; in some cases there was no main job, just "side" work, often whatever participants could get during the COVID era. Discussion with participants proceeded online and face to face. Some participants also completed a questionnaire, producing clear descriptive statistics for a core of participants; otherwise the study was firmly qualitative in approach. The core descriptive statistical approach centred on an extensive Likert scale concerning motivations. Participants considered money the main motivation, regardless of country or participant demographic. Sociability was generally seen as the lowest motivator, fifth of the five potential motivators offered in the Likert scale. Wider discussion with the core participants and others covered partly the same ground as the Likert scale, but participants introduced other themes for consideration, including women's empowerment and parenting.
Chapter
This chapter is aimed at students and researchers who will use questionnaire surveys in their research. It describes the basics of conducting questionnaire surveys. It begins by exploring some reasons why a researcher may want to conduct a survey. It then describes different types of surveys, explaining how they can be differentiated by time and delivery method, and noting that most are cross-sectional (one-off) and self-administered. It then describes the necessary steps in survey research, particularly aligning the survey to the research question and identifying the audience at which the survey is aimed (sample selection). The chapter then details the various aspects of survey and question design (closed and open questions), including how not to write survey questions. Good question design is integral to a successful survey, and this is where most surveys fall short. The different types of closed survey questions, such as dichotomous, nominal, rank order and Likert scale, are discussed, with examples of each. Some logistics around distributing surveys are given, concentrating on online survey distribution, as this is the most common method used nowadays. Finally, the chapter concludes with a brief piece on survey analysis.
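As an informal illustration of the closed question types listed in this chapter, the sketch below represents each type as a simple data structure. The ClosedQuestion class, the question wordings, and the option lists are hypothetical examples for illustration only, not material from the chapter itself.

```python
# Hypothetical sketch: representing the chapter's closed question types
# (dichotomous, nominal, rank order, Likert scale) as simple data structures.
from dataclasses import dataclass, field

@dataclass
class ClosedQuestion:
    text: str
    kind: str                      # "dichotomous", "nominal", "rank_order", or "likert"
    options: list = field(default_factory=list)

questions = [
    ClosedQuestion("Have you completed an online survey before?", "dichotomous",
                   ["Yes", "No"]),
    ClosedQuestion("How did you receive this survey?", "nominal",
                   ["Paper", "Online", "Telephone"]),
    ClosedQuestion("Rank these features from most to least important.", "rank_order",
                   ["Length", "Clarity", "Anonymity"]),
    ClosedQuestion("The survey was easy to complete.", "likert",
                   ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]),
]

for q in questions:
    print(f"[{q.kind}] {q.text} Options: {', '.join(q.options)}")
```

Keeping the question type explicit in this way makes it easy to check, before distribution, that every closed question offers exhaustive and mutually exclusive options.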
Article
Full-text available
The democratic governments of the political parties Copei and Acción Democrática alternated in power in Venezuela between 1969 and 1998, and it was precisely in this period that the neoliberal paradigm in education emerged. First, we aim to show that curriculum theories absorbed its impact and typified an educational model that increasingly favoured so-called "social Darwinism," privileging the higher-income sectors and marginalizing the most vulnerable. This study connects with a line of research that analyses the history of the curriculum in Venezuela, the results of which have been presented to the scientific community in previous reports (Mora-García, 2004; Mora-García, 2013; Mora-García, 2014; Rojas & Mora-García, 2019). A new review of the specialized literature on the relationship between neoliberalism and curriculum is carried out, in order to establish valid proposals and conclusions. These are approximations that shed new light on a topic that requires recurrent analysis.
Article
The paper explores a phenomenon of growing divisions among Polish football supporters, namely between the hooligans and the rest of the devoted supporters. The paper has three primary aims: 1) to explain the origin of the increasingly apparent division in the devoted supporters' community; 2) to characterize the division; and thus 3) to analyse its potential consequences. The paper draws on 96 interviews with football supporters and desk research conducted within two different research projects. The analysis shows that the divisions occurred as a result of the exploitation of supporters' cultural, symbolic and economic resources by hooligan groups. What developed as a consequence was a sense of distinct interests between hooligans and the rest of the devoted supporters. We interpret this arising awareness using Marx's categories describing the transformation from "class in itself" to "class for itself".
Article
Since the 1990s, universities in Japan have been obliged to implement self-evaluation of their activities, especially undergraduate education. As a result of various reforms, such as the introduction of student ratings of teaching and the obligation to undergo third-party evaluation, universities began to use a variety of evaluation methods. This article focuses on one of the emerging methods, namely the Outcomes-Based Approach (OBA), and discusses three questions concerning OBA on the basis of a national survey administered to 1,871 faculties in Japanese universities. Question 1: To what extent is OBA adopted in the self-evaluation of undergraduate education? Among several evaluation approaches, OBA is the most difficult to use because of the need to assess educational outcomes before making use of it. As outcome assessment in Japanese universities is a fairly new trend, it is anticipated that OBA is less popular than any other approach. The data from the national survey show that the number of faculties using OBA differs according to the focus of the evaluation, but that, broadly speaking, it is used less frequently than other approaches. Question 2: What kind of condition facilitates the adoption of OBA? To answer this question, logistic regression is selected as the analytic method. The dependent variable is the adoption of OBA, and the independent variables comprise four concerned with faculty traits and four concerned with evaluation conditions. The result indicates that the probability of adoption increases significantly when a faculty has a clear relationship with a specific occupational field. Question 3: Is OBA more effective in the improvement of educational conditions/activities than other evaluation approaches? Logistic regression is used again, but this time the dependent variable is whether or not improvement occurred, and the independent variables are those from Question 2 plus the usage of the four evaluation approaches. It is found that OBA is effective particularly for the evaluation of faculty organization, admission, and educational methods.
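To make the analytic setup described above concrete, here is a minimal, hypothetical sketch of a logistic regression of the same general shape: a binary OBA-adoption outcome regressed on a mix of faculty-trait and evaluation-condition predictors. The simulated data, the position of the "occupational field" predictor, and the coefficient values are illustrative assumptions, not results from the survey.

```python
# Hypothetical sketch (simulated data): logistic regression of a binary adoption
# outcome on 4 faculty-trait and 4 evaluation-condition predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 8))     # columns 0-3: faculty traits, columns 4-7: evaluation conditions

# Simulate adoption so that predictor 2 (standing in for "clear link to an
# occupational field") has the strongest effect on the log-odds of adoption.
log_odds = -0.5 + 1.2 * X[:, 2] + 0.3 * X[:, 5]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

model = LogisticRegression().fit(X, y)
for i, coef in enumerate(model.coef_[0]):
    print(f"predictor {i}: coefficient = {coef:+.2f}")   # larger positive values raise adoption odds
```

In a real analysis the predictors would be the survey items themselves, and tests of coefficient significance (for example via a statsmodels Logit summary) rather than raw magnitude would guide interpretation.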
Article
Ongoing colonial power has long been ascribed to government bureaucracy and institutions of higher learning. By consequence, Indigenous communities today are still experiencing challenges regarding the function, foundation and fabric of research that impacts Indigenous peoples, including in the arena of social work education. Writing as an Indigenous scholar and Director of a Master of Social Work programme at a university in the Pacific region, the author’s goal in this article is twofold. On the one hand, he aims to contribute to critical self-reflection on Western research methodologies, while on the other hand offering a reconceptualisation of research tools and techniques that empower the researched and create reciprocal learning opportunities. Through discussion of Indigenous and allied or “co-conspirator” partnerships, and drawing on the example of a model called strengths-enhancing evaluation research (SEER), the author outlines observations regarding the tensions between Indigenous and non-Indigenous researchers and processes. He challenges the established norms of social science research, and offers theoretical and practical examples and questions, including the notion of the researcher as a guest, that demonstrate how higher education institutions and Indigenous and non-Indigenous collaborations can provide critical responses to historical tensions regarding research and Indigenous peoples. The conduct and behaviour of researchers can have long-lasting, unintended consequences on communities at multiple levels of well-being. The author argues that both Indigenous and non-Indigenous researchers must work collaboratively with communities for change.
Article
The research explored the approaches used by government agencies (as client organizations) to drive occupational health and safety (OHS) performance improvements in publicly funded infrastructure construction projects in Australia. Semi-structured interviews were conducted with 32 representatives of clients and contractors with direct and recent experience of delivering large public infrastructure projects. Interviews explored the procurement approaches taken, and the use of incentives and performance measurement. Data was subjected to inductive analysis to identify emergent concepts and themes relating to the way that New Public Management (NPM) influences the commercial management of infrastructure construction projects, with particular reference to OHS impacts. The concept of institutional logics was utilised as a theoretical frame to understand clients’ behaviour in the commercial management of infrastructure projects. Client behaviour was consistent with elements of NPM and reflected a managerialist logic in the pursuit of efficiency, the use of targets, incentives and performance measurement. However, a strong professional service logic was also found to drive active client behaviour in relation to the management of OHS. Understanding the institutional logics driving client OHS practices is an important theoretical development that can stimulate reflexive practice which may create an impetus for change.
Article
Brandl et al. explore the benefits and challenges for educators seeking to implement a novel holistic evaluation approach proposed by Rojas et al.
Article
Evaluation is essential for assessing the relative effectiveness of one approach against another. Many communication channels have been used in agricultural extension services, and it is through evaluation that the appropriateness and effectiveness of such channels can be determined. To facilitate the task of evaluation for researchers, it is necessary to put in place a model such as the one proposed in this paper. In the model, professionally designed and packaged extension messages, in the form of improved agricultural technologies, emanate from an extension agency, which constitutes the communication source. The next stage in the model is the use of communication channels. This entails decisions by the target audience to use a particular channel, with a particular amount of content and in a particular manner, which can be shaped by many socio-economic variables. The effect of an extension programme on the target audience through a particular channel can be operationalized through observable changes in their attitudes, knowledge and practice of the innovations, leading to enhanced technical competence and increased farm output.
Article
Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field.
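For readers unfamiliar with the technique, the following is a minimal, hypothetical agent-based sketch (not one of the published models the article reviews) of how an evaluator might simulate programme uptake in a population, for example to sanity-check a performance target before committing to it. The population size, adoption rule, and parameter values are illustrative assumptions.

```python
# Hypothetical agent-based sketch: agents adopt a programme with a baseline
# probability that rises with the share of peers who have already adopted.
import random

def simulate_uptake(n_agents=500, n_steps=24, base_rate=0.02, peer_effect=0.10, seed=1):
    """Return the adopting share of the population after each time step (e.g. month)."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    shares = []
    for _ in range(n_steps):
        current_share = sum(adopted) / n_agents
        for i in range(n_agents):
            # Non-adopters adopt with a probability that grows with current uptake.
            if not adopted[i] and rng.random() < base_rate + peer_effect * current_share:
                adopted[i] = True
        shares.append(sum(adopted) / n_agents)
    return shares

if __name__ == "__main__":
    trajectory = simulate_uptake()
    print(f"Simulated uptake after 24 steps: {trajectory[-1]:.0%}")
```

Re-running the simulation across a range of peer_effect values gives a rough sense of how sensitive a target such as a given uptake level within two years is to assumptions about social influence.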
Article
This article provides a descriptive review of four goal-free evaluations (GFE). GFE is an evaluation model where the evaluator conducts the evaluation without knowledge of or reference to the evaluand's stated goals. The four non-randomly sampled evaluation approaches represent articulated evaluation models in which the evaluators ignore the goals of the intervention or project. Data collection consisted of document analyses supplemented by semi-structured interviews with the models' creators. The findings from these case studies include descriptions of the evaluation models, the models' relationship to GFE, and eight commonalities shared among the four models. The conclusion of this study is that these GFEs are similar to other GFEs described in the literature in that they examine outcomes as reported by the intervention's consumers, focus on collecting qualitative data, and use their evaluations to supplement a larger goal-based evaluation strategy.