Article

Program evaluation use and its mechanisms: The case of Cohesion Policy in Polish regional administration

Article
Full-text available
Purpose: The study concentrated on the process of evaluation of public programs currently implemented with the support of European Union funds in Poland. The aim was to show how evaluation practice was adopted in regional administration within the programming and implementation of the Regional Operational Programs 2007–2013 (ROP). The author analysed which types of decisions are primarily supported by evaluation and which functions evaluation serves. Methodology: The quantitative analysis was based on data drawn from the documentation of the full population of ROP evaluations completed from 2007 to 2012, acquired from the 16 ROP evaluation units. Findings: The practice of evaluation was well adopted in regional administration and grew rapidly in recent years: 236 studies, costing more than 16 million PLN, were completed by the end of 2012. However, most studies were of limited value, as they concentrated on the implementation process rather than on the effects and justification of intervention. Implications: This study focused on the quantitative aspects of the knowledge production process (evaluation reports). It omitted the question of actual evaluation use, which, together with evaluation process quality and the development of evaluation culture, should be a subject of further investigation. Originality: This study was the first review of ROP evaluations in Poland. It went far beyond the scope of data previously collected by the Ministry of Regional Development and proposed novel categorizations of evaluation subjects that may be useful beyond ROP evaluations.
Article
Full-text available
The article presents an analysis of the usability of evaluations of the Regional Operational Programs 2007–2013 (ROP). The studied elements of usability included the quality, relevance and credibility of the evaluation assumptions. The study comprised desk research on a representative sample of evaluation reports (n=71) completed between 2008 and 2012. The results show that many reports do not contain important recommendations and do not answer key research questions. In most cases the evaluations fail to answer the question of the ROPs' impact on the socio-economic development of the regions. The limited usefulness of the evaluation reports points to negative trends in the development of the ROP evaluation system: the system is focused on providing simple information and producing reports, and does not respond to the actual demand for knowledge.
Article
Full-text available
The article proposes a universal framework for a holistic analysis of the management systems of public policies. The framework consists of three main entities: a system, the environment of the system, and the flows between the system and its surroundings as well as within the system, between its elements. The article argues that every policy management system consists of typical elements: stocks of resources (staff, structures and procedures, financial and technical resources) and groups of processes (strategic, operational and learning processes). The proposed framework is well grounded in empirical research on the Cohesion Policy in Poland. Definitions and examples are provided, together with a discussion of the advantages and challenges of using this framework for the analysis of public policies.
Article
Full-text available
Past literature has identified several putative precursors of use, as well as alternative forms of use. However, important shortcomings still exist in previous work on use. In particular, inadequate attention has been given to the underlying processes that may mediate the effects of evaluation on attitude and action. In essence, a key part of the theory of change for evaluation itself is missing. To help fill this gap, we describe a framework designed to capture key mechanisms through which evaluation may have its effects. The framework includes change processes that have been validated in various social science literatures. It identifies three levels of analysis (individual, interpersonal and collective), each with four kinds of processes (general influence, attitudinal, motivational and behavioral). With a more comprehensive view of the mechanisms underlying evaluation’s influence, the field can move forward in relation to its understanding and facilitation of evaluation’s role in the service of social betterment.
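As a purely illustrative aid, not part of the original publication, the three-by-four structure described above can be pictured as a simple grid. The short Python sketch below enumerates the twelve cells (three levels of analysis crossed with four kinds of processes); the cells are left empty because the abstract does not list the specific mechanisms assigned to each.

# Illustrative sketch only: three levels of analysis crossed with four
# kinds of change processes gives twelve cells of the framework.
from itertools import product

LEVELS = ("individual", "interpersonal", "collective")
PROCESS_KINDS = ("general influence", "attitudinal", "motivational", "behavioral")

# Each cell would hold the mechanisms the framework assigns to it;
# they are left empty here because the abstract does not enumerate them.
framework = {cell: [] for cell in product(LEVELS, PROCESS_KINDS)}

for (level, kind), mechanisms in framework.items():
    print(f"{level:13s} | {kind:17s} | {mechanisms or '...'}")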
Article
Full-text available
This paper reviews empirical research on the use of evaluation from 1986 to 2005, using Cousins and Leithwood's 1986 framework for categorizing empirical studies of evaluation use conducted since that time. The literature review located 41 empirical studies of evaluation use conducted between 1986 and 2005 that met minimum quality standards. The Cousins and Leithwood framework allowed a comparison over time. After initially grouping these studies according to Cousins and Leithwood's two categories and twelve characteristics, one additional category and one new characteristic were added to their framework. The new category is stakeholder involvement, and the new characteristic is evaluator competence (under the category of evaluation implementation). Findings point to the importance of stakeholder involvement in facilitating evaluation use and suggest that engagement, interaction, and communication between evaluation clients and evaluators are critical to the meaningful use of evaluations.
Article
Full-text available
The first part (Leonardi; Mairate) provides overviews from different perspectives on the rationale for EU Cohesion policy and its effects, including 'added value'. The second part (Batterbury; Bradley; Martin and Tyler) provides a critical assessment of Cohesion policy evaluation, focusing on the current regulatory framework. It examines contrasting macro-based approaches for determining the effect of policy, comprising a model-based approach to the ex-ante evaluation of the Structural Funds and a shift-share, residual-based method for evaluating interventions in Objective 1 regions. This part offers a new estimate of the employment effect of Cohesion policy. The third part (Florio; Baslé; Blazek and Vozab; Eser and Nussmueller; Armstrong and Wells) examines the evaluation experience in a variety of circumstances, including different policy measures (Structural Funds, Community Initiatives and Cohesion Fund) and different kinds of evaluation (ex-ante, mid-term and ex-post). It explores other issues such as the use of cost-benefit analysis, the evaluation of community economic development (CED) initiatives and the preparation of programming documents in a new Member State. Finally, the fourth part has shorter contributions (Barca; Huber; Jakoby; Raines) that take a policy-maker perspective on evaluation culture in a variety of EU regions and countries.
Article
As part of a larger effort by members of the American Evaluation Association (AEA) Topical Interest Group on Evaluation Use (TIGEU), we undertook an extensive review and synthesis of the literature on evaluation use published since 1986. We observe several recent developments in theory, research and practice arising from this literature. These developments include: the rise of considerations of context as critical to understanding and explaining use; identification of process use as a significant consequence of evaluation activity; expansion of conceptions of use from the individual to the organization level; and diversification of the role of the evaluator to facilitator, planner and educator/trainer. In addition, understanding misutilization has emerged as a significant focus for theory and, to a limited extent, for research. The article concludes with a summary of contemporary issues, particularly with regard to their implications for evaluation practice.
Article
This paper reviews empirical research conducted during the past 15 years on the use of evaluation results. Sixty-five studies in education, mental health, and social services are described in terms of their methodological characteristics, their orientation toward dependent and independent variables, and the relationships between such variables. A conceptual framework is developed that lists 12 factors that influence use; six of these factors are associated with characteristics of evaluation implementation and six with characteristics of decision or policy setting. The factors are discussed in terms of their influence on evaluation utilization, and their relative influence on various types of use is compared. The paper concludes with a statement about implications for research and practice.
Article
The term "evaluation" encompasses a rich and dynamic set of approaches some of which are interesting counterpoints to the idea of a "happy marriage" between evaluation and new public management. The dual relation between evaluation and public management is reflected both in the content and in the design of this article. This article moves between the two, showing where they are in alignment, and where they are not. The first part of the article describes the evaluation wave, which has hit most western countries during the last decade or two. It shows that different explanations of the current interest in evaluation are possible, and that each type of explanation leads to different views on evaluation and different expectations about its promises and pitfalls. Next, it presents different approaches to evaluation. With the risk of unduly reducing the complexity of the field, the article argues that goal-oriented evaluation, theory-based evaluation, and responsive/participatory approaches to evaluation are useful labels to identify significant and often competing schools of thought in evaluation.
Article
This article is about evaluation use. It focuses on the well-known paradox that evaluation is undertaken to improve policy, but in fact rarely does so. The article also finds that justificatory uses of evaluation do not fit with evaluation's objective of policy improvement and social betterment. The article explains why the paradox exists and suggests applying organizational institutional theory to explain evaluation use. The key argument is that in order to explain all types of evaluation uses, including non-use and justificatory uses, the focus needs to be on the evaluating organization and its conditioning factors, rather than the evaluation itself.
Article
Although use is a core construct in the field of evaluation, neither the change processes through which evaluation affects attitudes, beliefs, and actions, nor the interim outcomes that lie between the evaluation and its ultimate goal—social betterment—have been sufficiently developed. We draw a number of these change mechanisms, such as justification, persuasion, and policy diffusion, from the social science research literature, and organize them into a framework that has three levels: individual, interpersonal, and collective. We illustrate how these change processes can be linked together to form “pathways” or working hypotheses that link evaluation processes to outcomes that move us along the road toward the goal of social betterment. In addition, we join with Kirkhart (2000) in moving beyond use, to focus our thinking on evaluation influence. Influence, combined with the set of mechanisms and interim outcomes presented here, offers a better way for thinking about, communicating, and adding to the evidence base about the consequences of evaluation and the relationship of evaluation to social betterment.
Article
This article contributes to the discussion of evaluation use. It argues for a social practice approach to the analysis of evaluation use that enables a discerning and fine-grained understanding of how evaluations might be used by real people in real time. It suggests a distinction between two dimensions of the way an evaluation might be used. It offers an interpretation of ‘use’, which focuses on the context and the capacity of the organizational setting in which evaluation outputs are used; and ‘usability’, which emphasizes the extent to which the evaluation design itself militates against or encourages the use of its outputs in the broadest sense. The two dimensions are distinct yet closely interrelated. The article concludes with a consideration of various approaches and tools that highlight the dimensions of use and usability from a social practice perspective.
Article
This article presents some of the results from a study in progress, focusing on the influence of the institutional distance between evaluators and evaluees on the utilization of evaluations. The basis for the results presented here is an analysis of ten case studies from Switzerland. These cases involve evaluations that were carried out in different institutional contexts, with widely varying institutional distances between evaluators and evaluees. 'Qualitative Comparative Analysis' (QCA) has been used to interpret the cases, in order to allow a combination of case- and variable-centred comparisons. The analysis indicates that, under certain conditions, the institutional distance between evaluators and evaluees has no influence on the use of evaluations. In particular, formative objectives can be achieved quite independently of distance. When interpreting the results, however, one should not neglect the fact that they are solely based on a systematic evaluation of ten case studies with QCA. Generalization is not possible on this basis, nor is this the aim of the present article. On the contrary, the objective is to continue developing the debate about the influence of the institutional distance between evaluators and evaluees on the utilization of evaluations.
Article
Research has identified a wide range of factors that affect evaluation use but continues to be inconclusive as to their relative importance. This article addresses the complex phenomenon of evaluation use in three ways: first, it draws on recent conceptual developments to delimit the examined form of use; second, it aims at identifying conditions that are necessary but not necessarily sufficient for evaluation use; third, it combines mechanisms of evaluation use, context conditions, and actor perceptions. The study reported here examines the use of 11 program and project evaluations by the Swiss Agency for Development and Cooperation (SDC). The article makes use of qualitative comparative analysis (QCA), a method that is well suited to the study of context-bound necessity. It is concluded that the analysis of conditions that are necessary to trigger mechanisms of evaluation use in certain contexts is challenging, but promising as a way to confront the complexity of the phenomenon.
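As a rough, hypothetical illustration of the necessity test that crisp-set QCA relies on (not the SDC study's actual data, conditions, or software), the Python sketch below computes the standard consistency and coverage measures for one candidate necessary condition; all variable names and membership scores are invented.

# Minimal crisp-set QCA necessity sketch; data and condition names are hypothetical.
def necessity_consistency(condition, outcome):
    # Share of outcome membership covered by the condition;
    # values near 1.0 suggest the condition may be necessary for the outcome.
    overlap = sum(min(c, o) for c, o in zip(condition, outcome))
    return overlap / sum(outcome)

def necessity_coverage(condition, outcome):
    # How empirically relevant (non-trivial) the candidate necessary condition is.
    overlap = sum(min(c, o) for c, o in zip(condition, outcome))
    return overlap / sum(condition)

# Hypothetical cases: 1 = membership, 0 = non-membership.
stakeholder_involvement = [1, 1, 0, 1, 1, 0, 1, 1]   # candidate condition
evaluation_used         = [1, 1, 0, 1, 0, 0, 1, 1]   # outcome

print(necessity_consistency(stakeholder_involvement, evaluation_used))  # 1.00: consistent with necessity
print(necessity_coverage(stakeholder_involvement, evaluation_used))     # ~0.83: condition also present in some non-use cases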
Article
The inability of either bottom-up or top-down strategies, by themselves, to provide suitable conditions for school improvement warrants a shift to school-focused knowledge use as a way to encourage such improvement. This study examines the influence of a large array of factors, derived from a knowledge utilization framework, on educators' use of several types of information for improvement purposes. Two hundred and thirty-three elementary school principals and 155 central board office staff members completed a survey instrument inquiring about most useful sources of information for improving curriculum and instruction in grades 4 to 6. Results provided support for categories of knowledge use factors in explaining variation in use of information for school improvement: characteristics of the source of information, characteristics of the improvement setting, and interactive processes. These results are used to illustrate how a school-focused knowledge use perspective combines many of the strengths and avoids critical weaknesses of bottom-up and top-down strategies for change.
Article
Metaevaluations are systematic reviews of evaluations to determine the quality of their processes and findings. The knowledge about evaluation quality that results from metaevaluation of multiple evaluations can be used to inform researchers' decisions about which studies to include in evaluation syntheses. Metaevaluations of multiple studies are also used to identify strengths and weaknesses in evaluation practice in order to develop evaluation capacity. This article discusses the multiple ways in which quality can be defined, the political and cultural contexts of metaevaluation, and issues surrounding use and misuse. A metaevaluation of evaluations of international agricultural research centers illustrates these topics.
Article
Provides a conceptual framework for considering the distinction between evaluation use and evaluation influence and the relationship between evaluation use and knowledge use. Clarifies and expands some of the established definitions of concepts related to evaluation use.
Article
This chapter recasts evaluation use in terms of influence and proposes an integrated theory that conceptualizes evaluation influence in three dimensions—source, intention, and time.
Article
Implicit evaluation utilization process-models were constructed from evaluation theorists' ideas, and explicit evaluation utilization process-models (i.e. already developed models) were located in the literature. The meta-model (i.e. a model developed from other models) was developed from the implicit and explicit process-models and from important ideas reported in recent research on evaluation use (e.g. participation, organizational development and complexity). The model depicts evaluation use as occurring in an internal environment situated in an external environment. The three sets of variables in the theoretical model are the background variables, the interactional or social psychological variables and the evaluation use variables. It is contended that evaluation-for-use will result in longer-term effects when ideas from complexity theory, organizational learning and organizational design are employed. The meta-model reported here should be viewed as a theoretical model, offered in an attempt to promote theory development in the evaluation utilization literature.
Article
Participatory evaluation (PE) turns out to be a variably used and ill-defined approach to evaluation that, juxtaposed to more conventional forms and approaches, has generated much controversy in educational and social and human services evaluation. Despite a relatively wide array of evaluation and evaluation-related activities subsumed by the term, evaluation scholars and practitioners continue to use it freely, often with only passing mention of their own conception of it. There exists much confusion in the literature as to the meaning, nature, and form of PE and therefore the conditions under which it is most appropriate and the consequences to which it might be expected to lead. In spite of this confusion, interest in collaborative, empowerment, and participatory approaches to evaluation has escalated quite dramatically over the past decade, as evidenced in a burgeoning literature on such topics. This interest, in my view, is testament to the promise such collaborative approaches hold for enhancing evaluation utilization and bringing about planned sustainable change.
Article
Growing interest in the institutionalization of evaluation in the public administration raises the question as to which institutional arrangement offers optimal conditions for the utilization of evaluations. Institutional arrangement denotes the formal organization of processes and competencies, together with procedural rules, that are applicable independently of individual evaluation projects. It reflects the evaluation practice of an institution and defines the distance between evaluators and evaluees. This article outlines the results of a broad-based study of all 300 or so evaluations that the Swiss Federal Administration completed from 1999 to 2002. On this basis, it derives a theory of the influence of institutional factors on the utilization of evaluations.
Study on the use of evaluation results in the Commission
  • EPEC
Quod erat demonstrandum? Ewaluacja polityki regionalnej [Evaluation of regional policy]
  • J Bachtler
Adaptation: A Promising Metaphor for Strategic Management
  • B S Chakravarthy
Praktyka ewaluacji efektów programów rozwoju regionalnego – studium porównawcze [The practice of evaluating the effects of regional development programmes: a comparative study]
  • K Olejniczak
The Metaevaluation Imperative
  • D L Stufflebeam
Dowody naukowe w zarządzaniu publicznym [Scientific evidence in public management]
  • S Mazur