Evaluační teorie a praxe; https://www.evaltep.cz
OLEJNICZAK, K., KUPIEC, T., NEWCOMER, K. (2017) „Learning from evaluation – the knowledge users' perspective". Evaluační teorie a praxe 5(2): 49–74
* Karol Olejniczak, European Centre for Regional and Local Studies (EUROREG), University of Warsaw, Warsaw, Mazowieckie, Poland, k.olejniczak@uw.edu.pl
Tomasz Kupiec, Evaluation for Government Organizations s.c. (EGO s.c.), Warsaw, Mazowieckie, Poland, t.kupiec@evaluation.pl
Kathryn Newcomer, The Trachtenberg School of Public Policy and Public Administration, George Washington University, Washington, DC, USA, newcomer@gwu.edu
Learning from evaluation – the knowledge users' perspective
Karol Olejniczak, Tomasz Kupiec, Kathryn Newcomer*
Abstract
Public managers require different types of knowledge to run programs
successfully. This includes knowledge about the context, operational
know-how, knowledge about the effects, and causal mechanisms. This
knowledge comes from different sources, and evaluation studies are just
one of them.
This article takes the perspective of knowledge users. It explores to what extent evaluation is a useful source of knowledge for public managers of cohesion policy. Findings are based on an extensive study of 116 Polish institutions: surveys with 945 program managers, followed by 78 interviews with key policy actors. The article concludes that: (a) the utility of evaluation studies, in comparison to other sources of knowledge, is limited; (b) evaluation reports are used to some extent as a source of knowledge on effects and mechanisms; however, (c) "effects" are shallowly interpreted as smooth money spending, not socio-economic change. In conclusion, this article offers practical ideas on what evaluation practitioners could do to make evaluation more useful for knowledge users in policy implementation.
Keywords
Evaluation use, policy implementation, cohesion policy, evidence-based
policies, knowledge utilization
Funding
The data analyzed in this study were collected as a part of a more comprehensive study of the management and implementation of cohesion policy in Poland. The study was commissioned by the Polish Ministry of Regional Development National Evaluation Unit, co-financed by the European Union European Regional Development Fund, and executed by the research company Evaluation for Government Organizations (EGO s.c.).
1. Introduction
The ultimate goal of evaluation is "social betterment" (Henry, Mark 2003; Christie 2007). It should be achieved by providing policy actors with research-based knowledge that gives them a better understanding of public policies and programs, and improves their targeting, design, and implementation. Ultimately, such evidence-informed policies and programs should be more effective in serving citizens.
In practice, this logic tends to be challenged by the complexity of policy implementation systems in at least four ways. First, actors engaged in policy implementation are highly diverse in terms of their backgrounds and objectives (politicians, bureaucrats, NGOs, media, experts) and their positions in the multi-level governance system (international agencies, national and regional actors). They have different goals, and they naturally have different knowledge needs.
Second, the spectrum of knowledge types required for running successful policies and programs spans from knowledge about the context in which the program is implemented, through technical know-how, to knowledge about effects and explanatory knowledge about the causal mechanisms of socio-economic change (Ekblom, 2002; Nutley et al., 2003). No single type of evaluation inquiry can address this broad spectrum. Gathering these varied types of information must be tackled by different research approaches and cumulative evidence (Petticrew & Roberts, 2003).
Third, evaluation is just one of the sources of knowledge that policy actors use.1 Policy actors can gain insights from many sources such as, to name just a few, controls, audits, monitoring of programs, performance analysis, informal contacts with beneficiaries, and knowledge exchange networks of public managers. These sources sometimes complement, but often compete for, the policy actors' attention (Davies et al., 2010; Newcomer & Brass, 2016; Nutley et al., 2007; Weiss, 1980).
Last, evaluative insights, even of high relevance and quality, are not always incorporated into policy learning processes. Individual and organizational actors absorb information and learn in complex, non-linear ways (Argyris, 1977; Leeuw et al., 1994; Lipshitz et al., 2007; Olejniczak, Mazur, 2014; Weiss, Bucuvalas, 1980).
The challenge of aligning the production of evaluation studies with the knowledge needs of decision-makers has been the focus of both the theory and practice of evaluation utilization. The alignment challenge has been explored for decades (Weiss, Bucuvalas, 1980; Shulha, Cousins, 1997; Johnson et al., 2009). However, so far limited attention has been given to the extent to which evaluation complements or competes with other sources of knowledge in complex systems of public program implementation. Therefore, the question addressed here is:
How useful are evaluation studies as vehicles for promoting learning for actors involved in designing and implementing complex public policies?
To address this question, we take a user-centered perspective. We frame evaluation as a service provided to "knowledge users": decision-makers involved in the implementation of public policies and programs. We begin our article by providing a framework that positions evaluation practices in the complex system of multi-level policy implementation. We discuss the main types of knowledge that can be provided to different policy actors.
Second, we present findings on the main sources of learning for the staff responsible for the implementation of a complex policy. We use the case of the cohesion policy implemented in Poland (2007-13 programming period). We focus on the role of evaluation in the spectrum of knowledge sources about the processes, effects and mechanisms of program delivery.
1 As quoted by Weiss et al. (2005): "Evaluation is fallible. Evaluation is but one source of evidence. Evidence is but one input into policy…"
Third, we discuss the implications of our findings for evaluation practitioners who want to operate effectively in complex policy systems. We lay out key trade-offs that have to be addressed, and we draw upon the experience of evaluation units in shaping learning about cohesion policy in Poland and across the European Union.
2. Learning in complex policy and program implementation systems
2.1 Framework for understanding policy and program implementation
Program and policy implementation is a complex process that has been
discussed in the management literature for many years, well before
the introduction of the European Union cohesion policy (May 2003). We
offer a general logic of public program and policy implementation in Figure 1.
Figure 1: The universal logic of program and policy implementation
Source: Inspired by Ostrom (2005).
The basic assumption is that public funds are transferred, in the form of monetary aid or service activities, through a policy implementation system to certain target groups. If this aid arrives in a receptive context, it will trigger desired behaviors in the target groups, and those behaviors should eventually lead to a positive, sustainable socio-economic change. Thus, the ultimate goal of policy is a desirable socio-economic change that addresses local challenges and problems, and the public funds are used to modify the behaviors of those target groups that can bring about this positive change.
The system of policy implementation is institutional, and involves the procedural machinery of public agencies responsible for targeting promising beneficiaries and delivering aid smoothly (legally and on time).
As we see in Figure 1, public institutions involved in this policy implementation system can engage in three groups of processes. First, "strategic planning" provides strategic documents, objectives and targets for interventions. It entails activities such as: (a) diagnosis and planning, (b) consultation and negotiations, and (c) coordination and alignment with the changing environment. In cohesion policy terminology, this is the domain of agencies assigned the responsibilities of Coordinating Bodies, and of programming units within Managing Authorities.
Second, "operational processes", focus on spending and absorbing fi-
nancial aid coming from the European Union. Operations cover sub-
processes of (a) information and promotion given to beneficiaries po-
tential applicants of the projects, (b) application and selection of the most
promising beneficiaries, (c) and financial management. In a cohesion
policy terminology this is the domain for agencies called Financing Au-
thorities, and Intermediate & Implementing Bodies.
Finally, "knowledge delivery" involves activities designed to produce
knowledge to improve the system's operations (single loop learning),
and to gain better understanding of socio-economic phenomena that are
addressed by cohesion policy (double loop learning) (Argyris, Schon
1995; Fiol, Lyles 1985). Knowledge production encompasses evaluation,
monitoring, performance auditing, and purchase and other expertise
elaborated fully below. It cohesion policy it is assigned to monitoring
and evaluation units, audit and control bodies.
The outcomes of policy implementation are typically measured by three indicators. The ultimate success indicator is positive socio-economic change. However, the observable effects of change are often delayed in time, and assessing policy implementation by measuring final outcomes is difficult. Thus, policy actors use more process-oriented indicators, such as the level of funds absorption. They assume that the timely and legal use of public funds by beneficiaries is a proxy for successful policy implementation. In practice, this indicator is more likely to measure the efficiency of the operational processes of the implementation system; it will not measure the actual rationality of the strategic policy direction, nor the utility of the policy for local beneficiaries. The third indicator measures "knowledge gains": lessons learned and mistakes that have been corrected or avoided over time. Such gains can be used in future planning for the next generation of policies and programs.
Stakeholders are expected to assess policy delivery and provide feedback to the institutions of the implementation system. In the case of cohesion policy, these stakeholders include the countries that are net payers of the policy, public opinion in the EU member states, the media, interest groups, and other European institutions such as the European Commission and the European Parliament.
2.2 Types of knowledge for public policy
In our analysis, we are especially interested in knowledge use and learning in policy delivery. Thus, we now focus on the knowledge delivery processes within the system of policy implementation. In broad terms, there are five types of knowledge that may be produced in this setting (Nutley et al., 2003; Olejniczak et al., 2016):
Knowledge about policy issues (know-about): information about the spatial and temporal distribution of socio-economic problems, and the needs, expectations and characteristics of the targeted population;
Knowledge about policy stakeholders (know-who): awareness of which actors should be involved in the policy process to develop and implement solutions;
Knowledge about effects (know-what): evidence on what policy approaches worked, and what solutions and strategies produced desired outcomes in the past;
Knowledge about change mechanisms (know-why): insights into why things work, and the causal mechanisms that lead to desired outcomes, as well as side effects;
Knowledge on operational issues (know-how): technical, operational knowledge about effective implementation procedures, activities and processes.
These five types of knowledge can be provided by different sources, such as evaluation studies, policy expertise, monitoring activities, performance audits, etc. The actual producers of knowledge can be actors external to the policy implementation processes, such as independent experts, research agencies, and audit companies, or units within the policy implementation systems, such as evaluation and monitoring units and internal audit teams.
3. Learning in cohesion policy – empirical findings
3.1 Scope and method of the study
Our research deals with the utility of evaluation studies as vehicles for providing learning for actors involved in designing and implementing public policy. We focus on the knowledge delivery activities that are within the system of implementation. In addition, we explore how these activities inform the staff of the public institutions responsible for the two other types of implementation activities: strategic planning and operational processes (Figure 1).
Out of the five types of knowledge discussed above, we concentrate on three: know-what, know-how, and know-why. The rationale for limiting attention to these three is that, in theory, evaluation may potentially provide all three types of knowledge. The other two (know-about and know-who) are the domain of different types of disciplined inquiry, especially policy analysis (Lincoln and Guba 1986).
As the case for our study, we have chosen the cohesion policy implementation system in Poland in the programming period 2007 to 2013. Learning in the context of cohesion policy has been discussed extensively in a number of publications (Batterbury, 2006; Rodríguez-Pose & Novak, 2013; Hojlund, 2014; Neacsu & Petzold, 2015); however, those studies do not explore the perspective of potential, individual users of knowledge.
Cohesion policy is especially helpful for analyzing knowledge use for several reasons. First, cohesion policy included a number of multi-sectoral programs, ranging from the labor market, trainings and institutional support, through enterprise innovation, to hard infrastructure. That broad scope makes its experience relevant to other public policies, as well as aid programs, across the world.
Second, there have been extensive evaluation activities undertaken to assess cohesion policy in Poland. A total of 976 evaluation studies were completed over the programming period (MIR 2014), making evaluation in Poland, at least in terms of volume, an ample opportunity for policy learning.
Finally, the European regulations guiding cohesion policy are standard across the countries and regions of the European Union. Thus, the Polish case provides an opportunity for undertaking comparative studies to support generalization across national cases.
Our findings are based on a mixed-method descriptive research design executed on an extensive scale. The study covered all 116 institutions within the Polish cohesion policy system, and it was part of a bigger ex post evaluation of the cohesion policy implementation system in Poland.2
Our basic method was an online survey of public servants involved in program management. The heads of each institution received a link to the survey with a kind request to pass this link on to experienced
employees, defined as "having at least 3 years of involvement in cohesion policy implementation, and 3 years of employment in the particular institution". In total, 945 responses were collected, with the majority of these from senior agency staff and heads of program units. Referring to Figure 1, these were representatives of agencies running programs' strategic and operational processes (13 from Coordinating Bodies, 470 from Managing Authorities, 154 from Intermediate Bodies, and 308 from Implementing Authorities).
2 The full report, EGO s.c. (2013) "Ocena systemu realizacji polityki spójności w Polsce w ramach perspektywy 2007-2013", is available in the National Evaluation Unit database: https://www.ewaluacja.gov.pl/media/24655/ggov_290.pdf
The survey respondents were asked to assess, on a 5-point scale (from strongly agree to strongly disagree), statements about the role of several potential knowledge sources for learning in their organization. Ten potential sources of information were included in the survey, in addition to our subject of interest, evaluation studies. They were:
1) monitoring of physical progress,
2) monitoring of financial progress,
3) findings from project controls,
4) external controls (Supreme Court, tax office),
5) trainings, postgraduate studies,
6) conferences related to the area of respondents' work,
7) everyday contacts with program beneficiaries,
8) cooperation with other entities in National Strategic Reference
Framework system (NSRF),
9) cooperation with national and international actors outside NSRF
system, and
10) press articles.
Separate answers were given for each source with respect to each of the three types of knowledge: implementation processes, program impact, and mechanisms of change (the questions are provided in Appendix 1). To calculate the utility of each source, we summed the numbers of respondents selecting "strongly agree" and "agree" for that source.
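For readers who wish to replicate this scoring, the computation reduces to a "top-two-box" share per source. A minimal sketch in Python follows; the response data in it are illustrative, not the study's actual data.

# Minimal sketch of the scoring rule described above: for each source,
# compute the share of respondents answering "strongly agree" or "agree".
# The response lists below are invented for illustration only.
from collections import Counter

responses = {
    "Evaluation studies": ["agree", "disagree", "strongly agree",
                           "neither agree nor disagree"],
    "Everyday contacts with program beneficiaries": ["strongly agree",
                                                     "agree", "agree",
                                                     "disagree"],
}

TOP_TWO = {"strongly agree", "agree"}

def top_two_share(answers):
    # Share of answers falling into the two highest agreement categories.
    counts = Counter(answers)
    return sum(counts[a] for a in TOP_TWO) / len(answers)

for source, answers in responses.items():
    print(f"{source}: {top_two_share(answers):.1%}")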
The survey was complemented by a series of interviews (n=78) with key players in the system (usually department directors of leading institutions) and experts (mostly professors dealing with the cohesion policy system or public administration). Referring to Figure 1, these were actors involved in the strategic planning processes of policy implementation, and directors of departments involved in operational processes.
The interviews were designed to examine further the role of evaluation in learning. We asked interviewees about their main sources of information during different aspects of implementation processes (strategic, operational). We inquired whether they remembered particular studies that had helped them in decision-making. All interviewees were asked to assess, on a scale of 2 to 5 (the old Polish school grade system, in which 2=unsatisfactory and 5=excellent), the utility of evaluation, monitoring, and audits and controls for decision-making. Apart from grading, they provided justifications for their assessments and illustrated them with examples. We also asked about the perceived improvement of knowledge delivery activities between the two programming periods (2004–06 vs. 2007–13). To the interviewees' answers we applied the magnitude coding method, in order to reduce the qualitative data and represent the interview data quantitatively (Saldaña, 2013).
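To illustrate the aggregation step, a minimal sketch follows, assuming each interviewee's assessment has already been reduced to a grade on the 2-5 scale; the grades below are made up, not the study's data (the actual averages reported later are 3.6 for evaluation and 4.0 for monitoring).

# Illustrative sketch of aggregating magnitude-coded interview grades:
# grades on the old Polish school scale (2 = unsatisfactory ... 5 =
# excellent) are averaged per knowledge source. Values are invented.
from statistics import mean

grades = {
    "evaluation": [4, 3, 4, 4, 3],
    "monitoring": [4, 4, 5, 4, 3],
    "audit and control": [3, 4, 4, 3, 4],
}

for source, values in grades.items():
    print(f"{source}: average grade {mean(values):.1f} on the 2-5 scale")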
The survey and interviews were conducted between April and June 2013, as part of a more comprehensive study of the management and implementation of cohesion policy in Poland commissioned by the Polish Ministry of Regional Development.
3.2 Findings
In the EU programming period 2007–2013, Poland was the largest beneficiary of cohesion policy in the EU, with an allocation of €67 billion from a total budget of €347 billion (19.3%). The entire cohesion policy package given to Poland was divided into five national Operational Programs and 16 regional Operational Programs. Each program had a distinctive structure of strategic goals and targeted groups of beneficiaries.
To deliver such an extensive and complex aid package to final beneficiaries, the largest national CP implementation system in Europe was established. The delivery system consisted of around 116 public institutions and almost 12,000 civil servants involved in strategic planning, operational processes and knowledge delivery (MIR 2013). In cohesion policy terminology, the implementing agencies were divided into Financing Authorities, Managing Authorities, and Intermediate & Implementing Bodies.
Applying a user-centered perspective, we assume that all of those 12,000 public agency staff involved in the cohesion policy implementation system could have been potential users of evaluation. The larger part of this population dealt with the operational processes of the Operational Programs (information and promotion given to beneficiaries, project selection, and financial management), while a smaller group was responsible for strategic issues, including program design, modification, and financial reallocations. However, since the competencies of these managers often overlap, we refer to both of these groups together as "program managers".
Regarding the production of evaluative knowledge, within the CP implementation system 59 evaluation units were responsible for planning and conducting evaluations (mostly commissioning the studies to external contractors). The evaluation units were located in the structures of Managing Authorities and Intermediate & Implementing Bodies (National Evaluation Unit 2011). A total of 976 evaluation studies were completed through 2014, and the average number of studies completed per year in the period 2008–2014 was over 140 (MIR 2014). It is estimated that 40% of those studies were of a strategic nature, and the rest examined operational processes (EGO 2013). Given the large number of evaluations completed, users could have gained a substantial amount of useful knowledge. Yet, as we present below, this was not necessarily so.
Let us now have a closer look at the users of evaluation: the program managers of cohesion policy in Poland. Of the 945 surveyed program managers, a little less than one third declared that they had learned about implementation processes from evaluation studies (Figure 2).
The most popular source of knowledge about program implementation appears to be everyday contacts with program beneficiaries and applicants. Two other sources were indicated by more than half of the respondents: on-site project controls (performed by managing/implementing authority representatives) and, rather surprisingly, trainings and postgraduate studies.
It is worth mentioning that in our survey we distinguished monitoring of physical progress from monitoring of financial progress.3 Combined, they put monitoring at the top of the list of popular sources of information about the program implementation process.
Figure 2: In our department/team we learn about implementation PROCESS from (% of answers "strongly agree" and "agree"):
Evaluation studies: 32.8%
Monitoring of physical progress: 41.8%
Monitoring of financial progress: 42.3%
Project controls: 51.7%
External controls (Supreme Court, tax office): 42.4%
Trainings, postgraduate studies: 50.3%
Conferences related to the area of our work: 34.6%
Everyday contacts with program beneficiaries: 57.1%
Cooperation with other entities in NSRF system: 33.4%
Cooperation with actors outside NSRF system: 14.6%
Press articles: 16.0%
3 We believe this distinction is justified, as it is popular among program implementing units. It was also interesting to know which type of monitoring respondents had in mind when they declared that they learn from it about program effects.
As one would expect, the declared role of evaluation is greater when it comes to gaining knowledge about program effects, with over 41% of respondents declaring it useful (Figure 3). Evaluation studies
were only the fifth most popular of the 11 analyzed sources for learning about program effects.
As in the case of gaining knowledge about implementation processes, respondents most frequently reported that their knowledge about program effects comes from feedback from beneficiaries. The second source, indicated by more than half of the respondents, was project controls, and the third most used was monitoring of physical progress.
Figure 3: In our department/team we learn about program IMPACT from (% of answers "strongly agree" and "agree"):
Evaluation studies: 41.4%
Monitoring of physical progress: 48.0%
Monitoring of financial progress: 45.3%
Project controls: 52.0%
External controls (Supreme Court, tax office): 40.1%
Trainings, postgraduate studies: 32.4%
Conferences related to the area of our work: 36.1%
Everyday contacts with program beneficiaries: 52.9%
Cooperation with other entities in NSRF system: 28.3%
Cooperation with actors outside NSRF system: 14.8%
Press articles: 24.5%
The survey results for knowledge about the mechanisms of change are very much the same as in the case of knowledge about effects (98% correlation). Evaluation studies fell to fourth place in terms of use, with less than 40% of the respondents agreeing that one may learn about mechanisms from evaluation studies (Figure 4). Everyday contacts with program beneficiaries were again the most often used source, and the only option indicated by more than half of the respondents.
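The reported 98% figure can be reproduced from the answer shares shown in Figures 3 and 4. A quick check, under the assumption that the figure refers to a Pearson correlation computed across the 11 sources:

# Reproducing the ~98% figure from the answer shares in Figures 3 and 4,
# assuming it is a Pearson correlation computed across the 11 knowledge
# sources (listed in the same order as in the figures).
from statistics import correlation  # available since Python 3.10

impact    = [41.4, 48.0, 45.3, 52.0, 40.1, 32.4, 36.1, 52.9, 28.3, 14.8, 24.5]
mechanism = [39.5, 40.2, 36.6, 47.5, 37.1, 30.3, 32.9, 50.7, 25.4, 15.9, 19.4]

print(f"Pearson r = {correlation(impact, mechanism):.2f}")  # -> 0.98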
Figure 4: In our department/team we learn about MECHANISM of change from (% of answers "strongly agree" and "agree"):
Evaluation studies: 39.5%
Monitoring of physical progress: 40.2%
Monitoring of financial progress: 36.6%
Project controls: 47.5%
External controls (Supreme Court, tax office): 37.1%
Trainings, postgraduate studies: 30.3%
Conferences related to the area of our work: 32.9%
Everyday contacts with program beneficiaries: 50.7%
Cooperation with other entities in NSRF system: 25.4%
Cooperation with actors outside NSRF system: 15.9%
Press articles: 19.4%
The findings from the interviews paint a slightly more favorable picture of evaluation than the survey. Our interviewees were asked to assess the overall utility of evaluation on the old Polish school grade scale (from 2 to 5, where 2 is unsatisfactory and 5 is excellent). The average of this assessment was 3.6, which means evaluation as a source of knowledge "passed the utility exam", but only slightly above the acceptable minimum. This score is comparable to audit and control, but visibly lower than monitoring (4.0).4
Evaluation was the most often mentioned source of knowledge about effects, and the majority of our respondents noticed improvements in evaluation over time (in comparison to the 2004–06 period). Yet objections concerning the quality of evaluation reports are still frequent. Interviewees specifically mentioned the lack of analytical added value, simple repetition of monitoring data, and the ignoring of organizational and legal limitations and obligations in the policy system. Conclusions were often perceived as trivial, and recommendations were deemed hardly useful and often not meeting information needs.
4 Only these three sources of information were discussed during interviews.
Some respondents identified variations in evaluation utility depending on the type of evaluation study. The most useful studies were those of a diagnostic character, and those involving program managers.5 However, mid-term studies were considered routine obligations rather than responses to actual information needs. Most respondents could not recall any ex-post evaluation studies.
5 In fact, those studies might not fit the definition of evaluation, and resemble policy analysis more closely.
The usefulness of contacts with beneficiaries as a leading source of knowledge for program managers is interesting when combined with the finding that cooperation with actors outside of the cohesion policy system was the least useful source in all three cases, with evaluation studies falling in the middle. The managers in the cohesion policy system seem to be inward-looking, and rely on simple feedback signals from beneficiaries. The lack of interest in contacting and sharing knowledge with academia or officials involved in other public policies might suggest that actors in the cohesion policy system are not much interested in the impact of their programs in a wider socio-economic perspective. That observation corresponds with the fact that the leading sources are quite similar for all three types of knowledge. Managers tend to use information on implementation, and even when asked about learning about program effects, respondents interpreted "effects" as financial matters, implementation barriers, compliance with rules, and very basic outcomes.
4. Discussion and implications for evaluation practice
4.1 Discussion
For knowledge users, defined as the staff of the agencies responsible for the implementation of cohesion policy in Poland, evaluation studies were viewed as a limited source of knowledge. The main sources of operational knowledge, as well as knowledge on what works and why, were everyday contacts with beneficiaries and project controls. These findings are not unique to cohesion policy. Others have found that everyday, unstructured contact with beneficiaries provides the preferred feedback on performance for public managers (Kroll, 2013). However, this source brings the risk of the "availability heuristic": managers build their understanding of the overall situation of the program on vivid stories from outspoken beneficiaries.
The fact that evaluation studies did not constitute one of the top choices when information was needed may explain the recently observed phenomenon that, despite the large number of studies conducted, evaluation studies have almost no influence on the decision-making process (Kupiec, 2016). If most decisions are not informed by evaluation findings, we may assume they are supported by other sources of knowledge.
The findings of this study also correspond well with other research on the cohesion policy evaluation system in Poland. One of the reasons why evaluation studies are not strong sources of learning for program managers may be the amount of time it takes to complete an evaluation study. Based on a sample of 235 studies, Kupiec (2015) calculated that it takes seven months on average from data collection to the completion of an evaluation study. Program managers interviewed as part of that research reported:
"evaluation takes too long to have real impact. When it comes to operational management, evaluation is useless, because it only repeats what we have already known, or what we have already changed. Waiting for evaluation recommendations is worse than making even bad decisions"
"if there is a problem, and the answer is needed immediately, it is not possible to get it from evaluation, for procedural reasons."
As one can see, the time pressure is most evident in the case of informing operational decision-making. That is probably why the utility of evaluation studies was lowest in providing knowledge about implementation processes. However, this also raises the question of why the majority of commissioned studies examined implementation issues (over 60%). This question becomes even more intriguing when we realize that in the other 40% of studies, which were supposed to deal with strategic problems, the vast majority of recommendations also focused on implementation processes (Kupiec, 2014). The lack of strategic recommendations may also account for the fact that evaluation studies are not found among the more useful sources of knowledge about program effects and the mechanisms leading to those effects.
4.2 Implications
These rather sobering findings urge us to ask what evaluation practitioners can do to make evaluation more visible and useful for knowledge users in policy and program implementation processes. To address this challenge, we think it would be beneficial to identify the trade-offs that evaluation practitioners should address in order to provide useful information to decision-makers. In this section we focus especially on evaluation units, since they are the central agents of evaluation activities in cohesion policy. However, our discussion may also be relevant for analysts and analytical units that serve other public policies.
Knowledge utilization might usefully be differentiated along two dimensions. The first is the type of knowledge. On the one hand, evaluation studies may focus on bringing more strategic, broader knowledge about the effects of policies and programs, and the mechanisms that produce successful outcomes. On the other hand, studies may focus on very technical, procedural and processual issues, thus providing policy practitioners with fine-grained operational knowledge.
The second dimension relates to the primary audience and evaluation objectives. Evaluation can be intended to inform the actors in the implementation system; in that case its primary function is learning, understood as improving strategic and operational activities over time. Or evaluation can be intended to inform external audiences such as policy stakeholders; in the case of cohesion policy these are the European Commission, EU net payers, public opinion and the media.6 In that situation, evaluation may be used to hold policy implementation staff accountable to the stakeholders.
These dimensions create four clear options for the evaluation units (Figure 5). Let us consider how the findings discussed above relate to this framework.
6 These may also be representatives of the domestic authorities of a particular country, if they are not involved in managing the program that is the subject of the evaluation.
Figure 5: Different uses of knowledge learned from evaluation
Source: own work
In Figure 5, studies in cell A focus on accountability for timely and legal spending. We believe that evaluation studies offer little additional value here, because this area is well covered by control activities, as well as by the extensive monitoring systems developed at the regional, national and European levels of cohesion policy.
Studies in cell B focus on accountability for effects. Ex post evaluations are undertaken by the European Commission. There is an opportunity here for the national evaluation units, whose activities could be aimed at showing the public and the main stakeholders the value for money of EU co-financed interventions. However, two issues could potentially limit evaluation units' actions in this area. First, stakeholders (especially the media and the public) could perceive the units as not fully independent and therefore not impartial, since they are located within the implementation system producing the outcomes they try to measure. That, in turn, could render evaluation studies less credible in the eyes of the knowledge users. Second, assessing long-term effects means the studies must extend beyond one programming period. Longer-term evaluations require the institutional continuity of evaluation units. This is often not the case in cohesion policy, since the units are parts of the Managing Authorities assigned to particular Operational Programs, and with new programming periods, new implementation structures are introduced.
Studies in cell C are promising for evaluation units, as they can provide managers with balanced and objective views of ongoing program implementation. Evaluation can help in tackling the managers' availability heuristic: instead of making assumptions based on single stories from beneficiaries, it can create a more balanced and representative picture of reality. In that case, evaluation units could also, using a spectrum of organizational learning tools, analyze data to inform systematic data-driven reviews (Hatry, Davies, 2011; Olejniczak, 2015). During such sessions, held regularly, evaluation officers inform program managers, raise explanatory questions, and search for mechanisms that explain current implementation bottlenecks.
Finally, cell D is, in our view, the most promising for evaluation units. Evaluation studies could provide program managers, both strategic and operational staff, with insight into the actual effectiveness of the theories of change that underlie particular interventions. That, in turn, would allow managers to correct interventions "on the go", providing them with data on target populations and change mechanisms so they could improve programs. For this purpose, only evaluations undertaken by national and regional evaluation units could do the job, because those units are close enough to program managers to provide timely input.
However, evaluations in cell D also require evaluation units to tackle additional challenges. First, evaluation units need to educate their users in the implementation system. Our research shows that program managers frequently confuse products with effects. Evaluation units need to explain the difference, and convince managers of the usefulness of looking beyond a checklist of products to the strategic goal of social change. In addition, evaluation units need to raise awareness among managers of the importance of knowing why and how interventions work: the mechanisms that change beneficiaries' reactions to the provided aid. This knowledge is crucial for the eventual success of implemented programs.
Second, evaluation units will have to work on the timing of their evaluations. They need to deliver timely explanations of mechanisms, and findings on the first effects of interventions, in order to give managers enough time to react and incorporate the data to improve programming.
In the reality of cohesion policy, evaluation units will try to cover more than one option. However, it is important to be aware of the trade-offs and potential tensions, since each of these options requires different levels of certain resources and skills, and demands different roles from evaluators and their supervisors in evaluation units. Therefore, we encourage evaluation units to undertake strategic reflection and choose their primary focus. This would allow units to be more effective in their support of learning.
5. Conclusions
We have applied a user-centered perspective to the analysis of evaluation as a vehicle for promoting learning for actors involved in designing and implementing complex public policies. This means we relied on the declarations (in surveys and interviews) of the staff of public agencies and key actors involved in the implementation of cohesion policy in Poland. The collected data show that: (a) the utility of evaluation studies, in comparison to other sources of knowledge, is limited; (b) evaluation reports are used to some extent as a source of knowledge on effects and mechanisms; however, (c) "effects" are shallowly interpreted as smooth money spending, not socio-economic change.
In our opinion, the crowded landscape of evidence sources discussed above can be treated not only as a challenge but also as an opportunity for evaluation. Evaluation units across the cohesion policy system have experience, due to the scope of their work, in understanding social research and in speaking the languages of both policy and research. This comparative advantage gives them a unique opportunity to evolve from being mere contractors of isolated reports into real knowledge brokers providing information to lead reflexive policy learning among the decision-makers of cohesion policy.
We have suggested that the limited resources of evaluation units in a complex policy delivery system should be focused primarily on serving the knowledge users who are responsible for policy implementation, in both strategic and operational activities. An especially promising role would be increasing knowledge on the mechanisms that drive programs' performance (what works and why). In terms of improving operational knowledge, evaluation units could support learning sessions based on monitoring data. Finally, evaluation units could further explore synergies with other evidence-based sources of information, by synthesizing different knowledge sources and building policy arguments based on evidence. Such a reorientation could hopefully lead to a situation where evaluation and other sources of knowledge complement each other, while conclusions from evaluation studies have visible utility in the policy decision-making process.
Acknowledgements
The authors would like to express their gratitude to the members of the research team involved in data collection: Bartosz Ledzion, Andrzej Krzewski, Anna Borowczak, Marek Kozak, Paweł Kościelecki, Katarzyna Seferyńska, Paweł Śliwowski, Anna Domaradzka and Łukasz Widła. The authors would also like to thank Piotr Strzeboszewski and Stanisław Bienias from the Polish National Evaluation Unit, and two anonymous reviewers of this article for their critical comments.
Appendix 1 – survey questions
The following set of survey questions, used to measure evaluation use, is an excerpt from a bigger survey that explored all three aspects of the Cohesion Policy implementation system (strategic, operational and knowledge delivery). The survey was administered online.
Each of the three questions was rated on the same five-point scale (strongly agree / agree / neither agree nor disagree / disagree / strongly disagree) and covered the same eleven sources. The question stems were:
In our department/team we learn about implementation process from:
In our department/team we learn about program impact from:
In our department/team we learn about mechanism of change from:
The sources rated under each question were:
Evaluation studies
Monitoring of physical progress
Monitoring of financial progress
Project controls
External controls (Supreme Court, tax office)
Trainings, postgraduate studies
Conferences related to the area of our work
Everyday contacts with program beneficiaries
Cooperation with other entities in NSRF system
Cooperation with (inter)national actors outside NSRF system
Press articles
Bibliography
[1] ARGYRIS, C. and D. A. SCHON. Organizational Learning II: Theory, Method, and Practice. Reading, MA: FT Press, 1995.
[2] ARGYRIS, C. Double-loop learning in organizations. Harvard Business Review, 1977, Vol. 55, No. 5, pp. 115-125.
[3] BATTERBURY, S. Principles and Purposes of European Union Cohesion Policy Evaluation. Regional Studies, 2006, Vol. 40, No. 2, pp. 179-188.
[4] CHRISTIE, C. A. Reported Influence of Evaluation Data on Decision Makers' Actions: An Empirical Examination. American Journal of Evaluation, 2007, Vol. 28, No. 1, pp. 8-25.
[5] DAVIES, H., S. NUTLEY and I. WALTER. Using evidence: how social research could be better used to improve public service performance. In: WALSHE, K., G. HARVEY and P. JAS (eds.). Connecting Knowledge and Performance in Public Services: From Knowing to Doing. Cambridge: Cambridge University Press, 2010, pp. 199-225.
[6] EGO. Ocena systemu realizacji polityki spójności w Polsce w ramach perspektywy 2007-2013. Warszawa: Ministerstwo Rozwoju Regionalnego, 2013.
[7] EKBLOM, P. From the Source to the Mainstream is Uphill: The Challenge of Transferring Knowledge of Crime Prevention Through Replication, Innovation and Anticipation. Crime Prevention Studies, 2002, Vol. 13, pp. 131-203.
[8] FIOL, M. and M. LYLES. Organizational learning. Academy of Management Review, 1985, Vol. 10, No. 4, pp. 803-813.
[9] HATRY, H. and E. DAVIES. A Guide to Data-Driven Performance Reviews. Washington, D.C.: IBM Center for The Business of Government, 2011.
[10] HENRY, G. T. and M. M. MARK. Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions. American Journal of Evaluation, 2003, Vol. 24, No. 3, pp. 293-314.
[11] HOJLUND, S. Evaluation use in the organizational context - changing focus to improve theory. Evaluation, 2014, Vol. 20, No. 1, pp. 26-43.
[12] JOHNSON, K., L. GREENSEID, S. TOAL, J. KING, F. LAWRENZ and B. VOLKOV. Research on Evaluation Use: A Review of the Empirical Literature from 1986 to 2005. American Journal of Evaluation, 2009, Vol. 30, No. 3, pp. 377-410.
[13] KROLL, A. The Other Type of Performance Information: Nonroutine Feedback, its Relevance and Use. Public Administration Review, 2013, Vol. 73, No. 2, pp. 265-276.
[14] KUPIEC, T. Użyteczność ewaluacji jako narzędzia zarządzania regionalnymi programami operacyjnymi. Studia Regionalne i Lokalne, 2014, Vol. 56, No. 2, pp. 52-67.
[15] KUPIEC, T. Ewaluacja regionalnych programów operacyjnych w warunkach prawa zamówień publicznych i finansów publicznych. Samorząd Terytorialny, 2015, No. 10, pp. 27-39.
[16] KUPIEC, T. Program evaluation use and its mechanisms: The case of Cohesion Policy in Polish regional administration. Zarządzanie Publiczne, 2016, Vol. 33, No. 3, pp. 67-83.
[17] LEEUW, F. L., R. C. RIST and R. C. SONNICHSEN. Can governments learn?: comparative perspectives on evaluation & organizational learning. New Brunswick; London: Transaction Publishers, 1994.
[18] LINCOLN, Y. S. and E. G. GUBA. Research, Evaluation, and Policy Analysis: Heuristics for Disciplined Inquiry. Policy Studies Review, 1986, Vol. 5, No. 3, pp. 546-565.
[19] LIPSHITZ, R., V. J. FRIEDMAN and M. POPPER. Demystifying Organizational Learning. Thousand Oaks: Sage Publications, 2007.
[20] MAY, P. Policy Design and Implementation. In: PETERS, B. G. and J. PIERRE (eds.). Handbook of Public Administration. London: Sage Publications, 2003.
[21] MIR. Potencjał administracyjny systemu instytucjonalnego Narodowych Strategicznych Ram Odniesienia na lata 2007-2013 (stan na 30 czerwca 2013 r.). Warszawa: Ministerstwo Infrastruktury i Rozwoju, 2013.
[22] MIR. Process of evaluation of the Cohesion Policy in Poland 2004-2014. Warsaw: Ministry of Infrastructure and Development, 2014.
[23] National Evaluation Unit. Process of Cohesion Policy Evaluation in Poland. Warsaw: Ministry of Regional Development, 2011.
[24] NEACSU, M. and W. PETZOLD. Policy learning and transfer in EU Cohesion Policy: the impact of events. Paper presented at the Regional Studies Association Conference "Cross-national policy transfer in regional and urban policy", 19 January 2015, Delft, The Netherlands.
[25] NEWCOMER, K. and C. BRASS. Forging a Strategic and Comprehensive Approach to Evaluation Within Public and Nonprofit Organizations: Integrating Measurement and Analytics Within Evaluation. American Journal of Evaluation, 2016, Vol. 37, No. 1, pp. 80-99.
[26] NUTLEY, S., I. WALTER and H. T. O. DAVIES. From Knowing to Doing: A Framework for Understanding the Evidence-Into-Practice Agenda. Evaluation, 2003, Vol. 9, No. 2, pp. 125-148.
[27] NUTLEY, S. M., I. WALTER and H. T. O. DAVIES. Using Evidence: How research can inform public services. Bristol: Policy Press, 2007.
[28] OLEJNICZAK, K. and S. MAZUR (eds.). Organizational Learning: A Framework for Public Administration. Warsaw: Scholar Publishing House, 2014.
[29] OLEJNICZAK, K. Focusing on Success: A Review of Everyday Practices of Organizational Learning in Public Administration. In: BOHNI NIELSEN, S., R. TURKSEMA and P. van der KNAAP (eds.). Success in Evaluation. New Brunswick: Transaction Publishers, 2015, pp. 99-124.
[30] OLEJNICZAK, K., E. RAIMONDO and T. KUPIEC. Evaluation units as knowledge brokers: Testing and calibrating an innovative framework. Evaluation, 2016, Vol. 22, No. 2, pp. 168-189.
[31] OSTROM, E. Understanding institutional diversity. Princeton, N.J.; Woodstock: Princeton University Press, 2005.
[32] PETTICREW, M. and H. ROBERTS. Evidence, hierarchies, and typologies: horses for courses. Journal of Epidemiology and Community Health, 2003, Vol. 57, No. 7, pp. 527-529.
[33] RODRÍGUEZ-POSE, A. and K. NOVAK. Learning processes and economic returns in European Cohesion policy. Investigaciones Regionales, 2013, Vol. 25, pp. 7-26.
[34] SALDAÑA, J. The coding manual for qualitative researchers. London; Singapore: Sage Publications, 2013.
[35] SHULHA, L. M. and B. J. COUSINS. Evaluation Use: Theory, Research, and Practice Since 1986. Evaluation Practice, 1997, Vol. 18, No. 3, pp. 195-208.
[36] WEISS, C. H., E. MURPHY-GRAHAM and S. BIRKELAND. An Alternate Route to Policy Influence: How Evaluations Affect D.A.R.E. American Journal of Evaluation, 2005, Vol. 26, No. 1, pp. 12-30.
[37] WEISS, C. H. and M. J. BUCUVALAS. Truth Tests and Utility Tests: decision-makers' frame of reference for social science research. American Sociological Review, 1980, Vol. 45, No. 2, pp. 302-313.
[38] WEISS, C. H. Knowledge Creep and Decision Accretion. Science Communication, 1980, Vol. 1, No. 3, pp. 381-404.
... Policy evaluation is an applied process of inquiry for collecting and synthesizing evidence that results in conclusions about a policy's value and merit (Mathison, 2004). Remarkably, evaluation is "but one source of evidence" (Weiss et al., 2005) and does not often imply a reliable judgment (Olejniczak et al., 2017). This limitation must be recognized. ...
... First, they are essential for carrying out evaluation studies. Second, they stimulate policy learning via information sharing with relevant experts and stakeholders (Maybin, 2015;Olejniczak et al., 2016Olejniczak et al., , 2017. The internal capacity to use knowledge in the policy process interacts with the external pressure on policymakers to be accountable, especially for "relevant" policy under scrutiny (Raudla et al., 2018;Rimkutė, 2015;Schrefler, 2010). ...
... Therefore, the picture is quite mixed. The OECD appraised the methodological quality of evaluations (not restricted to CP) in different EU countries through its "Regulatory Policy Outlook" (OECD, 2021). ...
Article
Full-text available
The European Union (EU), especially in the context of Cohesion Policy (CP), has played a crucial role in developing and promoting policy evaluation practices across its Member States. Evaluation systems across the Member States have been established to assess CP investments. Remarkably, the use of evaluation research and its contribution to stimulating policy learning has remained a “black box.” To address this issue, this article aims to develop a novel framework centered around four conditions for evaluation‐based policy learning, namely: (1) policy relevance, (2) resources and organizational settings, (3) quality of evaluation, and (4) evaluation culture. These conditions are retrieved from the existing literature on policy evaluation and applied to the six‐country cases across the EU. The findings suggest how loosening the formal EU evaluation requirements could affect policy learning in the Member States.
... The findings revealed a positive association between evaluation and project outcomes. Olejniczak, Kupiec and Newcomer (2017) also indicated that learning from evaluation results considerably increased the efficacy and efficiency of project performance. They argued that evaluation practices may aid organizations in learning from past mistakes and enhancing the success of upcoming projects. ...
... Implying that the best way to assessed whether a particular project has met its intended purpose is through evaluation, hence the need for managers to ensure that evaluation activities are done at the right time with the appropriate tools to prevent the project activities from deviating from it intended purpose. The findings are also consistent with the findings of Zhang and Yang (2018), who discovered that incorporating evaluation practices into the project management process increased project success rates, Blackwood et al. (2018), also revealed a positive association between evaluation and project outcomes, likewise, Olejniczak, Kupiec and Newcomer (2017) who indicated that learning from evaluation results considerably increased the efficacy and efficiency of project performance Also, the study simple slope analysis reveal an interaction between business environment, monitoring practices and project outcome, however, it was not statistically significant. This could be justified by the fact that irrespective of the actors in the business environment, tech start-ups continue to practice their monitoring activities. ...
Article
Full-text available
Issues relating to Monitoring and Evaluation (M&E) have been established as a key and fundamental tool for the successful implementation of projects regardless of the industry. The study therefore sought to address the following questions: what effect do monitoring and evaluation practices have on tech start-ups project outcomes, as well as the role that business environment play in the relationship between M&E and project outcomes. The study followed a positivist mind-set, relying only on quantitative methods and an explanatory research design. Primary data via structured questionnaire was obtained from 317 respondents in managerial positions in the tech industry and analysed using inferential and descriptive tools. The study found that monitoring practices had a positive significant effect on project outcome. Evaluation practices also had a positive significant effect on project outcome. Business environment was found to have a dampening significant moderating effect in the relationship between evaluation practices and project outcome. However, business environment did not have any significant effect in the relationship between monitoring practice and project outcome. These findings will enable project practitioners understand the dynamics of monitoring and evaluation and the business environment when it comes to project execution. It will further enable project managers, personnel, and donors recognize how significant M&E tools are when creating policies and managing performance. Moreover, tech start-ups should create policies that recognize the integration of M&E in their operations and business functions.
... This has not been a faster process because of the diversity of practices, skills, and expectations of evaluation providers and evaluators (Head, 2016). Moreover, political willingness also plays an important role (Olejniczak et al., 2017). Furthermore, if the quality of evaluations is low (e.g., Mastenbroek et al., 2016), politicians may be unwilling to consider the evaluation recommendations. ...
Book
This engaging and topical book comprehensively explores the complexities surrounding the EU Cohesion Policy, which has been addressing regional and urban development across Europe since the 1980s. Adopting a multidisciplinary approach, it not only considers the goals of this long-term investment policy, which is to reduce territorial disparities between Member States and their regions, but also considers the role it plays in the European integration process and the challenges the EU will face in its future.
... Four potential functions of the evaluation system. Source: own elaboration based on Olejniczak et al. (2017). ...
Article
Evaluation practice is vital for the accountability and learning of administrations implementing complex policies. This article explores the relationships between the structures of the evaluation systems and their functions. The findings are based on a comparative analysis of six national systems executing evaluation of the European Union Cohesion Policy. The study identifies three types of evaluation system structure: centralized with a single evaluation unit, decentralized with a coordinating body, and decentralized without a coordinating body. These systems differ in terms of the thematic focus of evaluations and the targeted users. Decentralized systems focus on internal users of knowledge and produce mostly operational studies; their primary function is inward-oriented learning about smooth programme implementation. Centralized systems fulfil a more strategic function, recognizing the external audience and external accountability for effects. Points for practitioners: Practitioners who design multi-organizational evaluation systems should bear in mind that their structure and functions are interrelated. If both accountability and learning are desired, the evaluation system needs at least a minimum degree of decentralization on the one hand and the presence of an active and independent coordination body on the other.
Article
The article presents practical problems that arise in the process of commissioning evaluations of regional operational programmes: overly long waits for study results, the excessive weight given to price when selecting bids, and the concentration of studies within an annual cycle. These problems stem from the public procurement and public finance regulations binding on local government units, and their effect is to limit the usefulness and use of evaluation in decision-making. In conclusion, the author proposes actions that could change this unfavourable situation.
Article
This article is about evaluation use in the area of EU operational programs implemented by Polish regional administration, which is uncharted territory in this context. The analysis is based on the assumption that evaluation is a long-term process producing a stream of knowledge that supports management decisions throughout the program lifetime. Three cases of regional programs, their managing authorities, and 44 evaluation studies completed by them between 2007 and 2012 were analyzed. The degree of evaluation use was found to be unsatisfactory and limited to minor modifications of the implementation process. The main barrier to evaluation use was the poor quality of evaluation studies: obvious and insignificant conclusions, and reports missing answers to key questions. These shortcomings resulted from other problems: the incompetence of evaluators and inappropriate research methodologies.
Chapter
The chapter presents a review of twenty novel practices used by public managers in ten OECD countries to support organizational learning and increase performance. It assesses to what extent those practices could be oriented toward success and learning from “what works.” The practices are grouped according to their operative principles, that is, the way they contribute to organizational learning: (1) structuring feedback, (2) communication, (3) experimentation, and (4) issue-oriented analysis.
Conference Paper
In this paper, we reflect on policy learning and transfer, in particular with regard to EU regional policy, and on the way in which the annual European Week of Regions and Cities – OPEN DAYS has contributed to them. We understand that the EU Cohesion Policy community is rather hybrid and that its policy learning and change depend on time, aid intensity, and EU and domestic contexts. A hybrid and 'open' event would therefore be best placed to create opportunities for transnational exchange and learning. Provided that the community is ready to learn, both along the policy cycle and from one event to the next, there is scope for (horizontal) policy learning. According to interviewed partners and the systematic evaluation by participants, the OPEN DAYS succeeds in reflecting the wide thematic spectrum of EU Cohesion Policy and the interests of the event partners and participants. As the most important annual event for the community, it has served as a platform for spreading good practice, discussing ideas and learning more about EU Cohesion Policy. Notably in times of policy reform, the OPEN DAYS has been instrumental in disseminating knowledge about reform options and positions. Feedback from partners also suggests that developing events such as the OPEN DAYS into a tool for policy transfer and change would require political will and coordination at the level of the EU institutions. Moreover, it would necessitate open communication and coordination among the EU institutions and with Member States and regions, and would thus deviate from the usual policy process. Last but not least, it would also require a more comprehensive understanding of the role of various stakeholders in the policy's delivery in domestic contexts.
Article
Evaluation units, located within public institutions, are important actors responsible for the production and dissemination of evaluative knowledge in complex programming and institutional settings. The current evaluation literature does not adequately explain their role in fostering better evaluation use. The article offers an empirically tested framework for the analysis of the role of evaluation units as knowledge brokers. It is based on a systematic, interdisciplinary literature review and empirical research on evaluation units in Poland within the context of the European Union Cohesion Policy, with complementary evidence from the US federal government and international organizations. In the proposed framework, evaluation units are to perform six types of brokering activities: identifying knowledge users’ needs, acquiring credible knowledge, feeding it to users, building networks between producers and users, accumulating knowledge over time and promoting an evidence-based culture. This framework transforms evaluation units from mere buyers of expertise and producers of isolated reports into animators of reflexive social learning that steer streams of knowledge to decision makers.
Book
This book presents a solid, research-based conceptual framework that demystifies organizational learning and bridges the gap between theory and practice. Using an integrative approach, authors Raanan Lipshitz, Victor Friedman and Micha Popper provide practitioners and researchers with tools for understanding organizational learning under real-world conditions.
Article
Public service organisations are preoccupied with understanding how good performance can be achieved: what matters is what works. But delivering high-quality services requires a far wider array of evidence than just that on effectiveness – we need, for example, knowledge about the scale, source and structuring of social problems; practical ‘know-how’ to support effective programme implementation in local contexts; and insights into the relationships between values and policy directions. Research can make an important contribution to the development of public services and policy programmes, and it can enrich debates about the nature of social problems and what works in addressing them. However, such positive research impacts are far from routine, and the impact of research is not always positive. Negative impacts may, for example, arise in situations where tentative or highly specific findings are seized upon too readily or applied too widely. Despite this, the overzealous use of research is not normally considered to be the main problem. Quite the opposite; researchers and others are often disappointed that clear findings are overlooked or ignored when decisions are made about the direction and delivery of services. This view is supported by many studies that have found that practice often lags behind the best available evidence about what works and that it may remain out of step for quite some time.
Article
This article is about evaluation use. It focuses on the well-known paradox that evaluation is undertaken to improve policy, but in fact rarely does so. The article also finds that justificatory uses of evaluation do not fit with evaluation's objective of policy improvement and social betterment. It explains why the paradox exists and suggests applying organizational institutional theory to explain evaluation use. The key argument is that, in order to explain all types of evaluation use, including non-use and justificatory uses, the focus needs to be on the evaluating organization and its conditioning factors, rather than on the evaluation itself.