Evaluation Units as Knowledge Brokers:
testing and calibrating an innovative framework
Karol Olejniczak - Centre for European Regional and Local Studies, University of
Warsaw, k.olejniczak@uw.edu.pl
Estelle Raimondo - George Washington University, eraimondo@gwu.edu
Tomasz Kupiec - EGO s.c. - Evaluation for Government Organizations,
t.kupiec@evaluation.pl
Paper presented at European Evaluation Society Biennial Conference
Dublin, 1-3 October 2014
For conference presentation see:
https://prezi.com/igblcaxcepdp/ees-knowledge-brokers_2014_09_20/
Abstract
Introduction
1 The Research Approach
2 Knowledge Brokers: Who They Are and What They Do. Findings from Literature Review
2.1 Knowledge Brokers: A Spectrum of Definitions
2.2 Activities of Knowledge Brokers
3 A Framework for Knowledge Brokering in Evaluation
3.1 Overview of the Framework
3.2 The Know-Do Gap in the Evaluation Context
3.3 Brokering Evaluative Knowledge
3.4 Mechanisms and Expected Change
4 Conclusion
References
Abstract
The Cohesion Policy of the European Union has been one of the most intensively
evaluated policies. Evaluation units perform a key role in this process. Despite the
massive production of evaluation reports, the utilization of knowledge remains limited.
The current evaluation literature does not explain well the reality of evaluation use
and the role of evaluation units in complex programming and institutional settings.
The article offers an empirically tested framework for the analysis of the role of
evaluation units as knowledge brokers. It is based on a systematic, interdisciplinary
literature review on knowledge brokering and empirical research on evaluation units in
Poland, with complementary evidence from a range of institutional contexts (US federal
government and international organizations). The proposed framework transforms
evaluation units from mere buyers of expertise and producers of isolated reports into
animators of "reflexive social learning" that steer streams of knowledge to decision-
makers.
Introduction
Twenty-eight countries of the European Union (EU) build an "ever closer union"
through, among other things, joint policies (Dinan, 2010). One of the main collective
policies, underpinning this union, is the Cohesion Policy, which aims at socio-economic
development and absorbs around a third of the entire EU budget (around EUR 351.8 billion for
2014-2020). It is co-financed by the EU and implemented by national and regional
administrations in the form of operational programmes, priorities and activities that
range from investment in heavy infrastructure, through R&D, to human resources.
For each program and activity, conducting an evaluation study is a legal obligation.
As a result of this regulatory architecture, the Cohesion Policy has become one of the
most intensively evaluated policies in the European Union (Bachtler, Wren, 2006).
Evaluation units perform a key role in the process of the Cohesion Policy evaluation.
Located within the central or regional government and assigned to particular
Operational Programmes, they commission studies and convey their findings to
institutions responsible for planning and implementing Cohesion Policy interventions.
Those units are usually small (1 to 7 people) and they often combine evaluation with
monitoring assignments as well as, less commonly, strategic planning.* Currently there
are around 270 evaluation units across the 28 member states of the European Union.
Poland is an exemplar of evaluation practice in the framework of the Cohesion
Policy, given that the country is one of the main beneficiaries of the policy.
Since 2004, Poland has implemented a package of over 100 priorities and actions
worth over EUR 100 billion. As a result of the extensive evaluation mandate
surrounding the Cohesion Policy, Poland has also invested in evaluation capacity by
forming a network of 57 evaluation units in charge of contracting out over 900
evaluation studies (Olejniczak, 2013).
While evaluation is increasingly ubiquitous, a significant challenge lies in ensuring its
use in decision-making processes. Recent studies indicate a visible gap between,
on the one hand, the production of evaluation reports and, on the other, their limited
use by the public, decision-makers or even programme managers (Kupiec, 2014;
Olejniczak, 2013). Furthermore, some decision-makers are under the impression that
the proliferation of evaluation reports can overload them with meaningless
information and redundant studies (EGO, 2010).
This situation raises a practical question for evaluation units: how can they combine
the inflow of single reports into streams of knowledge that in turn would address
knowledge needs of decision-makers and managers? This practical question is
connected with the more general issue of finding effective mechanisms of evaluation
use and influence in complex institutional environments (Rist, Stame, 2011; Hojlund,
2014a).
The current evaluation literature does not explain well the reality of evaluation use in
complex program and institutional settings, such as the Cohesion Policy. Firstly, the
dominant approach is to analyse use from the perspective of single reports,
* Organizational arrangements differ across countries. In some countries, like Poland, each
Operational Programme has a separate evaluation unit, while a National Unit plays the role of
coordinator and initiator of horizontal studies. In other countries (e.g. Hungary), there is only one
National Evaluation Unit and a network of staff from each Operational Programme assigned to
evaluation responsibilities.
underestimating that change in decision-making is cumulative, triggered by evidence
streams trickling down into programmes or organizations (e.g. Ferry, Olejniczak,
2008; Hojlund, 2014a; 2014b; Leeuw & Furubo, 2008).
Secondly, the evaluation literature explores relations between evaluators (producers)
and direct users. It overlooks the fact that government units that contract out
evaluations are usually not the final users of the reports but rather brokers between
knowledge producers (evaluators) and actors involved in policy decisions (Johnson et
al., 2009; Laubli Loud and Mayne, 2013).
Finally, there is little literature on the role of evaluation units in general, and a dearth
of empirical evidence on their work as intermediaries between producers and users in
particular.
A new conceptual framework is therefore needed to fully explain the role of
evaluation units in steering knowledge flows from producers to users.
This article proposes a framework for analysing and reshaping evaluation units' role
by taking a new perspective. Firstly, the use of evaluation is analysed from a systems
perspective - as flows of information and knowledge streams (Rist, Stame, 2011).
Evaluation units are in turn portrayed as Knowledge Brokers, playing the role of
intermediaries between producers and users in charge of accumulating and steering
knowledge flows.
Secondly, it is assumed that the current challenge faced by evaluation units in
enhancing evaluation use is a particular manifestation of a more general problem of
the know-do gap, which is quite common across different disciplines and policy fields
(e.g., the gap between university research and its business applications, laboratory
discoveries and general health-care practice, or scientific evidence on environmental
change and actual policy decisions). The article thus seeks to bring insights from
organizational theory, as well as the wide literature on knowledge transfer and
management, into the evaluation literature, with a view to better understanding how
different disciplines deal with know-do gaps through knowledge brokering. We argue
that a key missing piece in evidence-based decision-making theory is the idea
that bringing credible and rigorous evidence to decision-makers is not sufficient; the
evidence needs to be "brokered". This research sets out to define what brokering
knowledge means for evaluation units and what changes to their strategy they need
to pursue to be successful in promoting knowledge use.
The article contributes to evaluation practice in complex institutional settings. Experts
who are architects of evaluation systems will gain understanding of inter-institutional
design flaws that hamper systemic knowledge use. Evaluators can draw lessons on
effective knowledge brokerage strategies.
The structure of the article consists of four sections. In the next section, the research
questions and methods are presented. This is followed by a discussion of the main
findings from the systematic literature review. The spectrum of definitions of
knowledge brokers emerging from bibliometric and content analysis is provided
together with a typology of brokers’ main functions and activities.
In the third section, the article presents a reconstructed framework of evaluation units
as knowledge brokers. Each part of the framework is explicated and illustrated with
the findings emanating from survey and interview data. We conclude with a
discussion of the main strengths and limitations of the framework as well as
perspectives for its practical applications and directions for future research.
1 The Research Approach
The article addresses two questions:
(1) How is "Knowledge Broker" defined in the literature, across different disciplines
and policy fields?
(2) How could reframing the role of evaluation units as knowledge-brokers improve
their performance in the Cohesion Policy?
A three-tiered mixed-method approach was applied to tackle these questions.
First, an initial conceptual framework was built on the basis of a literature review
covering multiple strands of studies, from evaluation utilization to knowledge transfer,
brokering, and evidence use in policies (115 articles from the Web of Science, 20 books
on evidence use in public policies, and the literature on evaluation use). A content analysis
was performed to provide initial characteristics and functions of knowledge brokers.
Second, the activities of knowledge brokering in the context of evaluation were
explored through the network of Polish evaluation units. This case was deemed particularly
relevant for its substantial endowment in EU assistance (over 100 billion euro since
2004) and its intensive evaluation activities (over 900 evaluations in the last 10
years). The empirical inquiry included a comprehensive survey (n=57 evaluation
units, response rate = 80%) and interviews with leading evaluation units (n=6).
The third phase of the research aimed to refine the typology of knowledge brokering
activities towards greater generalizability, that is, extending the framework beyond the
current practice of Cohesion Policy evaluation units. This was done with three
methods. A second systematic literature review was conducted, this time using
the SCOPUS database. It covered 931 articles on knowledge brokering from peer-
reviewed journals across 10 disciplines: (1) Social Sciences, (2) Environmental
Science, (3) Business, Management and Accounting, (4) Economics, Econometrics
and Finance, (5) Medicine, (6) Decision Science, (7) Psychology, (8)
Multidisciplinary, (9) Nursing and (10) Health Professions. After content analysis of
abstracts, 254 articles were selected for in-depth full text content analysis. Again,
systematic coding was applied (structural and process coding) and the spectrum of
knowledge brokering definitions and activities was extended. This was followed by
additional interviews conducted with evaluation units in a variety of organizations,
spanning a number of US federal agencies and international organizations (n=5).
Finally, an additional set of focus groups, discussions and commentaries with
evaluation practitioners was conducted. On the basis of all the evidence collected,
the framework for understanding evaluation units as knowledge brokers was
recalibrated. It is thus well grounded in observations of civil servants' hands-on experience.
2 Knowledge brokers: who they are and what they do.
Findings from literature review
In this section, based on the literature review, we explore in depth the concept and
interdisciplinary practice of knowledge brokering (KB). We describe the contexts in
which KB emerge, analyze KB roles, their variations across sectors, and their basic
characteristics. We close with a review of typical KB activities.
2.1 Knowledge brokers: a spectrum of definitions
The environment/context that requires brokering
Knowledge brokers (KB) are broadly defined as pivotal actors in networks where
knowledge is transferred (Kauffeld-Monz, Fritsch, 2013). Other members of such
networks are referred to as knowledge/information/evidence producers and users.
Brokering emerges in environments where research and evidence-based decision
making is favored over managerial decision-making (Lomas 2007), and where a
plurality of disciplines and views exists (Cash et al., 2003; Holmes and Clarke, 2008;
Naylor et al., 2012). In the majority of cases knowledge producers are
researchers/scientists and knowledge users are practitioners/decision makers
(Taylor et al., 2014; Berbegal-Mirabent et al., 2012; Dilling, Lemos, 2011; Meyer,
2010; Traynor et al., 2014, Heiskanen et al., 2014; Willems et al., 2013; Partidario,
Sheate, 2013; Cooper, 2013; Michaels, 2009). While the distinction between
knowledge producers and users is a useful heuristic, in reality a number of examples
blur the line. Practitioners may stand on the production side and
produce experience-based knowledge (Conklin et al., 2013). Likewise, scientists
conducting applied research are users of knowledge generated in basic research
(Morgan et al., 2011). That said, across disciplines, industries, and policy areas there
is increasing evidence that knowledge flows within interconnected networks require
an intermediary, a connector, a node in the system that facilitates transfer. This role
has been characterized as knowledge broker (KB).
The literature consistently grounds the rationale for knowledge brokerage in the gap
between knowledge producers and users and the need to bridge it (Taylor et al.,
2014; Mavoa et al., 2012; Willems et al., 2013; Cooper, 2013; Cameron et al., 2011;
Ferguson et al., 2013; Schlierf, Meyer, 2013; Waqa et al., 2013; Shaw et al., 2010).
This gap results from the fact that decision makers and researchers inhabit two
different worlds, separate sides, ‘two solitudes’ (Schlierf, Meyer, 2013) each with its
own professional culture, resources, imperatives, and time frames (Huusko, 2006),
each based on varying beliefs, values, incentive systems and practices (Klerkx et al.,
2012, Ward et al., 2009). All that hinders productive communication (Schlierf, Meyer,
2013) and requires a facilitator.
Basic roles and their variations across sectors
The spectrum of possible roles is summarized in Table 1. Brokers' basic role is
to connect knowledge users and knowledge producers (Holzmann, 2013; Schlierf,
Meyer, 2013; Lowell et al., 2012; Naylor et al., 2012). Other descriptions of these
connecting roles include: linking disconnected pools of ideas (Hargadon, Sutton,
2000; Bergenholtz, 2011), bridging multiple domains (Hargadon, 2002), distinct
teams/clusters or groups (Yousefi-Nooraie et al., 2012). By bringing people together
KB facilitate building new relationships that help exchanging ideas, research,
knowledge, shared needs and interests (Taylor et al., 2014; Berbegal-Mirabent et al.,
2012; Lin, 2012; Schlierf, Meyer, 2013; Russell et al., 2010). Positioned at the
interface between those possessing knowledge and those seeking it (Taylor et al.,
2014; Berbegal-Mirabent et al., 2012; Schlierf, Meyer, 2013; Ward et al., 2009)
brokers act as intermediaries (Blackman et al., 2011; Dilling, Lemos, 2011; Am,
2013) and are the human force behind knowledge transfer/exchange (Taylor et al.,
2014; Yousefi-Nooraie et al., 2012; McAneney et al., 2010; Meyer, 2010; Rydin et al.,
2007; Partidario, Sheate, 2013; Conklin et al., 2013).
However, as some authors state, perceiving brokers simply as passive intermediaries
would be too narrow (Abbate et al., 2011; Cooper, 2013). In the ‘evidence based
practice’ environment, knowledge brokers are more than connectors: they play an
integral part in translating knowledge from discipline-specific, technical language
into plain language understandable by end-users. They are also facilitators,
helping people to make sense of and apply information, making it relevant for them
(Naylor et al., 2012, Klerkx et al., 2012). This leads us to defining "brokered
knowledge" as knowledge that is filtered, assessed, reframed/structured, de- and
reassembled, synthesized/condensed and disseminated (Ward et al., 2009; Meyer,
Kearnes, 2013; Meyer, 2010; Jinnah, 2010; Klerkx et al., 2012). Through this
metamorphosis, brokered knowledge is more robust, accountable, usable, and
appropriate to serve locally at a given time (Meyer, 2010).
A slightly different understanding of KB can be found in the business literature where
this term is popular especially in the context of generating innovations. KB are first
and foremost connectors between the past, the present and the future. In this
domain, a brokered idea is an old idea applied to a new domain, where it
represents innovative new possibilities and can be used in new ways, new situations
and new combinations (Hargadon, Sutton, 2000; Hargadon, 1998). KB thus bridge
disconnected ideas (Bergenholtz, 2011) and seek external ideas from people in a
variety of industries, disciplines, and contexts (Nair et al., 2012) with an eye towards
their usefulness in different and yet unknown situations. They recombine those past
experiences in new ways, into the new context, and for new audiences (Hargadon,
2002). Brokers may also facilitate access to other resources essential for innovation,
such as capital, political support, business development services and material
resources (Klerkx et al., 2012).
In politics and policymaking, brokers are known to play a more collaborative role
amongst a broad set of stakeholders. They are diplomats and negotiators (Cooper,
2013). They engage in the policy process, relate existing knowledge to policy
questions and explore possible alternatives and their implications. Their goal is to
clarify and expand the scope of choices available to policy makers depending on their
value judgments (Pesch et al., 2012; McAllister et al., 2014). Brokers promote ideas
and attempt to push them onto the public/government agenda, ‘soften’ the climate for
particular alternatives (Cooper, 2013), or even use knowledge to package reforms and
build pro-reform coalitions that gain political support (Gutierrez, 2010). In this field
brokering is a two-way process: 1) influencing policy to be more responsive to
research and 2) stimulating scientists to present their findings in a form meaningful to
policy makers (Van Kammen et al., 2006).
Characteristics, competence and skills of KB
The literature describes knowledge brokers as eminently versatile and at ease in both
the knowledge users' and producers' worlds (Dilling, Lemos, 2011). Brokers ought to
be fluent, even bilingual in both the language of research and the language of action
and decision (Schlierf, Meyer, 2013). In other words, brokers must understand both
the research process and the users’ decision-making process (Jacobson et al.,
2003; Cooper, 2013) and have a solid understanding of the political, economic and other
factors influencing a decision (Naylor et al., 2012). That is only possible if brokers
move back and forth between different social worlds (Meyer, Kearnes, 2013).
Brokering can be carried out by individuals (Berbegal-Mirabent et al., 2012;
Blackman et al., 2011; Ward et al., 2012) – project members, experts, outside
specialists (Holzmann, 2013), health care professionals (Shaw et al., 2010),
international assignees (Reiche, 2009), or by organizations: consulting firms,
research-oriented organizations, think tanks (Holzmann, 2013; Meyer, 2010;
Hargadon, 2002), venture capitalists (Zook, 2004), and companies generating innovations
(Hargadon, Sutton, 2000). It may also refer to interactive settings (Bielak et al., 2008;
Turnhout et al., 2013) or structures (Sheate, Partidario, 2010).
Table 1. Popular framings of knowledge brokers in various contexts

Context: General
Typical framing: Connector; Networker
Examples of definitions:
- "Broker is an individual or organization who acts as an intermediary between at least two other parties or communities of practice" (Blackman et al., 2011)
- "Knowledge brokers are people or organizations that move knowledge around and create connections between researchers and their various audiences" (Meyer, 2010)
- "Knowledge brokering is gathering, synthesizing, processing, and disseminating information" (Jinnah, 2010)

Context: Practice & Policy (managerial decisions, ideas, strategies for problem solving, based on evidence; typical examples are health care, environment, education, agriculture)
Typical framing: Translator; Facilitator; Architect of arguments; Capacity builder
Examples of definitions:
- "Someone who helps to support the implementation of evidence into policy/practice and the evaluation of policy/practice to build the evidence-base" (Armstrong, 2007)
- "[KB] links researchers and decision makers, facilitating their interaction so that they are better able to understand each other's goals and professional culture, influence each other's work, forge new partnerships and use research-based evidence" (Traynor et al., 2014)
- "Someone who is capable of bringing researchers and decision makers together, facilitating their interaction so that they are able to better understand each other's goals and professional culture, influence each other's work, forge new partnerships, and use research-based evidence" (Russell et al., 2010)

Context: Politics & Policy (negotiations of policy options; deciding on values, policy alternatives or strategic directions of the policy; politics, policy, environment - resource divisions)
Typical framing: Facilitator; Diplomat and negotiator; Coalition builder; Promoter of alternatives
Examples of definitions:
- "'Brokers' engage in the policy process, and, in interaction with policy, they communicate existing knowledge, relate this to policy questions or knowledge demands, and explore possible alternatives and their implications. Their goal is not to eliminate options, but to expand the scope of choices available to policy makers depending on their value judgments" (Pesch et al., 2012)
- "[KB] plays a more collaborative role amongst a broad set of stakeholders in the hope of clarifying the scope of policy alternatives and quite possibly increasing the number of alternatives for discussion and associated sense of uncertainty" (McAllister et al., 2014)
- "Research brokers make ideas matter and use their intellectual authority to verify certain forms of knowledge as more accurate, persuasive or objective [...] promote ideas and attempt to push them onto the public/government agenda ('soften' the climate of opinion towards particular alternatives)" (Stone, Maxwell & Keaton, 2001, p. 35, cited in Cooper, 2013)

Context: Business innovation generation
Typical framing: Explorer; Knowledge seller
Examples of definitions:
- "Knowledge brokering refers to the process of bridging disconnected ideas from at least two distant organizations. Simultaneously it involves some form of transformation of these ideas, into the new context" (Bergenholtz, 2011)
- "Knowledge brokers, then, are those individuals or organizations that profit by transferring ideas from where they are known to where they represent innovative new possibilities. They transfer these ideas in the forms of new products or processes to industries that had little or no previous knowledge of them" (Hargadon, 1998)
- "Third parties who connect, recombine, and transfer knowledge to companies in order to facilitate innovation" (Cillo, 2005)

Source: own elaboration
2.2 Activities of knowledge brokers
As shown above, the literature reflects a rather wide cross-disciplinary agreement on
what the role of KB is, notwithstanding some nuances across domains. Similarly, the
literature provides a surprisingly clear picture of "what KB do". In order to categorize
the different activities in which KB are typically involved we proceeded in two rounds
of coding. First we scanned all of the articles and coded the relevant segments with a
master code "KB activities". We subsequently went through a secondary coding of
the 232 identified segments with a view to refine our typology and come up with a
sense of saliency of each category as represented by the literature. Out of these 232
segments, 98 were considered either too discipline-specific or too definitional to be
included in the typology of what "KB do", leaving us with 134 valid coded segments
describing brokering activities. Table 2 summarizes our findings. In this section we
describe each type of activity.
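The coding described above is multi-label: a single segment can carry several activity codes, which is why the category shares reported in this section (roughly 33%, 10%, 33%, 15%, 40% and 31% of the 134 valid segments) sum to more than 100%. A minimal illustrative sketch of such a tally, using made-up segment codes rather than our actual coding data:

```python
from collections import Counter

# Hypothetical coded segments: each segment is a set of activity codes,
# so one segment can contribute to several categories at once.
segments = [
    {"identifying", "outreach"},
    {"translating"},
    {"outreach", "capacity"},
    {"negotiating", "translating"},
]

# Count how many segments mention each activity code.
counts = Counter(code for seg in segments for code in seg)

# Shares are computed against the number of valid segments, not the
# number of code assignments, so they can sum to more than 100%.
n = len(segments)
shares = {code: round(100 * c / n) for code, c in counts.items()}
print(shares)
```

With these toy segments the shares sum to 175%, mirroring (on a much smaller scale) why the percentages reported for the 134 valid segments overlap.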
Identifying/targeting knowledge areas and needs:
A third of the coded statements emphasized that KB identify knowledge needs and
match existing knowledge to current needs. To fulfill this role, KB engage in
"scanning the horizon" to remain current with the latest available evidence. They
strive to build a repository of knowledge in a particular content area and store ideas
that might be relevant for future projects. This range of activities is otherwise
commonly known as "knowledge management". Nonetheless, KB go beyond storing
knowledge by actively searching for problems to solve, rather than adopting a more
passive attitude and waiting for problems to arise. Many KB engage in facilitation
activities to help knowledge users formulate specific research questions and frame
various design options. These facilitation activities also help KB target precisely the
knowledge that needs to be transferred by identifying the area of compatibility and
relevance between existing knowledge and the priorities of the users. Relatedly, KB
can engage in testing particular ideas.
Facilitating knowledge creation:
About one tenth of the coded segments described KB as engaged in the knowledge
creation process. Some of these activities pertain to finding resources and funding to
conduct a particular piece of research, identifying the right knowledge-producers, providing
quick feedback to the research team, making them aware of issues and concerns.
Relatedly, KB can sometimes engage in direct knowledge production by creating
"boundary objects" that are valued by both knowledge producers and users such as
writing briefs, summaries, or reviews of the knowledge produced to synthesize
findings at a more aggregated level. Finally, the literature also describes a range of
activities that fall into a quality-assurance function. KB evaluate the relevance,
credibility and usability of the knowledge before deeming it worthy of translation and
transfer.
Translating/adapting knowledge:
The literature also commonly describes KB as engaging in knowledge translation,
adaptation and interpretation activities. About one third (33%) of the coded segments
pertained to this type of activity. KB can take on various translation tasks, including
interpreting evidence stemming from one field or discipline to fit another, translating
past research into current scenarios, or seeking analogies between different contexts
to identify whether knowledge can be adopted in new settings. KB are thus often
"managing retrospective learning", which refers to generating knowledge from past
projects, as well as "prospective learning", which refers to transferring knowledge from
past experience to future projects. That said, the most common type of translation
found in the literature consists in translating from scientific to policy-oriented
language. Oftentimes, researchers produce knowledge through empirical methods
with a jargon that is not easily accessible to decision-makers. Therefore a key
component of KB tasks is to develop tailored knowledge translation and exchange
strategies for decision makers.
The simplest translation activities consist in paraphrasing research findings to make
them more relevant or understandable to the target audience. However, translating
knowledge is often described as a more demanding activity, which requires tailoring
knowledge to a particular audience, packaging knowledge in an attractive manner,
interpreting a body of evidence to focus on what Weiss called "key ideas" (Weiss,
1998), that is, actionable messages.
Negotiating and convincing:
Closer to the first meaning of the word "brokers", about 15% of the coded segments
described KB as negotiators in charge of bringing opposite sides to a common
ground. KB thus routinely engage in forums bringing multiple groups of stakeholders
together to enable the convergence of interests, ideas, and disciplinary languages.
This set of activities strengthens the vision of a KB as a mediator of the knowledge
creation and innovation process, who resists particular knowledge paradigms to open up
a space for dialogue and fruitful exchange. One particularly salient aspect of KB as
negotiators is to facilitate the participation of users in the research process, with the
rationale that decision-makers are more likely to consider research findings if they
have been actively involved in the production process. Through one-on-one
interactions and group consultation, KB attempt to address particular barriers to
change and ensure a catalytic roles in applying knowledge for effective projects.
Outreach and Networking:
The most common type of activities described in the literature pertains to
communication, outreach and networking. About 40% of the coded segments
referred to KB as engaging in active linking and communication at every step of the
knowledge creation and dissemination cycle. The literature refers to three types of
outreach and networking efforts: "push activities", "pull activities", and "exchange
activities". The first set consists in disseminating and promoting existing research to
targeted users by demonstrating the usefulness of various research findings through
different media and communication channels. Pull activities consist in triggering
interest within the target audience for a particular research or knowledge agenda.
Exchange activities, finally, rest on sustained two-way collaboration between
producers and users. In this sense, a KB is first and foremost a relationship builder.
The literature emphasizes that collaboration between users and producers, as
well as among various types of users, should occur over a long period of time, involve
diverse types of exchange, and requires sustained and intensive effort from KB,
who need to leverage personal contacts as well as build larger networks. KB are
typically involved in multiple networks, including: (i) experts with particular
competencies, so as to be able to recommend and bring in qualified knowledge
producers; (ii) other KB, with whom they exchange strategies, discuss experiences
and access resources; and (iii) networks of users. This exchange effort is tantamount
to building, engaging in and maintaining communities of practice. Beyond partnership
development, KB constantly have to find new ways of maintaining relationships
through various linkages and exchange mechanisms.
Capacity building:
Another key set of activities that knowledge-brokers tend to carry out pertains to the
domain of training, teaching and capacity-building. About one third (31%) of the
coded segments fell into this category. This capacity-building work largely focuses on
building research literacy among decision-makers seeking evidence-based policy. In
this sense, KB typically organize interactive workshops and seminars to increase use
of a particular knowledge product, to create appetite among policy makers for a
particular type of research and to help them use research evidence in policy
decisions. Some workshops can focus on how to establish the methodological quality
and credibility of a particular report. KB also provide ongoing support, such as
knowledge clinics, which are seen as useful to ensure the sustainability of use of the
knowledge product. Less common, but not absent from the literature, is the role of KB
in building the capacity of researchers to engage with policy-makers.
Making use of knowledge:
Finally, only a small proportion of the coded segments (4%) referred to KB as directly
applying knowledge within a particular organizational context.
Table 2. Activities of knowledge brokers as described in the literature

No | % | Range of activities | Quotes
39 | 29% | Identifying knowledge needs; Scanning existing knowledge; Knowledge repository and management; Facilitate question phrasing | "Learning about resources in one context and introducing them in others, they appear and are, innovative" (Hargadon, 2002)
15 | 11% | Making the case for a particular research; Identifying funding; Finding the right knowledge-producer; Feedback to producers to voice needs and concern; Quality check; Creation of boundary objects | "The willingness of all members of the working team to conduct research with continued communication and clarification was essential for the production of 'brokered knowledge' leading to the delivery of high quality academic and, ultimately, useful applied science." (Meyer, 2010)
44 | 33% | Translating from one field to another; Translating from one context to another; Translating past experience for current needs; Translating scientific evidence to policy-relevant language | "From the perspective of policy actors and everyday policy practice, the important transformation is that from expert knowledge to usable or bureaucratic knowledge. Such knowledge enables policy practice to operate on the basis of the newly embedded knowledge" (Rydin et al., 2007)
19 | 14% | Facilitating convergence of ideas; Facilitating participation of users in production process; Group consultations; Addressing barriers to change | "Brokers perform mediation, catalytic, or facilitation roles exhibited in the way knowledge is managed toward effective and harmonized operations" (Kingiri, 2012)
52 | 39% | Disseminate research findings; Create and sustain networks of knowledge users; Create and sustain networks of knowledge producers; Create and sustain networks of knowledge brokers; Link users and producers | "A KB provides a link between research producers and end users by developing a mutual understanding of goals and cultures, collaborates with end users to identify issues and problems for which solutions are required" (Kingiri, 2012)
41 | 31% | Fostering research literacy among decision-makers; Providing "knowledge clinics" | "KB assess end users, to identify their strengths, knowledge, and capacity for evidence informed decision making, in order to better tailor KB interventions to their specific needs" (Berbegal-Mirabent et al., 2012)
6 | 4% | Actively apply existing research to particular organizational context; Attempt to integrate external with internal knowledge | "If external knowledge is a crucial resource for organisations, and if external knowledge resides in specialised forms outside the organisation, then external knowledge integration is an essential organisational capability" (Blackman et al., 2011)

Source: Authors
3 A framework for knowledge brokering in evaluation
Many of the descriptors of KB, and the activities they typically engage in described
above, will surely resonate with readers working in evaluation units. However, the
evaluation literature is surprisingly silent on framing the role of evaluation units as
knowledge brokers. The emphasis is often placed on the role of evaluation units as
an independent voice "speaking truth to power", or as an oversight function in charge
of holding an organization accountable to its main stakeholders (Mayne, 2008). In
the framework of the Cohesion Policy, evaluation units have primarily been
conceived in this accountability function. Yet the problem of limited use of evaluation
findings is not new (Cousins & Leithwood, 1986; Weiss, 1998) and has been
diagnosed repeatedly in different policy areas. In this section we propose a theory of
change that describes how evaluation units acting as knowledge brokers can
ultimately improve decision-making. The theory of change has been iteratively
calibrated using the findings from the literature review and evidence collected among
evaluation units.
3.1 Overview of the framework
In the context of the Cohesion Policy, the mission of an evaluation unit as a
knowledge broker is to help the main actors gain and utilize credible knowledge,
ultimately leading to better design and implementation of public interventions. Simply
stated, the main idea underlying knowledge brokering in the evaluation context is
that decision-makers who are exposed to brokered evaluative knowledge are more
likely to design and run interventions in ways that serve citizens well. This idea
goes beyond the assumption underlying the evidence-based literature by
emphasizing that credible knowledge from research is not sufficient for driving an
on-going dialogue on policy issues (Prewitt et al., 2012; van der Knaap, 1995): it
needs to be "brokered". While, in theory, the use of research evidence can help
policy actors craft better policies and improve delivery mechanisms (Sanderson,
2002; Nutley et al., 2007; Shillabeer et al., 2011), in reality this relationship is not
direct. It needs to be mediated by a range of activities that fall into the realm of
knowledge brokering. In line with this perspective, we strongly advocate enhancing
the learning function of evaluation units, even if this downplays the perceived
independence of evaluation units and the accountability function that currently
dominates evaluation practice within the Cohesion Policy (Batterbury, 2006).
The causal assumptions about knowledge brokers’ impact on policy-making can be
presented in the form of "theory of change". This is a well-established tool for
articulating public interventions' logic (e.g., Leeuw, 2003; Rogers, Funnell, 2011). A
simple version of such "theory of change" for evaluative knowledge brokering can be
formulated as follows: actors involved in running public policies have certain
knowledge needs at different stages of the policy process. IF evaluation units
acting as knowledge brokers perform certain brokerage activities, THEN they
can contribute to triggering desired behaviours among knowledge users,
namely better understanding and better decision-making related to public
interventions, THEREBY enhancing the quality of public interventions.
A more detailed theory of change is presented in Figure 1. The following sections
discuss the various components of this framework.
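The IF-THEN-THEREBY formulation above can be read as a conjunction of causal links: the expected change only obtains when every link in the chain holds. The sketch below is a purely illustrative rendering of that logic; the names (THEORY_OF_CHANGE, chain_holds) are our own and not part of the framework itself.

```python
# Illustrative only: the causal chain of the theory of change, with a helper
# that checks whether every link is verified.

THEORY_OF_CHANGE = [
    "actors have knowledge needs",
    "evaluation unit performs brokerage activities",
    "users understand the message",
    "users update their mental models",
    "users apply knowledge in decisions",
    "quality of public interventions improves",
]

def chain_holds(links_verified: dict) -> bool:
    """True only if every link of the causal chain is verified."""
    return all(links_verified.get(link, False) for link in THEORY_OF_CHANGE)

# A single broken link (e.g. the message is never understood) breaks the chain.
evidence = {link: True for link in THEORY_OF_CHANGE}
evidence["users understand the message"] = False
print(chain_holds(evidence))  # -> False
```

The point of the conjunction is that the theory predicts no downstream improvement if any single link fails, which is why brokers must attend to the whole chain rather than to report production alone.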
Figure 1. Framework for knowledge brokering
[Diagram: Public interventions aim to address certain socio-economic issues; the actors who design and implement them have KNOWLEDGE NEEDS at different stages of the intervention cycle. ACTIVITIES: IF knowledge brokers perform certain activities (defining knowledge needs, acquiring knowledge, feeding knowledge to users, building networks with producers and users of knowledge, accumulating knowledge over time, building an evidence-based culture)... MECHANISM: ...THEN they trigger desired behaviours of knowledge users (users understand the message, change their mental models, use knowledge in practice)... CHANGE: ...AND THEN the positive effect occurs: public interventions are better designed and more likely to successfully serve citizens. The process is moderated by INSTITUTIONAL DETERMINANTS of the decision-making process and PSYCHOLOGICAL DETERMINANTS (human heuristics and biases).]
3.2 The know-do gap in the evaluation context
In order to address different socio-economic issues, government agencies undertake
public interventions. These can take the form of regulations, single projects,
programmes and policies (Howlett, 2011; Tucker, 2005).
The typical stages of the policy cycle (Bardach, 2008; Clar et al., 2013; Howlett,
2011) are: agenda setting; problem analysis and design of an intervention (in
Cohesion Policy called programming); implementation; and, finally, evaluation of its
effects. In practice, of course, the process is often iterative and even messy
(Kingdon, 1995; Tyler, 2013). From the perspective of evaluation units, different
policy actors get involved at different stages of the policy and evaluation process.
These actors span a large spectrum: from the public, the media and politicians,
through different interest groups, experts and academics, to high-level civil servants
and public managers. They express different evaluative knowledge needs, as
presented in Table 3.
Table 3. Types of knowledge

Type of knowledge need | Description
Know-about... | ...the policy issue: the structure and spatial distribution of the socio-economic problems and their trends in time; the needs, expectations and characteristics of the targeted population
Know-who... | ...should be involved: the key stakeholders, including beneficiaries of services, needed to develop and implement solutions
Know-what... | ...works: solutions and strategies that produce positive effects, or have produced desired outcomes in the past
Know-why... | ...things work: the causal mechanisms that lead to desired outcomes
Know-how... | ...to implement strategies and activities: operational knowledge on effective implementation

Source: Based on Ekblom (2002, p. 142) and Nutley et al. (2003)
"Know about" and "know who", for example, are in high demand at the agenda-setting
and problem-definition stages. "Know what" and "know why" are most valuable for
the designers of a particular intervention; "know how" is needed by managers during
implementation; and "know why" is also required for the reflection of senior
decision-makers supervising interventions at an advanced stage of implementation.
By recognizing those needs, knowledge brokers contribute to the process of public
decision-making and the effective delivery of interventions.
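The matching of knowledge types to policy stages described above can be summarized in a small lookup structure. This is a purely illustrative sketch of our reading of Table 3; the stage labels and the KNOWLEDGE_DEMAND name are our own invention, not an instrument from the framework.

```python
# Illustrative mapping of policy-cycle stages to the knowledge types
# typically in demand at each stage (based on the paragraph above).

KNOWLEDGE_DEMAND = {
    "agenda setting":          ["know-about", "know-who"],
    "problem definition":      ["know-about", "know-who"],
    "design (programming)":    ["know-what", "know-why"],
    "implementation":          ["know-how"],
    "advanced implementation": ["know-why"],
}

def knowledge_for_stage(stage: str) -> list:
    """Knowledge types a broker could prioritize at a given stage."""
    return KNOWLEDGE_DEMAND.get(stage, [])

print(knowledge_for_stage("implementation"))  # -> ['know-how']
```

A broker could use such a mapping to decide which kinds of studies to commission as an intervention moves through its cycle.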
The challenge is that actors are not always fully aware of their knowledge needs. As
Leviton points out (2003, pp. 527-528), actors who are aware of their knowledge
deficits express uncertainty, while those who are unaware of them bring "flawed
assumptions" into the policy process. Skilful knowledge brokers have to identify and
reach both groups.
In the case of the Cohesion Policy, evaluation units as knowledge brokers provide
knowledge mainly to a group of internal actors: decision-makers such as politicians
heading ministries and regional offices, senior civil servants supervising interventions,
and public managers running projects. The policy cycle is multi-layered2, which
brings both opportunities and challenges to the evaluation process. On the one
hand, evaluations are conducted ex post at the highest level of the Operational
Programmes (OPs), which are shaped within the multi-annual calendar of the
European Union budget. On the other hand, within this six-to-eight-year cycle there
are many opportunities to conduct more targeted evaluations at the level of a single
project, a type of project, a nested programme or a thematic priority. Each of these
units of analysis may appeal to different audiences, and feeding information within
these tighter feedback loops represents a significant challenge for evaluation units
as knowledge brokers.
3.3 Brokering evaluative knowledge
Evaluation units operating as knowledge brokers can perform six groups of activities:
(1) identifying knowledge needs, (2) acquiring and translating knowledge, (3) feeding
knowledge to users, (4) building networks with producers and users, (5) accumulating
knowledge over time and (6) promoting an evidence-based culture. The first three
activities are sequential, tied to the particular policy issues on which evaluation units
currently work. The other three are horizontal, performed continuously. We discuss
each grouping in more detail below.
2 Once interventions are designed at the beginning of a programming period, they are implemented for
the whole period (e.g. for 2014-2020, OPs were designed from 2014 till mid-2015 and will be run till
2020 or even 2022). However, Operational Programmes consist of packages of interventions, within
which smaller interventions are nested as priorities (thematic strands of projects) and as single
projects. These can be designed, run and evaluated according to their own, much shorter cycles.
(1) Identifying knowledge needs
As mentioned in section 3.2, the starting point of KB activities is the identification of
information gaps related to a particular public intervention and of the knowledge
needs of its decision-makers.
Evaluation units can take two different approaches at this stage. If they follow a
reactive tactic, the units wait to be approached by decision-makers or simply
follow the requirements of the Cohesion Policy regulations that oblige them to perform
certain ex ante and on-going evaluation activities. The alternative is a proactive
approach, whereby units initiate contacts with programme managers, observe
developments in an intervention's implementation, or scan the policy horizon for
possible upcoming debates. Knowledge brokers anticipate users' needs and
sometimes even raise users' awareness of policy areas that need insight from
research. The surveyed Polish evaluation units (n=46) demonstrate a rather proactive
approach. Ideas for research and evaluation topics come from their own observations
of the programme or policy field (86%), from participation in meetings with various
institutions of the Cohesion Policy system (67%) and from inquiries with merit units3
(63%). On the other hand, only 17 of the 46 units declared that following political and
public debates mattered.
Regardless of the chosen approach, the targeting of knowledge needs encompasses
one key activity: translating general, often vague knowledge needs into the language
of research questions. All interviewed evaluation units (Polish and international)
have been investing a substantial amount of time in: persuading knowledge users
to adopt a particular wording of research questions; working collaboratively with
users and merit units to come up with research questions; complementing research
proposals submitted by the merit units with additional research areas; and scoping
and selecting priority research questions when information needs are vast and
dissonant across actors.
3 These are units that design, manage or implement programmes.
Knowledge brokers operating in the context of the Cohesion Policy have a certain
leeway in identifying needs. However, they also have to abide by certain formal
constraints when reacting to emerging knowledge needs.
(2) Acquiring knowledge
The large majority of evaluation units in Poland contract out evaluation studies to
external consultancies, research institutions or experts' teams (92%). They play an
active role in the supervision of the evaluation studies. When considering new
evaluation studies, the majority of evaluation units declared that they tend to check
whether the research questions have not already been addressed by existing
knowledge sources (76%).
On the other hand, Polish evaluation units rarely conduct evaluation studies
themselves (21%). They also rarely produce internal syntheses that summarise the
findings of different studies (34%). Systematic reviews or rapid-fire reviews are
also relatively scarce (23%). This pattern stands in sharp contrast with the recent
practices of international evaluation units, which often synthesize existing
knowledge in the form of synthesis papers (the US Environmental Protection Agency's
evaluation unit), systematic reviews (the World Bank's Independent Evaluation Group)
or peer reviews (the Millennium Challenge Corporation's monitoring and evaluation
unit). The prospects and benefits of building on the existing body of knowledge are
well discussed in the evaluation literature. Different sources point to realist syntheses
(Pawson, 2002, 2013), systematic reviews (see: The Campbell Collaboration, The
Cochrane Library) and meta-analyses as effective ways of tracing what works, for
whom and in what context.
When purchasing information from consulting firms, Polish evaluation units
typically face two challenges: timeliness and quality. The importance of timely
information delivery for decision-making has been widely discussed in the literature.
For a knowledge broker, timely delivery means matching the type of knowledge to the
appropriate stage of an intervention and delivering this knowledge to users within the
narrow time window in which a particular study can be influential (e.g. delivering
information on the performance of a pilot project to the designers of the full-fledged
programme, or delivering the findings of a study on the effectiveness of project
selection procedures to the programme managers in charge of the next round of
project selection). Of course, some situations will require quicker feedback loops
(e.g. the "know-how" needs of managers during implementation), while others will
allow more time but may require advanced planning (e.g. assessments of effects
using (quasi-)experimental designs).
The second challenge lies in acquiring knowledge that is credible. In the Polish
context, evaluation units try to pre-empt poor-quality products by being quite specific
in the evaluations' Terms of Reference, in particular with regard to methodology.
However, this approach is not fully effective, especially because rigor in evaluation
does not reside in a particular methodological design but rather in the capacity to
match an evaluation design to a particular question (the so-called "platinum
standard") (Donaldson et al., 2008; Petticrew, Roberts, 2003; Sanjeev, Craig, 2010).
(3) Feeding knowledge to users
Feeding evaluative evidence to users is one of the most challenging tasks for
evaluation units as KB. The challenge lies in finding the right mode of presentation
and the most effective knowledge delivery channels in a particular institutional
context. With regard to the former, the most popular presentation format in Poland
remains the full report, often accompanied by a slide presentation. The practice
of providing shorter documents that succinctly present key information to decision-
makers (e.g. memos, summaries, briefs) is surprisingly rare: only half of the
evaluation units declared preparing this type of summary paper. Interestingly, when
executive summaries are included in reports, they have been found to be even less
user-friendly than the longer report, catering to audiences that have at least a
doctorate (Broda et al., 2011). The role of evaluation units as translators is thus critical
in the Polish context.
Notwithstanding this example, communication formats remain rather formal in Poland,
and evaluation units have not yet fully leveraged the recent progress in data
visualization tools for public policy practitioners, such as argument maps (which
show the visual structure of evidence supporting and contradicting a particular
issue), animated logic models, GIS and dashboards (Azzam et al., 2013;
Olejniczak, 2015; Smith, 2013). Much work is thus needed to find the
right platforms and formats for presenting and disseminating knowledge to ultimate
users.
When it comes to knowledge delivery channels, a useful heuristic is to distinguish
between diffusion (passive, largely unplanned and uncontrolled efforts, primarily
horizontal or mediated by peers), dissemination (communicating research
results by targeting and tailoring the findings and the message to a particular
audience) and implementation (an even more active process than dissemination,
involving systematic efforts to encourage adoption of the research findings by
identifying and overcoming barriers to their use) (Lomas, 1993; Gagnon, 2011). The
most prominent communication channels between evaluation units and end users
remain e-mail exchanges, web page publications or, in some cases, blogs (e.g. the
World Bank IEG "what works" blog). Almost half of the Polish evaluation units (46%)
organize and animate discussions about research findings, while 62% declare
referring to their studies when participating in programme meetings. The Cohesion
Policy procedures create a number of opportunities for formal exchange:
presentations of evaluation findings are a regular item on the meeting agendas of
programmes' monitoring committees. However, a majority of these meetings remain
quite formal, and the knowledge exchange that takes place within these forums is
circumscribed to official "dissemination" as opposed to active engagement and
implementation. Using academics, experts or policy advisors as channels for
promoting findings is rather rare in Poland. Also quite rare are examples of
"implementation", in which evaluation units take a direct and active role in assisting
users in their decision-making process. However, in a recent wave of ex ante studies
some evaluation units applied an on-going, participatory approach and, as a result,
became actively involved in the programming of new interventions.
Both the literature and practice indicate that different decision-makers favour different
forms and channels of communication (Torres and Preskill, 2001; Torres et al., 2005).
For a knowledge broker, that means matching the way knowledge is fed to the type of
primary user. For example, interviews with Polish evaluation units yielded profiles of
four typical users in the Cohesion Policy: policy makers, directors of merit
units, programme designers from programming bodies, and managers who run
interventions. Policy makers are interested in horizontal topics and policy issues, not
in single reports. They prefer concise information in plain language: briefs, summaries
of conclusions and clear implications for decisions. From the perspective of an
evaluation unit these actors are difficult to approach directly, so it is best to pass the
message through their staff, recognised experts and advisors. Directors of merit units
prefer a short presentation to a small audience, followed by discussion and a set of
detailed materials; they appreciate insight into programme details (stories) combined
with the bigger picture (statistics). Programme designers share similar characteristics,
although they appreciate visual statistics, trends and spatial comparisons that provide
a basis for strategic decisions, and they prefer regular working sessions. Managers
like extensive reports, often written in technical jargon, and are interested in the
details of the researched projects; they prefer direct, regular contact, exchanges of
comments and cycles of discussion meetings.
(4) Building networks with evaluation producers and users
As aforementioned, connecting knowledge users and producers is the most basic
role of, and rationale for, KB (Holzmann, 2013; Schlierf, Meyer, 2013; Lowell et al.,
2012; Naylor et al., 2012). In the literature, brokers build networks (Taylor et al.,
2014; Sheate, Partidario, 2010), weave them (Am, 2013), link them (Reiche et al.,
2009), possess them (Boyer et al., 2009) and form their primary nodes (Warner et
al., 2011); they act as linkage agents (Mavoa et al., 2012; Waqa et al., 2013; Ward
et al., 2009).
When brokers are part of organizations that use knowledge (as is the case for
evaluation units), their practical challenge is to build relations between their
organizations and producers, mostly experts. This can be done through active
participation in conferences, joining relevant associations, and using the various
forums where producers gather to relay the information needs of the broker's
institution, suggest how particular activities of producers could fulfil those needs,
and search for common interests.
Recently, the institutionalization of evaluation professional associations has played a
particularly important role in connecting the various actors of the larger policy process
network. Networks such as the European Evaluation Society, the American
Evaluation Association and the United Nations Evaluation Group, with their
associated conferences, tend to bring together knowledge producers, users and
brokers. Our interviewees mentioned these forums as a major source of
dissemination, capacity building and networking.
Another way of connecting knowledge producers and users is for public institutions
to hire individuals with a scientific background and research experience, preferably
employed simultaneously at a university or research institution. Such a person plays
the role of a bridge connecting two social networks. Around 20% of Polish evaluation
units declare pursuing this practice.
(5) Accumulating evaluative knowledge over time
There is ample evidence that a single evaluation report is rarely a game changer.
What influences a particular course of action is the accumulation of evidence
stemming from multiple heterogeneous studies (Leviton, 2003). The use of evidence,
especially conceptual use, is a cumulative process (Rich, 1977; Weiss et al., 2005).
Therefore, building the institutional capability to accumulate evidence is crucial.
For evaluation units, the simplest way to accumulate knowledge over time is to store
the units' own reports and make them accessible to knowledge users. A more
advanced accumulation strategy consists in collecting relevant knowledge produced
by others and providing a platform, also called a "clearing house", with advanced
search options for extracting a particular body of evidence. In that case the
evaluation unit becomes a "knowledge memory" for a particular organization. The
most advanced practice of knowledge accumulation covers both explicit knowledge
(in codified forms, e.g. reports) and tacit knowledge, that is, the know-how and
individual experience of the personnel (Polanyi, 1966). In order to elicit existing tacit
knowledge, evaluation units can work on creating and sustaining communities of
practice (such as the World Bank Results, Measurement & Evidence Stream) and on
organizing team reflection (e.g. after-action reviews, data-driven reviews). In that
case knowledge accumulation activities overlap with knowledge feeding activities;
the difference is that the former are continuous and horizontal while the latter focus
on a particular study.
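As a rough illustration of the "clearing house" idea, the sketch below implements a minimal keyword-searchable store of evaluation reports. It is our own toy example: the names (ClearingHouse, Report, keywords) and the sample reports are invented for illustration and do not describe any actual system mentioned here.

```python
# Minimal sketch of a "clearing house": a searchable store of evaluation
# reports, so a particular body of evidence can be extracted on demand.

from dataclasses import dataclass, field

@dataclass
class Report:
    title: str
    keywords: set = field(default_factory=set)

class ClearingHouse:
    def __init__(self):
        self._reports = []

    def add(self, report):
        self._reports.append(report)

    def search(self, *terms):
        """Return titles of stored reports matching any of the query terms."""
        query = {t.lower() for t in terms}
        return [r.title for r in self._reports
                if query & {k.lower() for k in r.keywords}]

house = ClearingHouse()
house.add(Report("Ex-post evaluation of OP X", {"effects", "transport"}))
house.add(Report("Mid-term review of priority 3", {"procedures", "selection"}))
print(house.search("transport"))  # -> ['Ex-post evaluation of OP X']
```

A real clearing house would add full-text search, metadata standards and access control; the point here is only the core function: accumulating reports and making a particular body of evidence retrievable when a knowledge need arises.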
The majority of the evaluation units that we consulted for this research had built a
database of their own reports as well as additional resources. For example, Poland
runs a unified national database of all evaluation reports, although its current search
engine remains limited. The current challenge for Cohesion Policy evaluation units is
the generalizability of evaluation findings across subjects (beyond a single
intervention) and time (across programming periods). For many years, evaluation
reports were too focused on highly technical and project-specific issues, with very
limited scope for stimulating learning across projects and over time. To remedy this,
evaluation questions increasingly focus on the effects and mechanisms of types of
interventions, as opposed to specific procedures. Evaluation units have also started
to team up with territorial observation units to analyse types of intervention in the
context of the functional space of a region.
(6) Promoting an evidence-based culture
The final group of activities identified across evaluation units is also horizontal in
nature and takes the longest to bear fruit. It can be described as raising awareness
of the benefits of using evidence in decision-making, as well as fostering a
favourable environment for knowledge supply. Polish evaluation units are quite active
in this field, with the following undertakings:
- organizing ad hoc and systematic trainings (since 2008 the core instrument has
been the Academy of Evaluation, a post-graduate elite study programme on the
evaluation of socio-economic interventions for senior civil servants),
- running annual national and regional conferences that promote evaluation results
and create a forum for discussion of successful public policies (engaging regional,
national and international decision-makers, academics and experts),
- publishing books and manuals on knowledge use in public policies (e.g. the series
on advances in evaluation run by the Polish Agency for Enterprise Development),
- providing overviews of the evaluation activities of evaluation units,
- arranging national contests for the best evaluators and the best evaluation study.
In Poland, evaluation units from different regions tend to scrutinize each other's
efforts and often compete with ideas for promoting evaluation.
3.4 Mechanisms and expected change
A theory of change is not complete without laying out the assumptions underlying the
change process. In this section we review the change mechanisms that need to be
'fired up' for knowledge brokering to make a difference. The activities of KB should
trigger
certain behaviours among knowledge users. As shown in Figure 1, in an ideal
situation users correctly understand the evidence provided by knowledge brokers,
internalize that knowledge by modifying their individual mental models (that is their
assumptions about how interventions work) (Levitt, March, 1988; Senge, 1990) and
then act upon that gained knowledge by making improvements in an intervention.
They could make simple adjustments to actions, procedures and routines (single-loop
learning), substantial changes to the underlying premises and strategic
orientation of interventions (double-loop learning), or they could even change the ways
they analyse interventions (deutero-learning) (Argyris, Schon, 1995; Fiol, Lyles, 1985).
This chain reaction is heavily influenced by a complex set of mechanisms. In our
view, knowledge brokers have to be aware of four types of mechanisms that
determine their influence on decision-makers.
The first set lies in the way individual decision-makers perceive and comprehend
information (Cousins, Leithwood, 1986). The literature stemming from cognitive
psychology provides good insight into the boundaries of human cognition, such as
short-term memory being limited to around seven items and the need for people to
relate new information to their existing mental models (Johnson-Laird, 2009;
Lupia, 2013). Brokers should be aware of these limitations and apply appropriate
communication strategies (Evergreen, 2013).
The second set sheds light on the behavioural mechanisms of human
decision-making. Empirical findings of behavioural psychology reveal that actual
decisions on complex issues, made under uncertainty, rely heavily on heuristics
and “rules of thumb” that can lead to systematic errors and biases (Kahneman, 2011;
Tversky, Kahneman, 1974). Effective knowledge brokers have to be aware of this so-
called "bounded rationality" of knowledge users (Simon, 1991).
Thirdly, a knowledge brokering strategy should be cognizant of the dynamics of
organizational learning, which shape how evaluation findings are integrated, or not,
into a collective mental model at the level of a particular institution. Again, effective
brokers have to understand the inherent factors determining learning in organizations,
such as trust levels, incentives, organizational routines that support collective
reflection, as well as the role of leaders in that process (Lipshitz et al., 2007;
Olejniczak, Mazur, 2014).
Finally, knowledge brokers must watch for mechanisms inherent in the policy
process. Different decision regimes operate within public policies, often depending on
the topic and the stage in the policy cycle (Lindquist, 2001). These regimes assign
different value to research evidence and expertise. For example, political
negotiation values group interests, project feasibility or media opinion over
research results (Bots et al., 2010; Davies et al., 2010). Thus knowledge brokers
should be realistic: research evidence is at best only one of the many considerations
that decision-makers take into account.
4 Conclusion
The article offers an empirically tested framework for the analysis of the role of
evaluation units as knowledge brokers. Our framework is positioned at the
intersection between the domain of evaluation utilization, the growing field of
evidence-based policy and knowledge use in decision-making processes, and a largely
untapped literature on organizational learning. Cross-fertilization between these fields
can go a long way in deepening our understanding of the phenomenon of
evaluation’s role in decision-making processes.
The theory of change underlying the framework of evaluation units as knowledge
brokers states that the different actors involved in Cohesion Policy have different
knowledge needs at different stages of the policy cycle. The role of knowledge
brokers is to feed the right users with credible, well-brokered knowledge on time and
in an accessible way. This triggers individual and organizational learning, which in turn
can lead to improvements in public interventions.
This narrative transforms evaluation units from mere buyers of expertise and
producers of isolated reports into animators of "reflexive social learning" that steer
streams of knowledge to decision-makers.
We found that evaluation units already engage in many brokering activities. However,
framing evaluation units as knowledge brokers substantially rearranges the current
perspective on their role in at least five directions. First, the
framework puts policy issues and public interventions at the centre of brokers'
attention. The accent is on understanding the mechanisms that drive socio-economic
change. Therefore, the subject of brokers' activities is not single reports
but the body of evidence on the types of interventions that contribute to solving
socio-economic issues.
Second, knowledge needs are always related to actors: politicians, senior civil
servants or managers. There is a clearly defined group of knowledge users that
needs to be identified and engaged in the knowledge production process. This user-
oriented perspective requires evaluation units to follow the dynamics of the policy
cycle, understand the needs of users, their constraints and preferred forms of
communication, and sometimes even raise awareness of knowledge needs and
information gaps for a particular policy issue.
Third, the framework clearly focuses on evaluation's learning function
(underdeveloped in the evaluation of Cohesion Policy), rather than the more traditional
accountability function, which is already well covered by the numerous reporting,
monitoring, audit and control tools of Cohesion Policy. Brokers are change agents
that incrementally contribute to behavioural and organizational adaptation to
changing contexts at the levels of individual decision-makers, organizations and
policy arenas.
Fourth, the framework portrays complex knowledge brokering activities in terms of
a game dynamic that insists on matching the right configuration of elements (types
of knowledge needs with the policy cycle stage and knowledge users, research designs
with research questions, and user types with knowledge-feeding methods) to increase
the chances of success.
Fifth, the framework does not downplay the degree of uncertainty that underlies the
link between brokers' actions and their impact on the quality of interventions. Many
factors beyond evaluation findings influence the decision-making process, including
political rationality, organizational dynamics, and the characteristics and
sense-making of knowledge users. However, the better the quality of brokers'
activities and the stronger the evidence base they present, the higher the chances
of positive influence.
Although the framework substantially reshapes the role of evaluation units, it does
not demand any major increase in their current volume of activities. It has been built
on existing experiences and requires only some reorganization of thinking. Since
evaluation units can be found across all European Union countries, in US federal
agencies and in international organizations (UN, World Bank), the framework has
high potential for reaching diverse evaluation practitioners and a range of
organizations, including government bureaus and international organizations.
The proposed framework is a starting point for further exploration of how evaluation
units can become fully equipped knowledge brokers and make a difference in
complex decision-making settings. It is currently being used as the platform for an
important capacity-building enterprise with Polish evaluation units in the form of a
simulation. Evaluation units are invited to engage in a game dynamic that consists of
matching different configurations of factors (resources, time, evaluation questions,
skills, etc.) to particular task environments, with the goal of increasing the likelihood
of knowledge utilization.
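The matching logic at the heart of this game dynamic can be sketched in a few lines of code. The user types, policy stages and feeding formats below are invented for illustration only; the actual simulation uses its own configurations of resources, time, questions and skills:

```python
# Hypothetical sketch of matching knowledge users with feeding methods.
# All keys and format names are illustrative, not taken from the simulation.

PREFERRED_FORMATS = {
    ("politician", "agenda-setting"): "one-page policy brief",
    ("politician", "implementation"): "oral briefing",
    ("senior_civil_servant", "design"): "evidence synthesis",
    ("programme_manager", "implementation"): "data dashboard",
}

def feed_knowledge(user_type: str, policy_stage: str) -> str:
    """Return the feeding method matched to a user type and policy-cycle stage,
    falling back to a full report when no tailored match is defined."""
    return PREFERRED_FORMATS.get((user_type, policy_stage), "full evaluation report")

print(feed_knowledge("politician", "agenda-setting"))
```

A richer version would score each configuration (format, timing, research design) against the task environment and pick the combination with the highest expected chance of knowledge utilization, which is exactly the matching exercise the simulation asks evaluation units to perform.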
References
Abbate T., Coppolino R. (2011). "Knowledge creation through knowledge brokers: Some
anecdotal evidence". Journal of Management Control, 22(3), 359-371.
Am H. (2013). “'Don't make nanotechnology sexy, ensure its benefits, and be neutral':
Studying the logics of new intermediary institutions in ambiguous governance contexts”.
Science and Public Policy, 40(4), 466-478.
Argyris, C. & Schon, D.A. (1995). Organizational Learning II: Theory, Method, and Practice.
Reading, Massachusetts: FT Press.
Armstrong R., Waters E., Crockett B., Keleher H. (2007). "The nature of evidence resources
and knowledge translation for health promotion practitioners". Health Promotion International,
22(3), 254-260.
Azzam, T., Evergreen, S., Germuth, A. & Kistler, S. (2013). “Data Visualization and
Evaluation”. New Directions for Evaluation, 139(Fall), 7-32.
Bachtler, J. & Wren, C. (2006). “Evaluation of European Union Cohesion Policy: Research
Questions and Policy Challenges”. Regional Studies, 40(2), pp.143-pp.153.
Bardach, E. (2008). A Practical Guide for Policy Analysis: The Eightfold Path to More
Effective Problem Solving. Washington D.C.: CQ Press.
Batterbury, S. (2006). “Principles and Purposes of European Union Cohesion policy
Evaluation”. Regional Studies, 40(2), pp.179-pp.188.
Berbegal-Mirabent J., Sabate F., Canabate A. (2012). “Brokering knowledge from universities
to the marketplace: The role of knowledge transfer offices”. Management Decision, 50(7),
1285-1307.
Bergenholtz C. (2011). "Knowledge brokering: Spanning technological and network
boundaries". European Journal of Innovation Management, 14(1), 74-92.
Bielak A.T., Campbell A., Pope S., Schaefer K., Shaxson L. (2008). “From science
communication to knowledge brokering: The shift from science push to policy pull”; in:
Communicating Science in Social Contexts: New Models, New Practices, D. Cheng, M.
Claessens, T. Gascoigne, J. Metcalfe et al. (eds), 201-226. Amsterdam: Springer.
Blackman D., Kennedy M., Ritchie B. (2011). "Knowledge management: The missing link in
DMO crisis management?". Current Issues in Tourism, 14(4), 337-354.
Bots, P., Wagenaar, P. & Willemse, R. (2010). “Assimilation of Public Policy Concepts
Through Role-Play: Distinguishing Rational Design and Political Negotiation”. Simulation &
Gaming, 41(5), 743-766.
Boyer L., Roth W.-M., Wright N. (2009). “The emergence of a community mapping network:
Coastal eelgrass mapping in British Columbia”. Public Understanding of Science, 18(2), 130-
148.
Broda, B., Maziarz, M., Piekot, T., Poprawa, M., Radziszewski, A. & Zarzeczny, G. (2011).
Język raportów ewaluacyjnych, Warszawa: Ministerstwo Rozwoju Regionalnego.
Brodersen S., Jørgensen M.S. (2003). The Danish National Case Study Report. Lyngby,
Denmark: Technical University of Denmark.
Cameron D., Russell D.J., Rivard L., Darrah J., Palisano R. (2011). "Knowledge brokering in
children's rehabilitation organizations: Perspectives from administrators". Journal of
Continuing Education in the Health Professions, 31(1), 28-33.
Cash D.W., Clark W.C., Alcock F., Dickson N.M., Eckley N., Guston D.H., Jäger J., Mitchell
R.B. (2003). “Knowledge systems for sustainable development”. Proceedings of the National
Academy of Sciences of the United States of America, 100, 8086-8091.
Chaudhury M., Vervoort J., Kristjanson P., Ericksen P., Ainslie A. (2013). "Participatory
scenarios as a tool to link science and policy on food security under climate change in East
Africa". Regional Environmental Change, 13(2), 389-398.
Cillo P. (2005). “Fostering market knowledge use in innovation: The role of internal brokers”.
European Management Journal, 23(4), 404-412.
Clar, C., Prutsch, A. & Steurer, R. (2013). “Barriers and guidelines for public policies on
climate change adaptation: A missed opportunity of scientific knowledge-brokerage”. Natural
Resources Forum, 37, 1-18.
Conklin J., Lusk E., Harris M., Stolee P. (2013). “Knowledge brokers in a knowledge network:
The case of Seniors Health Research Transfer Network knowledge brokers”. Implementation
Science, 8(1).
Cooper A. (2013). “Research mediation in education: A typology of research brokering
organizations that exist across Canada”. Alberta Journal of Educational Research, 59(2),
181-207.
Cousins, B.J. & Leithwood, K.A. (1986). “Current empirical research in evaluation utilization”.
Review of Educational Research, 56(3), 331-364.
Davies, H., Nutley, S. & Walter, I. (2010). “Using evidence: how social research could be
better used to improve public service performance”; in: Walshe, K., Harvey, G. & Jas, P. (ed.)
Connecting Knowledge and Performance in Public Services: From Knowing to Doing, s.199-
225. Cambridge: Cambridge University Press.
Dilling L., Lemos M.C. (2011). “Creating usable science: Opportunities and constraints for
climate knowledge use and their implications for science policy”. Global Environmental
Change, 21(2), 680-689.
Dinan, D. (2010). Ever Closer Union: An Introduction to European Integration. London:
Palgrave Macmillan.
Dobbins M., Hanna S.E., Ciliska D., Manske S., Cameron R., Mercer S.L., O'Mara L.,
Decorby K., Robeson P. (2009). "A randomized controlled trial evaluating the impact of
knowledge translation and exchange strategies". Implementation Science, 4(1).
Donaldson, S.I., Christie, C.A. & Mark, M.M. (2008). (ed.) What Counts as Credible Evidence
in Applied Research and Evaluation Practice? Sage Publications, Inc, Los Angeles.
EGO s.c. (2010). Ocena systemu realizacji polityki spójności w Polsce w ramach
perspektywy 2004-2006. 25.11.2010, Warszawa: Krajowa Jednostka Oceny.
Ekblom, P. (2002). “From the Source to the Mainstream is Uphill: The Challenge of
Transferring Knowledge of Crime Prevention Through Replication, Innovation and
Anticipation”. Crime Prevention Studies, 13, 131-203.
Evergreen, S. (2013). Presenting Data Effectively: Communicating Your Findings for
Maximum Impact. SAGE Publications, Inc.
Feng W., Duan Y., Fu Z., Mathews B. (2009). "Understanding expert systems applications
from a knowledge transfer perspective". Knowledge Management Research and Practice,
7(2), 131-141.
Ferguson B.C., Frantzeskaki N., Brown R.R. (2013). "A strategic program for transitioning to
a Water Sensitive City". Landscape and Urban Planning, 117, 32-45.
Ferry, M. & Olejniczak, K. (2008). The use of evaluation in the management of EU
programmes in Poland, Warsaw: Ernst & Young - Program “Sprawne Państwo”.
Fiol, M. & Lyles, M. (1985). “Organizational learning”. Academy of Management Review,
10(4), 803-813.
Frost H., Geddes R., Haw S., Jackson C.A., Jepson R., Mooney J.D., Frank J. (2012).
“Experiences of knowledge brokering for evidence-informed public health policy and practice:
three years of the Scottish Collaboration for Public Health Research and Policy”. Evidence
and Policy, 8(3), 347-359.
Gagnon M.L. (2011). “Moving knowledge to action through dissemination and exchange”.
Journal of Clinical Epidemiology, 64, 25-31.
Gutierrez R.A. (2010). "When experts do politics: Introducing water policy reform in Brazil".
Governance, 23(1), 59-88.
Guston D.H. (2000). Between Politics and Science: Assuring the Integrity and Productivity of
Research. Cambridge: Cambridge University Press.
Hargadon A.B. (1998). "Firms as knowledge brokers: Lessons in pursuing continuous
innovation". California Management Review, (3), 209-227.
Hargadon A.B. (2002). "Brokering knowledge: Linking learning and innovation". Research in
Organizational Behavior, 24, 41-85.
Hargadon A., Sutton R.I. (2000). "Building an innovation factory.". Harvard Business Review,
78(3), 157-166, 217.
Heiskanen E., Mont O., Power K. (2014). “A Map Is Not a Territory: Making Research More
Helpful for Sustainable Consumption Policy”. Journal of Consumer Policy, 37(1), 27-44.
Hoeijmakers M., Harting J., Jansen M. (2013). "Academic Collaborative Centre Limburg: A
platform for knowledge transfer and exchange in public health policy, research and
practice?". Health Policy, 111(2), 175-183.
Hojlund, S. (2014a). “Evaluation use in the organizational context - changing focus to
improve theory”. Evaluation, 20(1), 26-43.
Hojlund, S. (2014b). "Evaluation use in evaluation systems: the case of the European
Commission". Evaluation, 20(4), 428-446.
Holmes J., Clarke R. (2008). “Enhancing the use of science in environmental policy-making
and regulation”. Environmental Science & Policy, 11, 702-711.
Holzmann V. (2013). “A meta-analysis of brokering knowledge in project management”.
International Journal of Project Management, 31(1), 2-13.
Howlett, M. (2011). Designing Public Policies. Principles and instruments. London, New
York: Routledge.
Huusko L. (2006). "The lack of skills: An obstacle in teamwork". Team Performance
Management, 12(2), 5-16.
Jacobson N., Butterill D., Goering P. (2005). “Consulting as a strategy for knowledge
transfer”. The Milbank Quarterly, 83(2), 299-321.
Johnson, K., Greenseid, L., Toal, S., King, J., Lawrenz, F. & Volkov, B. (2009). “Research on
Evaluation Use: A Review of the Empirical Literature From 1986 to 2005”. American Journal
of Evaluation, 30(3), 377-410.
Johnson-Laird, P. (2009). “Mental Models and Thought”; in: Holyoak, K. & Morrison, R. (ed.)
The Cambridge Handbook of Thinking and Reasoning, s.185-208. Cambridge: Cambridge
University Press.
Jinnah S. (2010). "Overlap management in the world trade organization: Secretariat influence
on trade-environment politics". Global Environmental Politics, 10(2), 54-79.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kauffeld-Monz M., Fritsch M. (2013). “Who Are the Knowledge Brokers in Regional Systems
of Innovation? A Multi-Actor Network Analysis”. Regional Studies, 47(5), 669-685.
Kingdon, J. (1995) Agendas, Alternatives, and Public Policies. Second Edition.
Kingiri A.N., Hall A. (2012). “The Role of Policy Brokers: The Case of Biotechnology in
Kenya”. Review of Policy Research, 29(4), 492-522.
Klerkx L., Schut M., Leeuwis C., Kilelu C. (2012). "Advances in knowledge brokering in the
agricultural sector: Towards innovation system facilitation". IDS Bulletin, 43(5), 53-60.
Kupiec, T. (2014). “Użyteczność ewaluacji jako narzędzia zarządzania regionalnymi
programami operacyjnymi”. Studia Regionalne i Lokalne, 2(56), 52-67.
Laubli Loud, M. & Mayne, J. (2014). (ed.) Enhancing Evaluation Use: Insights from Internal
Evaluation Units. Thousand Oaks: Sage.
Leeuw, F.L. (2003). “Reconstructing Program Theories: Methods Available and Problems to
be Solved”. American Journal of Evaluation, 24(1), 5-20.
Leeuw, F. & Furubo, J. (2008). “Evaluation Systems: What Are They and Why Study Them?”.
Evaluation, 14(2), 157-169.
Leino H. (2012). "Boundary Interaction in Emerging Scenes: Two Participatory Planning
Cases from Finland". Planning Theory and Practice, 13(3), 383-396.
Leviton, L.C. (2003). “Evaluation use: advances, challenges and applications”. American
Journal of Evaluation, 24(4), 525-535.
Levitt, B. & March, J. (1988). “Organizational Learning”. Annual Review of Sociology, 14,
319-340.
Lin Y.-H. (2012). "Knowledge brokering for transference to the pilot's safety behavior".
Management Decision, 50(7), 1326-1338.
Lindquist, E. (2001). Discerning policy influence: framework for a strategic evaluation of
IDRC-Supported research, British Columbia: International Development Research Centre.
Lipshitz, R., Friedman, V.J. & Popper, M. (2007). Demystifying Organizational Learning.
Thousand Oaks: Sage Publications, Inc.
Lomas J. (2007). “The in-between world of knowledge brokering”. BMJ, 334(7585), 129-132.
Lomas, J. (1993). “Diffusion, Dissemination, and Implementation: Who Should Do What?”.
Annals of the New York Academy of Sciences, 703(December), 226-237.
Lowell S.M., Hoffmann T.C., McGrath M., Brazil G., Thomas S.L. (2012). “Coastal and Ocean
Science-Based Decision-Making in the Gulf of California: Lessons and Opportunities for
Improvement”. Coastal Management, 40(6), 557-576.
Lupia, A. (2013). “Which Evaluations Should We Believe: Origins of Credibility and
Legitimacy in Politicized Environments”, Evaluation Practice in the Early 21st Century,
American Evaluation Association, Washington D.C.
Mavoa H., Waqa G., Moodie M., Kremer P., McCabe M., Snowdon W., Swinburn B. (2012).
“Knowledge exchange in the Pacific: The TROPIC (Translational Research into Obesity
Prevention Policies for Communities) project”. BMC Public Health, 12(1).
Mayne, J. (2008). Building an Evaluative Culture for Effective Evaluation and Results
Management. ILAC Working Paper 8.
McAllister R.R.J., McCrea R., Lubell M.N. (2014). "Policy networks, stakeholder interactions
and climate adaptation in the region of South East Queensland, Australia". Regional
Environmental Change, 14(2), 527-539.
McAneney H., McCann J.F., Prior L., Wilde J., Kee F. (2010). "Translating evidence into
practice: A shared priority in public health?". Social Science and Medicine, 70(10), 1492-
1500.
Meyer M. (2010). “The rise of the knowledge broker”. Science Communication, 32(1), 118-
127.
Meyer M., Kearnes M. (2013). "Introduction to special section: Intermediaries between
science, policy and the market". Science and Public Policy, 40(4), 423-429.
Michaels S. (2009). “Matching knowledge brokering strategies to environmental policy
problems and settings”. Environmental Science and Policy, 12(7), 994-1011.
Miller C. (2001). “Hybrid management: boundary organisations, science policy, and
environmental governance in the climate regime”. Science, Technology & Human Values,
26(4), 478-500.
Morgan M., Barry C.A., Donovan J.L., Sandall J., Wolfe C.D.A., Boaz A. (2011).
“Implementing 'translational' biomedical research: Convergence and divergence among
clinical and basic scientists”. Social Science and Medicine, 73(7), 945-952.
Morra L. & Rist, R. (2009). The Road to Results: Designing and Conducting Effective
Development Evaluations. The World Bank Press.
Nair S., Nisar A., Palacios M., Ruiz F. (2012). "Impact of knowledge brokering on
performance heterogeneity among business models". Management Decision, 50(9), 1649-
1660.
National Evaluation Unit. (2014). Impact of evaluation on the effectiveness and efficiency of
implementation of the Cohesion Policy in Poland. Good practices. Warsaw: Ministry of
Infrastructure and Development, National Evaluation Unit.
Naylor L.A., Coombes M.A., Venn O., Roast S.D., Thompson R.C. (2012). “Facilitating
ecological enhancement of coastal infrastructure: The role of policy, people and planning”,
Environmental Science and Policy, 22, 36-46.
Nutley, S., Walter, I. & Davies, H.T.O. (2003). “From Knowing to Doing. A Framework for
Understanding the Evidence-Into-Practice Agenda”. Evaluation, 9(2), 125-148.
Nutley, S.M., Walter, I. & Davies, H.T.O. (2007). Using Evidence: How research can inform
public services. Bristol: Policy Press.
Olejniczak, K. & Mazur, S. (2014). (ed.) Organizational Learning. A Framework for Public
Administration. Scholar Publishing House, Warsaw.
Olejniczak, K. (2013). “Mechanisms Shaping Evaluation System: A Case Study of Poland
1999-2010”. Europe-Asia Studies, 65(8), 1642-1666.
Olejniczak, K. (2015). “Focusing on Success: A Review of Everyday Practices of
Organizational Learning in Public Administration”; in: Bohni Nielsen, S., Turksema, R. & van
der Knaap, P. (ed.) Success in Evaluation, s.99-124. New Brunswick: Transaction
Publishers.
Partidario M.R., Sheate W.R. (2013). “Knowledge brokerage - potential for increased
capacities and shared power in impact assessment”. Environmental Impact Assessment
Review, 39, 26-36.
Pawson, R. (2002). “Evidence-based Policy: The Promise of ‘Realist Synthesis’”. Evaluation,
8(3), 340-358.
Pawson, R. (2013). The Science of Evaluation: A Realist Manifesto. London: SAGE
Publications Ltd.
Petticrew, M. & Roberts, H. (2003). “Evidence, hierarchies, and typologies: horses for
courses”. Journal of Epidemiology and Community Health, 57(7), 527-529.
Pesch U., Huitema D., Hisschemoller M. (2012). "A boundary organization and its changing
environment: The Netherlands Environmental Assessment Agency, the MNP". Environment
and Planning C: Government and Policy, 30(3), 487-503.
Polanyi, M. (1966). The Tacit Dimension. New York: Doubleday & Co.
Prewitt, K., Schwandt, T. & Straf, M. (2012). (ed.) Using Science and Evidence in Public
Policy. The National Academies Press, Washington DC.
Reiche B.S., Harzing A.-W., Kraimer M.L. (2009). “The role of international assignees' social
capital in creating inter-unit intellectual capital: A cross-level model”. Journal of International
Business Studies, 40(3), 509-526.
Reid L.A., McCormick A. (2010). “Knowledge transfer at the research-policy interface: The
geography postgraduates' experiences of collaborative studentships”. Journal of Geography
in Higher Education, 34(4), 529-539.
Rich, R.F. (1977). “Uses of social science information by federal bureaucrats: knowledge for
action versus knowledge for understanding”; in: Weiss, C.H. (ed.) Using social research in
public policy making, s.199-211. Lexington, Mass.: D.C. Heath.
Rist, R.C. & Stame, N. (2011). (ed.) From Studies to Streams: Managing Evaluative Systems
(Comparative Policy Evaluation). Transaction Publishers, New Brunswick.
Rivard L.M., Russell D.J., Roxborough L., Ketelaar M., Bartlett D.J., Rosenbaum P. (2010).
"Promoting the use of measurement tools in practice: A mixed-methods study of the activities
and experiences of physical therapist knowledge brokers". Physical Therapy, 90(11), 1580-
1590.
Rogers, P. & Funnell, S. (2011). Purposeful Program Theory: Effective Use of Theories of
Change and Logic Model. San Francisco: Jossey-Bass.
Russell D.J., Rivard L.M., Walter S.D., Rosenbaum P.L., Roxborough L., Cameron D.,
Darrah J., Bartlett D.J., Hanna S.E., Avery L.M. (2010). "Using knowledge brokers to facilitate
the uptake of pediatric measurement tools into clinical practice: A before-after intervention
study". Implementation Science, 5(1).
Rydin Y., Amjad U., Whitaker M. (2007). "Environmentally sustainable construction:
Knowledge and learning in London planning departments". Planning Theory and Practice,
8(3), 363-380.
Sanderson, I. (2002). “Evaluation, Policy Learning and Evidence-Based Policy Making”.
Public Administration, 80(1), 1-22.
Sanjeev, K. & Craig, T. (2010). “Toward a Platinum Standard for Evidence-Based
Assessment by 2020”. Public Administration Review, December, 100-106.
Schlierf K., Meyer M. (2013). “Situating knowledge intermediation: Insights from science
shops and knowledge brokers”. Science and Public Policy, 40(4), 430-441.
Shaw L. (2012). "Getting the message across: Principles for developing brief-Knowledge
Transfer (b-KT) communiqués". Work, 41(4), 477-481.
Shaw L., McDermid J., Kothari A., Lindsay R., Brake P., Page P., Argyle C., Gagnon C.,
Knott M. (2010). "Knowledge brokering with injured workers: Perspectives of injured worker
groups and health care professionals". Work, 36(1), 89-101.
Senge, P.M. (1990). The Fifth Discipline: The Art & Practice of The Learning Organization.
New York: Currency Doubleday.
Sheate W.R., Partidario M.R. (2010). “Strategic approaches and assessment techniques:
Potential for knowledge brokerage towards sustainability”. Environmental Impact
Assessment Review, 30(4), 278-288.
Shillabeer, A., Buss, T.F. & Rousseau, D.M. (2011). (ed.) Evidence-Based Public
Management: Practices, Issues, and Prospects. M.E. Sharpe, Armonk, New York, London.
Simon, H. (1991). “Bounded Rationality and Organizational Learning”. Organization Science,
2(1), 125-134.
Smith, V. (2013). “Data Dashboard as Evaluation and Research Communication Tool”. New
Directions for Evaluation, 140(Winter), 21-45.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R. & Befani, B. (2012). Broadening the
Range of Designs and Methods for Impact Evaluations, Washington DC: Department of
International Development - Working Paper 38.
Stone D., Maxwell S., Keaton M. (2001). Bridging research and policy. An International
workshop funded by the UK department for international development.
Taylor J.S., Verrier M.C., Landry M.D. (2014). “What do we know about knowledge brokers in
paediatric rehabilitation? A systematic search and narrative summary”. Physiotherapy
Canada, 66(2), 143-152.
Torres, R. & Preskill, H. (2001). “Evaluation and organizational learning: past, present, and
future”. American Journal of Evaluation, 22(3), 387-395.
Torres, R., Preskill, H. & Piontek, M. (2005). Evaluation strategies for Communicating and
Reporting. Thousand Oaks: Sage Publications.
Tran N.T., Hyder A.A., Kulanthayan S., Singh S., Umar R.S.R. (2009). "Engaging policy
makers in road safety research in Malaysia: A theoretical and contextual analysis". Health
Policy, 90(1), 58-65.
Traynor R., DeCorby K., Dobbins M. (2014). “Knowledge brokering in public health: a tale of
two studies”. Public Health.
Tucker, J. (2005). “Intervention”; in: Mathison, S. (ed.) Encyclopedia of Evaluation, s.210.
Thousand Oaks, Calif.; London: SAGE.
Turnhout E., Stuiver M., Judith J., Harms B., Leeuwis C. (2013). "New roles of science in
society: Different repertoires of knowledge brokering". Science and Public Policy, 40(3), 354-
365.
Tversky, A. & Kahneman, D. (1974). “Judgement under Uncertainty: Heuristics and Biases”.
Science, 185(4157), 1124-1131.
Tyler, C. (2013). “Top 20 things scientists need to know about policy-making”. The Guardian,
Monday 2 December.
van der Knaap, P. (1995). “Policy Evaluation and Learning: Feedback, Enlightenment or
Argumentation?”. Evaluation, 1(2), 189-216.
Van Kammen J., Jansen C.W., Bonsel G.J., Kremer J.A.M., Evers J.L.H., Wladimiroff J.W.
(2006). "Technology assessment and knowledge brokering: The case of assisted
reproduction in the Netherlands". International Journal of Technology Assessment in Health
Care, 22(3), 302-306.
Waqa G., Mavoa H., Snowdon W., Moodie M., Schultz J., McCabe M., Kremer P., Swinburn
B. (2013). “Knowledge brokering between researchers and policymakers in Fiji to develop
policies to reduce obesity: A process evaluation”. Implementation Science, 8(1).
Ward V.L., House A.O., Hamer S. (2009). “Knowledge brokering: Exploring the process of
transferring knowledge into action”. BMC Health Services Research, 9(1).
Ward V.L., Smith S., House A. & Hamer S. (2012). “Exploring knowledge exchange: a
useful framework for practice and policy”. Social Science and Medicine, 74, 297-304.
Warner G., Lyons R., Parker V., Phillips S. (2011). “Advancing coordinated care in four
provincial healthcare systems: Evaluating a knowledge-exchange intervention”. Healthcare
Policy, 7(1), 80-94.
Weiss, C.H. (1998). “Have We Learned Anything New About the Use of Evaluation?”.
American Journal of Evaluation, 19(1), 21-33.
Weiss, C.H., Murphy-Graham, E. & Birkeland, S. (2005). “An Alternate Route to Policy
Influence: How Evaluations Affect D.A.R.E.”. American Journal of Evaluation, 26(1), 12-30.
White D.D., Wutich A., Larson K.L., Gober P., Lant T., Senneville C. (2010). "Credibility,
salience, and legitimacy of boundary objects: Water managers' assessment of a simulation
model in an immersive decision theater". Science and Public Policy, 37(3), 219-232.
Willems M., Schroder C., Post M., Van Der Weijden T., Visser-Meily A. (2013). “Do
knowledge brokers facilitate implementation of the stroke guideline in clinical practice?”. BMC
Health Services Research, 13(1).
Yousefi-Nooraie R., Dobbins M., Brouwers M., Wakefield P. (2012). "Information seeking for
making evidence-informed decisions: A social network analysis on the staff of a public health
department in Canada". BMC Health Services Research, 12(1).
Zook M.A. (2004). "The knowledge brokers: Venture capitalists, tacit knowledge and regional
development". International Journal of Urban and Regional Research, 28(3), 621-641.
Conference Paper
Full-text available
Public policies need research results in order to effectively address the complex socioeconomic challenges (so-called: evidence-based policies). However there is a clear gap between producing scientific expertise and using it in public decision-making. This "know-do" gap is common in all policy areas. Knowledge brokering is a new and promising practice for tackling the challenge of evidence use. It means that selected civil servants play the role of intermediaries who steer the flow of knowledge between its producers (experts and researchers) and users (decision makers and public managers). Knowledge brokering requires a specific combination of skills that can be learnt effectively only by experience. However this is very challenging in the public sector. Experiential learning requires learning from own actions-often own mistakes, while public institutions tend to avoid risk and are naturally concerned with the costs of potential errors. Therefore, a special approach is required to teach civil servants. This article addresses the question of how to develop knowledge brokering skills for civil servants working in analytical units. It reports on the application of a simulation game to teach civil servants through experiential learning in a risk-free environment. Article (1) introduces the concept of knowledge brokering, (2) shows how it was translated into a game design and applied in the teaching process of civil servants and (3) reflects on further improvement. It concludes that serious game simulation is a promising tool for teaching knowledge brokering to public policy practitioners.
Book
This book combines political-economic, sociological and historical approaches to provide a coherent framework for analysing the changing relationship between politics and science in the United States. Fundamental to this relationship are problems of delegation, especially the integrity and productivity of sponsored research: politicians must see that research is conducted with integrity and productivity, and scientists must be able to show it. A science policy regime changes when solutions to these problems change. After World War II, the 'social contract for science' assumed that the integrity and productivity of research were automatic and, despite many challenges, that contract endured for four decades. However in the 1980s, as rich empirical studies show, cases of misconduct in science and flagging economic performance broke the trust between politics and science. New 'boundary organizations', in which scientists and nonscientists collaborate to assure the integrity and productivity of research, were created to mend the relationship.
Book
The theme of this book is about using evaluation to best effect. It draws on the experience and rich examples of internal evaluation units to share information about the different measures they use to improve the use of evaluations. The authors argue that while a great deal of advice about evaluation utilisation has already been written, it is generally the work of "outsiders". There is a real lack of information coming from the experienced "insiders" who commission, manage or even carry out evaluations in their organisations. The book offers a wealth of practical knowledge which will help readers ensure that evaluation does not just sit on the shelf, but becomes a valuable resource for adding value to their organisations.
Book
This book presents a solid, research-based conceptual framework that demystifies organizational learning and bridges the gap between theory and practice. Using an integrative approach, authors Raanan Lipshitz, Victor Friedman and Micha Popper provide practitioners and researchers with tools for understanding organizational learning under real-world conditions.
Article
This textbook provides a concise and accessible introduction to the principles and elements of policy design in contemporary governance. Howlett seeks to examine in detail the range of substantive and procedural policy instruments that together comprise the toolbox from which governments select specific tools expected to resolve policy problems. Guiding students through the study of the instruments used by governments in carrying out their tasks, adapting to, and altering, their environments, this book: Discusses several current trends in instrument use often linked to factors such as globalization and the increasingly networked nature of modern society. Considers the principles behind the selection and use of specific types of instruments in contemporary government. Evaluates in detail the merits, demerits and rationales for the use of specific organization, regulatory, financial and information-based tools and the trends visible in their use Addresses the issues of instrument mixes and their (re)design in a discussion of the future research agenda of policy design. Providing a comprehensive overview of this essential component of modern governance and featuring helpful definitions of key concepts and further reading, this book is essential reading for all students of public policy, administration and management.
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. In general, the heuristics are quite useful, but sometimes they lead to severe and systematic errors. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. However, the reliance on this rule leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or event. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development, and the third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available.
Article
Introduction Public service organisations are preoccupied with understanding how good performance can be achieved: what matters is what works. But delivering high-quality services requires a far wider array of evidence than just that on effectiveness – we need, for example, knowledge about the scale, source and structuring of social problems; practical ‘know-how’ to support effective programme implementation in local contexts; and insights into the relationships between values and policy directions. Research can make an important contribution to the development of public services and policy programmes, and it can enrich debates about the nature of social problems and what works in addressing them. However, such positive research impacts are far from routine, and the impact of research is not always positive. Negative impacts may, for example, arise in situations where tentative or highly specific findings are seized upon too readily or applied too widely. Despite this, the overzealous use of research is not normally considered to be the main problem. Quite the opposite; researchers and others are often disappointed that clear findings are overlooked or ignored when decisions are made about the direction and delivery of services. This view is supported by many studies that have found that practice often lags behind the best available evidence about what works and that it may remain out of step for quite some time.