Challenges and Lessons in Implementing Results-Based Management

JOHN MAYNE
Adviser on Public Sector Performance, Canada

Integrating performance information into budgeting, managing and reporting
has become a common component of good public and not-for-profit
management. In many jurisdictions, efforts to do so have been under way for
many years, yet progress is usually seen as slow at best. It is also clear that,
while much has been learned, many challenges remain; few organizations
would argue they have been completely successful. The paper argues that
implementing results-based management-type initiatives is difficult because
to do so impacts throughout an organization. Many of the key challenges
are organizational challenges rather than technical: implementing results-
based management is not primarily a measurement problem. A discussion is
provided of 12 key challenges to results-based management, identifying the
challenge, noting the experience others have had in relation to the challenge
and providing lessons and suggestions for dealing with them.

KEYWORDS: attribution; implementation challenges; organizational
learning; performance monitoring; results-based management
Integrating performance information – information on results from performance
measurement and/or evaluations – into budgeting, managing and reporting
has become a common component of public and not-for-profit management
(Kettl, 1997; Moynihan, 2006; Norman, 2002; OECD, 1997; Pollitt and
Bouckaert, 2000; Wholey and Hatry, 1992; Zapico and Mayne, 1997). This is
certainly true of OECD countries (OECD, 2005), many developing countries
(World Bank, 2002), UN organizations (such as UNDP, 2004) and many non-
profit non-governmental organizations (Mayston, 1985). In some of these cases,
initiatives to develop performance information capacity are recent, but in other
cases efforts in this direction have been going on for decades. Reflecting recent
OECD findings, Curristine (2005a: 150) recently concluded, ‘The performance
orientation in public management is here to stay. It is essential for successful
government’.
Evaluation
Copyright © 2007
SAGE Publications (Los Angeles,
London, New Delhi and Singapore)
DOI: 10.1177/1356389007073683
Vol 13(1): 87–109
Such efforts are linked with the general reforms undertaken in many public sectors
over the past 20 years (Aucoin, 1995; OECD, 2005; Pollitt and Bouckaert, 2000;
Schick, 2003). In order to improve public (and not-for-profit) sector performance
and accountability, most reforms aimed to free managers from upfront controls and
reduce the emphasis on compliance, while requiring better monitoring, measuring
and accounting of the results that are being obtained from the expenditure of public
money.
It is also clear that, while much has been learned, many challenges remain;
few organizations would argue that they have been completely successful in
integrating performance information into their management and budgeting.
Meyer and Gupta (1994), Smith (1995), Perrin (1998), Wholey (1999), van Thiel
and Leeuw (2002), Bouckaert and Peters (2002), Feller (2002) and Thomas (2005)
are some of the authors who discuss specific challenges and pitfalls in results-
based management.
Why are we Still Discussing Challenges?
It is reasonable to ask, given the efforts made over the past 25–30 years in
many countries and jurisdictions, why performance information – both that
from performance measurement and from evaluations – is not routinely part of
management and budgeting systems?1 Haven’t we learned? Haven’t we ‘solved’
all the significant problems? And if not, as Uusikylä and Valovirta (2004) ask,
‘Why is that?’. Behn (2002) similarly asks why everyone isn’t jumping on the
performance-management bandwagon.
The fact is that considerable progress has been made and lessons are there for
the learning. It is now widely accepted that performance information should be part
of public and not-for-profi t management and budgeting. This has not always been
so and lack of agreement on the usefulness of performance information has been a
major stumbling block in the past – and no doubt still is in some quarters.
But the slow and sometimes frustrating lack of progress remains. There are
no doubt many reasons for this state of affairs, and certainly the specific reasons
will vary by jurisdiction and organization. As will be discussed, the challenges
are quite real and many are formidable. Reflecting on this history, the following
broad observations can be made.
The Focus has Changed
Early efforts at results-based management were most often focused on outputs –
the direct goods and services produced – rather than on outcomes – the benefits
achieved as a result of those goods and services. Today the focus is more on
outcomes. What are citizens getting for their tax money? Are the beneficiaries
really benefiting as intended from the service or programme? Whatever challenges
there were in using output information in public and not-for-profit management,
the challenges are significantly more complex and have a much greater effect
when outcome information is the focus. Lessons learned when measuring and
using output information may be of limited use when outcome information is
sought and used.
It Requires Fundamental Changes
A key reason for the difficult progress is that integrating performance information
into management and budgeting is not primarily a technical problem that can be left
to ‘experts’ such as performance measurers and evaluators. Rather, an evidence-
based outcome focus can require significant and often fundamental changes in how
an organization is managed; in how public sector and non-profit organizations go
about their business of delivering programmes and services. It requires behavioural
changes. Behn (2002: 9) argues that ‘It requires a complete mental reorientation’.
It often requires significant changes to all aspects of managing, from operational
management to personnel assessment to strategic planning to budgeting. And it
usually requires that elusive ‘cultural change’, whereby performance information
becomes valued as essential to good management. Seen in this light, perhaps it is
not surprising that progress has been challenging.
It Takes Years
Another key reason for slow progress is that it takes time and perseverance:
evidence suggests at least four to five years of consistent effort – and many
organizations have been working towards it for much longer. The problem is
that key people move on, governance structures change, priorities shift. A lot of
relearning has taken place in the history of results-based management. And four
to five years is just the timeline to get up and running. Good management requires
constant attention. To quote a cliché, integrating performance information into
managing and budgeting is a journey, not a destination.
It Costs
Yet another key reason is that developing and using performance information
costs time and money, time and money that an already harassed public or non-
profit organization often does not have. Managers may feel, literally, that they do
not have enough time or resources to manage or budget for results.
The general point is that if the challenge is seen mainly as one of measuring,
or as an initiative that can be completed in a year of hard work, or that it can
be carried out using existing resources – since after all, it is just part of good
management! – then progress will likely be slow, spotty and not lasting.
It is in light of this history and background that the lessons and the more
specific challenges faced by organizations2 can be discussed.
Lessons for Learning
Given the attention paid in many jurisdictions and organizations to the use of
performance information, it is not surprising that there have been quite a few
studies and reports of the experiences. These include reviews of efforts in a
number of private sector organizations, development cooperation agencies, and
in organizations in both developed and developing country governments (Azevedo, 1999;
Binnendijk, 2000; Diamond, 2005; Ittner and Larcker, 2003; Letts et al., 2004; Office
of the Auditor General of Canada, 2000; Perrin, 2002, 2006; World Bank, 2002).
Clearly there is much experience on which to build. Organizations developing
their performance measurement practices can benefit from these experiences.
A synthesis of the challenges discussed in the literature in implementing results-
based management produces an extensive list. A main theme of this article is that
the challenges are more than technical ones dealing with, for example, various
measurement problems. Rather, the main challenges are often organizational and
behavioural in nature, whereby organizations and the people in them need to
change how they do or see things.
Thus, two types of challenges are identified: organizational (behavioural)
and technical. Organizational challenges cover areas where organizations and
the people in them need to change or to do things not done before. Technical
challenges are those where expertise is required in measurement and reporting.
And for many of these challenges there are embedded conceptual challenges
where new thinking is required about a problem, whether by individuals or
organizations. There are clear overlaps between these two groups of challenges.
Nevertheless, the groupings are useful for discussion. Uusikylä and Valovirta
(2004) make a similar distinction.
Table 1 lists 12 key challenges based on the relevant literature, previous
experience and results from a recent OECD survey (Curristine, 2005a). For each,
the challenge posed is presented and discussed, related lessons that have emerged
are outlined and potential future directions indicated.
Table 1. Challenges to Implementing Results-Based Management in Public
Organizations
Organizational challenges
1. Fostering the right climate
2. Setting realistic expectations
3. Implementing to get buy-in and use
4. Setting outcome expectations
5. Selectivity
6. Avoiding distorting behaviour
7. Accountability for outcomes

Technical challenges
8. Measurement
9. Attribution
10. Linking financial and performance information
11. Data quality
12. Reporting performance
Organizational Challenges and Lessons
Box 1 sets out the organizational challenges and ways to address the challenges,
each of which will be discussed.
Box 1. Organizational Challenges and Approaches to Implementing Results-Based
Management in Organizations
1. Fostering the Right Climate
•  the need for strong leadership;
•  getting the right incentives in place;
•  developing and supporting a learning culture;
•  valuing evidence-based information

2. Setting Realistic Expectations for Results-Based Management
•  supporting modesty in the role of performance information
•  developing realistic demand for performance information
•  educating users of performance information

3. Implementing to Get Buy-In and Use
•  involvement in developing performance information
•  maintaining momentum, committing time and money
•  structurally linking performance information and decision-making
•  creating learning mechanisms

4. Setting Outcome Expectations
•  moving beyond outputs
•  using results chains
•  using predictive and stretch expectations
•  having a strategy

5. Selectivity
•  avoiding information overload
•  using information technology

6. Avoiding Distorting Behaviour
•  reviewing measures regularly
•  focusing on outcomes
•  using counterbalancing measures

7. Accountability for Outcomes
•  a realistic view of accountability
•  dealing with shared outcomes
Fostering the Right Climate for Performance Information
The challenge This is the ‘culture change’ issue. It is difficult to get individuals
in organizations (and governments) to change their management behaviour.
Performance information has typically not played a large role in how
they manage themselves. And these organizations have been able to carry on
without this information, so why change?
Box 1 indicates there are a number of issues here that need attention: the need
for strong leadership, the need for the right incentives for people to diligently
gather and use performance information, the importance of a learning culture
and the capacity to adapt, and valuing evidence-based, especially outcome, information.
The experience Almost all discussions of building performance information systems
stress this challenge. An OECD 2005 survey (Curristine, 2005b) of member
countries confirmed that the most important factor cited to explain success in
performance management systems is strong leadership. In their reviews of the
experiences of development cooperation agencies and of OECD countries, both
Binnendijk (2000) and Perrin (2002, 2006) point to the need for top leadership
support and supporting incentives in the organization. Binnendijk notes the need
to use the information for learning, not just external reporting. Perrin, as does
Thomas (2005), stressed the need to develop a results-focused culture.
The review of the World Bank experience (2002) noted the key role played
by senior management and the board as they started paying more attention to
results. Kusek, Rist and White (2003), in reviewing the experiences of developing
governments, pointed to the need for strong leadership, usually through a strong
champion at the most senior level of government. Azevedo (1999) noted the need
for a ‘recognition system’ that recognized good performance. Norton (2002), in
looking at private sector organizations that had been successful with implementing
balanced scorecards, noted that in each case the organization was introducing
new strategies requiring significant change and new cultures. Instilling a results-
oriented culture is a key challenge identified by the US General Accounting Office
(1997a). Pal and Teplova (2003) discuss the challenge of aligning organizational
culture with performance initiatives.
Discussion If performance information is not valued by an organization and
the many incentives in the organization do not and are not seen to support the
development and use of performance information, success is unlikely, no matter
if other challenges are met.
Culture change in organizations is quite difficult to bring about, for any number
of reasons.
•  People are quite comfortable doing things the way they have been done in
the past. Indeed, performance information for many may appear as limiting
their scope of action – ‘the comfort of ambiguity’.
•  Some are fearful of evidence-based approaches to management and
budgeting, seeing it as an erosion of years of built-up experience.
•  The formal and informal incentives in organizations are powerful and well
known, and may not be seen to value performance information. Or the lack
of incentives may make it difficult to integrate performance information
into the existing organizational processes. As early as 1983, Wholey devoted
a whole chapter to creating incentives to support a performance focus in
government.
•  Senior management may be seen as only paying lip service to this latest
‘trend’; others will do likewise.
•  If budget decisions clearly ignore performance information, the message is
clear.
This challenge is very real and requires strong leadership and supporting incentives
to get around it. A culture of learning is required where evidence on what works
and what doesn’t is valued and acted upon to improve performance. A number of
the remaining challenges relate quite closely to this one.
Setting Realistic Expectations for the Role of Performance Information
The challenge The long history of efforts at introducing performance information
to managing and budgeting has been fraught with setbacks. And often these have
to do with unrealistic expectations set out for or assumed for what performance
information can do in an organization. Performance information has been
cast by some as a panacea for improving management and budgeting: users of
performance information will have at their fingertips everything they need to
know to manage, budget or hold to account. Such is not and will not be the case.
The experience Perrin (2002) notes that many OECD governments have found
building a performance information system to be more difficult than expected.
Diamond (2005) argues that countries with limited experience in performance
information should proceed modestly. Thomas (2005: 5) notes that
. . . performance measurement and performance management were oversold as offering
an objective and rational approach to overcoming the constraints of ‘politics’. . . . Recent
stock taking in the leading jurisdictions has created more realistic expectations and led
to a scaling back of [performance measurement/performance management] efforts.
Melkers and Willoughby (2001: 54) suggest the greatest difficulty in implementing
performance-based budgeting is ‘the differing perceptions of use and success
among budget players’.
Discussion One of the lessons that should have been learned by now is the need
for modesty. The difficulty of developing and using performance information,
as exemplified by these challenges, should be recognized by all. Further, the
role of performance information is one of informing decisions, not determining
them. There is a real need to educate the users of such information on how to
use the information and on its possible interpretations and limitations. The need
for experience and management skills will always remain at the centre of public
sector management.
The importance of sensible and informed use of performance information
may be especially pertinent for budget decision-makers. There may be a
temptation to use evidence of poorly performing programmes but to ignore or
question performance information about well-performing ones. Misuse here
will send quite strong messages. Performance information will normally not be
comprehensive and will contain some uncertainty; its role should always be seen as
informing. Judgement and larger issues will always be part of good budgeting and
managing.
And there is some evidence that this view is being accepted. Curristine
(2005b: 124) notes that, despite advocate arguments that performance budgeting
ought to be closely tied to performance information, ‘the trends indicate that a
majority of countries have taken a realistic and sensible approach. They do use
performance information . . . to inform, but not determine, budget allocations’.
Implementing to Get Buy-in and Use
The challenge How a performance information system is implemented in an
organization is critical to its success. Important factors can be:
•  the need to get buy-in throughout the organization;
•  the strategy used to get started;
•  the need to maintain momentum once started;
•  the realization that the process requires many years to succeed;
•  making sure the system reflects the organization; and
•  the importance of encouraging learning from performance information.
The time frame has already been identified as one of the overriding issues.
Integrating performance information into management and budgeting needs
ongoing commitment over many years. It is not a short-term, one-, two- or three-
year initiative. Many organizations and governments have diffi culty maintaining
momentum over the long term. A long-term commitment also implies the need
for resources over the long term. Developing performance information is not
cost-free.
Further, the aim here is to use performance information to learn what
works well and what does not. Organizational learning is a challenge for many
organizations.
The experience The UNDP (2004) noted the need for its system to be
designed to meet its own specific needs. Off-the-shelf solutions do not work. The
OECD argued that ‘approaches to implementing performance management . . .
must be selected according to the needs and situations of each country’ (1997: 29).
Both Binnendijk (2000) and Perrin (2002) point to the importance of ownership
by managers of the system, ensuring the information is useful to them, with Perrin
recommending a bottom–up approach with active grass-roots staff involvement.
He stresses the importance of providing feedback to those who are supplying the
information. The experience of CARE stresses the importance of getting buy-in
throughout the organization (Letts et al., 2004). In summarizing the conclusions
of a roundtable of a number of countries on their experience in implementing
results-based management, Perrin (2006) points to the need for both a bottom-up
and a top-down approach.
Binnendijk (2000) notes that several donor agencies found it useful first
to introduce pilot projects in selected areas, before moving to full-scale
implementation. The UNDP (2004) used pilot projects to refine its approach, as
did the World Bank (2002). Perrin (2006) also notes the use of pilots in both
developed and developing countries. The UNDP (2004) noted that implementing
results-based management is a learning process that takes time. They have
been at it now for seven to eight years. Binnendijk (2000) notes that experience
suggested it takes five to ten years and requires adequate resources.
Itell (1998), in reviewing the progress of pioneers in the field in the US, talks
about not overreaching when starting. Perrin (2002) argued the need for a
strategic rather than a piecemeal approach, with several governments advocating
the need to proceed slowly, revising and learning over time. There is also general
agreement on the need for some central unit to oversee the development and
maintenance of the system, as well as to help maintain momentum.
Moynihan (2005) argues that ‘The weakness of most state MFR systems . . .
lies between the points of dissemination of the data (which is done well) and use
(the ultimate purpose, which is done poorly)’. He argues that the gap is the lack of
learning forums, ‘routines that encourage actors to closely examine information,
consider its significance, and decide how it will affect future action’ (2005: 205),
and that as much attention ought to be paid to mechanisms for learning as to
mechanisms for data collection. Barrados and Mayne (2003) discuss the needs of
a results-based learning culture in organizations.
Discussion While direction and support from the top is essential to building
a performance information system, equally important is the need to build
and implement the system in such a way that interest in and ownership of the
information being gathered and used is built throughout the organization. A
mixed bottom–up and top–down approach is usually best. A system built on
filling in performance information forms for others, with no apparent use for
those down the line, is unlikely to be robust and survive over time.
A frequent approach in implementing is to develop the system in several pilot
areas first, to see how it goes and build success around its proven usefulness.
Finding champions who are willing to try out a more performance-based approach
is quite useful, as they can act as credible supporters of the system for
their colleagues. The idea of deliberately building in learning events or forums to
develop a learning culture is perhaps an approach that needs more attention.
Initiatives in management come and go. Some have seen this ebb and flow with
respect to the importance of performance information. Persistence over many
years is needed. Indeed, there have to be new ways of managing and budgeting,
ways that place a premium on the use of evidence. Realizing the long-term
commitment required, celebrating progress, rewarding successes and sticking
with it are what count.
Setting Performance Expectations for Outcomes
The challenge Essential to integrating performance information into managing
and budgeting is the need for organizations to establish reasonable expectations
about what level of performance is expected to be achieved. This is a serious
challenge for a number of reasons:
•  It directly raises the question of accountability for performance.
•  Outcomes are by definition results over which organizations do not have
complete control; setting targets can be seen as dangerous. It may not be at
all known what reasonable levels ought to be.
•  It may not be clear whether the expectations to be set are predictions of future
levels that can be achieved – predictive targets – or are stretch targets – levels
to be aimed for to inspire better performance.
•  Setting acceptable expectations may require dialogue with the beneficiaries
and/or budget officials.
The experience Many OECD governments (Perrin, 2002) acknowledge the
importance of targets but also the particular challenge of dealing with outcomes.
Ittner and Larcker (2003) discuss the problem of not setting the right targets in
the private sector. Wholey, with considerable experience in the area, has pointed
out, ‘The most important initial step in performance-based management is getting
a reasonable degree of consensus on key results to be achieved’ (1997: 100). The
US GAO (1997a, 1997b) points to goal clarification as an ongoing problem. Boyne
and Law (2005) discuss the challenges and the progress made in setting outcomes
targets within the framework of local public service agreements in the UK.
Discussion Some authors have identified this challenge as the most difficult,
raising as it does accountability issues, considerably complicated when expectations
for outcomes rather than outputs are being sought. Indeed, the accountability
issue is worth a separate discussion, and hence it is identified as a key
conceptual challenge.
A simplistic interpretation of the situation here is that specific expectations
are set, performance recorded and the variance is determined, indicating strong
or weak performance on the part of organizations (or managers). Of course, the
results may just show that the expected targets were poorly envisaged or just a
wild guess. Or were set deliberately low enough that it would have been hard not
to achieve them!
The question of whether performance expectations that have been set are
predictive targets or perhaps more usefully stretch targets is important to
determine, so that interpretation of performance against those expectations
is meaningful (Mayne, 2004). What is the purpose of setting targets? To assess
performance or to learn how to improve performance?
Setting performance expectations can also serve a useful role in discussion
between programmes and those who are to benefit, as well as with budget officials
on what is possible.
There are technical issues as well. Programmes and policies can be thought
of in terms of results chains, whereby certain activities produce a set of outputs
that in turn produce a chain of effects intended to influence the final outcomes
sought. Expectations can be set at a variety of levels in this chain of results, even
if the final outcome is seen as ‘the bottom line’. Meeting a target at one particular
level may or may not be important. What is most important is that the whole
results chain is in fact happening as expected. That is the real performance story.
Setting expectations might better be thought of as answering the question, ‘Has
the chain of expected events which set out the theory of the programme – and the
specific targets in it – been realized?’. For further discussion, see Mayne (2004).
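To make the idea concrete, the following is a minimal sketch in Python; the programme, indicators, targets and observed values are hypothetical, invented for illustration rather than drawn from the article. It shows how expectations can be attached at several levels of a results chain and read together, so that the question becomes whether the chain as a whole is unfolding as expected rather than whether a single target was hit.

# A hypothetical results chain for an illustrative employment-training programme.
# Each link carries its own expectation; performance is read across the whole chain.
results_chain = [
    {"level": "output", "indicator": "participants trained", "expected": 500, "observed": 480},
    {"level": "immediate outcome", "indicator": "participants completing", "expected": 400, "observed": 410},
    {"level": "intermediate outcome", "indicator": "participants employed at 6 months", "expected": 250, "observed": 190},
    {"level": "final outcome", "indicator": "participants still employed at 2 years", "expected": 150, "observed": None},  # not yet measurable
]

def chain_report(chain):
    """Report, link by link, whether the expected chain of events is being realized."""
    for link in chain:
        if link["observed"] is None:
            status = "no data yet"
        elif link["observed"] >= link["expected"]:
            status = "on track"
        else:
            status = "below expectation"
        print(f'{link["level"]:22} {link["indicator"]:45} {status}')

chain_report(results_chain)

Read this way, the ‘below expectation’ link in the middle of the chain is more informative than a single pass/fail verdict on the final outcome alone.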
Selectivity
The challenge While in some quarters there may be concerns about a lack of
performance information, an even more common problem has been the danger
of information overload. For any given programme, a huge array of possible
measures and evaluative information can be created, easily swamping the ability
of users to deal with the information. Quite a few performance measurement
systems have collapsed under the weight of too much information. Most now
realize the need for selectivity in what information is gathered and used.
However, selectivity is easier to talk about than to achieve. Selectivity means
that some information will not be collected or not reported, information that
someone somewhere may want sometime. How to best deal with the information
overload challenge is not completely clear.
The experience Binnendijk (2000), the OECD (2003) and the UNDP (2004)
all argue for keeping the system relatively simple and user-friendly, limiting the
number of measures used. The 2005 OECD survey (Curristine, 2005a) notes the
increasing concerns expressed about the danger of information overload.
Discussion Collecting all performance information about a programme is not
practical, so some selection must occur. But organizations often find it takes
some time – often years – to determine which data are truly needed and worth
collecting. And what is seen as key today will likely change in the future. Review
and updating are essential, with decisions about what information to collect taken
deliberately. It is common to find that some measures for which data have been
collected turn out to be of less interest and these should be dropped. The need for
identification of the key measures is important, but so is the need to realize that
information requirements must evolve over time if they are to remain pertinent.
One way to deal with the information overload problem may be through the
smart use of information technology. Today – or in the near future – organizations
should be able to have large databases from which, in a straightforward manner,
concise, individually designed reports can be produced to meet the needs of
different users. One can imagine a web-based data system with a user-friendly
interface allowing each person to design their own performance report. This may
ultimately be how ‘information overload’ can be dealt with.
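As a rough illustration of that idea only – the measures, fields and user request below are invented, and a real system would sit on a proper database with a web front end – a shared store of performance data can be filtered down to a short, user-defined report rather than pushed out in full to everyone:

# Hypothetical shared performance database: one row per measure per period.
performance_data = [
    {"programme": "housing", "measure": "units renovated", "level": "output", "period": "2006", "value": 1200},
    {"programme": "housing", "measure": "tenant satisfaction (%)", "level": "outcome", "period": "2006", "value": 72},
    {"programme": "training", "measure": "courses delivered", "level": "output", "period": "2006", "value": 340},
    {"programme": "training", "measure": "employment at 6 months (%)", "level": "outcome", "period": "2006", "value": 48},
]

def custom_report(data, programmes=None, levels=None, period=None):
    """Return only the rows a particular user has asked to see."""
    return [
        row for row in data
        if (programmes is None or row["programme"] in programmes)
        and (levels is None or row["level"] in levels)
        and (period is None or row["period"] == period)
    ]

# A budget official might ask only for outcome measures for 2006:
for row in custom_report(performance_data, levels={"outcome"}, period="2006"):
    print(row["programme"], "-", row["measure"], "=", row["value"])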
Avoiding Distorting Behaviour
The challenge The classic problem in using performance measures is that, by
selecting a few specific indicators with accompanying targets, managers and
staff focus on improving those numbers, perhaps to the detriment of what the
programme is actually trying to achieve. This is a more significant danger when the
measures are outputs or lower level outcomes. The experience with performance
measures is replete with examples of this kind of behaviour distortion. The oft-
used expression ‘what gets measured gets done’, used as a good principle, might
rather be seen as a warning of what can happen when measurement gets it
wrong.
The experience Binnendijk (2000), Curristine (2005a), Diamond (2005), Perrin
(2002) and the World Bank (2002) all discuss this classic problem. Most organi-
zations that have moved into results-based management have encountered this
phenomenon. Perrin (1998) discusses the many ways performance measures can
be misused. Feller (2002) provides some examples in the areas of higher educa-
tion and science, as do Wiggins and Tymms (2002) for primary schools in England
and Scotland. Again with respect to schools, Bohte and Meier (2000) discuss goal
displacement and ‘organizational cheating’ when staff are forced to use some per-
formance measures. Van Thiel and Leeuw (2002) review a number of unintended
consequences that can arise when using performance measures. Boyne and Law
(2005) likewise discuss the dangers of using the wrong measures.
Discussion This issue is often presented as a major objection to using performance
measures, and the danger is indeed real. Part of the answer lies in addressing the
first challenge listed. If sensible use of performance information is encouraged
and supported by the right incentives, the danger of distorting behaviour will
be lessened. Performance measures should be reviewed and updated regularly
to ensure they remain relevant, useful and are not causing perverse behaviour
or other unintended effects. And the use of a set of counterbalancing measures
rather than only one can often reduce these problems.
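A small sketch of what such a counterbalancing set might look like in practice; the unit, measures and thresholds are hypothetical, chosen only to show the mechanism. A volume measure is read alongside quality and cost measures, so that pushing the headline number up at the expense of the others becomes visible rather than rewarded.

# Hypothetical counterbalancing set for a claims-processing unit.
# No single measure is allowed to tell the performance story on its own.
measures = {
    "claims processed per staff-day": {"value": 38, "minimum_acceptable": 30},        # volume
    "decisions overturned on appeal (%)": {"value": 4.5, "maximum_acceptable": 5.0},  # quality counterweight
    "average cost per claim": {"value": 41.0, "maximum_acceptable": 45.0},            # cost counterweight
}

def balanced_view(measure_set):
    """Flag any measure outside its acceptable range; 'good' requires the whole set to hold."""
    problems = []
    for name, m in measure_set.items():
        if "minimum_acceptable" in m and m["value"] < m["minimum_acceptable"]:
            problems.append(name)
        if "maximum_acceptable" in m and m["value"] > m["maximum_acceptable"]:
            problems.append(name)
    return "all measures within range" if not problems else "attention needed: " + ", ".join(problems)

print(balanced_view(measures))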
Further, the more the focus is on higher level outcomes, the less chance there
is of this phenomenon occurring since the measures are then closely related to
the true aim of the activities. Indeed, as discussed earlier, even better would be to
focus on the whole results chain and the extent to which it is happening.
Accountability for Outcomes
The challenge People are generally comfortable with being accountable for
things they can control. Thus, managers can see themselves as being accountable
for the outputs produced by the activities they control. When the focus turns
to outcomes, they are considerably less comfortable, since the outcomes to be
achieved are affected by many factors not under the control of the manager:
social and economic trends, exogenous events and other programmes.
It may not be clear just what accountability for outcomes can sensibly mean.
If outputs are not delivered, one can rightly point to the manager responsible
and take corrective action. If outcomes do not occur, and the same action is
automatically taken, few in the future will be willing to commit to outcomes. A
somewhat different approach to accountability in this case is required.
There is a second aspect to this challenge that again arises when outcomes
are the focus. Many outcomes of interest to governments involve the efforts of
several programmes and often several ministries. The outcomes are shared. Can
the accountability for those outcomes also be shared? If so, how?
The experience A number of authors speak to the need to rearticulate what
accountability might mean in a results-based management regime (Behn, 2000;
Hatry, 1997; Shergold, 1997). Binnendijk (2000) discusses the need to give
managers who are managing for outcomes the necessary autonomy to do so.
Without such flexibility, they can only manage for outputs.
Discussion This challenge is a major one and needs to be addressed if performance
information is to play a significant role in managing and budgeting. And the way
to address it is to consider accountability not for achieving the outcomes per se
but rather for having influenced the outcomes. The Auditor General of Canada
(2002) has made similar suggestions, as has Baehler (2003).
In the case of shared outcomes, the accountabilities multiply. Partners are
accountable to their superiors, as well as to the other partners with whom they are
working. Collectively they are accountable for having influenced the outcome, as
well as being accountable for their own actions and their own contribution.
This then requires a rather more sophisticated approach to accountability, one
where a simple variance analysis is quite insufficient. Evidence on the extent to
which the outcomes occurred is still required, but so is evidence on the extent
to which the programme in question has had an influence and has contributed
to the outcomes observed. Judgement in interpreting how good performance
has been is essential in this approach to accountability. How much influence is
good enough? Also required are approaches to assessing the contribution made
towards outcomes.
Technical Challenges
I turn now to the challenges that are of a more technical nature, ones where
measurement skills play a key role. Box 2 sets out these technical challenges and
ways to address the challenges, each of which is then discussed.
Measurement
The challenge The issue of how to measure the outputs and outcomes of
government programmes is often considered to be the major challenge faced
when developing performance information systems. Performance information
will not be used unless the ‘right’ data and information are collected. Challenges
include measuring many of the outcomes of interest to governments, acquiring the
needed measurement skills, and making appropriate use of evaluations and other
periodic studies. Further, many have found, not unexpectedly, that some types of
programmes and services are more amenable to measurement than others.
The experience Three of the four shortcomings identified by Ittner and Larcker
(2003) in their research on the use of performance information in the private
sector dealt with measurement challenges: not linking measures to strategies, not
validating causal links and measuring incorrectly. Diamond (2005) discusses the
need for setting up a performance framework. Perrin (2002) notes the challenge
and the importance of focusing on outcomes, but also warns against over-
quantification. He notes that measures have a limited half-life, and that measures
that remain unchanged are the most susceptible to corruption, such as distorting
behaviour.
Box 2. Technical Challenges and Approaches to Implementing Results-Based
Management in Organizations
8. Measurement
•  sensible measurement, and knowing the costs of measuring
•  using results chains
•  getting the right measures
•  developing measurement capacity
•  different types of service/programmes
•  building in evaluation and performance studies

9. Attribution
•  linking outcomes and actions; results chains
•  assessing contribution and influence

10. Linking Financial and Performance Information
•  costing outputs
•  linking with outcomes using results chains

11. Data Quality
•  data and information ‘fit for purpose’
•  quality assurance practices

12. Credibly Reporting Performance
•  practising good communication
•  telling performance stories
A recent OECD survey (Curristine, 2005a) found that the type of programmes
and services being measured was a key factor in explaining the success of
initiatives. Feller (2002) discusses the different challenges faced when trying to
build performance measures for different types of activities, identifying four types
of public sector organizations: production, procedural, craft or coping.
Several of the studies stressed the importance of adequate training and skills
development, without which results-based systems may very well fail (Perrin, 2002;
World Bank, 2002). Perrin (2006) argues for the need to complement monitoring
systems with evaluations to assess such issues as continued relevance, attribution,
unintended impacts and to help understand why things are working or not.
Discussion While there are real challenges here, some of the difficulty may
be in how ‘measurement’ is approached. Much measurement in the public and
non-profit sectors differs considerably from measurement in the natural sciences,
where precision and accuracy can indeed be routinely achieved. Non-private
sector measurement will always be dealing with soft events and never be able
to definitively ‘measure’ many issues. There will always be a level of uncertainty
involved in assessing the performance of a programme or policy.
Measurement here might better be thought of as gathering evidence that will
reduce this uncertainty. From this perspective, most of the soft results can indeed
be measured, i.e. additional data and information can be gathered which will
improve understanding about the performance in question.
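One simple way to picture ‘measurement as reducing uncertainty’ is the following generic statistical illustration; it is not a method proposed in the article, and the survey result and sample sizes are invented. If an outcome level is estimated from a sample, the uncertainty around that estimate shrinks as more evidence is gathered, even though it never disappears.

import math

# Illustrative only: estimating the share of clients reporting improved outcomes
# from survey samples of increasing size, with an approximate 95% margin of error.
observed_share = 0.60  # hypothetical survey result

for n in (50, 200, 800):
    margin = 1.96 * math.sqrt(observed_share * (1 - observed_share) / n)
    print(f"n = {n:4d}: estimated share 60%, margin of error +/- {margin * 100:.1f} points")

# Each additional round of evidence narrows the range, i.e. reduces uncertainty,
# without ever delivering the definitive 'measurement' of the natural sciences.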
There are numerous guides and suggestions for good practice with respect to
developing performance measures. Rohm (2003), for example, provides good
practice suggestions based on work with balanced scorecards.
Most discussions of performance measurement and monitoring focus solely
on ongoing types of measurement activities. In my view, evaluations ought to
play a key role in performance measurement systems, which are often referred
to as ‘monitoring and evaluation’ systems, with little attention paid in practice to
the evaluation component. Evaluations are often the best way to get at hard-to-
measure aspects of performance and to deal with the thorny issue of attribution.
In addition, more attention might be paid to ad hoc, short ‘performance studies’
that could be done from time to time to get a reading of performance levels,
as a less disruptive and less expensive measurement approach. These might
be evaluations, but could also be more modest studies analysing activities and
outputs. For many types of activities, ongoing measurement may be much more
than is needed and costly; performance studies might fill the gap (Mayne, 2006).
Measurement skills are still needed, but it is known how to develop or buy
those skills. Developing the right measures can be a challenge but should be done
from the perspective of experimenting, realizing that time will often tell which
measures prove useful and robust. Getting the right measures is not done once
and for all, but – to repeat – is a journey of trial and error.
Of course, the quality of the measurement done is important, and will be dealt
with separately.
Attributing Outcomes to Actions
The challenge Measuring outcomes is one challenge. Determining the extent to
which the programme contributed to those outcomes is quite another issue, and
indeed rather more of a challenge. The problem is that there are often a number
of factors other than the programme that have contributed to the observed
outcomes. Indeed, the outcomes may have occurred without the programme. But
to be able to make any assessment about the worth of spending public money
on the programme, some idea of how the programme has affected the desired
outcomes is needed.
Sorting out the various possible contributions is a real challenge.
The experience Binnendijk (2000) and Perrin (2002, 2006) point to the need to
ensure that, in addition to monitoring measures, evaluations play a role, at least
in part to get a better handle on the attribution issue. Feller (2002) notes that the
long gestation period between outputs and outcomes and the probabilistic linkages
between the two limit the usefulness of many performance measures as guides for
action. The US GAO (1997a, 1997b) has identified attribution as a key challenge,
as has the OECD 2005 survey (Curristine, 2005b). Midwinter (1994), in discussing
Scottish experience, examines the difficulty of linking many performance indicators
to organizational performance.
Discussion Undertaking programme evaluations is the usual way to get
estimates of the link between the actions of a programme and their outcomes.
A well-designed evaluation can try to establish the counterfactual: what would
have happened without the programme? To undertake such an evaluation
requires considerable skills and can be costly, with results not always guaranteed.
Nevertheless, when attribution is an important issue with considerable uncertainty,
a well-designed evaluation is the best way to go.
Less sophisticated approaches can also be useful in reducing at least to
some extent the uncertainty surrounding attribution. Mayne (2001) discusses
contribution analysis as a way of addressing this issue, short of doing an evaluation,
through the careful consideration of the theory behind the programme.
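As a very rough sketch of the counterfactual logic – the employment rates and comparison group are hypothetical, and a real evaluation or contribution analysis involves far more than this arithmetic – the observed change among participants can be compared with the change that occurred anyway among a similar group that did not receive the programme:

# Hypothetical before/after employment rates (%) for participants and a comparison group.
participants = {"before": 35.0, "after": 52.0}
comparison   = {"before": 36.0, "after": 44.0}

observed_change = participants["after"] - participants["before"]    # 17 points
background_change = comparison["after"] - comparison["before"]      # 8 points: what happened anyway
estimated_contribution = observed_change - background_change        # 9 points, tentatively attributable

print(f"Observed change: {observed_change:.0f} points")
print(f"Change without the programme (estimate): {background_change:.0f} points")
print(f"Tentative programme contribution: {estimated_contribution:.0f} points")

# Even this simple comparison depends on the comparison group being genuinely similar;
# contribution analysis would also test the programme theory and weigh alternative
# explanations before claiming influence over the outcome.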
Linking Financial and Performance Information
The challenge A key aim of integrating performance information into
management and budgeting is to be able to determine the costs of the results
of programmes. For outputs, this is relatively straightforward, since there is a
direct link – for the most part – between the costs of inputs and the direct outputs
produced. But even for outputs there may often be a challenge since financial
systems are not always aligned with outputs.
But for outcomes, especially higher level outcomes, the challenge is not only
technical, but also conceptual. Given that outputs and lower level outcomes can
contribute to a number of outcomes sought, what does the ‘cost’ of an outcome
mean?
The experience In a report for the OECD, Pollitt (2001) discussed this challenge
and outlined some of the factors that would help or hinder linking financial
and non-financial information. To date, much ‘linking’ of these two types of
information has been simply to provide them in the same report (Perrin, 2002).
A recent OECD survey identified this technical problem (Curristine, 2005b). Itell
(1998) notes that leaders in the field pointed out that there is not a straightforward
relationship between performance and budgeting.
Discussion This issue has not received much attention in the literature. The issue
seems to be largely ignored or unrecognized. If a financial system has been aligned
with outputs, then those can be costed, albeit there are still technical challenges
involved such as the allocation of overhead costs. The conceptual challenge with
respect to outcomes is a bit more tricky. Other than in very simple situations,
allocating the costs of outputs between a number of higher level outcomes does
not seem practical.
One answer would be to determine the costs for the set of outcomes to which
the outputs contribute. This is similar to costing a programme, or parts of a
programme. Further, for the purposes of results-based management, reasonable
estimates of the costs of various outcomes would be all that is needed. Accounting
for the finances is a different matter, handled through financial statements. More
research is needed on the issue. What is now generally agreed is that ‘a mechanistic
link between outcomes and budget allocations is neither possible nor desirable’
(Perrin, 2006: 8).
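A worked sketch of the costing logic may help; the figures, outputs and allocation rule are invented for illustration. Direct costs are assigned to outputs, overheads are allocated by some sensible driver such as direct-cost share, and the ‘cost of an outcome’ is then approximated as the cost of the set of outputs that contribute to it rather than split between outcomes.

# Hypothetical direct costs of two outputs and a shared overhead to be allocated.
direct_costs = {"inspections completed": 600_000, "advisory visits delivered": 400_000}
overhead = 250_000  # shared corporate costs

total_direct = sum(direct_costs.values())
full_costs = {
    output: cost + overhead * (cost / total_direct)  # allocate overhead by direct-cost share
    for output, cost in direct_costs.items()
}

# The outcome 'improved compliance' is assumed to be influenced by both outputs,
# so its cost is approximated by the cost of that set of outputs.
cost_of_outcome_set = sum(full_costs.values())

for output, cost in full_costs.items():
    print(f"{output}: {cost:,.0f}")
print(f"Approximate cost of the outputs contributing to 'improved compliance': {cost_of_outcome_set:,.0f}")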
Quality of Data and Information
The challenge In discussing the measurement challenge, it was argued that
measurement in the public sector will never be perfect. It follows that attention
needs to be paid to the quality of data and information in a performance measurement
system. And given the contested context in which performance information is
used, it is quite important that it is seen as credible by those who use it. Quality
touches on a range of matters, such as accuracy, relevance and timeliness. It is
not an absolute concept. Further, better quality costs more resources. What are
usually sought are data and information ‘fit for purpose’, i.e. good enough for the
intended purpose. Ensuring this is the challenge.
The experience Perrin (2002) reports this concern about quality data in OECD
governments and the risk of making bad decisions based on poor data and
information. This concern is reiterated in the OECD 2005 survey (Curristine,
2005b). The US GAO (1997a) identified data generation as a key challenge.
Kusek, Rist and White (2003) make the link between adequate technical skills
and quality data.
Discussion While the importance of the quality of performance information is
generally recognized, the attention paid by organizations to quality matters is not
always evident. Some research suggests that often only modest attention is paid
to quality assurance practices in the area of performance measurement (Schwartz
and Mayne, 2005). The challenge is perhaps less technical in nature – quite a bit
is known about quality assurance practices – than one of finding the will to put the
needed resources into adequate quality control practices. Perrin (2006) discusses the use
of independent bodies such as audit offi ces and civil society to assist government
in ensuring data quality.
One might speculate that the importance paid to empirical information in an
organization is proportional to its quality assurance efforts.
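A minimal sketch of what routine ‘fit for purpose’ checks might look like follows; the fields, thresholds and records are hypothetical. Simple automated tests for completeness, timeliness and plausibility will not guarantee quality, but they make the quality assurance effort visible and repeatable.

from datetime import date

# Hypothetical performance records as reported by delivery units.
records = [
    {"unit": "region A", "measure": "clients served", "value": 1250, "reported": date(2006, 7, 10)},
    {"unit": "region B", "measure": "clients served", "value": None, "reported": date(2006, 7, 12)},
    {"unit": "region C", "measure": "clients served", "value": 98000, "reported": date(2006, 11, 2)},
]

def quality_check(record, deadline=date(2006, 7, 31), plausible_max=20000):
    """Return a list of quality concerns for one record: completeness, plausibility, timeliness."""
    concerns = []
    if record["value"] is None:
        concerns.append("missing value")
    elif record["value"] > plausible_max:
        concerns.append("implausibly large value")
    if record["reported"] > deadline:
        concerns.append("reported late")
    return concerns

for r in records:
    issues = quality_check(r)
    print(r["unit"], "-", "; ".join(issues) if issues else "passes basic checks")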
Credibly Reporting Performance
The challenge With the proliferation of data and information that is possible
and the variety of interests of users, how best to report performance information
is not straightforward. This is particularly the case when outcomes are being
reported on, since there is often uncertainty surrounding the measurement of the
outcomes and the extent to which the outcomes are linked to the programme in
question. In addition, public reporting is increasingly seen to include issues of how
an organization goes about achieving its aims, and the ability of an organization
to continue to operate and improve. Experience in measuring and reporting on
these matters is not widespread. National audit offices reporting on performance
information being provided to parliaments are frequently critical. And there are
no widely recognized standards for such reporting; each jurisdiction publishes
its own.
The experience Considerable efforts by ministries, budget offices and national
audit offices, as well as a number of private sector companies reporting on
corporate social responsibility, have been expended in many jurisdictions on how
to best report performance information. Boyle (2005), Funnell (1993), Hyndman
and Anderson (1995) and Thomas (2005) have reviewed the efforts to date in
a variety of jurisdictions, pointing to limited progress and daunting challenges.
The OECD (2003) discusses the need to harmonize and simplify the reporting
requirements of donor agencies.
Discussion There are now quite a few guides on public reporting produced in
a number of jurisdictions, with an array of advice on good practice, for example
CCAF (2002), Mayne (2004) and the Global Reporting Initiative (1999). Again,
the more reporting focuses on outcomes the greater the challenges become, since
there is more of a need to report a performance story rather than simply report
numbers: i.e. to report the context surrounding the events being reported and to
build a credible case that the performance reported did indeed happen and was
due at least in part to the actions of the programme.
A Final Word
Integrating performance information into management and budgeting is not
easy; it is doable but the challenges are formidable. An important first step is to
recognize the various challenges and consider how to deal with them. The task is
not straightforward because it affects the whole organization. Hirschmann (2002)
suggests the image should not be of a thermometer testing the level of performance
of an organization but rather a sauna which affects the whole organization. I have
argued that the more difficult challenges are the organizational/behavioural ones
because systematically gathering and using empirical information on the results
being achieved to inform management decisions typically requires changes in
approaches to managing. Planning, budgeting, implementing and reviewing all
need to become results-focused and more empirically based. Results-based
management usually requires a culture change, something organizations and
individuals do not do easily.
The task is also not easy because failing to meet any one of the challenges –
especially the organizational/behavioural ones – can undermine efforts to build
performance information in an organization. That is, failure to deal with any one
of these challenges has the likelihood of seriously undermining efforts to imple-
ment results-based management. Perhaps this is another reason for the slow
progress often observed; there are a lot of things that can go wrong and derail
the initiative.
But there is considerable experience available on which to draw, much of
which has been referenced in this article. Advice exists on ways to deal with the
challenges discussed and, in my experience, most organizations are quite willing
to discuss and share their experiences.
In the end, integrating performance information into management and budgeting is about learning: learning from past experience based on empirical information about what works and what doesn't. This must be the explicit or implicit aim. This type of learning requires culture change, persistent efforts over many
years and an investment in data gathering and analysis and in the sensible use of
such information. That is why the challenges remain.
And this learning approach applies as well to results-based management itself.
There is no Holy Grail, no magic set of steps that are sure to work. Implementation ought to be seen as incremental, with explicit review sessions built in so that the experience to date can be reflected upon, the challenges reassessed and
implementation approaches appropriately revised.
Notes
The author would like to thank Assia Alexieva of the World Conservation Union (IUCN)
for her help in preparing background for this contribution.
1. Williams (2003) discusses the development of performance-measurement practices in
the early 20th century in the US.
2. The unit of analysis here is the organization, not the jurisdiction.
References
Aucoin, P. (1995) The New Public Management: Canada in Comparative Perspective.
Montreal: Institute for Research on Public Policy.
Auditor General of Canada (2002) Modernizing Accountability in the Public Sector. Report
of the Auditor General of Canada to the House of Commons, ch. 9. Ottawa: Auditor
General of Canada.
Azevedo, Luis Carlos dos Santos (1999) Developing a Performance Measurement System
for a Public Organization: A Case Study of the Rio de Janeiro City Controller's Office,
Minerva Program. Washington, DC: Institute of Brazilian Issues, George Washington
University. Retrieved (November 2004) from http://www.gwu.edu/%7Eibi/minerva/
Spring1999/Luiz.Carlos.Azevedo/Luiz.Carlos.Azevedo.html
Baehler, K. (2003) ‘“Managing for Outcomes”: Accountability and Thrust’, Australian
Journal of Public Administration 62(4): 23.
Barrados, M. and J. Mayne (2003) ‘Can Public Sector Organizations Learn?’, OECD
Journal on Budgeting 3(3): 87–103.
Behn, R. (2000) Rethinking Democratic Accountability. Washington, DC: Brookings
Institute.
Behn, R. (2002) ‘The Psychological Barriers to Performance Management, or Why isn’t
Everyone Jumping on the Performance-Management Bandwagon’, Public Performance
and Management Review 26(1): 5–25.
Binnendijk, A. (2000) Results-Based Management in the Development Cooperation
Agencies: A Review of Experience. Background Report, Paris: DAC OECD Working
Party on Aid Evaluation. Retrieved (2 May 2005) from http://www.oecd.org/
dataoecd/17/1/1886527.pdf
Bohte, J. and K. J. Meier (2000) ‘Goal Displacement: Assessing the Motivation for
Organizational Cheating', Public Administration Review 60(2): 173–82.
Bouckaert, G. and B. G. Peters (2002) ‘Performance Measurement and Management: The
Achilles’ Heel in Administrative Modernization’, Public Performance and Management
Review 25(4): 359–62.
Boyle, R. (2005) ‘Assessment of Performance Reports: A Comparative Perspective’,
in R. Schwartz and J. Mayne (eds) Quality Matters: Seeking Confidence in Evaluation, Auditing and Performance Reporting, pp. 279–97. New Brunswick, NJ: Transaction
Publishers.
Boyne, G. A. and J. Law (2005) ‘Setting Public Service Outcome Targets: Lessons from
Local Public Service Agreements', Public Money and Management 25(4): 253–60.
CCAF (2002) Reporting Principles: Taking Public Performance Reporting to a New Level.
Ottawa: CCAF.
Curristine, T. (2005a) ‘Government Performance: Lessons and Challenges’, OECD Journal
on Budgeting 5(1): 127–51.
Curristine, T. (2005b) ‘Performance Information in the Budget Process: Results of the
OECD 2005 Questionnaire', OECD Journal on Budgeting 5(2): 87–127.
Diamond, J. (2005) Establishing a Performance Management Framework for Government.
IMF Working Paper, International Monetary Fund. Retrieved (April 2005) from http://
www.imf.org/external/pubs/cat/longres.cfm?sk=17809.0
Feller, I. (2002) ‘Performance Measurement Redux’, American Journal of Evaluation
23(4): 435–52.
Funnell, S. (1993) Effective Reporting in Program Performance Statements. Canberra:
Department of Finance.
Global Reporting Initiative (1999) Sustainability Reporting Guidelines: Exposure Draft for
Public Comment and Pilot Testing. Boston: Global Reporting Initiative.
Hatry, H. (1997) ‘We Need a New Concept of Accountability’, The Public Manager
26(3): 37–8.
Hirschmann, D. (2002) ‘Thermometer or Sauna? Performance Measurement and
Democratic Assistance in the United States Agency for International Development
(USAID)’, Public Administration 80(2): 235 55.
Hyndman, N. S. and R. Anderson (1995) ‘The Use of Performance Information in External
Reporting: An Empirical Study of UK Executive Agencies’, Financial Accountability
and Management 11(1): 1–17.
Itell, J. (1998) ‘Where are They Now? Performance Measurement Pioneers Offer Lessons
from the Long, Hard Road', The New Public Innovator (May/June): 11–17.
Ittner, C. and D. Larcker (2003) 'Coming up Short on Nonfinancial Performance Measurement', Harvard Business Review (Nov.): 88–95.
Kettl, D. F. (1997) ‘The Global Revolution in Public Management: Driving Themes, Missing
Links’, Policy Analysis and Management 16(3): 446 62.
Kusek, J., R. Rist and E. White (2003) How will we Know the Millennium Development
Goal Results When we See them? Building a Results-Based Monitoring and
Evaluation System to Give us the Answers. Retrieved (2 May 2005) from http://www.
managingfordevelopmentresults.org/documents/KusekRistWhitepaper.pdf
Letts, C., W. Ryan and A. Grossman (2004) Benchmarking: How Nonprofits are Adapting a Business Planning Tool for Enhanced Performance, Internal Benchmarking at a Large Nonprofit: CARE USA. Retrieved (November 2004) from http://www.tgci.com/
magazine/99winter/bench3.asp
Mayne, J. (2001) 'Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly', Canadian Journal of Program Evaluation 16(1): 1–24.
Mayne, J. (2004) ‘Reporting on Outcomes: Setting Performance Expectations and Telling
Performance Stories’, Canadian Journal of Program Evaluation 19(1): 31 60.
Mayne, J. (2006) ‘Performance Studies: The Missing Link?’, Canadian Journal of Program
Evaluation 21(2): 201–8.
Mayston, D. J. (1985) 'Nonprofit Performance Indicators in the Public Sector', Financial Accountability and Management 1(1): 51–74.
Melkers, J. E. and K. G. Willoughby (2001) ‘Budgeter’s Views of State Performance-
Budgeting Systems: Distinctions across Branches’, Public Administration Review
61(1): 54–64.
Meyer, M. W. and V. Gupta (1994) ‘The Performance Paradox’, Research in Organizational
Behaviour 16: 309–69.
Midwinter, A. (1994) ‘Developing Performance Indicators for Local Government: The
Scottish Experience’, Public Money and Management 14(2): 37 43.
Moynihan, D. P. (2005) ‘Goal-Based Learning and the Future of Performance Management’,
Public Administration Review 65(2): 203–16.
Moynihan, D. P. (2006) ‘Managing for Results in State Government: Evaluating a Decade
of Reform’, Public Administration Review 66(1): 77 89.
Norman, R. (2002) ‘Managing through Measurement or Meaning? Lessons from Experience
with New Zealand’s Public Sector Performance Management Systems’, International
Review of Administrative Sciences 68: 619–28.
Norton, D. P. (2002) ‘Managing Strategy is Managing Change’, Balanced Scorecard Report
4(1), retrieved (Dec. 2004) from http://harvardbusinessonline.hbsp.harvard.edu/b01/en/
files/newsletters/bsr-sample.pdf;jsessionid=THNHXYQPDH0OKAKRGWCB5VQB
KE0YIIPS?_requestid=11865
OECD (1997) In Search of Results: Performance Management Practices. Paris: OECD.
OECD (2003) Harmonizing Donor Practices for Effective Aid Delivery, Good Practice
Papers, DAC Guidelines and Reference Series. Paris: OECD. Retrieved (2 May 2005)
from http://www.oecd.org/dataoecd/0/48/20896122.pdf
OECD (2005) Modernising Government: The Way Forward, ch. 2. Paris: OECD.
Office of the Auditor General of Canada (2000) Implementing Results-Based Management: Lessons from the Literature. Ottawa: Office of the Auditor General of Canada. Retrieved
(2 May 2005) at http://www.oag-bvg.gc.ca/domino/other.nsf/html/00rbme.html
Pal, L. A. and T. Teplova (2003) Rubik’s Cube? Aligning Organizational Culture, Performance
Measurement, and Horizontal Management. Ottawa: Carleton University. http://www.
ppx.ca/Research/PPX-Research%20-%20Pal-Teplova%2005-15-03[1].pdf
Perrin, B. (1998) ‘Effective Use and Misuse of Performance Measurement’, American
Journal of Evaluation 19: 367–79.
Perrin, B. (2002) Implementing the Vision: Addressing Challenges to Results-Focussed
Management and Budgeting. Paris: OECD.
Perrin, B. (2006) Moving from Outputs to Outcomes: Practical Advice from Governments
Around the World. Managing for Performance and Results Series. Washington, DC: IBM
Centre for the Business of Government and the World Bank.
Pollitt, C. (2001) ‘Integrating Financial Management and Performance Management’,
OECD Journal on Budgeting 1(2): 7–37.
Pollitt, C. and G. Bouckaert (2000) Public Management Reform: A Comparative Analysis.
Oxford: Oxford University Press.
Rohm, H. (2003) ‘Improve Public Sector Results with a Balanced Scorecard: Nine
Steps to Success’ (presentation), the Balanced Scorecard Institute, US Foundation
for Performance Measurement. Retrieved (December 2004) from http://www.
balancedscorecard.org/files/Improve_Public_Sector_Perf_w_BSC_0203.swf
Schick, A. (2003) 'The Performing State: Reflection on an Idea Whose Time has Come But Whose Implementation has Not', OECD Journal on Budgeting 3(2): 71–103.
Schwartz, R. and J. Mayne (2005) Quality Matters: Seeking Confidence in Evaluation,
Auditing and Performance Reporting. New Brunswick, NJ: Transaction Publishers.
Shergold, P. (1997) ‘The Colour Purple: Perceptions of Accountability across the Tasman’,
Public Administration and Development 17: 293–306.
Smith, P. (1995) ‘Performance Indicators and Outcome in the Public Sector’, Public Money
and Management (Oct.–Dec.): 13–16.
Thomas, P. (2005) ‘Performance Measurement and Management in the Public Sector’,
Optimum 35(2). Retrieved from http://www.optimumonline.ca/print.phtml?id=225
United Nations Development Programme (2004) UNDP Results Framework. Retrieved (19
November 2004) from http://www.gm-unccd.org/FIELD/Multi/UNDP/UNDPResFram.
pdf
United States General Accounting Office (1997a) The Government Performance and Results Act: 1997 Government-Wide Implementation will be Uneven. GAO/GGD-97–109. Washington, DC: United States General Accounting Office.
United States General Accounting Office (1997b) Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97–138. Washington, DC: United States General Accounting Office.
Uusikylä, P. and V. Valovirta (2004) ‘Three Spheres of Performance Governance: Spanning
the Boundaries from Single-Organisation Focus towards a Partnership Network’, EGPA
2004 Annual Conference, Ljubljana, Slovenia.
van Thiel, S. and F. L. Leeuw (2002) ‘The Performance Paradox in the Public Sector’, Public
Performance and Management Review 25(3): 267–81.
Wholey, J. S. (1983) Evaluation and Effective Public Management. Boston: Little, Brown
and Co.
Wholey, J. S. (1997) ‘Clarifying Goals, Reporting Results’, in D. J. Rog (ed.) Progress
and Future Directions in Evaluation: Perspectives on Theory, Practice and Methods,
pp. 95–105. New Directions for Evaluation, 76; San Francisco: Jossey-Bass.
Wholey, J. S. (1999) ‘Performance-Based Management: Responding to the Challenges’,
Public Productivity and Management Review 22(3): 288–307.
Wholey, J. S. and H. P. Hatry (1992) ‘The Case for Performance Monitoring’, Public
Administration Review 52(6): 604–10.
Wiggins, A. and P. Tymms (2002) ‘Dysfunctional Effects of League Tables: A Comparison
between English and Scottish Primary Schools’, Public Money and Management
22(1): 43–8.
Williams, D. (2003) ‘Measuring Government in the Early Twentieth Century’, Public
Administration Review 63(6): 643–59.
World Bank (2002) Better Measuring, Monitoring, and Managing for Development Results.
Development Committee (Joint Ministerial Committee of the Boards of Governors of
the World Bank and the International Monetary Fund on the Transfer of Resources to
Developing Countries). Retrieved (2 May 2005) from http://siteresources.worldbank.
org/DEVCOMMINT/Documentation/90015418/DC2002-0019(E)-Results.pdf
Zapico-Goni, E. and J. Mayne (eds) (1997) Monitoring Performance in the Public Sector:
Future Directions from International Experience. New Brunswick, NJ: Transaction
Publishers.
JOHN MAYNE is an independent advisor on public sector performance. He has been working with a number of Canadian and international organizations, on performance measurement, evaluation and accountability issues. He has authored numerous publications, and edited five books in these areas. In 200…, he became a Canadian Evaluation Society Fellow. Please address correspondence to: … Sherbourne Rd., Ottawa, ON, Canada, K2A … [email: john.mayne@rogers.com]