
Strengthening Evaluation for Development

Author: Zenda Ofir

Abstract

Although some argue that distinctions between evaluation and development evaluation are increasingly superfluous, it is important to recognize that some distinctions still matter. The severe vulnerabilities and power asymmetries inherent in most developing country systems and societies make the task of evaluation specialists in these contexts both highly challenging and highly responsible. This calls for specialists from diverse fields, in particular those in developing countries, to be equipped, active, and visible where evaluation is done and shaped. These specialists need to work in a concerted fashion on evaluation priorities that enable a critical scrutiny of current and emerging development frameworks and models (from global to local level), and their implications for evaluation—and vice versa. The agenda would include studying the paradigms and values underlying development interventions; working with complex adaptive systems; interrogating new private sector linked development financing modalities; and opening up to other scientific disciplines' notions of what constitutes rigor and credible evidence. It would also promote a shift in focus from a feverish enthrallment with measuring impact to how to better manage for sustained impact. The explosion in the development profession over the last decade also opens up the potential for non-Western wisdom and traditions, including indigenous knowledge systems, to help shape novel development as well as evaluation frameworks in support of local contexts. For all these efforts, intellectual and financial resources have to be mobilized across disciplinary, paradigm, sector, and geographic boundaries. This demands powerful thought leadership in evaluation—a challenge in particular for the global South and East.
Forum

Evaluation for Development: Strengthening the Development Evaluation Agenda

Zenda Ofir¹
Keywords
development, development evaluation, thought leadership, evaluation priorities, developing
countries
The Challenge
Over the past decade, the distinctions between developed and developing countries¹ have become increasingly blurred. Yet a main difference remains: with few exceptions, the vulnerabilities of developing countries are magnified. The poor tend to be poorer, the vulnerable more vulnerable, institutions and systems more fragile, unstable, or dysfunctional, the powerful and powerless more so, contexts less predictable, and those capacities seen by many as essential to executing conventional development models, lower.
Therefore, while the monitoring and evaluation of development is an exciting and vibrant endeavor, it has high stakes. If an evaluation is poorly designed or executed, it can have considerable and destructive consequences: people, communities, or countries already in a precarious position might lose their only chance at a better future, or policies and practices that are harmful might continue. Development evaluators have to respect and engage with such risk, and the profession has to be responsive to the ensuing challenges. Most crucially, those directing and influencing development evaluation theory and practice—evaluation commissioners and thought leaders,² evaluators, as well as organizational leaders and managers—all have to bear the weight of this responsibility when executing their charge.
This notion also presupposes that evaluation is a valued and valuable activity that is regularly used to ensure development effectiveness. This is of course not necessarily so; the legacy of poorly executed evaluations, as well as the highly political and technically challenging nature of both development and development evaluation, interferes. Yet a firm belief in the relevance, utility, and essential contributions of evaluation must continue to guide the profession, especially in countries still struggling to find their most effective development path.

¹ Evalnut, Johannesburg, South Africa

Corresponding Author:
Zenda Ofir, Evalnut, P. O. Box 41829, Craighall, Johannesburg 2024, South Africa.
Email: zenda@evalnet.co.za

American Journal of Evaluation
00(0) 1-5
© The Author(s) 2013
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098214013497531
aje.sagepub.com
We live in extraordinary times. The rapid development of new technologies is taking society into uncharted waters, inequalities are accelerating in many previously prosperous nations, and developed countries face increasing uncertainties. Yet we can nevertheless celebrate, for the first time in history, that hundreds of millions have been lifted out of poverty, in record time and primarily through their own efforts, in countries until recently regarded as severely underdeveloped. This short article argues that in preparation for the exciting yet challenging time ahead, development evaluation requires a revitalized, purposeful, innovative agenda, nurtured by more visible, dynamic thought leadership from the "developing" countries themselves, with greater attention to critical issues at the development–evaluation interface.
The Development–Evaluation Interface
Development and evaluation are, or should be, in a dance with each other—the one sometimes leading, and sometimes the other—learning from each other and working together synergistically to create something meaningful. Taken together with research, they can be viewed as a strand of DNA, building a healthy body of knowledge for development. These metaphors emphasize the importance of the relationship between development and evaluation, and the need for a greater emphasis on the intersection between the two, preventing the one from mindlessly leading the other. This is not a trivial issue. It assumes that we are clear and explicit about the assumptions, values, and frameworks that underpin and link the two, and that innovation in development evaluation is pursued with attention to its implications and consequences for development and its effectiveness. This is evaluation for development, rather than the evaluation of development.
For example, excluding or understating the role of power in evaluation negates its importance in development policies, strategies, and interventions. Treating people as experimental subjects and numbers while ignoring their voices during evaluation is disempowering and dismisses their voice in the course of their own development. Focusing an evaluation on the interests of individuals at the cost of community harmony reflects an understanding of development in which individual interests dominate those of the collective. Rigidly applying the ubiquitous "logframes" within a funding cycle usually too short for accountability in results-based management and impact evaluation neglects the critical reality of ever-evolving development contexts and of slow, initially even negative trajectories of change. Singling out for impact evaluation one strand of a development intervention without recognizing that the whole is more (or less) than the sum of the parts, or focusing on the achievement of (average) impacts without also focusing vigorously on crucial development needs such as equity, transformation, institution building, accountability, sustainability, and resilience, can inflate measures of success—often at the expense of long-term, sustained, truly effective development. And as studies such as "Time to Listen" (CDA, 2012) highlight, failing to evaluate for realities on the ground and for key weaknesses identified in past development interventions very significantly weakens the chance of development success.
Developing countries now also seek to decrease their aid dependency. As highlighted in statements at key forums such as the Fourth High Level Forum on Aid Effectiveness in Busan, developing countries now more than ever insist on the need to direct their own development efforts, including drawing on the many diverse models of successful development available worldwide. This trend has been accompanied by a growing indigenous focus on evaluation. In this context it is likely inevitable that development imperatives will shift toward perspectives such as those most recently articulated by the leading Korean economist Ha-Joon Chang, who argues that a country can be called developed only if its high income is based on superior knowledge embodied in technologies and institutions. Interventions that focus on individuals and their small, fragmented enterprises may provide some building blocks but hardly facilitate development at the national level, instead exacerbating the micro–macro disconnect that haunts development evaluation practice. Sustained development requires effective, efficient institutions and productive enterprises supported by the collective accumulation and use of knowledge, and the expansion of those social and technological capabilities that are "both the causes and the consequences of such transformation" (Chang, 2010). Yet although current, primarily aid-driven models such as the human development approach remind us that development has to be about more than poverty reduction, increasing income levels, or the provision of basic needs, these and other key global development discourses such as the Millennium Development Goals, the Doha Development Agenda, and the World Trade Organization discussions fail to address some of the most important components of national development. All of this has important implications for the evolution of the field of evaluation as it moves in synergy with development trends.
Strengthening the Development Evaluation Agenda
It is beyond the scope of this article to provide a detailed analysis of the development–evaluation interface. The following therefore highlights only a few important priorities for frontier work in development evaluation. First, more ground-breaking work is needed to bring non-Western worldviews and values to the forefront of evaluation theory and practice. The profession is poorer for the absence of a concerted effort in this regard. Second, an increasingly sophisticated understanding is required for urgent priorities that depend on complexity and systems thinking in a highly networked, competitive world—for example, evaluating impact, sustainability, transformation, and resilience, or individual, organizational, and institutional empowerment. This implies acquiring a clearer multidisciplinary understanding and use of work on complex systems; understanding change and change trajectories; interlinked theories of change; the many different types of relationships found in partnerships, coalitions, and networks; and the role of power in political and social contexts. Such a focus will help better address issues such as the "micro–macro disconnect," the "missing middle," and unintended consequences, and support critical development priorities including institution strengthening, organizational learning and change, knowledge generation and translation for technological and social advancement, and transformative social change.
Third, alternative financing and funding models are poised to complement and even overtake the role of conventional aid mechanisms. Several types of investment by Brazil, Russia, India, China, and South Africa (known as the BRICS) that serve to spur development are gaining momentum in Africa, Latin America, and Asia. At the same time, the private sector in developed countries appears increasingly interested in investing in financing mechanisms with seductive names such as "impact investing" and "social impact bonds." These mechanisms may put vulnerable societies at risk unless the evaluation profession is equipped from the beginning to help stakeholders plan and assess the benefits and risks, and in particular any negative consequences following from new funding modalities—or, for that matter, from any development model or strategy.

Fourth, the pendulum needs to move back from enthrallment with simplistic notions of "measuring impact" and determining "value for money" toward enabling—in parallel with these efforts—a smart engagement with managing for impact that goes far beyond conventional process evaluation and that builds on the many lessons that have emerged from results-based management and similar efforts. Fifth, there is a dire need to engage vigorously, in theory and in practice, with a better understanding and use of standards for evaluation quality, "rigor," and "credible evidence" that transcend incorrectly or too narrowly defined ideas of the "scientific method" and so-called magic bullets for measuring impact. Finally, credible, useful syntheses of evaluation results and lessons should be available and communicated in a manner that can truly support different worldviews of development in theory and practice.
Evaluation Thought Leadership for Development
It is time that thought leadership in evaluation theory and practice emerges more visibly from the global South and East. Champions are needed who have a propensity toward conventional as well as new indigenous evaluation paradigms. Individual sparks in developing countries need to be stoked so that ideas can spread and ignite meaningful innovation and new directions in evaluation. There has been significant progress in building indigenous evaluation capacities, and recent global efforts such as EvalPartners provide scope for much more. However, capacity-strengthening efforts tend to focus on technical aspects of evaluation within established approaches and frameworks, primarily results-based management. Although these are welcome and essential contributions, they may not encourage or stimulate deeper questioning of these and alternative approaches and frameworks.

In most developing countries, the profession is barely a decade old and continues to be led by theories and practices that originated in North America and Europe. The field of evaluation can grow and benefit from definitions, frameworks, models, and methods rooted in many other countries' experiences and systems of knowing. Developing countries have rich cultures with knowledge and wisdom spanning thousands of years—often as relevant today as ever—that have yet to be applied to the field of evaluation.

This is not about "cultural sensitivity," but rather about the fundamental questioning of the worldviews, frameworks, and definitions on which evaluation theory and practice—and resultant development—have been built. The potential for new theories and practices that might revolutionize development evaluation is not yet quite clear, but fledgling efforts need to be harnessed and nurtured. The knowledge and wisdom of the rest of the world need to complement the 50 years of advancement in the West that have established and evolved the rich body of knowledge and expertise on which we draw today.
The explosion in the profession of, and demand for, evaluation over the past decade has attracted many poorly prepared practitioners from many professions, disciplines, and practices, in both developed and developing countries. Development evaluation has yet to attract more of the finest minds from diverse disciplines and sectors to practice full time. This situation is likely to change substantially only when incentives exist—when more governments and other influential entities in developing countries recognize and demand high-quality evaluation and perceive it as a strategic, intellectually challenging endeavor that is also linked with high-quality academic research. More importantly, thought leadership from the global South and East needs to have the power to change practice. This means reaching and influencing evaluation commissioners, those who work with aid programs, and those who work in-country on development strategies and funding modalities. Such power is still limited by capacities; innovation often takes place once what exists has been mastered. Insufficient confidence and incentives still hamper progress. Papers and presentations are too few, with too little traction to cultivate sustained influence. Thought leadership from developing countries needs to produce high-quality, useful analyses and innovations that benefit their own countries' priorities and contexts; establish repositories with useful syntheses; build influential coalitions and think tanks; ensure dynamic participation at important development forums; and maintain a robust focus on visibility. From small beginnings, thought leaders in developing countries have to nourish and accelerate the positive trajectory of the evaluation profession worldwide.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Notes
1. In this article, the term developing refers to countries—primarily in the low- and lower middle-income group of nations—where large groups of the population have a relatively low level of human development, including limited capabilities to enjoy a long and healthy life in a safe environment. Such countries typically lack robust, effective institutions and continuous, self-sustaining economic growth—although with many developing countries currently on an upward growth trajectory, this is not always the case.
2. This rather uncomfortable term refers to a person who has a proven, in-depth understanding of an issue in theory or practice, uses this understanding to innovate, and is keen and able to share novel, often radical thinking and new directions that inspire others. These latter characteristics are especially important, distinguishing the "thought leader" from the conventional "expert," who may not necessarily be committed to transformation, improvement, innovation, sharing with, or inspiring others.
... This paper examines the intricacies of coloniality in Africa's development and how a Eurocentric development agenda affects the advancement of the MAE practice. Made in Africa Evaluation focuses on harnessing the use of localised approaches with the aim of aligning evaluation to the lives and needs of African people whilst promoting African values (Ofir 2013). Made in Africa Evaluation heightens the contextual relevance and transformative nature of evaluation, which is critical to improving development outcomes for Africans. ...
... Scholars, such as Ofir (2013) and Chilisa (2015), associate the rise of MAE as an approach to evaluation in the African continent with the rise in prominence of the African Evaluation Association (AfrEA). Chilisa (2015) Firstly, in the least indigenised branch or approach, evaluation is rooted in Western practices, dominated by Western evaluation theory and practice and evaluation practice does not recognise the fundamental worldviews informing African knowledge systems, having no foregrounding of the contextualisation of African evaluation theory and practice. ...
... Whilst the MAE discourse has progressed mainly from the 'copy and paste' approach of Western-influenced evaluation approaches without critically questioning their relevance to unique African contexts (primarily reflected in the least indigenised approach), this has not achieved the sought transformation. Instead, progression in MAE has consisted of the adaptation of what exists in the West, such as the OECD DAC criteria (Ofir 2013) as well as the African Peer Review Mechanism (Chilisa 2015) (reflected in the adaption evaluation approach). However, the question whether transformation in the evaluation discourse has progressed towards the African relational evaluation and development evaluation branches in which MAE is reconstructed and redefined by Africans for Africans grounded on African realities and epistemologies is one that still needs to be addressed through further research. ...
Article
Full-text available
Background: It is imperative to recognise the effects of the intrinsically Eurocentric development agenda on attaining transformative evaluation that appropriately addresses development priorities in Africa. The role of international development agencies as critical anchors in African evaluation practice needs examination to advance the Made in Africa Evaluation (MAE) discourse. Objectives: This article critiques the dominance of a Eurocentric lens to evaluation in Africa, illustrating how this impedes MAE. It harnesses the importance of MAE as a transformative, contextually relevant approach to espousing Afrocentric values in evaluation theory and practice. Method: Through a desktop review, the article examines the intrinsic power relations inherent in Western knowledge systems and how the effects of coloniality on African knowledge systems can deter the progression of a transformative, decolonial evaluation agenda. Results: The article recognises positive strides towards legitimising African knowledge systems and harnessing a more African evaluation agenda, for example, through the African Evaluation Association (AfrEA), leading the standardisation of African evaluation competencies and guidelines. Conclusion: It establishes, however, the adverse effects of long-standing power imbalances, with the development agenda in Africa being primarily set by international development organisations, such as donors. This leaves little room for African evaluators to manoeuvre and define contextually appropriate approaches to the evaluation outside of the dominant Eurocentric evaluation standards. The article contributes to understanding the role of the dominant international development agencies on evaluation in Africa and proposes recommendations for achieving a more decolonised evaluation agenda. 
It highlights the importance of the legitimisation of African knowledge systems, a multidisciplinary approach to monitoring and evaluation (ME), ensuring inclusivity and representation in evaluation and negotiating power balances with international development agencies.
... In September 1999, as a brainchild of Mahesh Patel of UNICEF, the African Evaluation Association (AfrEA) was formed in Nairobi, Kenya during a pan-African conference of evaluators. Mahesh Patel was also elected as the first president of the new Organization (Cloete, 2016;Mouton et al., 2014, Ofir, 2013. The goals of AfrEA were to: (1) share information and build evaluation capacity; (2) promote the formation of national evaluation associations; (3) promote knowledge and use of an African adaptation of the program evaluation standards; (4) form an Africa-wide association, promoting evaluation both as a discipline and as a profession; and (5) create and disseminate a database of evaluators (Cloete, 2016;Mouton et al., 2014). ...
... An example of this is the formation of Voluntary Organizations for Professional Evaluation (VOPEs). The birth and the activities of AfrEA since its formation has been instrumental to the emergence of evaluation as a profession in Africa (Cloete, 2016;Mouton et al., 2014;Ofir, 2013;Segone & Ocampo, 2006). As of 1999, there were only six national African evaluation bodies. ...
Article
Full-text available
Most evaluation in Africa today is rooted in dominant Western approaches. This presents at least two problems. First, Western evaluation methods and approaches, when used in Africa, may in fact lack validity, leading to low quality evaluations, wrong conclusions, and bad development outcomes. Second, Western evaluation approaches may encourage subjugation of African culture through neo-imperialism and the ‘colonization of the mind.’ These problems have been addressed in recent years through a focus on Made in Africa Evaluation (MAE). Given the current state of development of this nascent yet increasingly influential concept, we conducted research to contribute towards a better definition of MAE. This brief article presents the background, methods, and findings from that study. We conclude that MAE is based on the standards of the African Evaluation Association (AfrEA), using localized methods or approaches with the aim of aligning the evaluation process with the lifestyle and needs of African people.
... Evaluation may have been phenomenally embedded through international development (Cloete 2016;Ofir 2013), however in recent years governments have increasingly started to build state capacity to evaluate (Porter & Goldman 2013;Mbava 2017). ...
... From its earlier roots in evaluating United States government social programmes in the eras of the 'New Deal' and 'Great Society' policies (Shadish & Luellen 2011:184-186;Mbava 2017) evaluation has through development advanced and broadened to a highly globalised world and is now practiced in a multicultural world and in complex contexts, impacting the lives of various and diverse communities globally. Demanded by governments (Porter & Goldman 2013;Mbava 2017), embedded in development (Cloete 2016;Ofir 2013) and increasingly utilised in private and not-for-profit sectors (Bisgard 2017;Abrahams 2015;Wildschut 2014), an evaluation wave fuelled by performance and quality standards is creating an evaluating society (Dahler-Larsen 2011. ...
... Once we highlight the importance of geolocation of knowledge, we can argue for local experts and evaluators (Carden, 2013). Many have emphasized a need to adopt a local frame of inquiry as determined by the country and stakeholder communities to interpret evaluation findings and recommendations (Ofir, 2013). The context and local frame of inquiry, in turn, allude to the importance of situated knowledge. ...
Article
Full-text available
Scholars, practitioners, and activists have all contributed to the discussion of decolonization of evaluation practice in recent years as attention has increasingly focused on the persistent harms of colonization. While these discussions have led to the development of evaluation frameworks rooted in Indigenous and locally-situated understandings, values, and methods, little attention has been paid to the colonial origins of Western-based evaluation practices that continue to pervade the field. This article seeks to contribute to the conversation about decolonization by focusing on the ways in which Western social theory, born of colonizing nations, has been influenced by the processes of colonization. Drawing on scholars and theorists from the Global South, this article highlights specific apparatuses for dismantling imperial ways of thinking and ways of knowing, and proposes a path forward for evaluators who wish to grapple with the deeply imperial epistemological roots of our field of practice.
... Unfortunately, in NESs and elsewhere, few impact evaluations are commissioned in such a manner as to allow for this elaborate groundwork. Ofir (2013) asserted the following in her rousing call to action for revolutionising evaluation for development in Africa: [T]his is not about 'cultural sensitivity', but rather about the fundamental questioning of worldviews, frameworks and definitions on which evaluation theory and practice -and resultant development -have been built. The potential for new http://www.aejonline.org ...
Article
Full-text available
Background: Growing numbers of developing countries are investing in National Evaluation Systems (NESs). A key question is whether these have the potential to bring about meaningful policy change, and if so, what evaluation approaches are appropriate to support reflection and learning throughout the change process. Objectives: We describe the efforts of commissioned external evaluators in developing an evaluation approach to help critically assess the efficacy of some of the most important policies and programmes aimed at supporting South African farmers from the past two decades. Method: We present the diagnostic evaluation approach we developed. The approach guides evaluation end users through a series of logical steps to help make sense of an existing evidence base in relation to the root problems addressed, and the specific needs of the target populations. No additional evaluation data were collected. Groups who participated include government representatives, academics and representatives from non-governmental organisations and national associations supporting emerging farmers. Results: Our main evaluation findings relate to a lack of policy coherence in important key areas, most notably extension and advisory services, and microfinance and grants. This was characterised by; (1) an absence of common understanding of policies and objectives; (2) overly ambitious objectives often not directly linked to the policy frameworks; (3) lack of logical connections between target groups and interventions and (4) inadequate identification, selection, targeting and retention of beneficiaries. Conclusion: The diagnostic evaluation allowed for uniquely cross-cutting and interactive engagement with a complex evidence base. The evaluation process shed light on new evaluation review methods that might work to support a NES.
Article
While effective in imparting skills and competencies required for donor‐centric evaluations, the present system of evaluation education in the Global South adds little to the development of Indigenous evaluation theory and practice. As education is the primary tool for building evaluators’ capacity to construct knowledge situated in local epistemologies and culture, deconstructing the colonial character of education is the first step toward the decolonization of evaluation practice. The chapter first discusses the importance of disrupting the colonial episteme as a core feature of the decolonization process. Next, it explores the coloniality of the present education system in Global South evaluation and its implication for the evaluation field. The chapter then proposes five key strategic directions for decolonizing evaluation education and reinstating the voice and agency of Global South communities in the evaluation process: (1) transforming evaluation education to prioritize the learning needs of field‐based organizations, (2) strengthening access to evaluation education for grassroots communities, (3) acknowledging the primacy of local languages in building transformative knowledge, (4) reimagining evaluation educators, and (5) recognizing internal colonialism and social justice in the evaluation curriculum.
Thesis
In a world of unprecedented uncertainty and complexity, and post-truth politics, the ethical challenges for the independent evaluator are greater than ever. At a time when the opinions of independent experts are increasingly considered to be part of an out-of-touch elite, what are the ethical dilemmas and challenges for independent evaluation consultants working in a highly competitive evaluation marketplace? How can professional practice be protected from the politicization of the evaluation process in an increasingly polarized policy space (Schwandt, 2018)? If evaluations of public policies are inevitably intertwined with political interests (Englert et al., 1977; Vestman & Conner, 2006), and commercial interests (Nielsen et al., 2018), independent evaluators are caught in complex power dynamics. By following a hermeneutic phenomenology approach, this study investigates the lived experience of 9 independent evaluation consultants (5 women and 4 men) when dealing with ethical dilemmas as well as rendering ethically sensitive judgments in the framework of external international development evaluations commissioned by a variety of multilateral organizations and bilateral donors. In contrast to most other studies on evaluation ethics that examine what evaluators should do, this phenomenological study explores evaluators at the core of their being through their actual personal experiences in i) making ethical decisions dealing with good and bad in their daily work, ii) dealing with their clients and the power dynamics and ethical challenges stemming from this relationship; and iii) drawing upon the evaluation professional community as a support system to reinforce their ethical decision-making and ethical practices. 
This study argues that independent evaluators’ ethical decision-making is never solely the result of cognition, reasoning or intuition, but relies on the decision-maker’s subjective rationality and is often bounded by both information and power asymmetries. The thematic analysis of evaluators’ lived experience yielded five essential themes: knowing one’s roots, navigating through the ethical fog, relying on a decision support system, negotiating one’s independence and influence, and turning to other stakeholders. With a view to deepening understanding of the evaluation process and its relational dynamics within the evaluation community, this study highlights the need to strengthen moral thinking and ethical practice among all involved actors, so as to foster a more reflective practice in the evaluation consulting field.
Thesis
State development cooperation (DC) faces growing pressure to legitimize its budget expenditure and relevance, and evaluations are the most important instrument for this purpose. This thesis therefore examines evaluation in German DC through the following research questions: How is development cooperation evaluated in Germany? To what extent do the central evaluation units differ, especially regarding effectiveness and participation? Two main development institutions are in charge of evaluating German state aid: the German Corporation for International Cooperation (GIZ), successor to the German Corporation for Technical Cooperation (GTZ), as the leading implementing organization for projects and programs, and the German Institute for Development Evaluation (DEval), which has taken over the task of evaluating superordinate research topics from the Federal Ministry for Economic Cooperation and Development (BMZ). The quality of these institutions’ evaluation systems is assessed against four categories selected from the internationally accepted criteria of the Organisation for Economic Co-operation and Development (OECD): impartiality and independence, effectiveness, transparency and participation. These categories are applied not only at the level of project, program and superordinate evaluations, but also to meta-evaluations, as these “evaluations of evaluations” also fall within the institutions’ remit. The assessment furthermore draws on two time frames, delimited by the 1999 and 2009 system analyses of evaluation in German development cooperation commissioned by the BMZ. The 1999–2009 time frame (investigating the GTZ/BMZ) helps explain the evolution of German DC and concludes with recommendations for action, while the 2009–2017 time frame (investigating the GIZ/DEval) shows how those recommendations were adopted and outlines remaining weaknesses in the system.
Much of the information was gathered not only through literature and online research, but also through in-person and telephone interviews with three relevant agencies: the DEval and the GIZ for an inside view, and the German Development Institute (DIE) for an outside opinion. Two theses are central: first, that the evaluation system in Germany changed drastically after the publication of the 2009 system analysis; and second, that a convergence can be observed between project evaluations (represented by the GTZ/GIZ) and superordinate evaluations (represented by the BMZ/DEval) despite their different learning processes. The comparison of the institutions based on the above criteria verified both theses. A third finding also emerged: meta-evaluations were, and still are, poorly established and should receive more attention in the future. The conclusion additionally describes promising research topics, such as enhanced evaluation capacity development (recipient participation in evaluation). Finally, it confirms that the German evaluation system for DC still faces challenges but is undergoing constant transformation and improvement.
Article
Motivation: Authoritarian states receive development funding from international donors for programmes and interventions, some aimed at improving their governance systems. This article reports on the evaluation of an EU‐funded programme in Kazakhstan, seen as the most progressive reformer in Central Asia. The EU programme aimed to enhance Kazakhstan’s business competitiveness through better regulation and civil service modernization. Purpose: This article addresses two research questions. What was the impact of the EU‐funded intervention? What role, if any, did the evaluation play in reflective policy learning for the future? Approach and Methods: The research draws on quantitative and qualitative evidence, analysing secondary data sources on the effectiveness of governance over time in Kazakhstan and interviews with 34 key stakeholders on the impact of the EU interventions. Findings: We find no significant improvements in governance over time. While the donor responded flexibly to the changing strategic goals of the state (which were at the personal behest of the President), this did not help to embed evaluation in the policy cycle for future learning. The key beneficiary was the Government of Kazakhstan. Policy implications: Wider systemic change from upward accountability to downward accountability to citizens is needed to make evaluation relevant for authoritarian states. Upward accountability to the President is a feature of authoritarian regimes, which precludes citizens from holding the state to account. Without these systemic changes, autocracies simply engage in development evaluation as a perfunctory exercise to meet donor requirements.
Article
Background: A recent study of African evaluations identified deficiencies in present evaluation practices. Because public sector expertise for designing policy impact evaluations is limited, expertise for such complex designs is largely external to the public sector. Consequently, recommendations sometimes pay insufficient attention to variations in local contexts. Objectives: The bold idea presented in this article is that theory-based evaluation (TBE), in its most recent participatory versions, offers promising opportunities for a more flexible epistemology. When properly tweaked, tuned and adapted to local needs and demands in African contexts, better theory-based evaluations are possible. Method: Three TBE-inspired criteria for better evaluations are suggested. The usefulness of including broad perspectives in theory-making is illustrated with a recent policy example: the provision of tablets to school children in South Africa. Results: A model of collaborative theory-making is presented, and the pros and cons of the proposed hybrid model are discussed. Conclusion: Recent trends in TBE point towards greater stakeholder participation in the theory-making process and towards more flexible epistemologies. The proposed innovation of TBE may have broader implications and serve as inspiration for better evaluation practices in African contexts, given that existing research has demonstrated a need for such visions.