
Strengthening Evaluation for Development



Abstract

Although some argue that distinctions between evaluation and development evaluation are increasingly superfluous, it is important to recognize that some distinctions still matter. The severe vulnerabilities and power asymmetries inherent in most developing country systems and societies make the task of evaluation specialists in these contexts both highly challenging and highly responsible. It calls for specialists from diverse fields, in particular those in developing countries, to be equipped, active, and visible where evaluation is done and shaped. These specialists need to work in a concerted fashion on evaluation priorities that enable critical scrutiny of current and emerging development frameworks and models (from global to local level) and their implications for evaluation, and vice versa. The agenda would include studying the paradigms and values underlying development interventions; working with complex adaptive systems; interrogating new private sector linked development financing modalities; and opening up to other scientific disciplines' notions of what constitutes rigor and credible evidence. It would also promote a shift in focus from a feverish enthrallment with measuring impact to better managing for sustained impact. The explosion in the evaluation profession over the last decade also opens up the potential for non-Western wisdom and traditions, including indigenous knowledge systems, to help shape novel development as well as evaluation frameworks in support of local contexts. For all these efforts, intellectual and financial resources have to be mobilized across disciplinary, paradigm, sector, and geographic boundaries. This demands powerful thought leadership in evaluation, a challenge in particular for the global South and East.
Evaluation for Development: Strengthening the Development Evaluation Agenda
Keywords: development, development evaluation, thought leadership, evaluation priorities, developing countries
The Challenge
Over the past decade, the distinctions between developed and developing countries have become increasingly blurred. Yet a main difference remains: with few exceptions, the vulnerabilities of developing countries are magnified. The poor tend to be poorer, the vulnerable more vulnerable, institutions and systems more fragile, unstable, or dysfunctional, the powerful and the powerless more so, contexts less predictable, and those capacities seen by many as essential to executing conventional development models, lower.
Therefore, while the monitoring and evaluation of development is an exciting and vibrant endeavor, it has high stakes. If an evaluation is poorly designed or executed, it can have considerable and destructive consequences: people, communities, or countries already in a precarious position might lose their only chance at a better future, or policies and practices that are harmful might continue. Development evaluators have to respect and engage with such risk, and the profession has to be responsive to the ensuing challenges. Most crucially, those directing and influencing development evaluation theory and practice—evaluation commissioners and thought leaders, evaluators, as well as organizational leaders and managers—all have to bear the weight of this responsibility when executing their charge.
This notion also presupposes that evaluation is a valued and valuable activity that is regularly used to ensure development effectiveness. This is of course not necessarily so; the legacy of poorly executed evaluations, as well as the highly political and technically challenging nature of both development and development evaluation, interferes. Yet a firm belief in the relevance, utility, and essential contributions of evaluation must continue to guide the profession, especially in countries still struggling to find their most effective development path.

Corresponding Author: Zenda Ofir, Evalnut, P. O. Box 41829, Craighall, Johannesburg 2024, South Africa.

American Journal of Evaluation, 00(0) 1-5, © The Author(s) 2013. DOI: 10.1177/1098214013497531
We live in extraordinary times. The rapid development of new technologies is taking society into uncharted waters, inequalities are accelerating in many previously prosperous nations, and developed countries face increasing uncertainties. Yet we can nevertheless celebrate, for the first time in history, that hundreds of millions have been lifted out of poverty, in record time and primarily through their own efforts, in countries until recently regarded as severely underdeveloped. This short article argues that in preparation for the exciting yet challenging time ahead, development evaluation requires a revitalized, purposeful, innovative agenda, nurtured by more visible, dynamic thought leadership from the "developing" countries themselves, with greater attention to critical issues at the development–evaluation interface.
The Development–Evaluation Interface
Development and evaluation are, or should be, in a dance with each other—the one sometimes leading, and sometimes the other—learning from each other and working together synergistically to create something meaningful. Taken together with research, they can be viewed in the same way as a strand of DNA, building a healthy body of knowledge for development. These metaphors emphasize the importance of the relationship between development and evaluation, and the need for a greater emphasis on the intersection between the two, preventing the one from mindlessly leading the other. This is not a trivial issue. It assumes that we are clear and explicit on the underlying assumptions, values, and frameworks that underpin and link the two, and that innovation in development evaluation is pursued with attention to the implications or consequences for development and its effectiveness. This is evaluation for development, rather than the evaluation of development.
For example, excluding or understating the role of power in evaluation negates its importance in development policies, strategies, and interventions. Using people as experiments and numbers while ignoring their voices during evaluation is disempowering and dismisses their voice in the course of their development. Focusing an evaluation on the interests of individuals at the cost of community harmony reflects an understanding of development where individual interests dominate those of the collective. Failing to evaluate for weaknesses identified in past development interventions decreases the chance for development success. Rigidly applying the ubiquitous "logframes" within a usually too short funding cycle for accountability in results-based management and impact evaluation neglects the critical reality of ever-evolving development contexts and slow, initially even negative trajectories of change. Tackling, for impact evaluation, one strand of a development intervention without recognizing that the whole is more (or less) than the sum of the parts, or focusing on the achievement of (average) impacts without also focusing vigorously on crucial development needs such as equity, transformation, institution building, accountability, sustainability, and resilience, can inflate measures of success—often at the expense of long-term, sustained, truly effective development. And as studies such as "Time to Listen" (CDA, 2012) highlight, failing to evaluate for realities on the ground and key weaknesses identified in past development interventions very significantly weakens the chance of development success.
Developing countries now also seek to decrease their aid dependency. As highlighted in statements at key forums such as the Fourth High Level Forum on Aid Effectiveness in Busan, developing countries now more than ever insist on the need to direct their own development efforts, including referring to the many diverse models of successful development available worldwide. This trend has been accompanied by a growing indigenous focus on evaluation. In this context it is likely inevitable that development imperatives will shift toward perspectives such as those most recently articulated by the leading Korean economist Ha-Joon Chang, who argues that a country can be called developed only if its high income is based on superior knowledge embodied in technologies and institutions. Interventions that focus on individuals and their small, fragmented enterprises may provide some building blocks but hardly facilitate development at the national level, instead exacerbating the micro–macro disconnect that haunts development evaluation practice. Sustained development requires effective, efficient institutions and productive enterprises supported by the collective accumulation and use of knowledge, and the expansion of those social and technological capabilities that are "both the causes and the consequences of such transformation" (Chang, 2010). Yet although current, primarily aid-driven models such as the human development approach remind us that development has to be about more than poverty reduction, increasing income levels, or the provision of basic needs, these and other key global development discourses such as the Millennium Development Goals, the Doha Development Agenda, and the World Trade Organization discussions fail to address some of the most important components of national development. All of this has important implications for the evolution of the field of evaluation as it moves in synergy with development.
Strengthening the Development Evaluation Agenda
It is beyond the scope of this article to provide a detailed analysis of the development–evaluation interface. Therefore, the following only highlights a few important priorities for frontier work in development evaluation. First, more ground-breaking work is needed to bring non-Western worldviews and values to the forefront of evaluation theory and practice. The profession is poorer for the absence of a concerted effort in this regard. Second, an increasingly sophisticated understanding is required of urgent priorities that depend on complexity and systems thinking in a highly networked, competitive world—for example, evaluating impact, sustainability, transformation, and resilience, or individual, organizational, and institutional empowerment. This implies acquiring a clearer multidisciplinary understanding and use of work on complex systems, understanding change and change trajectories, interlinked theories of change, the many different types of relationships found in partnerships, coalitions, and networks, and the role of power in political and social contexts. Such a focus will help better address issues such as the "micro–macro disconnect," the "missing middle," and unintended consequences, and support critical development priorities including institution strengthening, organizational learning and change, knowledge generation and translation for technological and social advancement, and transformative social change.
Third, alternative financing and funding models are poised to complement and even overtake the role of conventional aid mechanisms. Several types of investment by Brazil, Russia, India, China, and South Africa (known as the BRICS) that serve to spur development are gaining momentum in Africa, Latin America, and Asia. At the same time, the private sector in developed countries appears increasingly interested in investing in financing mechanisms with seductive names such as "impact investing" and "social impact bonds." These mechanisms may put vulnerable societies at risk unless the evaluation profession is from the beginning equipped to help stakeholders plan and assess the benefits and risks, and in particular any negative consequences following from new funding modalities—or, for that matter, from any development model or strategy.
Fourth, the pendulum needs to move back from enthrallment with simplistic notions of "measuring impact" and determining "value for money" toward enabling—in parallel with these latter efforts—a smart engagement with managing for impact that goes far beyond conventional process evaluation and that is based on the many lessons that have emerged from results-based management and other similar efforts. Fifth, there is a dire need to engage vigorously, in theory and in practice, with a better understanding and use of standards for evaluation quality, "rigor," and "credible evidence" that transcend incorrectly or too narrowly defined ideas of the "scientific method" and so-called magic bullets for measuring impact. Finally, credible, useful syntheses of evaluation results and lessons should be available and communicated in a manner that can truly support different worldviews of development in theory and practice.
Evaluation Thought Leadership for Development
It is time that thought leadership in evaluation theory and practice emerges more visibly from the global South and East. Champions are needed who have a propensity toward conventional as well as new indigenous evaluation paradigms. Individual sparks in developing countries need to be stoked, so that ideas can spread and ignite meaningful innovation and new directions in evaluation. There has been significant progress in building indigenous evaluation capacities, and recent global efforts such as EvalPartners provide scope for much more. However, capacity strengthening efforts tend to focus on technical aspects of evaluation within established approaches and frameworks, primarily results-based management. Although welcome and essential contributions, they may not encourage or stimulate deeper questioning of these and alternative approaches and frameworks.
In most developing countries, the profession is barely a decade old and continues to be led by theories and practices that originated in North America and Europe. The field of evaluation can grow and benefit from definitions, frameworks, models, and methods also rooted in many other countries' experiences and systems of knowing. Developing countries have rich cultures with knowledge and wisdom spanning thousands of years—often as relevant today as ever—that have yet to be applied to the field of evaluation.
This is not about "cultural sensitivity," but rather about the fundamental questioning of worldviews, frameworks, and definitions on which evaluation theory and practice—and resultant development—have been built. The potential for new theories and practices that might revolutionize development evaluation is not yet quite clear, but fledgling efforts need to be harnessed and nurtured. The knowledge and wisdom of the rest of the world needs to complement the 50 years of advancements in the West that have established and evolved the rich body of knowledge and expertise on which we draw today.
The explosion in the profession of, and demand for, evaluation over the past decade has attracted many poorly prepared practitioners from many professions, disciplines, and practices, both from developed and from developing countries. Development evaluation has yet to attract more of the finest minds from diverse disciplines and sectors to practice full time. This situation is likely to change substantially only when incentives exist, and when more governments and other influential entities in developing countries recognize and demand high-quality evaluation and perceive it as a strategic, intellectually challenging endeavor that is also linked with high-quality academic research. More importantly, thought leadership from the global South and East needs to have the power to change practice. This means reaching and influencing evaluation commissioners, those who work with aid programs, and those who work in-country with development strategies and funding modalities. Such power is still limited by capacities; innovation often takes place when what exists has been mastered. Insufficient confidence and incentives are still hampering progress. Papers and presentations are too few, with too little traction to cultivate sustained influence. Thought leadership from developing countries needs to bring about high-quality and useful analyses and innovations benefiting their own countries' priorities and contexts; establish repositories with useful syntheses; build influential coalitions and think tanks; ensure dynamic participation at important development forums; and have a robust focus on visibility. From small beginnings, thought leaders in developing countries have to nourish and accelerate the positive trajectory of the evaluation profession worldwide.
Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
Notes

1. In this article, the term developing refers to countries—primarily in the low- and lower middle-income group of nations—where large groups of the population have a relatively low level of human development, including limited capabilities to enjoy a long and healthy life in a safe environment. Such countries typically lack robust, effective institutions and continuous, self-sustaining economic growth—although with many developing countries currently on an upward growth trajectory, this is not always the case.
2. This rather uncomfortable term refers to a person who has a proven, in-depth understanding of an issue in theory or practice, uses this understanding to innovate, and is keen and able to share novel, often radical thinking and new directions that inspire others. These latter characteristics are especially important, distinguishing the "thought leader" from the conventional "expert," who may not necessarily be committed to transformation, improvement, innovating, sharing with, or inspiring others.
Ofir 5
... In September 1999, as a brainchild of Mahesh Patel of UNICEF, the African Evaluation Association (AfrEA) was formed in Nairobi, Kenya during a pan-African conference of evaluators. Mahesh Patel was also elected as the first president of the new Organization (Cloete, 2016;Mouton et al., 2014, Ofir, 2013. The goals of AfrEA were to: (1) share information and build evaluation capacity; (2) promote the formation of national evaluation associations; (3) promote knowledge and use of an African adaptation of the program evaluation standards; (4) form an Africa-wide association, promoting evaluation both as a discipline and as a profession; and (5) create and disseminate a database of evaluators (Cloete, 2016;Mouton et al., 2014). ...
... An example of this is the formation of Voluntary Organizations for Professional Evaluation (VOPEs). The birth and the activities of AfrEA since its formation has been instrumental to the emergence of evaluation as a profession in Africa (Cloete, 2016;Mouton et al., 2014;Ofir, 2013;Segone & Ocampo, 2006). As of 1999, there were only six national African evaluation bodies. ...
Full-text available
Most evaluation in Africa today is rooted in dominant Western approaches. This presents at least two problems. First, Western evaluation methods and approaches, when used in Africa, may in fact lack validity, leading to low quality evaluations, wrong conclusions, and bad development outcomes. Second, Western evaluation approaches may encourage subjugation of African culture through neo-imperialism and the ‘colonization of the mind.’ These problems have been addressed in recent years through a focus on Made in Africa Evaluation (MAE). Given the current state of development of this nascent yet increasingly influential concept, we conducted research to contribute towards a better definition of MAE. This brief article presents the background, methods, and findings from that study. We conclude that MAE is based on the standards of the African Evaluation Association (AfrEA), using localized methods or approaches with the aim of aligning the evaluation process with the lifestyle and needs of African people.
... Evaluation may have been phenomenally embedded through international development (Cloete 2016;Ofir 2013), however in recent years governments have increasingly started to build state capacity to evaluate (Porter & Goldman 2013;Mbava 2017). ...
... From its earlier roots in evaluating United States government social programmes in the eras of the 'New Deal' and 'Great Society' policies (Shadish & Luellen 2011:184-186;Mbava 2017) evaluation has through development advanced and broadened to a highly globalised world and is now practiced in a multicultural world and in complex contexts, impacting the lives of various and diverse communities globally. Demanded by governments (Porter & Goldman 2013;Mbava 2017), embedded in development (Cloete 2016;Ofir 2013) and increasingly utilised in private and not-for-profit sectors (Bisgard 2017;Abrahams 2015;Wildschut 2014), an evaluation wave fuelled by performance and quality standards is creating an evaluating society (Dahler-Larsen 2011. ...
... At the theoretical level, EE and PE propose that in the collaboration of participants and evaluators, there is the co-creation of new knowledge that encourages the instrumental use of the findings and results which in turn becomes 'actionable knowledge' that addresses the problem that was the focus of the intervention (Smits & Champagne, 2008). Within the developmental context, PP and EE have been preferred as they seem to shift the preoccupation with measuring impacts to the notion of managing for sustained impacts that lead to real societal change (Ofir, 2013). As alternatives to the positivist evaluation methodologies, participatory evaluation claims to engage in evaluation for development rather than merely only assessing the characteristics of the developmental process. ...
... In this regard, participatory approaches characterised by principles of bottom-up planning, networking and multi-stakeholder engagement, and capacity building to facilitate decision making and grassroots mobilisation have been featured in policy planning and activities aimed at stimulating positive social, economic and environmental wellbeing in marginalised communities. Given the claims of the efficacy of social enterprise tourism projects as a path toward empowerment for local communities, there is an even greater mandate for more focus on the benefits of the integration of evaluation processes in their design and operations in order to achieve overall developmental goals (Ofir, 2013). However with the critical turn in tourism studies leading to the currency and prominence of tourism as a developmental tool and agent for social change, there is a concomitant imperative to interrogate the key arguments and implications of PE and EE methods in CBTEs and PPT projects (McGehee, Kline, & Knollenberg, 2014;Panagiotopoulou & Stratigea, 2014;Papineau & Kiely, 1996). ...
Full-text available
The evaluation of social enterprise projects has focused mainly on devising effective performance measurement methods and processes to justify the investment of resources and time committed to such activities. With increasing demands for accountability, effectiveness, evidence of return on investment and value-added results, evaluation activities have been driven by imperatives of objectivity in assessments and the development of tools that monetize the social outcomes and impacts of social enterprise projects. These traditional approaches to evaluation have also been widely adapted in tourism based social enterprises that seek to attain goals of poverty alleviation, empowerment of local communities, and improved livelihoods for those marginalized from mainstream tourism economic activities. This chapter argues that traditional approaches to evaluation may be limited in supporting social entrepreneurship projects with development objectives of empowerment and societal change. It is proposed that social enterprise projects involving community participation may be better positioned to achieve their developmental objectives by incorporating more of the principles of Participatory Evaluation (PE) and Empowerment Evaluation (EE) in the quest to harness the economic prowess of tourism for human development.
... Unfortunately, in NESs and elsewhere, few impact evaluations are commissioned in such a manner as to allow for this elaborate groundwork. Ofir (2013) asserted the following in her rousing call to action for revolutionising evaluation for development in Africa: [T]his is not about 'cultural sensitivity', but rather about the fundamental questioning of worldviews, frameworks and definitions on which evaluation theory and practice -and resultant development -have been built. The potential for new ...
Full-text available
Background: Growing numbers of developing countries are investing in National Evaluation Systems (NESs). A key question is whether these have the potential to bring about meaningful policy change, and if so, what evaluation approaches are appropriate to support reflection and learning throughout the change process. Objectives: We describe the efforts of commissioned external evaluators in developing an evaluation approach to help critically assess the efficacy of some of the most important policies and programmes aimed at supporting South African farmers from the past two decades. Method: We present the diagnostic evaluation approach we developed. The approach guides evaluation end users through a series of logical steps to help make sense of an existing evidence base in relation to the root problems addressed, and the specific needs of the target populations. No additional evaluation data were collected. Groups who participated include government representatives, academics and representatives from non-governmental organisations and national associations supporting emerging farmers. Results: Our main evaluation findings relate to a lack of policy coherence in important key areas, most notably extension and advisory services, and microfinance and grants. This was characterised by; (1) an absence of common understanding of policies and objectives; (2) overly ambitious objectives often not directly linked to the policy frameworks; (3) lack of logical connections between target groups and interventions and (4) inadequate identification, selection, targeting and retention of beneficiaries. Conclusion: The diagnostic evaluation allowed for uniquely cross-cutting and interactive engagement with a complex evidence base. The evaluation process shed light on new evaluation review methods that might work to support a NES.
In a world of unprecedented uncertainty and complexity, and post-truth politics, the ethical challenges for the independent evaluator are greater than ever. At a time when the opinions of independent experts are increasingly considered to be part of an out-of-touch elite, what are the ethical dilemmas and challenges for independent evaluation consultants working in a highly competitive evaluation marketplace? How can professional practice be protected from the politicization of the evaluation process in an increasingly polarized policy space (Schwandt, 2018)? If evaluations of public policies are inevitably intertwined with political interests (Englert et al., 1977; Vestman & Conner, 2006), and commercial interests (Nielsen et al., 2018), independent evaluators are caught in complex power dynamics. By following a hermeneutic phenomenology approach, this study investigates the lived experience of 9 independent evaluation consultants (5 women and 4 men) when dealing with ethical dilemmas as well as rendering ethically sensitive judgments in the framework of external international development evaluations commissioned by a variety of multilateral organizations and bilateral donors. In contrast to most other studies on evaluation ethics that examine what evaluators should do, this phenomenological study explores evaluators at the core of their being through their actual personal experiences in i) making ethical decisions dealing with good and bad in their daily work, ii) dealing with their clients and the power dynamics and ethical challenges stemming from this relationship; and iii) drawing upon the evaluation professional community as a support system to reinforce their ethical decision-making and ethical practices. 
This study argues that independent evaluators’ ethical decision-making is never solely the result of one’s cognition, reasoning or intuition but relies on the decision-maker’s subjective rationality and is often bounded by both information and power asymmetries. The thematic analysis of evaluators’ lived experience yielded the five essential themes: knowing one’s roots, navigating through the ethical fog, relying on a decision support system, negotiating one’s independence and influence, and turning to other stakeholders. With a view to deepening the understanding of the evaluation process and its relational dynamics within the evaluation community, this study highlights the need for the strengthening of moral thinking and ethical practice among all involved actors, in order to foster action for a more reflective practice in the evaluation consulting field.
State development cooperation (DC) is facing growing pressure of legitimizing its budget expenditure and relevance, the most crucial instrument for this purpose being evaluations. This thesis therefore deals with the evaluation in German DC by referring to the following research questions: How is development cooperation evaluated in Germany? To what extent do the central evaluation units differ especially regarding effectiveness and participation? Two main development institutions are in charge of evaluating the German state aid: the German Corporation for International Cooperation (GIZ) following its predecessor the German Corporation for Technical Cooperation (GTZ) as the leading implementing organization for projects and programs and the German Institute for Development Evaluation (DEval), which has taken over the task of evaluating superordinate research topics from the Federal Ministry for Economic Cooperation and Development (BMZ). The quality of these institutions’ evaluation systems is measured upon four selected categories from the internationally accepted criteria by the Organization for Economic Cooperation and Development (OECD): impartiality and independence, effectiveness, transparency and participation. Those categories are being applied not only on the level of project, program and superordinate evaluations, but also for meta-evaluations, as these “evaluations of evaluations” are also among the institutions’ task range. The assessment furthermore draws on two time frames which are determined by the 1999 and 2009 system analyses on evaluation in German development cooperation which had been commissioned by the BMZ. The time frame of 1999-2009 (investigating the GTZ/BMZ) helps understand the evolvement of German DC and concludes with recommendations for action and the one of 2009-2017 (investigating the GIZ/DEval) shows how the recommendations of action were adopted and outlines remaining weaknesses in the system. 
Much of the information was found not only through literature and online research but also through in-person and telephone interviews with three relevant agencies: DEval and the GIZ for an inside view, and the German Development Institute (DIE) for an outside opinion. Two theses are central: first, the evaluation system in Germany changed drastically after the publication of the 2009 system analysis; second, a convergence can be observed between project evaluations (represented by the GTZ/GIZ) and superordinate evaluations (represented by the BMZ/DEval) despite their different learning processes. The comparison of the institutions based on the above distinctions confirmed both theses. Yet a third finding emerged: meta-evaluations were, and still are, poorly established but are to receive greater focus in the future. The conclusion also describes promising further research topics, such as enhanced evaluation capacity development (recipient participation in evaluation). Furthermore, it confirms that the German evaluation system for DC still faces challenges but is undergoing constant transformation and improvement.
Motivation: Authoritarian states receive development funding from international donors for programmes and interventions, some aimed at improving their governance systems. This article reports on the evaluation of an EU‐funded programme in Kazakhstan, seen as the most progressive reformer in Central Asia. The EU programme was aimed at enhancing Kazakhstan’s business competitiveness through better regulation and civil service modernization. Purpose: This article addresses two research questions. What was the impact of the EU‐funded intervention? What role, if any, did the evaluation play in reflective policy learning for the future? Approach and Methods: The research draws on quantitative and qualitative evidence, involving analysis of secondary data sources on the effectiveness of governance over time in Kazakhstan and interviews with 34 key stakeholders on the impact of the EU interventions. Findings: We find no significant improvements in governance over time. While the donor responded flexibly to meet the changing strategic goals of the state (which were at the personal behest of the President), this did not help to embed evaluation as part of the policy cycle for future learning. The key beneficiary here was the Government of Kazakhstan. Policy implications: Wider systemic change from upward accountability to downward accountability to citizens is needed to make evaluation relevant for authoritarian states. Upward accountability to the President is a feature of authoritarian regimes, which precludes citizens from being able to hold the state to account. Without these systemic changes, autocracies simply engage in development evaluation as a perfunctory exercise to meet donor requirements.
Background: A recent study of African evaluations identified deficiencies in present evaluation practices. Because the public sector has limited expertise in designing policy impact evaluations, expertise for such complex designs is largely sourced from outside the public sector. Consequently, the recommendations made sometimes pay insufficient attention to variations in local contexts. Objectives: The bold idea presented in this article is that theory-based evaluation (TBE), in its most recent participatory versions, offers promising opportunities for a more flexible epistemology. When properly tweaked, tuned and adapted to local needs and demands in African contexts, better theory-based evaluations are possible. Method: Three TBE-inspired criteria for better evaluations are suggested. The usefulness of including broad perspectives in theory-making is illustrated with a recent policy example, the provision of tablets to school children in South Africa. Results: A model of collaborative theory-making is presented, and the pros and cons of the proposed hybrid model are discussed. Conclusion: Recent trends in TBE point towards more participation of stakeholders in the theory-making process and towards more flexible epistemologies. The proposed innovation of TBE may have broader implications and serve as a promising inspiration for better evaluation practices in African contexts, given that existing research has demonstrated a need for such visions.
Background: There is increasing global resistance against a perceived Eurocentric value hegemony in knowledge generation, implementation and evaluation. A persistent colonial value mindset is accused of imposing outdated and inappropriate policies on formerly colonised and other countries; it needs to give way to more appropriate processes and results to improve conditions in those countries in the 21st century. Objectives: This article summarises lessons from the impact of historical colonial value systems and practices on current knowledge generation, transfer and application processes and results in Africa (especially in South Africa). The objective is to identify concrete directions for ‘decolonising’ research and evaluation processes and products so that they become more relevant, appropriate and, therefore, more effective in achieving sustainable empowerment and other desired developmental outcomes, not only in less developed countries but also in traditionally more developed Western nations. Method: A comparative literature review was undertaken to identify and assess the current state of the debate on the perceived need to decolonise research and evaluation practices in different contexts. The Africa-rooted evaluation movement was used as a case study for this purpose. Results: The current decoloniality discourse is ineffective and needs to be taken in another direction. Mainstreaming culturally sensitive, responsive and contextualised participatory research and evaluation designs and methodologies in all facets and at all stages of research and evaluation projects has the potential to fulfil the requirements and demands of the research and evaluation decoloniality movement. Conclusion: This will improve the effectiveness of research and evaluation processes and results.
In this article we provide a comprehensive review of 71 studies on evaluation in international development contexts published over the past 18 years. The primary purpose of the review is to explore how culture is being conceptualized and defined in international development contexts, and how evaluation practitioners, scholars, and policymakers who work in international development evaluation frame the role of culture and cultural context in these settings. In this article we ask: How is culture framed in the international development evaluation literature? To what extent do descriptions of evaluation (design, processes, and outcomes) reflect other knowledge and value systems and perspectives? Whose values and worldviews inform the evaluation design and methodology? How does the community’s cultural context inform the evaluation methodology and methods used? Based on our analysis, we identify and discuss five themes: the manifestation of culture along a continuum from explicit to implicit, a cultural critique of participatory practice in international development, the limits of social constructivist epistemologies and representations of voice, evaluation as a cultural practice, and cultural engagement and the multifaceted evaluator role.