Article

Outcome Evidencing: A Method for Enabling and Evaluating Program Intervention in Complex Systems

Authors:
Rodrigo Paz-Ybarnegaray and Boru Douthwaite

Abstract

This article describes the development and use of a rapid evaluation approach to meet program accountability and learning requirements in a research for development program operating in five developing countries. The method identifies clusters of outcomes, both expected and unexpected, happening within areas of change. In a workshop, change agents describe the causal connections within outcome clusters to identify outcome trajectories for subsequent verification. Comparing verified outcome trajectories with existing program theory allows program staff to question underlying causal premises and adapt accordingly. The method can be used for one-off evaluations that seek to understand whether, how, and why program interventions are working. Repeated cycles of outcome evidencing can build a case for program contribution over time that can be evaluated as part of any future impact assessment of the program or parts of it.


... In light of the importance of including the social dimension in sustainability [5], evaluation practices have moved from traditional linear models to development-oriented evaluation practices. The rationale of development-oriented evaluation practices is to enhance the engagement of stakeholders throughout the evaluation process and integrate their feedback in real time [8][9][10]. Along the project's pathway, stakeholders' involvement in the evaluation can significantly alter the strategy and development of the project and adapt the necessary resources according to the changing context and collectively identified priorities. ...
... There is room to move beyond traditional impact assessment practices, which have been limited to linear models and methods measuring impacts on economic returns, such as cost-benefit analysis using quantitative methods [8,16], and which have failed to address the real needs of the intended users [30,31]. Mixed-method approaches have been suggested to give immediate feedback with explicit answers and value-added dimensions that are not revealed through quantitative or qualitative analysis alone [9,28,32]. ...
... This can be achieved by actively involving and winning the commitment of scientists and farmers, which then creates a community-oriented environment [46,51,59]. However, the adoption of innovation, and of innovative solutions, has been slow because of the difficulty of identifying the underlying cause of the problem and taking proper action accordingly, while not forgetting the needs of stakeholders [9] and making needed changes throughout the process. Bringing people together in the innovation process, both those directly and indirectly engaged, remains one of the main obstacles to moving forward in the co-creation process. ...
Article
Full-text available
Assessing impacts in innovation contexts/settings with the aim of fostering sustainability requires tackling complex issues. Literature shows that key sources of this complexity relate to the need to integrate the local context; identify the underlying problems; engage key stakeholders; and reflect on their feedback throughout the innovation process. A systematic literature review on innovation impact assessment reveals that social impacts have been the most studied and are thus where promising methods and tools have been used. Nevertheless, there are many unresolved issues beyond assessing social impacts in innovation processes. Literature highlights that building co-creating innovation processes that respond to stakeholders' real needs and context, and adapting to changing circumstances by integrating timely feedback from stakeholders, are two critical challenges calling for a systems thinking approach. This study proposes Developmental Evaluation (DE) as a systemic approach to evaluation which supports adaptive development in complex environments and adds value by integrating continuous feedback from diverse stakeholders. As a non-prescriptive evaluation approach in terms of methods and tools, DE can provide meaningful guidance for using diverse methods and tools to further ongoing development and adaptation in innovation processes, by linking evaluation activities, impact assessment among them, with the DE principles of being situational, adaptive and continuously responsive.
... Human factors and systems thinking approaches such as the Systems Engineering Initiative for Patient Safety (SEIPS) (Carayon et al. 2020; Holden et al. 2013) and Cognitive Work Analysis (CWA) (Rasmussen, Pejtersen, and Goodstein 1994; Read, Salmon, Lenné, and Stanton 2015) recognise and understand outcomes as an important part of system development. However, there is still a need to provide a more straightforward understanding of outcome interactions and of how other healthcare system elements vary in different contexts (Holden et al. 2013; Paz-Ybarnegaray and Douthwaite 2017; Petticrew et al. 2019). Furthermore, limited practical approaches exist for gathering, discussing, understanding, and communicating outcomes as interrelated systems (Akinluyi, Ison, and Clarkson 2019; Holden et al. 2013). ...
... Hence, methods for better understanding and communicating the interactions of multiple outcomes are needed to develop better healthcare systems interventions (Flemming et al. 2019; Petticrew et al. 2019). Thanks to this understanding, it could be possible to define a complex system-based monitoring and evaluation strategy (Paterson et al. 2009; Paz-Ybarnegaray and Douthwaite 2017) to inform the continuous adaptations needed in healthcare systems (Holden et al. 2013). Furthermore, a holistic understanding of the priorities can positively impact the design of more human healthcare services that embrace the different priorities of stakeholders. ...
Article
Full-text available
Outcomes, which are the result state or condition from a process or intervention, are essential elements of healthcare system design and an important indicator of performance. They are included in well-known system analysis frameworks such as the Systems Engineering Initiative for Patient Safety (SEIPS) and Cognitive Work Analysis (CWA). However, fewer practical approaches exist for understanding and communicating interactions among healthcare outcomes. This study applies a novel mapping method as a practical approach to collect, aggregate and visualise interrelations among multiple healthcare outcomes. Graphic facilitation mapping sessions with eleven healthcare providers and ten patients with chronic conditions were conducted. Participants created outcome interrelationship maps following a six-step process. Two outcome-based network visualisations were synthesised using network analysis. This outcome-based approach advances how we frame healthcare systems, focussing on accommodating multiple stakeholders’ visions, understanding interrelations, and defining trade-offs. This practical approach may complement frameworks such as SEIPS and CWA.
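The abstract above describes, but does not reproduce, the synthesis step in which participants' individual outcome-interrelationship maps are aggregated into shared network visualisations. As a rough illustration of that kind of aggregation, the sketch below counts how many participants drew each directed link between two outcomes and keeps the links drawn by more than one participant. The outcome names, participant maps, and the two-participant threshold are invented for illustration; the study's actual data and network-analysis procedure are not shown here.

```python
from collections import Counter

# Each participant's map: the directed links between outcomes they drew
# during a mapping session (hypothetical data for illustration only).
participant_maps = [
    [("waiting time", "patient stress"), ("patient stress", "adherence")],
    [("waiting time", "patient stress"), ("staff workload", "waiting time")],
    [("staff workload", "waiting time"), ("waiting time", "patient stress")],
]

# Aggregate: weight each directed link by how many participants drew it.
edge_weights = Counter(link for m in participant_maps for link in m)

# Synthesis view: keep only links drawn by at least two participants.
synthesis = {link: w for link, w in edge_weights.items() if w >= 2}
```

The resulting weighted edge list can then be handed to any network-analysis or visualisation tool; the aggregation itself is just counting agreement across participants.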
... We take as given that, even in complex systems, change happens through relatively stable patterns of activities that emerge and die away over time. These patterns have been called technology trajectories (Ekboir, 2003), innovation trajectories (Douthwaite and Gummert, 2010), outcome trajectories (Paz-Ybarnegaray and Douthwaite, 2016) and beneficial coherence within attractors (Snowden, 2010). An empirically-based ToC should give a sense of recurring patterns of behavior that programs may have catalyzed or contributed to catalyzing, along with other factors. ...
... For this paper, we re-analyzed the two selected case histories to understand what outcomes were achieved and how, with a particular focus on understanding the dynamics of causality present in the cases. We cross-checked and supplemented the initial case material with data from a separate Outcome Evidencing process conducted by AAS staff between March 2014 and 2015, subsequently published as a methods note in the American Journal of Evaluation (Paz-Ybarnegaray and Douthwaite, 2016). Outcome Evidencing involved identifying, clustering, and verifying outcomes and impact pathways for each of the hubs, conducted with participation from hub-level staff, local stakeholders, international research staff from AAS, and independent evaluators. ...
Article
Agricultural innovation systems (AIS) are increasingly recognized as complex adaptive systems in which interventions cannot be expected to create predictable, linear impacts. Nevertheless, the logic models and theory of change (ToC) used by standard-setting international agricultural research agencies and donors assume that agricultural research will create impact through a predictable linear adoption pathway, which largely ignores the complexity dynamics of AIS and misses important alternate pathways through which agricultural research can improve system performance and generate sustainable development impact. Despite a growing body of literature calling for more dynamic, flexible and "complexity-aware" approaches to monitoring and evaluation, few concrete examples exist of ToC that take complexity dynamics within AIS into account, or that provide guidance on how such theories could be developed. This paper addresses this gap by presenting an example of how an empirically-grounded, complexity-aware ToC can be developed and what such a model might look like in the context of a particular type of program intervention. Two detailed case studies are presented from an agricultural research program that was explicitly seeking to work in a "complexity-aware" way within aquatic agricultural systems in Zambia and the Philippines. Through an analysis of the outcomes of these interventions, the pathways through which they began to produce impacts, and the causal factors at play, we derive a "complexity-aware" ToC to model how the cases worked. This middle-range model, as well as an overarching model that we derive from it, offers an alternate narrative of how development change can be produced in agricultural systems, one which aligns with insights from complexity science and which, we argue, more closely represents the ways in which many research for development interventions work in practice.
The nested ToC offers a starting point for asking a different set of evaluation and research questions which may be more relevant to participatory research efforts working from within a complexity-aware, agricultural innovation systems perspective.
... • Development and use of an integrated database on soil and agronomic data by advisory services in Ethiopia.
• Inclusion of solar power as a remunerative crop (SPaRC) as part of a large government-funded program in India, Kisan Urja Suraksha evam Utthaan Mahabhiyan (KUSUM).
• Development and use of cassava clean seed systems in Tanzania.
We assume that the main outcomes in each case resulted from an outcome trajectory (Paz and Douthwaite 2017). We define an outcome trajectory (OT) as the interacting and co-evolving system of actors, knowledge, technology and institutions that produces, sustains and sometimes scales a coherent set of outcomes over time. ...
Article
Full-text available
At the end of 2021, CGIAR Research Programs (CRPs) will be replaced by Initiatives housed within One CGIAR. This new modality is intended to achieve higher levels of impact at a faster rate and at reduced cost compared to the CRPs. As One CGIAR begins, there is a unique opportunity to reflect on what has worked in different contexts. In this paper, we provide findings that relate to One CGIAR’s overarching view of how it will achieve positive and measurable impacts, and for agricultural research for development (AR4D) more generally. Specifically, we draw from three related CRP evaluations to identify how different types of AR4D approaches have contributed to successful outcomes. In the final section of the paper, we present our conclusions and provide a list of recommendations for the science and technology policy of One CGIAR and possibly other integrated research for development programs.
... This case study and the overall evaluation use a version of outcome harvesting called outcome evidencing (Paz and Douthwaite, 2017). Outcome harvesting is 'backward looking' in that it starts with an outcome and works backward to identify and understand the patterns of interactions between people, institutions and technology that contributed to it over time. ...
Experiment Findings
Full-text available
See uploaded evaluation report
... MSC has been widely recognized for various adaptive management purposes, for example, to track changes continuously; facilitate learning, responsive feedback, and decision making; and capture program contributions to intended and unintended outcomes. [7][8][9][10][11] However, to date, evidence of the effectiveness of the MSC method is still lacking in the monitoring and adaptive management field. Most well-documented experiences and lessons learned in MSC application come from final or summative evaluation activities. ...
Article
Full-text available
Introduction: The Most Significant Change (MSC) technique is a complex-aware monitoring and evaluation tool, widely recognized for various adaptive management purposes. The documentation of practical examples using the MSC technique for an ongoing monitoring purpose is limited. We aim to fill the current gap by documenting and sharing the experience and lessons learned of The Challenge Initiative (TCI), which is scaling up evidence-based family planning (FP) and adolescent and youth sexual and reproductive health (AYSRH) interventions in 11 countries in Asia and sub-Saharan Africa. Methods: The qualitative assessment took place in early 2021 to document TCI's use and adaptation of MSC and determine its added value in adaptive management, routine monitoring, and cross-learning efforts. Focus group discussions and key informant interviews were conducted virtually with staff members involved in collecting and selecting MSC stories. Results: TCI has had a positive experience with using MSC to facilitate adaptive management in multiple countries. The use of MSC has created learning opportunities that have helped diffuse evidence-based FP and AYSRH interventions both within and across countries. The responsive feedback step in the MSC process was viewed as indispensable to learning and collaboration. There are several necessary inputs to successful use of the method, including buy-in about the benefits, training on good interviewing techniques and qualitative research, and dedicated staff to manage the process. Conclusion: Our assessment results suggest that the MSC technique is an effective qualitative data collection tool to strengthen routine monitoring and adaptive management efforts that allows for flexibility in how project stakeholders implement the process. 
The MSC technique could be an important tool for global health practitioners, policy makers, and researchers working on complex interventions because they continually need to understand stakeholders' needs and priorities, learn from lessons and evidence-based practices, and be agile about addressing potential challenges.
... Thus, we designed a process called SPARC (a CASPR anagram), drawing from related learning in the outcome mapping and OH learning communities, and their antecedent, the Most Significant Change approach (Church, 2016; Davies & Dart, 2005; Earl, Carden, & Smutylo, 2001; Reinelt, Yamashiro-Omi, & Meehan, 2010; van Wessel, 2018; Wilson-Grau, 2015, 2018). We revisited the six-step approach of OH and reviewed other literature for further grounding (Church, 2016; Howard et al., 2011; Majot et al., 2010; Nissen & Castellani, 2020; Paz-Ybarnegaray & Douthwaite, 2016; Rassmann & Smith, 2016; Rassmann et al., 2012; Wilson-Grau et al., 2016; World Bank, 2014). ...
Article
Full-text available
Evaluation processes that facilitate learning among advocates must be nimble, creative, and meaningful while transcending putative performance and accountability management. This article describes the experience, lessons, and trajectory of one such approach, Simple, Participatory Assessment of Real Change (SPARC), that a transnational HIV prevention research advocacy coalition pilot‐tested in sub‐Saharan Africa. Inspired by the pioneering work of the outcome harvesting (OH) and participatory evaluation community, we recuperate advocates' centrality as storytellers, sense‐makers, and strategists in advocacy evaluation and describe how we recalibrated SPARC to meet their evaluation and learning needs. This article highlights the normative value of deliberative discourse in evaluation as it contributes to the interpretation of OH and the enrichment of the theory and practice of advocacy evaluation.
... Nevertheless, funders of such interventions still want to know if their funding has made a difference, that is, if the interventions have improved the lives of people, and in what manner. While a range of evaluation approaches might address these questions, theory-based methods are often used, including contribution analysis (Befani & Mayne, 2014; Befani & Stedman-Bryce, 2016; Mayne, 2001, 2011, 2012; Paz-Ybarnegaray & Douthwaite, 2016; Punton & Welle, 2015; Stern et al., 2012; Wilson-Grau & Britt, 2012). ...
Article
Full-text available
The basic ideas behind contribution analysis were set out in 2001. Since then, interest in the approach has grown and contribution analysis has been operationalized in different ways. In addition, several reviews of the approach have been published and raise a few concerns. In this article, I clarify several of the key concepts behind contribution analysis, including contributory causes and contribution claims. I discuss the need for reasonably robust theories of change and the use of nested theories of change to unpack complex settings. On contribution claims, I argue the need for causal narratives to arrive at credible claims, the limited role that external causal factors play in arriving at contribution claims, the use of robust theories of change to avoid bias, and the fact that opinions of stakeholders on the contribution made are not central in arriving at contribution claims.
... Fig. 2 includes some of the major feedback loops between steps. While Fig. 2 resembles other participatory evaluation methodologies (e.g., Paz-Ybarnegaray & Douthwaite, 2017), the synthesis of methodological principles and methods (as noted above) was designed to be specific to the challenges facing the RCS and was intended to address recognised evaluation bottlenecks (a point we return to in the discussion section). ...
Preprint
Community environmental management (CEM) involves the achievement of environmental objectives through the facilitation of community partnerships, local dialogue, consultation and participative decision-making. CEM is increasingly seen as a solution to complex environmental issues facing regulatory authorities. However, evaluating CEM projects is problematic given complex relationships between community participation and environmental outcomes. This paper reports on a project that developed a novel evaluation methodology and trialled it in an intervention with a regional council in New Zealand. The methodology shows promise in addressing common evaluation bottlenecks and helping stakeholders to more fully articulate links between community participation and environmental outcomes. While the local participants in the CEM project were pleased with the evaluation findings and acted upon them, they hoped that it would also stimulate wider organisational change in the regional council. However, this did not happen. Reflections on the project, informed by institutional theory, reveal that the framing of ‘participation’ in the findings was appropriate for those involved with the CEM project, but non-participating regional council stakeholders read the findings through different frames, and therefore the evaluation failed to communicate the necessity of wider change. The paper concludes that this evaluation methodology has the potential to be adapted for other contexts where there is a need for more robust evidence of the value (or otherwise) of CEM. However, if there is a desire to stimulate wider organisational change, care must be taken to anticipate the different institutional framings of stakeholders who might be unfamiliar with, or even hostile to, CEM.
... Second, the QuIP first asks respondents what major changes they have experienced in each domain during a specified time period and then encourages them to elaborate on what they think is driving these changes. This feature of working backwards from outcomes connects QuIP strongly with 'outcome harvesting' and 'outcome evidencing' as described respectively by Wilson-Grau and Britt (2013) and Paz-Ybarnegaray and Douthwaite (2016). Third, by blindfolding interviewers and respondents to reduce the threat of confirmation and pro-project biases, QuIP resembles 'goal-free evaluation', which also avoids being explicit about intervention goals in order to reduce 'goal-related tunnel vision' (Youker, 2013). ...
... Paz-Ybarnegaray and Douthwaite suggest the implementation of 'Outcome Evidencing' at regular intervals as an effective method of programme evaluation for estimating the level of contribution to the impact of the programme (Paz-Ybarnegaray & Douthwaite, 2017). Bray (2008) suggests programme evaluation as an effective tool of quality assurance that helps degree-awarding institutions improve academic quality. Similarly, some other evaluators are of the opinion that programme evaluation can be considered a systematic or scientific procedure to evaluate programme organisation, delivery and outcomes (Rossi & Freeman, 1993; Short, Hennessy, & Campbell, 1996). ...
Article
Full-text available
Quality assurance in higher education in Pakistan was formally initiated when the Quality Assurance Agency (QAA) was established under the Higher Education Commission of Pakistan. The current study is a descriptive study conducted to review the impact of programme evaluation on Pakistani universities. Data available with QAA Pakistan, self-assessment reports available for the programmes, and field notes were used as tools in this study. Programme evaluation reports were graded on a rubric in order to rank departments within a university. The study shows that the quality assurance mechanism has taken firm root at the micro level, that is, at the university level in Pakistan, under the supervision of the QAA of Pakistan. The study will be of interest to all educationists as it shows both the role of QAA Pakistan and the role of quality enhancement cells, whose combined efforts have resulted in systematic programme evaluation in Pakistani universities.
... Relationships between elements of a simple results-chain model, which transforms resources into outputs to help achieve policy outcomes, are illustrated in Figure 1. Differences between each concept are also important, as outlined in Table 1, with definitions distilled from multiple sources (Bourgon, 2007; Cook, 2004; Funnell and Rogers, 2011; Garland, 1996; Kristensen et al., 2002; Mayne, 2001; Mayne, 2004; Mulgan, 2008; OECD, 2002; Paz-Ybarnegaray and Douthwaite, 2017; Productivity Commission, 2013; Productivity Commission, 2015). ...
Article
Purpose This article constructively critiques the new global methodology for evaluating the effectiveness of anti-money laundering regimes against defined outcomes. Design/methodology/approach With surprisingly little discussion at the intersection of the money laundering and policy effectiveness and outcomes scholarship and practice, this article combines elements of these disciplines, and recent peer-review evaluations, to qualitatively assess the Financial Action Task Force's (FATF's) anti-money laundering 'effectiveness' methodology. Findings FATF's 'effectiveness' methodology does not yet reflect an outcome-oriented framework as it purports. Misapplication of outcome labels to outputs and activities misses an opportunity to evaluate outcomes as the impact and effect of anti-money laundering policies. Practical implications If the 'outcomes' of the 'effectiveness' framework do not match the crime and terrorism prevention policy goals of nation states, the new 'main' component for assessing the effectiveness of anti-money laundering regimes potentially detracts focus and resources from, rather than towards, intended policy objectives. Originality/value There is a dearth of scholarship on whether the global anti-money laundering 'effectiveness' framework is sufficiently robust to assess effectiveness as it purports. This article begins addressing that gap. Summary: https://goo.gl/BkK8ja
... Program staff working in all five hubs developed the outcome evidencing approach to identify and make sense of emerging program outcomes, both expected and unexpected. A paper describing the approach was published in the American Journal of Evaluation (Paz-Ybarnegaray and Douthwaite, 2016). In the approach, staff and change agents on the ground identify outcomes resulting from program intervention in each hub, then make sense of and validate them. ...
Article
There is a growing recognition that programs that seek to change people's lives are intervening in complex systems, which puts a particular set of requirements on program monitoring and evaluation (M&E). Developing complexity-aware M&E systems within existing organizations is difficult because they challenge traditional orthodoxy. Little has been written about the practical experience of doing so. This article describes the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems. We outline the design and methods used, including trend lines, panel data, after action reviews, building and testing theories of change, outcome evidencing and realist synthesis. We identify and describe a set of design principles for developing complexity-aware M&E. Finally, we discuss important lessons and recommendations for other programs facing similar challenges. These include developing evaluation designs that meet both learning and accountability requirements; making evaluation part of a program's overall approach to achieving impact; and ensuring evaluation cumulatively builds useful theory as to how different types of program trigger change in different contexts.
... ISPC also wanted the program to establish counterfactual research designs, including control villages, maintaining that AAS would not be able to justify causal claims otherwise. The AAS leadership and science team resubmitted the proposal, clarifying that it was using theory-driven evaluation, which is able both to understand how RinD works and to make causal claims where experimental or quasi-experimental approaches involving control villages may be unethical or inappropriate (Paz-Ybarnegaray and Douthwaite, 2016). A later external evaluation supported AAS's use of theory-driven approaches (CGIAR-IEA, 2015). ...
Article
There have been repeated calls for a 'new professionalism' in carrying out agricultural research for development since the 1990s. At the centre of these calls is a recognition that, for agricultural research to support the capacities required to face global patterns of change and their implications for rural livelihoods, a more systemic, learning-focused and reflexive practice that bridges epistemologies and methodologies is needed. In this paper, we share learning from efforts to mainstream such an approach through a large, multi-partner CGIAR research program working in aquatic agricultural systems. We reflect on four years of implementing research in development (RinD), the program's approach to the new professionalism. We highlight successes and challenges and describe the key characteristics that define the approach. We conclude it is possible to build a program on a broader approach that embraces multidisciplinarity and engages with stakeholders in social-ecological systems. Our experience also suggests caution is required to ensure there is the time, space and appropriate evaluation methodologies in place to appreciate outcomes different from those to which conventional agricultural research aspires.
Article
Full-text available
Research-engaged decision making and policy reform processes are critical to advancing resilience, adaptation, and transformation in social-ecological systems under stress. Here we propose a new conceptual framework to assess opportunities for research engagement in the policy process, building upon existing understandings of power dynamics and the political economy of policy reform. We retrospectively examine three cases of research engagement in small-scale fisheries policy and decision making, at national level (Myanmar) and at regional level (Pacific Islands region and sub-Saharan Africa), to illustrate application of the framework and highlight different modes of research engagement. We conclude with four principles for designing research to constructively and iteratively engage in policy and institutional reform: (a) nurture multi-stakeholder coalitions for change at different points in the policy cycle, (b) engage a range of forms and spaces of power, (c) embed research communications to support and respond to dialogue, and (d) employ evaluation in a cycle of action, learning, and adaptation. The framework and principles can be used to identify entry points for research engagement and to reflect critically upon the choices that researchers make as actors within complex processes of change. Key Words: action research; dialogue; governance; partnerships; policy reform; power
Article
Improving policies, broadly defined, is at the heart of the structural transformation agenda. This paper describes the use of a new evaluation method, outcome trajectory evaluation (OTE), based on both evaluation and policy process theory, to explore the influence of HarvestPlus, a large and complex research for development program focused on improving nutrition, on a specific policy outcome, namely the establishment of biofortification crop breeding programs in national agricultural research institutes in Bangladesh, India, and Rwanda. The findings support claims of significant HarvestPlus contributions while also raising issues that need to be monitored to ensure sustainability. The paper also discusses the pros and cons of the OTE approach in terms of methodological rigor and the accumulation of learning from one evaluation to the next.
Keywords: Theory-based evaluation; Policy process evaluation; Middle-range theory; Biofortification
Article
Full-text available
Influencing policy is an important scaling mechanism. However, if a program is to plausibly claim that it has or can influence policy, it needs to explain how. This is not straightforward because of the complex nature of policy change. Scholars suggest the use of theory to help answer the ‘how’ question. In this article, we show how, in practice, a middle-range policy change theory—Kingdon’s Policy Window theory—helped us model the workings of four outcome trajectories that produced agricultural policy outcomes in four cases. By providing a common framework, the middle-range theory helped accumulate learning from one evaluation to the next, generating specific and generalizable insights in the process. Accumulation learning in this way can help organizations become more convincing in the proposals they write to donors, more accountable and better able to identify and deliver on their goals.
Article
Full-text available
While the key role that policy plays in sustainable development has long been recognized, rigorously documenting the influence of research on policy outcomes faces conceptual, empirical and even political challenges. Addressing these challenges is increasingly urgent since improving policies—broadly defined—is at the heart of the structural transformation agenda. This paper describes the use of a new evaluation method—outcome trajectory evaluation (OTE), based on both evaluation and policy process theory—to explore the influence of HarvestPlus, a large and complex research for development program focused on improving nutrition, on a specific policy outcome, namely the establishment of crop biofortification breeding programs in national agricultural research institutes in Bangladesh, India and Rwanda. The findings support claims of significant HarvestPlus contributions to the establishment of these programs, while also raising issues that need to be monitored to ensure sustainability. The paper also discusses the pros and cons of the OTE approach in terms of both methodological rigor and program learning. In particular, the fact that HarvestPlus is a long-running program allows us to reflect on how a “backward looking” approach such as OTE builds on and complements the more “forward looking,” theory of change-based approaches that informed HarvestPlus programming and evaluation during its earlier, highly-successful phases. Such a long-run perspective is rare in development evaluation and it offers important lessons for how to think about and plan for evaluation over the course of a complex agriculture research for development program.
Article
Full-text available
Since their inception in 2012, the CGIAR research programs (CRPs) on Roots, Tubers and Bananas (RTB) and Agriculture for Nutrition and Health (A4NH) have been generating innovations, testing interventions, and providing science-based evidence and advice to policy and decision makers at local, national and supra-national levels with the expectation that this advice will contribute to policy changes that in turn help create an enabling environment for agri-food systems innovations. In 2019, the two CRP leadership teams commissioned a systematic assessment to validate four significant policy outcomes to which they had contributed. The four policy outcomes are:
1. Mainstreaming of biofortification in the African Union (AU)
2. Development of a cassava seed certification system in Tanzania
3. Development of a cassava seed certification system in Rwanda
4. Control of potato purple top in Ecuador
This report derives lessons from considering similarities and differences between the four cases. The first objective of this synthesis is to generate deeper and more generalizable understanding of how CGIAR contributes to policy change than would be possible from any single case. The second is to present a broadly applicable theory of change that can help understand and accumulate learning about how policy changes in different contexts.
Article
Full-text available
This paper argues for more creativity and flexibility in agricultural research for development (AR4D) scaling and impact evaluation in complex contexts. While acknowledging the importance of setting reasonable end-of-project targets and outcomes, we argue that the achievement of outcomes and impacts, particularly in complex contexts, requires adaptive management and acknowledgment that significant positive outcomes and impacts may occur after the project funding cycle is complete. The paper presents a practitioner-developed approach to scaling AR4D innovations called Impact Tracking (IT). We illustrate IT in practice by presenting three case studies from Ethiopia in which IT proved crucial to achieving impact. The paper concludes by drawing lessons from the case studies and discussing what implications IT may have for development practitioners.
Experiment Findings
Full-text available
See methods section of uploaded evaluation report
Article
Full-text available
Interdisciplinary scholarly literature considers how research processes may adversely affect their participants. Building on this work, this article addresses the processes and practices of applied research in contexts in which imbalances of power exist between researchers and those being researched. We argue that research activities in international development and humanitarian work that are typically operational, such as needs assessments, baseline studies, and monitoring and evaluation, represent interventions in the lives of participants, with the potential to create value or harm, delight or distress. The ethical and methodological dilemmas of this intervention have received less attention than purely academic discussions of human subject research. How can applied researchers meaningfully reckon with the effects of the research process on both those conducting it and those participating in it throughout the research cycle? In response, we introduce an approach co-developed over seven years through engagement with applied researchers across sectors. We discuss four interrelated principles—relevance, respect, right-sizing, and rigor—intended to invite a commitment to ongoing process improvement in the conduct of applied research. We also propose a framework to guide the implementation of these principles and illustrate the tensions that may arise in the process of its application. These contributions extend conversations about research ethics and methods to the operational research realm, as well as provide concrete tools for reflecting on the processes of operational research as sites of power that ought to be considered as seriously as the findings of data collection activities.
Article
Full-text available
Evaluating within complex systems is challenging because of how complexity affects the identification and observation of outcomes. U.S. Department of Defense (U.S. DoD) capacity building global health engagements are often difficult to measure due to the conflation of levels of analysis and confounding variables, hindering the explanation of change effects. This article will illustrate two case examples where a boundary-driven systems framework was utilized to integrate systems thinking into U.S. DoD capacity building programs and associated evaluations. The findings from the first case led to developing a theory of change that was later tested and refined in the second case to establish the multilevel system (MLS) concept model. Based on these findings, the four distinct system boundaries and subcomponents of the MLS concept model were refined to include changes within the organizational system. The development of the MLS model allowed for the explicit framing of efforts, measurement and analysis, and the alignment of program activities and observed outcomes; while still allowing for the illumination of emergent change effects in a complex system.
Technical Report
Full-text available
This report presents the findings of a study on the effect of Plantwise on the performance and responsiveness of the plant health system (PHS) in Nepal. In September 2017 a one-day workshop with PHS stakeholders was followed by a two-week period of interviews with PHS stakeholders as well as farmers, in six different districts in the Central, Western and Mid-western Regions of Nepal. The qualitative data collected were used to explore changes in the PHS since the start of the Plantwise programme in 2011, and the underlying drivers of those changes, including the effects of Plantwise. The PHS functions are defined as: 1. Farmer advisory services; 2. Plant health information management; 3. Diagnostic services; 4. Research and technology development; 5. Input supply; and 6. Policy, regulation and control. This report is structured according to these functions.
Technical Report
Full-text available
This report presents the findings of a qualitative study on the effect of Plantwise on the plant health system (PHS) in Ethiopia. In August 2017, stakeholders and farmers in the regions of Oromia and Tigray were interviewed on the major changes that occurred in the PHS in recent years. These stakeholders had been involved in, or benefited from, Plantwise activities. The qualitative data were used to explore changes observed in the different functions of the plant health system (PHS) since the start of the Plantwise programme in Ethiopia in 2013, and the underlying drivers of change. The PHS functions are defined as: 1. Farmer advisory services; 2. Plant health information management; 3. Diagnostic services; 4. Research and technology development; 5. Input supply; and 6. Policy, regulation and control. The effects of the changes on the PHS performance and responsiveness were assessed through the following indicators: timeliness, availability, affordability, acceptability, coherence and reach.
Article
Full-text available
Community environmental management (CEM) involves the facilitation of community partnerships, local dialogue, consultation and participative decision-making. This is increasingly seen as a solution to some of the more complex environmental issues faced by regulatory authorities. Anecdotal evidence suggests that CEM programmes have much potential, but the evaluation of them is problematic. This paper reports on the development of a new CEM evaluation approach (inspired by soft systems methodology, developmental work research and systemic intervention), which was trialled with a New Zealand regional council. The approach shows promise in addressing common evaluation bottlenecks and helping stakeholders to develop causal narratives that more fully account for the complex relationship between community participation and environmental outcomes. While the local participants in the CEM initiative acted on the evaluation findings, they hoped that it would stimulate wider organisational change, and this did not happen. Project reflections, informed by institutional theory, reveal that the logics of ‘participation’ and ‘community’ implicit in the findings were appropriate for local participants, but non-participating regional council stakeholders read the findings with different logics, and therefore the evaluation failed to communicate the necessity for wider change. The reflections highlight a previously unrecognised evaluation bottleneck. While the CEM evaluation methodology has the potential to be adapted for other contexts, there is a need for more robust evidence of the value of CEM. However, if wider organisational change is required, care must be taken to anticipate the different institutional logics of stakeholders who might be unfamiliar with, or even hostile to, CEM.
Article
High-quality education is essential to produce competent graduates in the field of dietetics. Assessment is a fundamental component of education and driver of learning, yet little is known about methods used to assess dietetics trainees. The objective of this review is to evaluate the practices and outcomes of methods used to assess dietetics trainees. A systematic review of the literature was undertaken. MEDLINE, the Cumulative Index to Nursing and Allied Health Literature Plus, Embase, and the Education Resources Information Center databases were searched from inception until May 31, 2017, using key terms that identified studies reporting practices for the assessment of dietetics trainees. Abstract and title screening was completed by three independent reviewers followed by full-text screening using the eligibility criteria. Quantitative and qualitative data were extracted. Study outcomes were evaluated using Miller's Pyramid, Kirkpatrick's Hierarchy, and the principles of programmatic assessment. Thirty-seven studies were identified. Assessments targeted all levels of Miller's Pyramid with the does level being the most prevalent (n=23). Most studies focussed on evaluating Level 1 (participation) (n=16) and Level 2b (n=16) (knowledge and skills) of Kirkpatrick's Hierarchy. Studies described single assessment instruments that focussed on instrument validity and reliability. Few studies considered a program of assessment or the role of expert judgment. Six themes were identified from qualitative data: (1) assessment for learning and professional development, (2) assessment requires motivated and skilled assessors, (3) trainees value authentic and global assessment, (4) assessment is evolving and context-sensitive, (5) poor assessment has negative implications, and (6) assessment evokes an emotional response. Studies focused on the development and evaluation of single quantitative-based instruments applied in isolation, with low-level outcomes sought. 
There is room to improve practices and design programs of assessment that combine quantitative and qualitative data for meaningful trainee feedback and credible assessment decisions. Comprehensive evaluation of assessment practices is required and must consider the contribution to improved health outcomes in all practice settings.
Chapter
All research for development programs wish to achieve impact, but understanding how to plan for and document this has been challenging. One of the newest and most popular approaches is the use of theories of change (ToC). This paper looks at how ToCs can be used in agricultural research for development (AR4D) programs. ToCs have been widely used in evaluation of development programs. In this chapter, we will describe their use in international AR4D and in CGIAR.
Article
Full-text available
The Challenge Program on Water and Food pursues food security and poverty alleviation through the efforts of some 50 research-for-development projects. These involve almost 200 organizations working in nine river basins around the world. An approach was developed to enhance the developmental impact of the program through better impact assessment, to provide a framework for monitoring and evaluation, to permit stakeholders to derive strategic and programmatic lessons for future initiatives, and to provide information that can be used to inform public awareness efforts. The approach makes explicit a project's program theory by describing its impact pathways in terms of a logic model and network maps. A narrative combines the logic model and the network maps into a single explanatory account and adds to overall plausibility by explaining the steps in the logic model and the key risks and assumptions. Participatory Impact Pathways Analysis is based on concepts related to program theory drawn from the fields of evaluation, organizational learning, and social network analysis.
Research
Full-text available
by Rick Davies and Jess Dart, 2005
Technical Report
Full-text available
Natural resource management research (NRMR) has a key role in improving food security and reducing poverty and malnutrition in environmentally sustainable ways, especially in rural communities in the developing world. Demonstrating this through impact evaluation poses distinct challenges. This report sets out ways in which these challenges can be met. NRMR combines technological innovation with real-world changes in agricultural practice that involve many stakeholders at farm, community, scientific and policymaking levels. These programs generally seek to integrate multiple inputs or interventions—scientific, institutional, human and environmental; engage participatively with beneficiaries and other implicated parties; and mobilise stakeholders, both to support innovative programs and to carry lessons learned into the future. Simple attribution of productivity and socioeconomic outcomes to NRMR interventions is difficult when NRMR itself is a ‘package’ of different actions adapted to diverse settings by farmers and other stakeholders, often over extended periods. This report outlines impact evaluation strategies that accept that NRMR is likely to be a ‘contributory cause’ rather than the sole cause of program results. It builds on recent reports that demonstrate that, in many development settings, impact evaluation should be seen as contributing to an adaptive learning process that supports the successful implementation of innovative programs. Change is nearly always the result of a ‘causal package’ and for an NRMR intervention to make a contribution it must be a necessary part of the package. This contrasts with an ‘impact assessment’ perspective that is mainly concerned with forms of accountability that measure and attribute impacts to particular programs or interventions. 
Starting from a learning perspective, impact evaluation still addresses accountability by demonstrating that NRMR programs make a difference by contributing to outcomes and impacts, and improve performance through continuous learning. The proposed evaluation strategy pays special attention to the causal links between NRMR programs and intended outcomes. As these programs are expected to produce generalised answers that can be replicated and scaled up to tackle global problems, evaluation also has to be able to explain why and under what circumstances programs are effective. This is why the proposed evaluation strategy includes approaches to explanation, and why theories of change are an essential part of the proposed approach. A theory of change both helps to unpick the assumptions about how programs bring about change and takes into account the way programs are implemented. Such a theory-based approach also allows programs to be tested against what is known from wider research literatures and, at the same time, allows evaluation results to contribute to these literatures. Against this background, an overarching evaluation framework is put forward that aims to answer impact evaluation questions by selecting appropriate evaluation designs that take into account NRMR program ‘attributes’ or characteristics. The report argues that, in a complex program setting, an evaluation must begin with appropriate evaluation questions that interest policymakers, donors and other stakeholders. Key evaluation questions should be about what difference the program is making (i.e. the contribution being made), about understanding the progress being made and why results are occurring, and about the learning that is taking place. 
This is distinguishable from the kinds of evaluation questions that are appropriate for more straightforward interventions such as: ‘Did our program cause the intended change?’ The evaluation questions to be considered are broader than those dealing solely with causality, and include questions of rationale and implementation, and of measuring results, in terms of both their sustainability and transferability. The report suggests a framework for defining evaluation questions that takes account of both the outcomes and processes of change, and tries to explain how change occurs in different settings and can be generalised or scaled up. A broad range of different evaluation designs and methods is considered, including theory-based, case-based and participatory approaches. However, although not specifically discussed in this report, more traditional approaches such as experimental and statistical methods are not dismissed—they will often be valuable as part of an overall ‘nested’ evaluation strategy. The attributes of NRMR programs also pose evaluation challenges and have consequences for impact evaluation design. These challenges and consequences are reviewed. For example, multi-stakeholder programs require methods capable of assessing collective action, and time-extended programs require iterative and longitudinal methods. The approaches laid out in the report have been ‘walked through’ and refined in relation to several specific programs including: the CGIAR Research Program on Aquatic Agricultural Systems, the CGIAR Challenge Program on Water and Food’s Ganges Basin Development Challenge, and the CSIRO–AusAID African Food Security Initiative. The report proposes a ‘general evaluation framework’ that would allow the evaluation design principles outlined to be turned into an overall operational plan, and suggests what activities are necessary to put together such a plan.
It concludes with summary recommendations, appendixes giving sample evaluation questions and an example of a mixed methods statistical design evaluation, and details of literature cited.
Article
Full-text available
Within development cooperation, development issues are increasingly recognized as complex problems requiring new paths towards solving them. In addition to the commonly used two dimensions of complex problems (uncertainty and disagreement), we introduce a third dimension: systemic stability; that is, stability provided by rules, relations and complementary technology. This article reflects on how development evaluation methodologies and especially those introducing a complexity perspective address these three dimensions. Inferring that this third dimension deserves more attention, we explore the characteristics of reflexive evaluation approaches that challenge systemic stability and support processes of learning and institutional change. We conclude that reflexive evaluation approaches may well complement current system approaches in development evaluation practice.
Article
Full-text available
Theory-based evaluations have helped open the ‘black box’ of programmes. An account is offered of the evolution of this persuasion, through the works of Chen and Rossi, Weiss, and Pawson and Tilley. In the same way as the ‘theory of change’ approach to evaluation has tackled the complexity of integrated and comprehensive programmes at the community level, it is suggested that a theory-oriented approach based on the practice of realistic cumulation be developed for dealing with the vertical complexity of multi-level governance.
Article
Full-text available
The Most Significant Change (MSC) technique is a dialogical, story-based technique. Its primary purpose is to facilitate program improvement by focusing the direction of work towards explicitly valued directions and away from less valued directions. MSC can also make a contribution to summative evaluation through both its process and its outputs. The technique involves a form of continuous values inquiry whereby designated groups of stakeholders search for significant program outcomes and then deliberate on the value of these outcomes in a systematic and transparent manner. To date, MSC has largely been used for the evaluation of international development programs, after having been initially developed for the evaluation of a social development program in Bangladesh (Davies, 1996). This article provides an introduction to MSC and discusses its potential to add to the basket of choices for evaluating programs in developed economies. We provide an Australian case study and outline some of the strengths and weaknesses of the technique. We conclude that MSC can make an important contribution to evaluation practice. Its unusual methodology and outcomes make it ideal for use in combination with other techniques and approaches.
Article
Full-text available
A central issue in the use of rapid evaluation and assessment methods (REAM) is achieving a balance between speed and trustworthiness. In this article, the authors review the key differences and common features of this family of methods and present a case example that illustrates how evaluators can use rapid evaluation techniques in their own work. In doing so, the authors hope to (a) introduce readers to a family of techniques with which they may be unfamiliar, (b) highlight their strengths and limitations, and (c) suggest appropriate contexts for use. Ultimately, the authors hope that REAM becomes a valuable addition to evaluators' toolkits.
Chapter
Full-text available
Over the years, there has been an evolution of systemic thinking in agricultural innovation studies, culminating in the agricultural innovation systems perspective. In an attempt to synthesize and organize the existing literature, this chapter reviews the literature on agricultural innovation, with the threefold goal of (1) sketching the evolution of systemic approaches to agricultural innovation and unravelling the different interpretations; (2) assessing key factors for innovation system performance and demonstrating the use of system thinking in the facilitation of processes of agricultural innovation by means of innovation brokers and reflexive process monitoring; and (3) formulating an agenda for future research. The main conclusion is that the agricultural innovation systems perspective provides a comprehensive view on actors and factors that co-determine innovation, and in this sense allows understanding the complexity of agricultural innovation. However, its holism is also a pitfall as it allows for many interpretations, which complicates a clear focus of this research field and the building of cumulative evidence. Hence, more work needs to be done conceptually and empirically.
Article
Full-text available
This article proposes ways to use programme theory for evaluating aspects of programmes that are complicated or complex. It argues that there are useful distinctions to be drawn between aspects that are complicated and those that are complex, and provides examples of programme theory evaluations that have usefully represented and addressed both of these. While complexity has been defined in varied ways in previous discussions of evaluation theory and practice, this article draws on Glouberman and Zimmerman's conceptualization of the differences between what is complicated (multiple components) and what is complex (emergent). Complicated programme theory may be used to represent interventions with multiple components, multiple agencies, multiple simultaneous causal strands and/or multiple alternative causal strands. Complex programme theory may be used to represent recursive causality (with reinforcing loops), disproportionate relationships (where at critical levels, a small change can make a big difference, a ‘tipping point’) and emergent outcomes.
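The disproportionate, ‘tipping point’ behaviour that the article associates with complex programme theory can be made concrete with a toy model. The sketch below is illustrative only and not from the article; all parameter names and values are invented. It simulates a reinforcing adoption loop in which adopters recruit others while some drop out: below a critical spread rate adoption fizzles out, while a modest increase in the same parameter tips the system to a high stable level.

```python
# Toy model (illustrative only, invented parameters): a reinforcing adoption
# loop. Adopters recruit non-adopters at `spread_rate`; a fixed fraction
# drops out each step. The state is clamped to the [0, 1] interval.
def final_adoption(spread_rate, dropout=0.1, initial=0.01, steps=200):
    """Iterate a simple recursive adoption model and return its end state."""
    adopters = initial
    for _ in range(steps):
        adopters += spread_rate * adopters * (1 - adopters) - dropout * adopters
        adopters = max(0.0, min(1.0, adopters))
    return adopters

# A small change in one parameter produces a disproportionate outcome:
print(round(final_adoption(0.05), 2))  # below the tipping point: 0.0
print(round(final_adoption(0.50), 2))  # above it: settles near 0.8
```

The qualitative switch happens because the nonzero equilibrium 1 − dropout/spread_rate only exists once the spread rate exceeds the dropout rate, which is the kind of threshold effect a linear programme theory cannot represent.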
Article
Full-text available
Networks aiming for fundamental changes bring together a variety of actors who are part and parcel of a problematic context. These system innovation projects need to be accompanied by a monitoring and evaluation approach that supports and maintains reflexivity to be able to deal with uncertainties and conflicts while challenging current practices and related institutions. This article reports on experiences with reflexive process monitoring (RPM), an approach that has been applied in several networks in the Dutch agricultural sector, which strive for sustainable development. Particular attention is paid to conducting system analyses, a core element of the methodology. The first results show that system analyses indeed have the potential to enhance reflexivity if carried out collectively. However, regular patterns of thinking and acting within projects interfere in subtle ways with the new knowledge generated and limit the transformation of the reflexive feedback and insights into action.
Chapter
IWRM requires creative tools such as systems thinking to size up complex situations. Three types of systems need attention: the systems to be managed, the management system to apply, and the interrelationships among systems. The body of knowledge of systems thinking is shared among disciplines, and its common core involves models of systems, their behaviors, and their interactions with other systems. Water problems can be conceptualized as social-technical systems. For IWRM, useful tools include systems identification, system diagrams, process mapping, modeling, and case studies.
Article
There is growing interest in the concept of ‘mechanism’ across many areas of the social sciences. In the field of program and policy evaluation, a number of scholars have also emphasized the importance of causal mechanisms for explaining how and why programs work. However, there appears to be some ambiguity about the meaning and uses of mechanism-based thinking in both the social science and evaluation literature. In this article we attempt to clarify what is meant by mechanisms in the context of program evaluation by identifying three main characteristics of mechanisms and outlining a possible typology of mechanisms. A number of theoretical and practical implications for evaluators are also discussed, along with some precautions to consider when investigating mechanisms that might plausibly account for program outcomes.
Article
The unsustainability of the present trajectories of technical change in sectors such as transport and agriculture is widely recognized. It is far from clear, however, how a transition to more sustainable modes of development may be achieved. Sustainable technologies that fulfil important user requirements in terms of performance and price are most often not available on the market. Ideas of what might be more sustainable technologies exist, but the long development times, uncertainty about market demand and social gains, and the need for change at different levels in organization, technology, infrastructure and the wider social and institutional context provide a great barrier. This raises the question of how the potential of more sustainable technologies and modes of development may be exploited. In this article we describe how technical change is locked into dominant technological regimes, and present a perspective, called strategic niche management, on how to expedite a transition into a new regime. The perspective consists of the creation and/or management of niches for promising technologies.
Book
This book is about ways of dealing with uncertainty in the management of renewable resources, such as fisheries and wildlife. The author's basic theme is that management should be viewed as an adaptive process: one learns about the potentials of natural populations to sustain harvesting mainly through experience with management itself, rather than through basic research or the development of general ecological theory. The need for an adaptive view of management has become increasingly obvious over the last two decades, as management has turned more often to quantitative model building as a tool for prediction of responses to alternative harvesting policies. The model building has not been particularly successful, and it keeps drawing attention to key uncertainties that are not being resolved through normal techniques of scientific investigation. The author's major conclusion is that actively adaptive, probing, deliberately experimental policies should indeed be a basic part of renewable resource management.
Article
This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.
Article
Impact assessment is important to agricultural research. It quantifies benefits from both proposed and past research, and allows comparisons of the cost effectiveness of different research investments as a basis for priority setting. Though there are important economies of scale in the global and regional level research done by the international centres of the Consultative Group for International Agricultural Research (CGIAR), this research covers only one small sector in the sequence of research and development reaching down into farmers' fields. CGIAR impact thus depends on the rest of the research and development (R&D) sequence operating effectively, something largely beyond its control. In deciding a strategy for impact assessment, the international centres need to resolve two dilemmas: first, how much impact assessment they should do, for their own programme planning and monitoring, and to satisfy their stakeholder constituencies; and secondly, how sophisticated this assessment should be. Impact measurement is itself research and, like all research, is very much concerned with how much is gained from the extra costs of collecting further information.
Article
Climate change and variability present new challenges for agriculture, particularly for smallholder farmers who continue to be the mainstay of food production in developing countries. Recent global food crises have exposed the structural vulnerability of globalized agri-food systems, highlighting climate change as just one of a complex set of environmental, demographic, social and economic drivers generating instability and food insecurity, the impacts of which disproportionately affect poorer groups in marginal environments. Rather than search for single causes, there is a need to understand these changes at a systemic level. Improved understanding of and engagement with the adaptive strategies and innovations of communities living in conditions of rapid change provides an appropriate starting point for those seeking to shape agricultural innovation systems responsive to food insecurity and climate change. This paper draws lessons from selected country experiences of adaptation and innovation in pursuit of food security goals. It reviews three cases of systems of innovation operating in contrasting regional, socio-economic and agro-ecological contexts, in terms of four features of innovation systems more likely to build, sustain or enhance food security in situations of rapid change: (i) recognition of the multifunctionality of agriculture and opportunities to realize multiple benefits; (ii) access to diversity as the basis for flexibility and resilience; (iii) concern for enhancing capacity of decision makers at all levels; and (iv) continuity of effort aimed at securing the well-being of those who depend on agriculture. Finally, implications for policymakers and other stakeholders in agricultural innovation systems are presented.
Book
The overall goal of the CGIAR Research Program on Aquatic Agricultural Systems is to improve the well-being of aquatic agricultural system-dependent peoples. The Program will focus initially on three aquatic agricultural systems: (i) Asia's mega deltas, targeting Bangladesh and Cambodia; (ii) Asia-Pacific islands, targeting the Philippines and Solomon Islands; and (iii) African freshwater systems, targeting first Zambia, then Uganda and Mali.
Article
Agricultural development is fundamentally a social process in which people construct solutions to their problems, often by modifying both new technologies and their own production systems to take advantage of new opportunities offered by the technologies. Hence, agricultural change is an immensely complex process, with a high degree of non-linearity. However, current ‘best practice’ economic evaluation methods commonly used in the CGIAR system ignore complexity. In this paper we develop a two-stage monitoring, evaluation and impact assessment approach called impact pathway evaluation. This approach is based on program-theory evaluation from the field of evaluation, and the experience of the German development organization GTZ (Deutsche Gesellschaft für Technische Zusammenarbeit GmbH). In the first stage of this approach, a research project develops an impact pathway for itself, which is an explicit theory or model of how the project sees itself achieving impact. The project then uses the impact pathway to guide project management in complex environments. The impact pathway may evolve, based on learning over time. The second stage is an ex post impact assessment sometime after the project has finished, in which the project's wider benefits are independently assessed. The evaluator seeks to establish plausible links between the project outputs and developmental changes, such as poverty alleviation. We illustrate the usefulness of impact pathway evaluation through examples from Nigeria and Indonesia.
Article
Some current research and theory on organizational decision making from the political science literature is examined, in which the potential role of learning and feedback in the decision-making process is largely ignored. An espoused theory of action based on single-loop learning is found to be the most general model of action. A double-loop model is proposed as providing feedback and more effective decision making.
Article
Consider the qualitative approach to evaluation design (as opposed to measurement) to be typified by a case study with a sample of just one. Although there have certainly been elaborate and emphatic defenses of the qualitative approach to program evaluation, such defenses rarely attempt to qualify the approach explicitly and rigorously as a method of impact analysis. The present paper makes that attempt. The problem with seeking to advance a qualitative method of impact analysis is that impact is a matter of causation, and a non-quantitative approach to design is apparently not well suited to the task of establishing causal relations. The root of the difficulty is located in the counterfactual definition of causality, which is our only broadly accepted formal definition of causality for social science. It is not, however, the only definition we use informally. Another definition, labeled "physical causality," is widely used in practice and has recently been formalized. Physical causality can be applied to the present problem. For example, it explains the persuasiveness of Scriven's "Modus Operandi" approach and makes a tailored case study design with a sample size of one, in principle, as strong a basis for inferences about program impact as a randomized experiment. Crucial to this argument is the finding that people's "operative reasons" for doing what they do function as physical causes of their actions. Finally, it is shown that external validity under this qualitative approach would have exceptional strengths.
Outcome Evidencing Report
AAS (2014). Outcome Evidencing Report 2014. VisMin Hub, Philippines. Accessed May 2016 from https://goo.gl/vVwXC6
Impact evaluation of natural resource management research programs: a broader view
  • J Mayne
  • E Stern
Mayne, J., & Stern, E. (2013). Impact evaluation of natural resource management research programs: A broader view. ACIAR Impact Assessment Series Report No. 84. Canberra, Australia: Australian Centre for International Agricultural Research. 79 pp.
How to combine multiple research options: Practical Triangulation
  • P Kennedy
Kennedy, P. (2009). How to combine multiple research options: Practical Triangulation. Accessed
Informed by knowledge: Expert performance in complex situations
  • D Snowden
Snowden, D. (2010). Naturalizing sensemaking. Informed by knowledge: Expert performance in complex situations, 223-234.
Complexity-aware monitoring. Discussion Note, Monitoring and Evaluation Series
  • H Britt
  • M Patsalides
Britt, H., & Patsalides, M. (2013). Complexity-aware monitoring. Discussion Note, Monitoring and Evaluation Series. Washington, DC: USAID, December.
Adaptive management of renewable resources
  • C Walters
Walters, C. (1986). Adaptive management of renewable resources. New York: Macmillan.
Realist Evaluation: An introduction
  • G Westhorp
Westhorp, G. (2014). Realist Evaluation: An introduction. Methods Lab-ODI.
Outcome Harvesting
  • R Wilson-Grau
  • H Britt
Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo, Egypt: Ford Foundation. Retrieved October 20, 2012, from http://www.outcomemapping.ca/resource/outcome-harvesting
Monitoring and Evaluation Strategy Brief. CGIAR Research Program on Aquatic Agricultural Systems
  • B Douthwaite
  • M Apgar
  • C Crissman
Douthwaite, B., Apgar, M., & Crissman, C. (2014a). Monitoring and Evaluation Strategy Brief. CGIAR Research Program on Aquatic Agricultural Systems. Penang, Malaysia. Program Brief: AAS-2014-04.
The Science of Evaluation
  • R Pawson
Pawson, R. (2013). The Science of Evaluation. SAGE Publications. Kindle Edition.
The logic of scientific discovery
  • K Popper
Popper, K. (1992). The logic of scientific discovery. Routledge.
Diffusion of innovations
  • E M Rogers
Rogers, E. M. (2010). Diffusion of innovations. Simon and Schuster.
Addressing attribution of cause and effect in small n impact evaluations: Towards an integrated framework (Working Paper 15)
  • H White
  • D Phillips
White, H., & Phillips, D. (2012). Addressing attribution of cause and effect in small n impact evaluations: Towards an integrated framework (Working Paper 15). New Delhi, India: International Initiative for Impact Evaluation.
The Systems Thinking Tool Box
  • S Burge
Burge, S. (2013). The Systems Thinking Tool Box. Accessed 1 May 2016 from http://www.burgehugheswalsh.co.uk/Uploaded/1/Documents/Multiple-Cause-Diagram-Tool-Box-v1.pdf
Impact evaluation: A guide for commissioners and managers
  • E Stern
Stern, E. (2015). Impact evaluation: A guide for commissioners and managers. Bond. Accessed October 2015 from https://www.bond.org.uk/data/files/Impact_Evaluation_Guide_0515.pdf
Enhancing the reflexivity of system innovation projects with system analyses
  • B van Mierlo
  • M Arkesteijn
  • C Leeuwis
van Mierlo, B., Arkesteijn, M., & Leeuwis, C. (2010). Enhancing the reflexivity of system innovation projects with system analyses. American Journal of Evaluation, 31(2), 143-161.
CGIAR Consortium headquarters in Montpellier
  • Agropolis International
Agropolis International (2015) CGIAR Consortium headquarters in Montpellier. Available at: www.agropolis.org/cooperation/headquarters-cgiar-consortium.php (accessed 11 September 2015).
Research in development: The Approach of AAS. CGIAR Research Program on Aquatic Agricultural Systems. Penang, Malaysia (Working Paper)
  • P Dugan
  • M Apgar
  • B Douthwaite
Dugan, P., Apgar, M., & Douthwaite, B. (2013). Research in development: The Approach of AAS. CGIAR Research Program on Aquatic Agricultural Systems. Penang, Malaysia (Working Paper). Retrieved October 3, 2016, from https://goo.gl/a99Zgk