Natural resource management research (NRMR) has
a key role in improving food security and reducing poverty and malnutrition in environmentally sustainable ways, especially in rural communities in the developing world. Demonstrating this through impact evaluation poses distinct challenges. This report sets out ways in which these challenges can be met.
NRMR combines technological innovation with real-world changes in agricultural practice that involve many stakeholders at farm, community, scientific and policymaking levels. These programs generally seek to integrate multiple inputs or interventions—scientific, institutional, human and environmental; engage participatively with beneficiaries and other implicated parties; and mobilise stakeholders, both to support innovative programs and to carry lessons learned into the future.
Simple attribution of productivity and socioeconomic outcomes to NRMR interventions is difficult when NRMR itself is a ‘package’ of different actions adapted to diverse settings by farmers and other stakeholders, often over extended periods.
This report outlines impact evaluation strategies that accept that NRMR is likely to be a ‘contributory cause’ rather than the sole cause of program results. It builds on recent reports demonstrating that, in many development settings, impact evaluation should be seen as contributing to an adaptive learning process that supports the successful implementation of innovative programs. Change is nearly always the result of a ‘causal package’, and for an NRMR intervention to make a contribution it must be a necessary part of that package. This contrasts with an ‘impact assessment’ perspective that is mainly concerned with forms of accountability that measure and attribute impacts to particular programs or interventions. Starting from a learning perspective, impact evaluation still addresses accountability: it demonstrates that NRMR programs make a difference by contributing to outcomes and impacts, and it improves performance through continuous learning.
The proposed evaluation strategy pays special attention to the causal links between NRMR programs and intended outcomes. As these programs are expected
to produce generalised answers that can be replicated and scaled up to tackle global problems, evaluation
also has to be able to explain why and under what circumstances programs are effective. This is why the proposed evaluation strategy includes approaches to explanation, and why theories of change are an essential part of the proposed approach. A theory of change both helps to unpick the assumptions about how programs bring about change and takes into account the way programs are implemented. Such a theory-based approach also allows programs to be tested against what is known from wider research literatures and, at the same time, allows evaluation results to contribute to these literatures.
Against this background, an overarching evaluation framework is put forward that aims to answer impact evaluation questions by selecting appropriate evaluation designs that take into account NRMR program ‘attributes’ or characteristics.
The report argues that, in a complex program setting, an evaluation must begin with appropriate evaluation questions that interest policymakers, donors and other stakeholders. Key evaluation questions should concern what difference the program is making (i.e. the contribution being made), the progress being made and why results are occurring, and the learning that is taking place. This differs from the kinds of evaluation questions appropriate for more straightforward interventions, such as ‘Did our program cause the intended change?’ The evaluation questions to be considered are broader than those dealing solely with causality, and include questions of rationale and implementation, and of measuring results, in terms of both their sustainability and transferability.
The report suggests a framework for defining evaluation questions that takes account of both the outcomes and processes of change, and tries to explain how change occurs in different settings and can be generalised or scaled up.
A broad range of different evaluation designs and methods is considered, including theory-based, case-based and participatory approaches. However, although not specifically discussed in this report, more traditional approaches such as experimental and statistical methods are not dismissed—they will often be valuable as part of an overall ‘nested’ evaluation strategy.
The attributes of NRMR programs also pose evaluation challenges and have consequences for impact evaluation design. These challenges and consequences are reviewed. For example, multi-stakeholder programs require methods capable of assessing collective action, and time-extended programs require iterative and longitudinal methods.
The approaches laid out in the report have been ‘walked through’ and refined in relation to several specific programs including: the CGIAR Research Program on Aquatic Agricultural Systems, the CGIAR Challenge Program on Water and Food’s Ganges Basin Development Challenge, and the CSIRO–AusAID African Food Security Initiative.
The report proposes a ‘general evaluation framework’ that would allow the evaluation design principles outlined to be turned into an overall operational plan, and suggests what activities are necessary to put together such a plan.
It concludes with summary recommendations, appendices giving sample evaluation questions and an example of a mixed-methods statistical evaluation design, and details of the literature cited.