Questions of cause and effect are critical to assessing the performance of programmes and projects. When it is not
practical to design an experiment to assess performance, contribution analysis can provide credible assessments of
cause and effect. Verifying the theory of change on which the programme is based, and paying attention to other
factors that may influence the outcomes, provides reasonable evidence about the contribution being made by the programme.
A key question in the assessment of programmes and projects
is that of attribution: to what extent are observed results due
to programme activities rather than other factors? What we
want to know is whether or not the programme has made a
difference—whether or not it has added value. Experimental or
quasi-experimental designs that might answer these questions
are often not feasible or not practical. In such cases,
contribution analysis can help managers come to reasonably
robust conclusions about the contribution being made by
programmes to observed results.
Contribution analysis explores attribution through
assessing the contribution a programme is making to observed
results. It sets out to verify the theory of change behind a
programme and, at the same time, takes into consideration
other influencing factors. Causality is inferred from the following evidence:
1. The programme is based on a reasoned theory of
change: the assumptions behind why the programme is
expected to work are sound, plausible, and
agreed upon by at least some of the key players.
2. The activities of the programme were implemented.
3. The theory of change is verified by evidence: the chain
of expected results occurred.
4. Other factors influencing the programme were assessed
and were either shown not to have made a significant
contribution or, if they did, their relative contribution was recognised.
Contribution analysis is useful in situations where the
programme is not experimental—there is little or no scope for
varying how the programme is implemented—and the
programme has been funded on the basis of a theory of
change. Many managers and evaluators assessing the
performance of programmes face this situation. Kotvojs (2006)
describes one way of using contribution analysis in a
development context, "as a means to consider progress
towards outputs and intermediate and end outcomes" (p. 1).
Conducting a contribution analysis
There are six iterative steps in contribution analysis (Box 1),
each step building the contribution story and addressing
weaknesses identified in the previous stage. If appropriate,
many of the steps can be undertaken in a participatory mode.
Step 1: Set out the attribution problem to be addressed
Too often the question of
attribution is ignored in programme evaluations. Observed
results are reported with no discussion as to whether they
were the result of the programme's activities. At the outset, it
should be acknowledged that there are legitimate questions
about the extent to which the programme has brought about
the results observed.
A variety of questions about causes and effects can
be asked about most programmes. These range from traditional
causality questions, such as
To what extent has the programme caused the outcome?
to more managerial questions, such as
Is it reasonable to conclude that the programme has made
a difference to the problem?
Care is needed to determine the relevant cause–effect question
in any specific context, and whether or not the question is
reasonable. In many cases the traditional causality question
may be impossible to answer, or the answer may simply lack
any real meaning given the numerous factors influencing a
result. However, managerial-type cause–effect questions are
generally amenable to contribution analysis.
The level of
proof required needs to be determined. Issues that need to be
considered are, for example: What is to be done with the
findings? What kinds of decisions will be based on the
findings? The evidence sought needs to fit the purpose.
It is worth
exploring the nature and extent of the contribution expected
from the programme. This means asking questions such as:
What do we know about the nature and extent of the
contribution expected? What would show that the programme
made an important contribution? What would show that the
programme 'made a difference'? What kind of evidence would
we (or the funders or other stakeholders) accept?
In addition to determining the nature of the expected contribution from the
programme, the other factors that will influence the outcomes
will also need to be identified and explored, and their likely influence assessed.
ILAC Brief 16 May 2008
Box 1. Contribution Analysis
1. Set out the attribution problem to be addressed
2. Develop the theory of change and the risks to it
3. Gather the existing evidence on the theory of change
4. Assemble and assess the contribution story, and challenges to it
5. Seek out additional evidence
6. Revise and strengthen the contribution story
Is the expected contribution of the programme
plausible? Assessing this means asking questions such as: Is the
problem being addressed well understood? Are there baseline data?
Given the size of the programme intervention, the magnitude and
nature of the problem and the other influencing factors, is an important
contribution by the programme really likely? If a significant contribution
by the programme is not plausible, the value of further work on causes
and effects needs to be reassessed.
Step 2: Develop the theory of change and the risks to it
The key tools of contribution
analysis are theories of change and results chains. With these tools the
contribution story can be built. Theories of change (Weiss, 1997)
explain how the programme is expected to bring about the desired
results—the outputs, and subsequent chain of outcomes and impacts
(impact pathways of Douthwaite et al., 2007). In development aid, a
logframe is often used to set out funders' and/or managers' expectations
as to what will happen as the programme is implemented. The theory
of change, as well as simply identifying the steps in the results chain,
should identify the assumptions behind the various links in the chain
and the risks to those assumptions. One way of representing a theory
of change including its assumptions and risks is shown in Figure 1.
Results chains/theories of change can be shown at almost any level of detail.
Contribution analysis needs reasonably straightforward, not overly
detailed logic, especially at the outset. Refinements may be needed but
can be added later.
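As a purely illustrative sketch (not part of the brief's method; the class and field names are hypothetical), a results chain with the assumptions and risks attached to each link might be represented like this:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """One link in a results chain, with the reasoning behind it."""
    from_result: str
    to_result: str
    assumptions: list   # why from_result is expected to lead to to_result
    risks: list         # what could break that expectation

@dataclass
class TheoryOfChange:
    """An ordered results chain plus the links between its levels."""
    results_chain: list
    links: list = field(default_factory=list)

# A fragment of the PM&E example in Figure 1 (wording abridged)
toc = TheoryOfChange(results_chain=[
    "Training and workshops",
    "AROs try enhanced PM&E approaches",
    "Strengthened management",
])
toc.links.append(Link(
    from_result="Training and workshops",
    to_result="AROs try enhanced PM&E approaches",
    assumptions=["Intended target audience received the outputs"],
    risks=["Intended reach not met",
           "Training not convincing enough for AROs to invest"],
))
```

Keeping the chain, the assumptions and the risks in one structure reflects the point made above: the theory of change is more than the boxes in the results chain.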
Making statements about the contribution of programmes to outputs is quite
straightforward, but it is considerably more challenging to make
statements about the contribution that programmes make to final
outcomes (impacts). Three 'circles of influence' (Montague et al., 2003)
are useful here:
- direct control—where the programme has fairly direct control of
the results, typically at the output level;
- direct influence—where the programme has a direct influence on
the expected results, such as the reactions and behaviours of its
clients through direct contact, typically the immediate outcomes
and perhaps some intermediate outcomes; and
- indirect influence—where the programme can exert significantly
less influence on the expected results due to its lack of direct
contact with those involved and/or the significant influence of other factors.
The theory of change is probably much better developed and
understood—and expectations are clearer—at the direct control and
direct influence levels than at the level of indirect influence.
Most logic models focus on the results expected at different levels, i.e., the boxes
in the results chain in Figure 1. But a theory of change needs to spell
out the assumptions behind the theory, for example to explain what
conditions have to exist for A to lead to B, and what key risks there are
to that condition. Leeuw (2003) discusses different ways of eliciting and
illustrating these behind-the-scenes assumptions.
A well-thought-out theory of change not only shows the
results chain of a programme but also how external factors may affect
the results. In Figure 1, other influences (not shown) might be pressure
from donors and/or a government-wide initiative to improve PM&E.
Although it is not realistic to do primary research on external factors
that may affect results, reasonable efforts should be made to gather
available information and opinions on the contribution they might have.
Key players may differ about how a programme is supposed to work. If many players
contest the theory of change, this may suggest that overall
understanding of how the programme is supposed to work is weak. If,
Figure 1. A Theory of Change for Enhancing Planning, Monitoring and Evaluation (PM&E) Capacity in
Agricultural Research Organisations (AROs). Adapted from Horton et al. (2000). The figure pairs each link
in the results chain with its assumptions and risks; its content is summarised below.

Results chain (from activities to final outcome): training and workshops; facilitation of organisational
change; enhanced planning processes, evaluation systems, monitoring systems, and professional PM&E
capacities; institutionalisation of integrated PM&E systems and strategic planning; strengthened
management; more effective, efficient and relevant agricultural programmes.

Assumptions and risks, from the bottom of the chain upwards:
- Assumptions: Intended target audience received the outputs. With hands-on, participatory assistance
and training, AROs will try enhanced planning, monitoring and evaluation approaches.
Risks: intended reach not met; training and information not convincing enough for AROs to make the
investment; only partially adopted to show interest to donors.
- Assumptions: Over time and with continued participatory assistance, AROs will integrate these new
approaches into how they do business; the project's activities complement other influencing factors.
Risks: trial efforts do not demonstrate their worth; pressures for greater accountability dissipate;
PM&E systems sidelined.
- Assumptions: The new planning, monitoring and evaluation approaches will enhance the capacity of
the AROs to better manage their resources.
Risks: management becomes too complicated; PM&E systems become a burden; information overload;
evidence not really valued for managing.
- Assumptions: Better management will result in more effective, efficient and relevant agricultural
programmes.
Risks: new approaches do not deliver (great plans but poor delivery); resource cut-backs affect PM&E
first; weak utilisation of evaluation information.
after discussion and debate, key players cling to alternative theories of
change, then it may be necessary to assess each of these—specifically
the links in the results chain where the theories of change differ. The
process of gathering evidence to confirm or discard alternative theories
of change should help decide which theory better fits reality.
Step 3: Gather existing evidence on the theory of change
The strengths and weaknesses of the logic, the plausibility of the various
assumptions in the theory, and the extent to which they are contested,
will give a good indication of where concrete evidence is most needed.
Evidence to validate the theory of change is
needed in three areas: observed results, assumptions about the theory
of change, and other influencing factors.
Evidence on results and activities
Evidence on the occurrence or not of key results (outputs, and
immediate, intermediate and final outcomes/impacts) is a first step for
analysing the contribution the programme made to those results.
Additionally, there must be evidence that the programme was
implemented as planned. Were the activities undertaken, and
the outputs of those activities, the same as those set out in
the theory of change? If not, the theory of change needs to be revised.
Evidence on assumptions
Evidence is also needed to demonstrate that the various assumptions in
the theory of change are valid, or at least reasonably so. Are there
research findings that support the assumptions? Many interventions in
the public and not-for-profit sectors have already been evaluated.
Mayne and Rist (2006) discuss the growing importance of synthesising
existing information from evaluations and research. Considering and
synthesising evidence on the assumptions underlying the theory of
change will either start to confirm or call into question how programme
actions are likely to contribute to the expected results.
Evidence on other influencing factors
Finally, there is a need to examine other significant factors that may
have an influence. Possible sources of information on these are other
evaluations, research, and commentary. What is needed is some idea of
how influential these other factors may be.
Gathering evidence can be an iterative process, first gathering
and assembling all readily available material, leaving more exhaustive
investigation until later.
Step 4: Assemble and assess the contribution story, and
challenges to it
The contribution story, as developed so far, can now be assembled and
assessed critically. Questions to ask at this stage are:
- Which links in the results chain are strong (good evidence
available, strong logic, low risk, and/or wide acceptance) and
which are weak (little evidence available, weak logic, high risk,
and/or little agreement among stakeholders)?
- How credible is the story overall? Does the pattern of results and
links validate the results chain?
- Do stakeholders agree with the story—given the available
evidence, do they agree that the programme has made an
important contribution (or not) to the observed results?
- Where are the main weaknesses in the story? For example: Is it
clear what results have been achieved? Are key assumptions
validated? Are the impacts of other influencing factors clearly
understood? Any weaknesses point to where additional data or
information would be useful.
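The strong-versus-weak judgement in the first question above can be caricatured in a few lines of code. The criteria follow the four considerations named in this step, but the boolean inputs and the scoring rule are invented for illustration; the brief itself prescribes no formula:

```python
def assess_link(evidence_good: bool, logic_strong: bool,
                risk_low: bool, widely_accepted: bool) -> str:
    """Rate one results-chain link using the four considerations
    listed in Step 4: evidence, logic, risk, and acceptance."""
    score = sum([evidence_good, logic_strong, risk_low, widely_accepted])
    return "strong" if score >= 3 else "weak"

# Good evidence and sound logic, but high risk and contested acceptance:
print(assess_link(True, True, False, False))  # prints "weak"
```

In practice these judgements are qualitative and debated among stakeholders; the point of the sketch is only that each link is rated separately, so weak links can be targeted in Step 5.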
So far, no 'new' data has been gathered other than from discussions with
programme individuals and maybe experts, and perhaps a literature
search. At this point, the robustness of the contribution story, with
respect to the attribution question(s) raised at the outset, is known and
will guide further efforts.
Step 5: Seek out additional evidence
Based on the assessment of the
robustness of the contribution story in Step 4, the information needed
to address challenges to its credibility can now be identified, for
example, evidence regarding observed results, the strengths of certain
assumptions, and/or the roles of other influencing factors.
It may be useful at this point to
review and update the theory of change, or to examine more closely
certain elements of the theory. To do this, the elements of the theory
may need to be disaggregated so as to understand them in greater detail.
Having identified where more evidence is
needed, it can then be gathered. Multiple approaches to assessing
performance, such as triangulation, are now generally recognised as
useful and important in building credibility. Some standard approaches
to gathering additional evidence for contribution analysis (Mayne, 2001) include:
- Surveys of, for example, subject matter experts, programme
managers, beneficiaries, and those involved in other programmes
that are influencing the programme in question.
- Case studies, which might suggest where the theory of change
could be amended.
- Tracking variations in programme implementation, such as over
time and between locations.
- Conducting a component evaluation on an issue or area where
performance information is weak.
- Synthesising research and evaluation findings, for example using
cluster evaluation and integrative reviews.
Step 6: Revise and strengthen the contribution story
New evidence will build a more credible contribution story, buttressing
the weaker parts of the earlier version or suggesting modifications to
the theory of change. It is unlikely that the revised story will be
foolproof, but it will be stronger and more credible.
Contribution analysis works best as an iterative process. Thus,
at this point the analysis may return to Step 4 (Box 1) and reassess the
strengths and weaknesses of the contribution story.
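The iteration just described (returning from Step 6 to Step 4 until the story is credible enough) can be sketched as a simple loop. Every helper below is a hypothetical stub standing in for the evaluative work the brief describes; only the control flow mirrors Box 1:

```python
# Hypothetical stubs; a real analysis replaces these with the
# evaluative work described in Steps 1-6 of the brief.
def set_out_attribution_problem():                 # Step 1
    return "To what extent did the programme contribute?"

def develop_theory_of_change():                    # Step 2
    return {"links": ["outputs -> outcomes"], "revisions": 0}

def gather_existing_evidence(theory):              # Step 3
    return ["existing evaluation findings"]

def assemble_and_assess_story(theory, evidence):   # Step 4
    # Toy credibility rule: enough independent pieces of evidence.
    return {"credible": len(evidence) >= 2, "theory": theory}

def seek_additional_evidence(story):               # Step 5
    return ["survey results"]

def revise_theory(theory, story):                  # Step 6
    theory["revisions"] += 1
    return theory

def contribution_analysis(max_rounds=3):
    """The six steps of Box 1, with the return from Step 6 to Step 4."""
    question = set_out_attribution_problem()
    theory = develop_theory_of_change()
    evidence = gather_existing_evidence(theory)
    story = assemble_and_assess_story(theory, evidence)
    rounds = 0
    while not story["credible"] and rounds < max_rounds:
        evidence += seek_additional_evidence(story)
        theory = revise_theory(theory, story)
        story = assemble_and_assess_story(theory, evidence)
        rounds += 1
    return story
```

The `max_rounds` cap reflects the practical point that the revised story will never be foolproof: iteration stops when the evidence is fit for the decisions at hand, not when certainty is reached.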
Box 2 illustrates some of the steps in contribution analysis in
one evaluation and makes suggestions about what else could have been done.
Levels of contribution analysis
Three levels of contribution analysis lead to different degrees of
robustness in statements of contribution:
Minimalist contribution analysis. At this level, the analysis (1)
develops the theory of change, and (2) confirms that the expected
outputs were delivered. Statements of contribution are based on the
inherent strength of the theory of change and on evidence that the
expected outputs were delivered. For example, in a vaccination
programme, if the outputs (vaccinations) are delivered, then the
outcome of immunisation can be assumed based on the results of
previous vaccination programmes. The weaknesses of this level of
analysis are any perceived weaknesses in the theory of change.
Contribution analysis of direct influence. This level of analysis
starts with minimalist analysis and gathers and builds evidence that (1)
the expected results in areas of direct influence of the theory of change
were observed, and (2) the programme was influential in bringing about
those results, taking other influencing factors into consideration.
Statements of contribution are based on (1) observed results, (2)
confirmation that the assumptions about direct influence are supported
by factual evidence, and (3) the inherent strength of the theory of
change in areas of indirect influence. An example of where this level of
analysis would be appropriate is an intervention to get an agricultural
research organisation to work collaboratively to solve complex
problems—an approach, say, that has proven effective elsewhere. If
there is evidence that the research organisation has indeed adopted the
new approach (the desired behavioural change) as a result of the
intervention, the subsequent benefits may not have to be
demonstrated, as they will have already been established from previous research.
Contribution analysis of indirect influence. This level extends the
analysis into the more challenging area of indirect influence. It measures
the intermediate and final outcomes/impacts (or some of them) and
gathers evidence that the assumptions (or some of them) in the theory
of change in the areas of indirect influence were borne out. Statements
of contribution at this level attempt to provide factual evidence for at
least the key parts of the entire theory of change.
References

Douthwaite, B., Schulz, S., Olanrewaju, A.S. and Ellis-Jones, J. 2007.
Impact pathway evaluation of an integrated Striga hermonthica
control project in Northern Nigeria. Agricultural Systems 92:
Horton, D., Mackay, R., Andersen, A. and Dupleich, L. 2000.
Evaluating capacity development in planning, monitoring, and
evaluation: a case from agricultural research. Research Report no. 17.
Kotvojs, F. 2006. Contribution analysis: a new approach to evaluation
in international development. Paper presented at the Australian
Evaluation Society 2006 International Conference, Darwin.
Available at http://www.aes.asn.au/conferences/2006/papers/
Leeuw, F.L. 2003. Reconstructing program theories: methods available
and problems to be solved. American Journal of Evaluation 24:
Mayne, J. 2001. Addressing attribution through contribution analysis:
using performance measures sensibly. Canadian Journal of
Program Evaluation 16: 1-24.
Mayne, J. and Rist, R.C. 2006. Studies are not enough: the necessary
transformation of evaluation. Canadian Journal of Program
Evaluation 21: 93-120.
Montague, S., Young, G. and Montague, C. 2003. Using circles to tell
the performance story. Canadian Government Executive 2: 12-16.
Weiss, C.H. 1997. Theory-based evaluation: past, present, and future.
New Directions for Evaluation 76(Winter): 41-55.
About the author
John Mayne (email@example.com) is an independent advisor on
public sector performance. Previously, he was with the Office of the
Auditor General of Canada and the Treasury Board Secretariat.
The Institutional Learning and Change (ILAC) Initiative (www.cgiar-ilac.org), hosted by Bioversity
International, seeks to increase the contributions of agricultural research to sustainable reductions in
poverty. The ILAC Initiative is currently supported by the Netherlands Ministry of Foreign Affairs.
ILAC Briefs aim to stimulate dialogue and to disseminate ideas and experiences that researchers
and managers can use to strengthen organizational learning and performance. An ILAC brief may
introduce a concept, approach or tool; it may summarize results of a study; or it may highlight an event
and its significance. To request copies, write to firstname.lastname@example.org. The ILAC Initiative encourages fair use of
the information in its Briefs and requests feedback from readers on how and by whom the publications
Layout and printing: www.scriptoria.co.uk
Box 2. Contribution Analysis in Evaluating Capacity
Development in Planning, Monitoring and Evaluation
In the evaluation of the project on evaluating capacity
development in planning, monitoring and evaluation (Figure
1) outlined and described by Horton et al. (2000), a number
of steps in contribution analysis were undertaken:
- A theory of change was developed.
- There was clear recognition that the project activities were not
the only influences on adoption of PM&E approaches—other
influencing factors were identified, such as the general
pressure for public sector reform and pressure from donors.
- Surveys asked explicitly for views on the nature and extent of
the project's contribution to enhanced capacity, and attempts
were made to triangulate the findings.
- The lessons learned on how future projects could enhance
their contribution represent de facto refinements of the theory of change.
Additional contribution analysis steps that might
have been useful include:
- A more structured approach to assessing contribution
from the outset.
- More analysis of the other influencing factors, perhaps through
clearer articulation up front, comparisons with similar
organisations not part of the project, and through asking
about the relative contribution of the project efforts.
- More attention to the risks facing the project.