Contribution analysis: An approach to exploring cause and effect
John Mayne
ILAC Brief 16, May 2008
Questions of cause and effect are critical to assessing the performance of programmes and projects. When it is not
practical to design an experiment to assess performance, contribution analysis can provide credible assessments of
cause and effect. Verifying the theory of change that the programme is based on, and paying attention to other
factors that may influence the outcomes, provides reasonable evidence about the contribution being made by the
programme.
Introduction
A key question in the assessment of programmes and projects
is that of attribution: to what extent are observed results due
to programme activities rather than other factors? What we
want to know is whether or not the programme has made a
difference—whether or not it has added value. Experimental or
quasi-experimental designs that might answer these questions
are often not feasible or not practical. In such cases,
contribution analysis can help managers come to reasonably
robust conclusions about the contribution being made by
programmes to observed results.
Contribution analysis explores attribution through
assessing the contribution a programme is making to observed
results. It sets out to verify the theory of change behind a
programme and, at the same time, takes into consideration
other influencing factors. Causality is inferred from the
following evidence:
1. The programme is based on a reasoned theory of
change: the assumptions behind why the programme is
expected to work are sound, are plausible, and are
agreed upon by at least some of the key players.
2. The activities of the programme were implemented.
3. The theory of change is verified by evidence: the chain
of expected results occurred.
4. Other factors influencing the programme were assessed
and were either shown not to have made a significant
contribution or, if they did, the relative contribution
was recognised.
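These four conditions can be treated as an explicit checklist. As a minimal illustration, the Python sketch below shows one way an evaluator might record and combine them; the structure and names are hypothetical, not part of the brief.

    from dataclasses import dataclass

    @dataclass
    class CausalEvidence:
        # The four conditions from which contribution analysis infers causality.
        theory_is_reasoned: bool       # 1. sound, plausible, agreed by key players
        activities_implemented: bool   # 2. the programme's activities were carried out
        results_chain_verified: bool   # 3. the expected chain of results occurred
        other_factors_assessed: bool   # 4. rival influences ruled out or their share recognised

        def supports_contribution_claim(self) -> bool:
            # A credible contribution claim needs all four conditions to hold.
            return all([self.theory_is_reasoned, self.activities_implemented,
                        self.results_chain_verified, self.other_factors_assessed])

    # Example: other influencing factors not yet assessed, so the claim is premature.
    evidence = CausalEvidence(True, True, True, other_factors_assessed=False)
    print(evidence.supports_contribution_claim())  # False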
Contribution analysis is useful in situations where the
programme is not experimental—there is little or no scope for
varying how the programme is implemented—and the
programme has been funded on the basis of a theory of
change. Many managers and evaluators assessing the
performance of programmes face this situation. Kotvojs (2006)
describes one way of using contribution analysis in a
development context, "as a means to consider progress
towards outputs and intermediate and end outcomes" (p. 1).
Conducting a contribution analysis
There are six iterative steps in contribution analysis (Box 1),
each step building the contribution story and addressing
weaknesses identified in the previous stage. If appropriate,
many of the steps can be undertaken in a participatory mode.
Step 1: Set out the attribution problem to be addressed
Acknowledge the attribution problem. Too often the question of attribution is ignored in programme evaluations. Observed results are reported with no discussion as to whether they were the result of the programme's activities. At the outset, it should be acknowledged that there are legitimate questions about the extent to which the programme has brought about the results observed.
Determine the specific cause–effect question being addressed.
A variety of questions about causes and effects can
be asked about most programmes. These range from traditional
causality questions, such as
To what extent has the programme caused the outcome?
to more managerial questions, such as
Is it reasonable to conclude that the programme has made
a difference to the problem?
Care is needed to determine the relevant cause–effect question
in any specific context, and whether or not the question is
reasonable. In many cases the traditional causality question
may be impossible to answer, or the answer may simply lack
any real meaning given the numerous factors influencing a
result. However, managerial-type cause–effect questions are
generally amenable to contribution analysis.
Determine the level of confidence required.
The level of
proof required needs to be determined. Issues that need to be
considered are, for example: What is to be done with the
findings? What kinds of decisions will be based on the
findings? The evidence sought needs to fit the purpose.
Explore the type of contribution expected.
It is worth
exploring the nature and extent of the contribution expected
from the programme. This means asking questions such as:
What do we know about the nature and extent of the
contribution expected? What would show that the programme
made an important contribution? What would show that the
programme 'made a difference'? What kind of evidence would
we (or the funders or other stakeholders) accept?
Determine the other key influencing factors. In determining the nature of the expected contribution from the programme, the other factors that will influence the outcomes will also need to be identified and explored, and their significance judged.
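Step 1 thus amounts to a small set of scoping decisions. A minimal sketch of how these might be recorded follows; the structure and the example values are illustrative assumptions, not prescribed by the approach (the example question and influencing factors are drawn from the text).

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AttributionProblem:
        # The scoping decisions made in Step 1 (hypothetical structure).
        cause_effect_question: str      # managerial rather than traditional, where apt
        confidence_required: str        # driven by how the findings will be used
        expected_contribution: str      # nature and extent of the expected contribution
        other_influencing_factors: List[str] = field(default_factory=list)

    problem = AttributionProblem(
        cause_effect_question="Is it reasonable to conclude that the programme "
                              "has made a difference to the problem?",
        confidence_required="sufficient to inform a programme-renewal decision",
        expected_contribution="an important, but not sole, influence on the outcome",
        other_influencing_factors=["pressure from donors",
                                   "a government-wide initiative to improve PM&E"],
    )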
Box 1. Contribution Analysis
Step 1: Set out the attribution problem to be addressed
Step 2: Develop a theory of change and risks to it
Step 3: Gather the existing evidence on the theory of change
Step 4: Assemble and assess the contribution story, and challenges to it
Step 5: Seek out additional evidence
Step 6: Revise and strengthen the contribution story
Assess the plausibility of the expected contribution in relation to the size of the programme.
Is the expected contribution of the programme
plausible? Assessing this means asking questions such as: Is the
problem being addressed well understood? Are there baseline data?
Given the size of the programme intervention, the magnitude and
nature of the problem and the other influencing factors, is an important
contribution by the programme really likely? If a significant contribution
by the programme is not plausible, the value of further work on causes
and effects needs to be reassessed.
Step 2: Develop the theory of change and the risks to it
Build a theory of change and a results chain.
The key tools of contribution
analysis are theories of change and results chains. With these tools the
contribution story can be built. Theories of change (Weiss, 1997)
explain how the programme is expected to bring about the desired
results—the outputs, and subsequent chain of outcomes and impacts
(impact pathways of Douthwaite et al., 2007). In development aid, a
logframe is often used to set out funders' and/or managers' expectations
as to what will happen as the programme is implemented. The theory
of change, as well as simply identifying the steps in the results chain,
should identify the assumptions behind the various links in the chain
and the risks to those assumptions. One way of representing a theory
of change including its assumptions and risks is shown in Figure 1.
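Another way to make such a representation concrete is as a simple linked data structure that stores, for each link in the results chain, the assumptions behind it and the risks to those assumptions. The sketch below mirrors one link of Figure 1; the names and structure are illustrative, not part of the brief.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Link:
        # One step in the results chain, with the reasoning behind it.
        from_result: str
        to_result: str
        assumptions: List[str] = field(default_factory=list)
        risks: List[str] = field(default_factory=list)

    # The outputs-to-immediate-outcomes link from Figure 1.
    theory_of_change: List[Link] = [
        Link(
            from_result="information, training and workshops delivered",
            to_result="AROs try enhanced PM&E approaches",
            assumptions=["intended target audience received the outputs"],
            risks=["intended reach not met",
                   "training and information not convincing enough"],
        ),
        # ...plus the remaining links up to final outcomes
    ]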
Determine the level of detail.
Logic models/results
chains/theories of change can be shown at almost any level of detail.
Contribution analysis needs reasonably straightforward, not overly
detailed logic, especially at the outset. Refinements may be needed but
can be added later.
Determine the expected contribution of the programme.
Making
statements about the contribution of programmes to outputs is quite
straightforward, but it is considerably more challenging to make
statements about the contribution that programmes make to final
outcomes (impacts). Three 'circles of influence' (Montague et al., 2003)
are useful here:
- Direct control: the programme has fairly direct control of the results, typically at the output level.
- Direct influence: the programme has a direct influence on the expected results, such as the reactions and behaviours of its clients through direct contact; typically the immediate outcomes and perhaps some intermediate outcomes.
- Indirect influence: the programme can exert significantly less influence on the expected results, due to its lack of direct contact with those involved and/or the significant influence of other factors.
The theory of change is probably much better developed and
understood—and expectations are clearer—at the direct control and
direct influence levels than at the level of indirect influence.
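Read this way, the circles amount to a mapping from result level to the degree of influence a programme can claim, and hence to how demanding a contribution claim will be at that level. A toy mapping, with labels assumed for illustration:

    # Hypothetical mapping of result levels to the circles of influence.
    CIRCLES_OF_INFLUENCE = {
        "outputs": "direct control",
        "immediate outcomes": "direct influence",
        "intermediate outcomes": "direct influence",  # often only partly; judge case by case
        "final outcomes": "indirect influence",
    }

    # Expectations are clearest where the programme's influence is most direct.
    print(CIRCLES_OF_INFLUENCE["final outcomes"])  # indirect influence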
List the assumptions underlying the theory of change.
Typical logic
models focus on the results expected at different levels, i.e., the boxes
in the results chain in Figure 1. But a theory of change needs to spell
out the assumptions behind the theory, for example to explain what
conditions have to exist for A to lead to B, and what key risks there are
to that condition. Leeuw (2003) discusses different ways of eliciting and
illustrating these behind-the-scenes assumptions.
Include consideration of other factors that may influence outcomes. A well thought out theory of change not only shows the
results chain of a programme but also how external factors may affect
the results. In Figure 1, other influences (not shown) might be pressure
from donors and/or a government-wide initiative to improve PM&E.
Although it is not realistic to do primary research on external factors
that may affect results, reasonable efforts should be made to gather
available information and opinions on the contribution they might have.
Determine how much the theory of change is contested.
Views
may differ about how a programme is supposed to work. If many players
contest the theory of change, this may suggest that overall
understanding of how the programme is supposed to work is weak. If, after discussion and debate, key players cling to alternative theories of change, then it may be necessary to assess each of these—specifically the links in the results chain where the theories of change differ. The process of gathering evidence to confirm or discard alternative theories of change should help decide which theory better fits reality.

Figure 1. A Theory of Change for Enhancing Planning, Monitoring and Evaluation (PM&E) Capacity in Agricultural Research Organisations (AROs). Adapted from Horton et al. (2000).

Results chain, from outputs to final outcomes:
- Outputs: information; training and workshops; facilitation of organisational change.
- Immediate outcomes: enhanced planning processes, evaluation systems, monitoring systems, and professional PM&E capacities.
- Intermediate outcomes: institutionalisation of integrated PM&E systems and strategic management principles; strengthened management of agricultural research.
- Final outcomes (impacts): more effective, efficient and relevant agricultural programmes.

Assumptions and risks at each link in the chain:
- Outputs to immediate outcomes. Assumptions: the intended target audience received the outputs; with hands-on, participatory assistance and training, AROs will try enhanced planning, monitoring and evaluation approaches. Risks: intended reach not met; training and information not convincing enough for AROs to make the investment; approaches only partially adopted, to show interest to donors.
- Immediate to intermediate outcomes. Assumptions: over time and with continued participatory assistance, AROs will integrate these new approaches into how they do business; the project's activities complement other influencing factors. Risks: trial efforts do not demonstrate their worth; pressures for greater accountability dissipate; PM&E systems sidelined.
- Institutionalised PM&E to strengthened management. Assumptions: the new planning, monitoring and evaluation approaches will enhance the capacity of the AROs to better manage their resources. Risks: management becomes too complicated; PM&E systems become a burden; information overload; evidence not really valued for managing.
- Strengthened management to final outcomes. Assumptions: better management will result in more effective, efficient and relevant agricultural programmes. Risks: new approaches do not deliver (great plans but poor delivery); resource cut-backs affect PM&E first; weak utilisation of evaluation information.
Step 3: Gather existing evidence on the theory of change
Assess the logic of the links in the theory of change.
Reviewing the strengths and weaknesses of the logic, the plausibility of the various assumptions in the theory, and the extent to which they are contested will give a good indication of where concrete evidence is most needed.
Gather the evidence.
Evidence to validate the theory of change is
needed in three areas: observed results, assumptions about the theory
of change, and other influencing factors.
Evidence on results and activities
Evidence on the occurrence or not of key results (outputs, and
immediate, intermediate and final outcomes/impacts) is a first step for
analysing the contribution the programme made to those results.
Additionally, there must be evidence that the programme was implemented as planned. Were the activities that were undertaken, and the outputs of those activities, the same as those set out in the theory of change? If not, the theory of change needs to be revised.
Evidence on assumptions
Evidence is also needed to demonstrate that the various assumptions in
the theory of change are valid, or at least reasonably so. Are there
research findings that support the assumptions? Many interventions in
the public and not-for-profit sectors have already been evaluated.
Mayne and Rist (2006) discuss the growing importance of synthesising
existing information from evaluations and research. Considering and
synthesising evidence on the assumptions underlying the theory of
change will either start to confirm or call into question how programme
actions are likely to contribute to the expected results.
Evidence on other influencing factors
Finally, there is a need to examine other significant factors that may
have an influence. Possible sources of information on these are other
evaluations, research, and commentary. What is needed is some idea of
how influential these other factors may be.
Gathering evidence can be an iterative process, first gathering
and assembling all readily available material, leaving more exhaustive
investigation until later.
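Since gathering proceeds iteratively, it helps to track what has been assembled in each of the three areas and where the gaps are. A minimal bookkeeping sketch; the structure and the example entries are illustrative assumptions.

    from typing import Dict, List

    # Evidence is needed in three areas; begin with readily available material.
    evidence: Dict[str, List[str]] = {
        "observed results": [],
        "assumptions in the theory of change": [],
        "other influencing factors": [],
    }

    evidence["observed results"].append("project records: activities delivered as planned")
    evidence["other influencing factors"].append("commentary on donor pressure for PM&E")

    # Empty areas show where more exhaustive investigation is still needed.
    gaps = [area for area, items in evidence.items() if not items]
    print(gaps)  # ['assumptions in the theory of change']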
Step 4: Assemble and assess the contribution story, and
challenges to it
The contribution story, as developed so far, can now be assembled and
assessed critically. Questions to ask at this stage are:
- Which links in the results chain are strong (good evidence available, strong logic, low risk, and/or wide acceptance) and which are weak (little evidence available, weak logic, high risk, and/or little agreement among stakeholders)?
- How credible is the story overall? Does the pattern of results and links validate the results chain?
- Do stakeholders agree with the story? Given the available evidence, do they agree that the programme has made an important contribution (or not) to the observed results?
- Where are the main weaknesses in the story? For example: Is it clear what results have been achieved? Are key assumptions validated? Are the impacts of other influencing factors clearly understood? Any weaknesses point to where additional data or information would be useful.
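One way to make this link-by-link assessment systematic is to rate each link on the four criteria above. In the sketch below the criteria come from the text, but the 0-2 scale and the threshold are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class LinkAssessment:
        # Rate one results-chain link on each criterion: 0 (poor) to 2 (good).
        evidence_available: int
        logic_strength: int
        low_risk: int
        stakeholder_agreement: int

        def is_weak(self) -> bool:
            # Illustrative rule: flag links scoring under half the maximum of 8.
            total = (self.evidence_available + self.logic_strength +
                     self.low_risk + self.stakeholder_agreement)
            return total < 4

    # A link with little evidence and contested logic is a target for Step 5.
    print(LinkAssessment(0, 1, 1, 1).is_weak())  # True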
So far, no 'new' data has been gathered other than from discussions with
programme individuals and maybe experts, and perhaps a literature
search. At this point, the robustness of the contribution story, with
respect to the attribution question(s) raised at the outset, is known and
will guide further efforts.
Step 5: Seek out additional evidence
Identify what new data is needed.
Based on the assessment of the
robustness of the contribution story in Step 4, the information needed
to address challenges to its credibility can now be identified, for
example, evidence regarding observed results, the strengths of certain
assumptions, and/or the roles of other influencing factors.
Adjust the theory of change.
It may be useful at this point to
review and update the theory of change, or to examine more closely
certain elements of the theory. To do this, the elements of the theory
may need to be disaggregated so as to understand them in greater
detail.
Gather more evidence.
Having identified where more evidence is needed, it can then be gathered. Using multiple lines of evidence (triangulation) is now generally recognised as useful and important in building credibility. Some standard approaches to gathering additional evidence for contribution analysis (Mayne, 2001) are:
- Surveys of, for example, subject matter experts, programme managers, beneficiaries, and those involved in other programmes that are influencing the programme in question.
- Case studies, which might suggest where the theory of change could be amended.
- Tracking variations in programme implementation, such as over time and between locations.
- Conducting a component evaluation on an issue or area where performance information is weak.
- Synthesising research and evaluation findings, for example using cluster evaluation and integrative reviews, and synthesising existing studies.
Step 6: Revise and strengthen the contribution story
New evidence will build a more credible contribution story, buttressing
the weaker parts of the earlier version or suggesting modifications to
the theory of change. It is unlikely that the revised story will be
foolproof, but it will be stronger and more credible.
Contribution analysis works best as an iterative process. Thus,
at this point the analysis may return to Step 4 (Box 1) and reassess the
strengths and weaknesses of the contribution story.
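This iteration can be pictured as a simple loop over Steps 4 to 6: assess the story, and while weaknesses remain, gather targeted evidence and revise. A schematic sketch; the function signature and the callables are placeholders, not part of the approach.

    from typing import Callable, Dict, List

    def contribution_analysis_loop(
        story: Dict,
        assess: Callable[[Dict], List[str]],    # Step 4: returns the weak links found
        gather: Callable[[List[str]], Dict],    # Step 5: targeted evidence gathering
        revise: Callable[[Dict, Dict], Dict],   # Step 6: strengthen the story
        max_rounds: int = 3,
    ) -> Dict:
        # Assess, gather, revise, then reassess until credible enough (or out of rounds).
        for _ in range(max_rounds):
            weaknesses = assess(story)
            if not weaknesses:  # no remaining challenges to credibility
                break
            story = revise(story, gather(weaknesses))
        return story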
Box 2 illustrates some of the steps in contribution analysis in
one evaluation and makes suggestions about what else could have been
done.
Levels of contribution analysis
Three levels of contribution analysis lead to different degrees of
robustness in statements of contribution:
Minimalist contribution analysis.
At this level, the analysis (1)
develops the theory of change, and (2) confirms that the expected
outputs were delivered. Statements of contribution are based on the
inherent strength of the theory of change and on evidence that the
expected outputs were delivered. For example, in a vaccination
programme, if the outputs (vaccinations) are delivered, then the
outcome of immunisation can be assumed based on the results of
previous vaccination programmes. The weaknesses of this level of
analysis are any perceived weaknesses in the theory of change.
Contribution analysis of direct influence.
This level of analysis
starts with minimalist analysis and gathers and builds evidence that (1)
the expected results in areas of direct influence of the theory of change
were observed, and (2) the programme was influential in bringing about
those results, taking other influencing factors into consideration.
Statements of contribution are based on (1) observed results, (2)
confirmation that the assumptions about direct influence are supported
by factual evidence, and (3) the inherent strength of the theory of
change in areas of indirect influence. An example of where this level of
analysis would be appropriate is an intervention to get an agricultural
research organisation to work collaboratively to solve complex
problems—an approach, say, that has proven effective elsewhere. If
there is evidence that the research organisation has indeed adopted the
new approach (the desired behavioural change) as a result of the
intervention, the subsequent benefits may not have to be
demonstrated, as they will have already been established from previous
research.
Contribution analysis of indirect influence.
This level extends the
analysis into the more challenging area of indirect influence. It measures
the intermediate and final outcomes/impacts (or some of them) and
gathers evidence that the assumptions (or some of them) in the theory
of change in the areas of indirect influence were borne out. Statements
of contribution at this level attempt to provide factual evidence for at
least the key parts of the entire theory of change.
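The three levels are cumulative: each keeps the evidence required by the level below and adds to it. A compact restatement of the ladder follows; the dictionary encoding is an assumption, but the evidence listed at each level follows the text.

    from typing import List

    # Evidence added at each level of contribution analysis, in increasing robustness.
    LEVELS = {
        "minimalist": ["theory of change developed",
                       "expected outputs confirmed as delivered"],
        "direct influence": ["expected results observed in areas of direct influence",
                             "programme shown influential, given other factors"],
        "indirect influence": ["intermediate and final outcomes measured",
                               "assumptions in areas of indirect influence borne out"],
    }
    ORDER = ["minimalist", "direct influence", "indirect influence"]

    def evidence_needed(level: str) -> List[str]:
        # Cumulative: a level requires everything below it plus its own additions.
        needed: List[str] = []
        for lvl in ORDER[: ORDER.index(level) + 1]:
            needed.extend(LEVELS[lvl])
        return needed

    print(evidence_needed("direct influence"))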
Further reading
Douthwaite, B., Schulz, S., Olanrewaju, A.S. and Ellis-Jones, J. 2007. Impact pathway evaluation of an integrated Striga hermonthica control project in Northern Nigeria. Agricultural Systems 92: 201-222.
Horton, D., Mackay, R., Andersen, A. and Dupleich, L. 2000. Evaluating capacity development in planning, monitoring, and evaluation: a case from agricultural research. ISNAR Research Report no. 17. Available at http://www.ifpri.org/isnararchive/Publicat/PDF/rr-17.pdf
Kotvojs, F. 2006. Contribution analysis: a new approach to evaluation in international development. Paper presented at the Australian Evaluation Society 2006 International Conference, Darwin. Available at http://www.aes.asn.au/conferences/2006/papers/022%20Fiona%20Kotvojs.pdf
Leeuw, F.L. 2003. Reconstructing program theories: methods available and problems to be solved. American Journal of Evaluation 24: 5-20.
Mayne, J. 2001. Addressing attribution through contribution analysis: using performance measures sensibly. Canadian Journal of Program Evaluation 16: 1-24. Earlier version available at http://www.oag-bvg.gc.ca/domino/other.nsf/html/99dp1_e.html/$file/99dp1_e.pdf
Mayne, J. and Rist, R.C. 2006. Studies are not enough: the necessary transformation of evaluation. Canadian Journal of Program Evaluation 21: 93-120.
Montague, S., Young, G. and Montague, C. 2003. Using circles to tell the performance story. Canadian Government Executive 2: 12-16. Available at http://pmn.net/library/usingcirclestotelltheperformancestory.htm
Weiss, C.H. 1997. Theory-based evaluation: past, present, and future. New Directions for Evaluation 76 (Winter): 41-55.
Box 2. Contribution Analysis in Evaluating Capacity Development in Planning, Monitoring and Evaluation

In the evaluation of the project on evaluating capacity development in planning, monitoring and evaluation (Figure 1) outlined and described by Horton et al. (2000), a number of steps in contribution analysis were undertaken:
- A theory of change was developed.
- There was clear recognition that the project activities were not the only influences on adoption of PM&E approaches; other influencing factors were identified, such as the general pressure for public sector reform and pressure from donors.
- Surveys asked explicitly for views on the nature and extent of the project's contribution to enhanced capacity, and attempts were made to triangulate the findings.
- The lessons learned on how future projects could enhance their contribution represent de facto refinements of the theory of change.

Additional contribution analysis steps that might have been useful include:
- A more structured approach to assessing contribution from the outset.
- More analysis of the other influencing factors, perhaps through clearer articulation up front, comparisons with similar organisations not part of the project, and asking about the relative contribution of the project's efforts.
- More attention to the risks facing the project.

About the author
John Mayne (john.mayne@rogers.com) is an independent advisor on public sector performance. Previously, he was with the Office of the Auditor General of Canada and the Treasury Board Secretariat.

Recent briefs
7. Outcome mapping
8. Learning alliances
9. The Sub-Saharan Africa Challenge Program
10. Making the most of meetings
11. Human resources management
12. Linking diversity to organizational effectiveness
13. Horizontal evaluation
14. Engaging scientists through institutional histories
15. Evaluación horizontal: Estimulando el aprendizaje social entre "pares" [Horizontal evaluation: stimulating social learning among "peers"]

The Institutional Learning and Change (ILAC) Initiative (www.cgiar-ilac.org), hosted by Bioversity International, seeks to increase the contributions of agricultural research to sustainable reductions in poverty. The ILAC Initiative is currently supported by the Netherlands Ministry of Foreign Affairs.

ILAC Briefs aim to stimulate dialogue and to disseminate ideas and experiences that researchers and managers can use to strengthen organizational learning and performance. An ILAC brief may introduce a concept, approach or tool; it may summarize results of a study; or it may highlight an event and its significance. To request copies, write to ilac@cgiar.org. The ILAC Initiative encourages fair use of the information in its Briefs and requests feedback from readers on how and by whom the publications were used.

Layout and printing: www.scriptoria.co.uk