Revisiting Contribution Analysis
John Mayne
Abstract: The basic ideas behind contribution analysis were set out in 2001. Since then, interest in the approach has grown and contribution analysis has been operationalized in different ways. In addition, several reviews of the approach have been published and raise a few concerns. In this article, I clarify several of the key concepts behind contribution analysis, including contributory causes and contribution claims. I discuss the need for reasonably robust theories of change and the use of nested theories of change to unpack complex settings. On contribution claims, I argue the need for causal narratives to arrive at credible claims, the limited role that external causal factors play in arriving at contribution claims, the use of robust theories of change to avoid bias, and the fact that opinions of stakeholders on the contribution made are not central in arriving at contribution claims.
Keywords: causal factors, causal narratives, contribution analysis, contribution
claims, contributory causes, theories of change
Résumé : Les principes fondamentaux de l'analyse de contribution ont été établis en 2001. Depuis lors, l'intérêt porté à cette approche a crû et l'analyse de contribution a été opérationnalisée de différentes façons. De plus, plusieurs examens de cette approche ont été publiés et ont soulevé quelques inquiétudes. Dans cet article, je clarifie plusieurs concepts de l'analyse de contribution, incluant les causes contributives et les énoncés de contribution. Je discute de la nécessité de faire appel à des théories du changement raisonnablement robustes et d'utiliser des théories complémentaires du changement pour comprendre des contextes complexes. Au chapitre des énoncés de contribution, je soutiens qu'il est nécessaire d'élaborer des narratifs causaux pour arriver à des attributions crédibles; que les facteurs causaux externes jouent un rôle limité dans l'atteinte des énoncés de contribution; que l'utilisation de théories du changement robustes permet d'éviter les biais; et que les opinions des intervenant.e.s sur la contribution ne devraient pas jouer un rôle central dans l'établissement des énoncés de contribution.
Mots clés : facteurs causaux, narratifs causaux, analyse de contribution, attributions de contribution, causes contributives, théories du changement
Corresponding author: John Mayne,
© 2019 Canadian Journal of Program Evaluation / La Revue canadienne d'évaluation de programme
34.2 (Fall / automne), 171–191 doi: 10.3138/cjpe.68004

Increasingly, interventions that evaluators are asked to assess are quite complicated and complex. They may involve a number of major components, different levels of government and/or numerous partners, and have a long timeframe, perhaps with emerging outcomes (Byrne, 2013; Gerrits & Verweij, 2015; Schmitt & Beach, 2015).
Nevertheless, funders of such interventions still want to know if their funding has made a difference—if the interventions have improved the lives of people—and in what manner. While a range of evaluation approaches might address these questions, theory-based methods are often used, including contribution analysis (Befani & Mayne, 2014; Befani & Stedman-Bryce, 2016; Mayne, 2001, 2011, 2012; Paz-Ybarnegaray & Douthwaite, 2016; Punton & Welle, 2015; Stern et al., 2012; Wilson-Grau & Britt, 2012).
Contribution analysis (CA) has continued to evolve since its introduction in 2001 (Budhwani & McDavid, 2017; Dybdal, Nielsen, & Lemire, 2010). It was first presented in the setting of using monitoring data to say something about causal issues related to an intervention. Since then, most thinking about and application of CA has been as an evaluation approach to get at causal issues and understanding about how change is brought about. At the same time, my concepts and ideas about theories of change—the basic tool used for CA—have evolved considerably (Mayne, 2015, 2017, 2018). In this article, I would like to set out my current thinking on several key issues and some misunderstandings around CA:
how causality is understood and addressed in CA,
useful theories of change for CA in complex settings,
inferring causality for contribution claims, and
generalizing CA findings on contribution claims.
Those using or reviewing contribution analysis have raised several concerns about its application (Budhwani & McDavid, 2017; Delahais & Toulemonde, 2012, 2017; Dybdal et al., 2010; Lemire, Nielsen, & Dybdal, 2012; Schmitt & Beach, 2015). I will address these concerns and issues as the article unfolds. The article aims to correct a number of misinterpretations around CA. It builds on several previous publications and assumes some working knowledge of CA.
First, here is a review of the terms being used:
Impact pathways describe causal pathways showing the linkages between a sequence of steps in getting from activities to impact.
A theory of change (ToC) adds to an impact pathway by describing the causal assumptions behind the links in the pathway—what has to happen for the causal linkages to be realized.
Causal link assumptions are the events or conditions necessary or likely necessary for a particular causal link in a ToC pathway to be realized.
Results are the outputs, outcomes, and impacts associated with an intervention.
A discussion of these terms can be found in Mayne (2015). It should be noted that these terms are not always defined or used by others as set out above, and indeed there is no universal agreement on them. It is important, therefore, to define carefully how the terms are being used in a particular setting.
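These structural definitions can be sketched as a small data model: an impact pathway is a chain of causal links between results, and a ToC annotates each link with its causal link assumptions. A minimal illustration (the class and field names are my own, not part of CA terminology; the pathway is the nutrition example discussed later in the article):

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One step in an impact pathway: from_result leads to to_result."""
    from_result: str
    to_result: str
    # Events or conditions necessary (or likely necessary) for the link to hold.
    assumptions: list = field(default_factory=list)

@dataclass
class TheoryOfChange:
    """An impact pathway plus the causal assumptions behind each link."""
    links: list

    def results(self):
        """The ordered chain of results (outputs, outcomes, impacts)."""
        seq = [self.links[0].from_result]
        seq += [link.to_result for link in self.links]
        return seq

# Illustrative pathway for a nutrition-education intervention.
toc = TheoryOfChange(links=[
    CausalLink("training sessions delivered", "mothers adopt new feeding practices",
               assumptions=["mothers reached", "supportive husbands and mother-in-law"]),
    CausalLink("mothers adopt new feeding practices", "improved child nutrition",
               assumptions=["nutritious food available"]),
])

print(toc.results())  # the ordered chain of results along the pathway
```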
Contribution analysis is an approach for addressing causality, producing credible claims about the intervention as a contributory cause (Mayne, 2011, 2012). As such, it explores how and why interventions are working and for whom. Contribution analysis is increasingly being used in evaluations (Buckley, 2016; Buregeya, Brousselle, Nour, & Loignon, 2017; Delahais & Toulemonde, 2017; Downes, Novicki, & Howard, 2018; Kane, Levine, Orians, & Reinelt, 2017; Noltze, Gaisbauer, Schwedersky, & Krapp, 2014; Ton, 2017), and in particular to address causal issues in complex settings (Koleros & Mayne, 2019; Palladium, 2015).
The basis of the contribution claim is the empirical evidence confirming a solid ToC of an intervention, that is, confirming the impact pathways, the assumptions behind the causal links in the ToC, and the related causal narratives explaining how causality is inferred. The ToC is the outline for the contribution story of the intervention. The steps usually undertaken in contribution analysis are shown in Box 1 (Mayne, 2011).
Box 1. Steps in contribution analysis
Step 1: Set out the specific cause-effect questions to be addressed
Step 2: Develop robust theories of change for the intervention and its pathways
Step 3: Gather the existing evidence on the components of the theory of change model of causality:
The results achieved
The causal link assumptions realized
Step 4: Assemble and assess the resulting contribution claim, and the challenges to it
Step 5: Seek out additional evidence to strengthen the contribution story
Step 6: Revise and strengthen the contribution claim
Step 7: Return to Step 4 if necessary
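The iterative character of Steps 4 through 7 can be sketched as a loop over the evidence for the causal links: assess the claim, strengthen the weakest evidence, and reassess. This is a schematic only; the numeric confidence scores and the credibility threshold are invented for illustration (CA itself prescribes no such scoring):

```python
def assess_claim(evidence):
    """Step 4: the claim is only as strong as the weakest verified link."""
    return min(evidence.values())

def contribution_analysis(evidence, gather_more, credible=0.8, max_rounds=5):
    """Steps 4-7 of Box 1: assess, strengthen, and reassess the claim."""
    for _ in range(max_rounds):
        strength = assess_claim(evidence)           # Step 4: assess claim and challenges
        if strength >= credible:
            return strength, evidence               # claim judged credible
        weakest = min(evidence, key=evidence.get)   # the main challenge to the claim
        evidence[weakest] = gather_more(weakest)    # Steps 5-6: seek evidence, revise
    return assess_claim(evidence), evidence         # Step 7: iteration ends

# Toy confidence that each part of the ToC was realized (illustrative values).
evidence = {"outputs delivered": 0.9, "capacity change": 0.5, "behaviour change": 0.7}
strength, final = contribution_analysis(evidence, gather_more=lambda link: 0.85)
print(strength)  # 0.85
```

The loop stops as soon as every link is credibly verified, mirroring the "return to Step 4 if necessary" structure of the box.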
Causality is always a key element of an evaluation, and hence what perspective to take on causality is important. Contribution analysis—and other theory-based evaluation approaches—uses a generative view of causality, talking of causal packages and contributory causes.
Generative Causality
In many situations a counterfactual perspective on causality—which is the traditional evaluation perspective—is unlikely to be useful; experimental designs are often neither feasible nor practical. Rather, a more useful perspective is that of generative causality:1 seeing causality as a chain of cause-effect events (Gates & Dyson, 2017, p. 36; Pawson & Tilley, 1997). This is what we see with models of interventions: a series or several series of causal steps—impact pathways—between the activities of the intervention and the desired impacts. Taking the generative or stepwise perspective on causality and setting out an impact or contribution pathway is essential in understanding and addressing the contribution made by the intervention. The associated ToC model sets out what is needed if the expected results are to be realized.
Contributory Causes
Contribution analysis aims at arriving at credible claims on the intervention as a contributory cause, namely, that the intervention was one of several necessary or likely necessary2 factors in a causal package that together brought about or contributed to the changes observed (Cartwright & Hardie, 2012; Mackie, 1974; Mayne, 2012). That is, it is this causal package of factors that will bring about change, and all of these factors are necessary to bring about the change—they are all INUS conditions3—and hence in a logical sense all are of equal importance. In more complex settings, interventions may comprise a number of different components, and for each, one can ask if the component was a contributory cause.
Contribution analysis uses this stepwise perspective on causality to assess whether the intervention has "made a difference," which in this context means that the intervention had a positive impact on people's lives—that is, it made a contribution, it played a causal role. And it did so because it was a necessary part of a causal package that brought about or contributed to change. This interpretation of making a difference needs to be distinguished from the meaning associated with the counterfactual perspective on causality, where "made a difference" means "what would have happened without the intervention." This concept of a contributory cause responds to the question posed by Budhwani and McDavid (2017) on the specific meaning of a contribution within CA.
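The logic of a contributory cause can be made concrete in code: the intervention "makes a difference" when the causal package containing it is sufficient for the change and the intervention is a necessary member of that package (the INUS idea). The factors below are hypothetical, chosen only to illustrate the logic:

```python
def package_sufficient(package, present):
    """A causal package brings about change only if every factor in it is present
    (each factor is a necessary part of the package)."""
    return all(present.get(factor, False) for factor in package)

def is_contributory_cause(intervention, package, present):
    """The intervention is a contributory cause if (a) the package it belongs to
    is sufficient for change, and (b) the package fails without the intervention."""
    if not package_sufficient(package, present):
        return False
    without = dict(present, **{intervention: False})
    return not package_sufficient(package, without)

# Hypothetical causal package: the intervention plus two supporting factors.
package = ["training programme", "food availability", "household support"]
present = {"training programme": True, "food availability": True, "household support": True}
print(is_contributory_cause("training programme", package, present))  # True
```

Note that the same check applied to any other factor in the package also returns True, which is the sense in which all INUS conditions are of equal logical importance.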
Contribution Claims
Contribution claims have been discussed in previous articles (Mayne, 2011, 2012). But some elaboration and extension is needed. Contribution claims are not just about whether the intervention made a contribution or not. Certainly, a key contribution claim is the yes/no evaluation question:
1. Has the intervention (or component) made a difference? Has it played a positive causal role in bringing about change?
But a more interesting and important contribution claim is around this evaluation question:
2. How and why has the intervention (or component) made a difference, or not, and for whom?
The contribution claim here is about the intervention (or an intervention component) causal package at work. How and in what manner did the intervention support factors and the intervention efforts bring about, or contribute to, change? The contribution claim provides the evidence on why change occurred, that is, the causal narrative. It might also explain why the expected change was not realized, why the intervention did not make a difference.
Demonstrating Contribution Claims
As noted above, the basis for contribution analysis is the intervention ToC, and verifying the ToC—the results, the assumptions, and the causal links—with empirical evidence.
Several authors have suggested that in contribution analysis, contribution claims are indeed based on opinions. Schmitt and Beach (2015, p. 436) claim that "[i]n CA, stakeholders [being] interviewed to find out whether they believe the program worked" is the basis for contribution claims. However, this is not what CA is about at all. The aim of contribution analysis is to get beyond basing a contribution claim on opinions of stakeholders about the contribution made. Interviews may be conducted as part of the process to gather information on the results achieved and if assumptions were realized, but basing contribution claims on opinions about the claims is not part of the process. Rather, the evidence gathered on the ToC is used to analyze and make conclusions about contribution claims. Any reports or articles that rely solely on opinions are not reporting on a CA, despite what their authors may claim. Such studies should have a different label to remove references to actual CA.
A second issue related to contribution claims focuses on the role of external factors in arriving at a credible contribution claim. There is indeed some confusion over the role of external influences and especially alternative or rival explanations in CA, confusion that I have contributed to. In Mayne (2011), I suggested that a contribution claim could be made when external factors were shown not to have contributed significantly, and in Mayne (2012), I raised the need to explore rival explanations. These statements were incorrect in that they did not fully recognize the implication of having multiple causal factors at work, some of which may be associated with the intervention and others with external influences. However, external causal factors are usually not alternative or rival explanations. They are simply other causal factors at work.
Therefore, in my view, the "alternative" and "rival" terms are inappropriate in the context of complex causality. But there is a more important implication, namely, that one can explore whether or not a causal factor in a causal package made a contribution and how it did so without considering the other causal factors at play, outside the package, such as external influences, except of course if they are causally linked. A robust ToC sets out the intervention as a contributory cause. Empirically verifying the ToC allows the contribution claim to be made.
Budhwani and McDavid (2017, p. 4) write that "[CA] relies on tests of alternative explanations to act as substitute candidates in place of counterfactuals to determine the plausibility of a proposed theory of change." As discussed above, this is not the case, and of course CA uses a stepwise (generative) not a counterfactual approach to causality. Lemire et al. (2012) also argue for the need to examine alternative or rival explanations to prove plausible association. Again, this is not correct, but in this case the authors seem to realize this in a footnote, saying that examining alternative explanations is only needed if the aim is to compare the relative contribution of the intervention. And that is true, although I would still argue that the alternative/rival explanations terminology is misleading, since all such factors may be contributing to the results: they are not rivals or alternatives.
The extent to which an evaluation explores the causal factors other than the intervention depends of course on the evaluation question being addressed. If the evaluation question is about assessing what brought about an observed impact, then these other factors would indeed need to be explored. If addressing the narrower question of whether the intervention made a contribution to the impact and how it did so, then these other factors need not play a major role in the analysis (Befani & Mayne, 2014).
If an analysis uses a weak ToC with insufficient causal link assumptions, then a credible contribution claim based on this ToC is not possible. In this case, exploring other external influences might allow some conclusions to be reached concerning the intervention; however, this approach is not CA as discussed in this article.
Step 1 in contribution analysis (Box 1) is setting out the causal questions to be addressed in the analysis. This is an important first step that is often not adequately addressed. The challenge here is that it is relatively easy to set out evaluation causal questions that sound reasonable and meaningful—such as "Has the intervention been effective?"—but are actually not. The basic reason is that most interventions on their own are not the cause of observed results (Mayne, forthcoming). The focus in CA is on the contribution an intervention is making to an expected result. Thus, (1) the particular result(s) of interest need(s) to be clearly specified, and (2) CA is not trying to explain what brought about the result, but rather if and how the intervention made a contribution. Therefore, for example, as discussed above, the need to explore other influencing factors depends on just what the causal question is.
The Need in CA for Robust ToCs
Previous articles (Mayne, 2001, 2011, 2012) on contribution analysis generally assume that the ToC used is reasonably detailed and sound, although they do not elaborate. However, using a weak ToC in a contribution analysis can only lead to weak contribution analysis findings.
I have suggested criteria for robust theories of change, based on the ToC being both structurally sound and plausible. The detailed criteria, drawn in part from Davies (2012), are discussed in Mayne (2017) for all elements of a ToC: each result, each assumption, each causal link, and overall. For example, if the ToC is not understandable, the causal links in the model cannot be confirmed or, if seemingly "confirmed," would not lead to credible causal claims. Similarly, if terms are ambiguous, the specific results cannot be empirically confirmed.
As a result of this expanded thinking, Step 2 in Box 1 now highlights the need for a robust theory of change. However, the full set of the robust criteria is quite demanding, and the aim is often to ensure that a reasonably robust ToC is available for contribution analysis. The proposed criteria can support this analysis and help strengthen the ToC. In addition, a good ToC should be supported as much as possible by prior social science research and evidence. This type of support will help build credible causal narratives.
Both Budhwani and McDavid (2017) and Delahais and Toulemonde (2012) raise concerns about bias in arriving at contribution claims. I would argue that if one is using a reasonably robust ToC and empirically confirming it in a CA, then the likelihood of bias is greatly reduced, when all of the necessary assumptions associated with each causal link in the ToC are confirmed with empirical evidence. And, of course, if, as Delahais and Toulemonde (2012) argue, one is able to use more than one source of data for the verification, then the chance of any bias is even further reduced. Remember that one is not simply looking to confirm a yes/no issue of contribution but probably, more importantly, from the collection of verified assumptions building a credible causal narrative on how and why the intervention contributed, and for whom.
Some have questioned the need for the "necessity" of causal link assumptions—a robust criterion—noting, in particular, that assumptions are often not 0–1 variables but stated as conditions that could be partially met. What then does necessity mean for a partially met assumption? Results in most ToCs are not defined as a specific amount of the result. Consider an intervention aimed at educating mothers about the benefits of a nutritious diet for their children (see White, 2009, for a discussion of such an intervention). One result here would be "mothers adopt new feeding practices," and a related assumption could be "supportive husbands and mother-in-law." Then a partially met assumption (somewhat supportive) would mean less of the result (adopting some practices) but one that is still necessary to get that result.
For a robust ToC, it is always better to define the result as clearly as possible, such as, for example, "fully adopting new practices" to relate better to the causal link assumptions. It is still the case that if the assumption is not realized at all, then there will be no result.
Unpacking Complex Intervention Settings: Different ToCs for Different Purposes
It should be evident that there is not a unique representation of a theory of change for a given intervention, so deciding on how much detail to include can be a challenge. In most cases, several different depictions of a theory of change are needed to meet different purposes (Mayne, 2015). Further, ToCs can quickly become overly complicated and less useful if too much detail is used in any one representation. In Mayne (2015), several levels of ToCs are presented and their uses discussed to help with this problem:
A narrative ToC describes briefly how the intervention is intended to work.
The overview ToC indicates the various pathways to impact that comprise the intervention, showing some of the steps in each pathway along the way to impact. It can also set out the rationale assumptions or premises behind the intervention, but usually not the causal link assumptions.
Nested ToCs are developed to unpack a more complicated/complex intervention and include the explicit causal link assumptions. There can be a nested ToC, for example, for each pathway, for each pathway in a different geographical area, and/or for different targeted reach groups. Koleros and Mayne (2019) discuss using nested ToCs for different actor groups for a complex police reform intervention.
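The nesting idea can be illustrated as a mapping from each pathway in the overview ToC (here, actor groups, echoing the actor-based example above) to its own detailed ToC with explicit causal link assumptions. The pathway names and assumptions below are invented purely for illustration:

```python
# Overview ToC: names the pathways only; nested ToCs unpack each one
# with its causal links and explicit causal link assumptions.
overview = {"pathways": ["community health workers", "district officials"]}

nested = {
    "community health workers": {
        "links": [("training delivered", "CHWs adopt new practices")],
        "assumptions": ["CHWs reached", "supervisors supportive"],
    },
    "district officials": {
        "links": [("briefings held", "officials allocate budget")],
        "assumptions": ["officials reached", "political will present"],
    },
}

# Check that every overview pathway is unpacked by exactly one nested ToC.
print(all(pathway in nested for pathway in overview["pathways"]))  # True
```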
Budhwani and McDavid (2017, p. 19) suggest that CA may not work well in complex settings due to the difficulty of building useful ToCs in such a context. Actual experience is quite the opposite. Using nested ToCs to unpack a complex intervention and its context has worked well in numerous situations (see, for example, Douthwaite, Mayne, McDougall, & Paz-Ybarnegaray (2017); Koleros & Mayne (2019); Mayne & Johnson (2015); Riley et al. (2018)).
The Need for Evaluable ToC Models
Usually, the evaluator needs to develop or guide the development of ToCs that can be used for evaluation purposes. Often, the evaluator finds an already developed ToC of the intervention being evaluated, but it may not be suitable for evaluation purposes (for instance, it may be well suited for acquiring funding or communication purposes). Something more evaluable is needed, such as developing nested ToCs to unpack the complexity of the intervention, with careful thought given to the causal assumptions at play.
Developing "good" ToCs is itself a challenge, but equally it is often a serious challenge to bring on board those who "own" the existing ToC and may not want to see a new ToC brought into play. Koleros and Mayne (2019) discuss handling this situation.
Behaviour-Change ToC Models
Most interventions involve changing the behaviour of one or more actor groups, so behaviour change needs to be a key focus (Earl, Carden, & Smutylo, 2001). The detailed ToCs needed for CA can be based on the generic behaviour-change ToC model, shown in Figure 1 (Mayne, 2017, 2018). The model is a synthesis of social science research on behaviour change by Michie, Atkins, and West (2014), which argues that behaviour (B) is changed through the interaction of three necessary elements: capabilities (C), opportunities (O), and motivation (M). Hence the name: the COM-B model.
The COM-B ToC model has proven very useful for building robust nested ToCs and for undertaking contribution analysis, because it is quite intuitive and is based on a synthesis of empirical evidence on behaviour change. It is especially helpful in explaining how behaviour changes were brought about. That is, the COM-B model is a model of the mechanisms4 at work in bringing about behaviour change and thus provides the basis for inferring causality about behaviour change.
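The COM-B mechanism can be expressed directly in code: behaviour change is generated only when capability, opportunity, and motivation are all present, so each element acts as a necessary condition on the behaviour-change link. A minimal sketch (the boolean simplification is mine; the model itself treats these as interacting conditions, not 0–1 switches):

```python
def behaviour_change(capability, opportunity, motivation):
    """COM-B: behaviour (B) changes only through the interaction of all three
    necessary elements; if any one is absent, no change is generated."""
    return capability and opportunity and motivation

# Each element is necessary: removing any one blocks the behaviour-change link.
print(behaviour_change(True, True, True))   # True
print(behaviour_change(True, True, False))  # False: no motivation, no change
```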
A number of authors who have used contribution analysis in complex settings have noted, though, that it can be quite data- and analysis-demanding when one has to work with a large number of nested ToCs (Delahais & Toulemonde, 2012; Freeman, Mayhew, Matsinhe, & Nazza, 2014; Noltze et al., 2014). Schmitt and Beach (2015) make a similar note regarding process tracing, which is closely related to CA (Befani & Mayne, 2014).
Figure 1. The COM-B Theory of Change model (a pathway from capacity change, in terms of capability, opportunity, and motivation, through reach and behaviour change to direct benefits)
With that in mind, intermediate-level ToCs would be useful—more than the overview ToC but less detailed than an operational nested ToC. This is where a simplified ToC could be useful. The idea of a simplified ToC is to develop a less complex ToC in the context of a contribution analysis, especially when there may be quite a few pathways to analyze. So, for example, rather than the more detailed generic behaviour-change ToC (Figure 1), we might have, more simply, activities/outputs, behaviour change, direct benefits, and impact (Figure 2) as the pathway ToC. Figure 2 shows the essence of the pathway getting from activities/outputs to impact, explicitly identifying results that are usually straightforward to measure. The associated causal link assumptions would normally include the following:
the intended target groups were reached, and
adequate improvements in capabilities, opportunities, and motivation were achieved.
In setting out the causal link assumptions, a detailed nested ToC for the pathway is almost essential for their identification. The aim would be to have a minimum number of higher-level assumptions in the simplified ToC, perhaps arrived at by aggregating assumptions from the more detailed ToC.
Figure 2. A simplified COM-B Theory of Change (a timeline pathway from behaviour change through direct benefits to wellbeing change, with assumptions about reaching target groups and about bringing about the needed capacity change)

Figure 2 shows one model for a simplified ToC. Even simpler pathways could be developed, such as dropping the behaviour change box or the direct benefits box. Then the pathway assumptions would have to include behaviour change and direct benefits, respectively, in order to keep those in the model.
Simplified ToCs would reduce the amount of data required to carry out a contribution analysis to determine if the intervention had made a contribution or not. However, in order to understand why the intervention did or did not work, one would need to focus on the behaviour-change level. But determining why the expected behaviour changes did not come about, for instance, can be done retrospectively, asking those involved about reach and capacity change (capabilities, opportunities, and motivation).
Experience to date does suggest the need to first develop a detailed nested ToC, and then the simplified version. In this way it becomes clear what is being suppressed in the simplified ToC and needs to be kept in mind, even though the simplified ToC would actually be used in contribution analysis.
There remains in CA a desire to say something about the quantitative size of the contribution a causal factor is making. Budhwani and McDavid (2017, p. 20) talk about measuring the degree of contribution so that the CA can reach findings similar to cost-benefit analysis. This is not possible because of the nature of complex causality. There are multiple causal factors at work, and it is packages of necessary causal factors that bring about change, not any individual factor. Although others have attempted to examine the issue of estimating size effects within contribution analysis (Ton et al., 2019), CA does not, on its own, estimate the size or indeed the relative importance of the causal factors at work.
But exploring the relative importance question is possible (Mayne, 2019). There is a need, then, to carefully decide (a) which causal factors one wants to compare and (b) how one wants to interpret "importance." A variety of perspectives are possible: perceived importance, the roles played by the factors, the funds expended, and the extent of the constraints to change. All are plausible ways of assessing the relative importance of causal factors.
CA aims to result in claims about the contribution made by an intervention to observed results. A first question, then, is which results? In looking at an intervention and its ToC, it is clear that there could be a number of interesting contribution claims, namely, claims associated with any of the results along the impact pathway. Contribution claims for early results would probably be quite easily established, while more distant outcomes and impact are likely to present more of a challenge. But it would be important to identify just which contribution claims were of prime interest.
And of course, claims for more distant results need to be built on credible claims for the earlier pathway results. Hence the need to consider approaches to verifying a single causal link in a ToC. In a more complex intervention there would be several different pathways to impact, each with its own ToC. And often, it is useful to know if each of these pathways contributed to the success (or not) of the intervention. For example, in the case where actor-based ToCs have been developed for the intervention, it is of considerable interest to understand how and why the various actor groups contributed to bring about results.
Causal Inference Analysis
Key to credible contribution claims are credible arguments inferring causality—the logic and evidence used to justify a causal link—which would be used in Step 4 to assess the contribution story to date. An evidence-based contribution claim has two parts:
1. The intervention (or a component) contributed to an observed change—it played a positive role in bringing about change, and
2. It did so in the following manner …
Showing that the intervention was a contributory cause accomplishes both of these aims: the intervention is part of a causal package that was sufficient to bring about the change—which explains how the change was brought about (2)—and the intervention was a necessary part of the causal package (1), and hence a causal factor in bringing about the change. Process tracing is a useful alternative way for getting at (1), but it does not provide the information needed for (2).
Befani and Mayne (2014) and Befani and Stedman-Bryce (2016) have noted correctly that while CA seeks to verify the causal links in a theory of change, previous discussions (Mayne, 2011, 2012) do not say much about how to go about doing the verification. Yet this is a key step and more of a challenge when examining complex interventions. This article looks more closely at making these causal inferences and builds on the approach of process tracing and related insights on causality, arguing the need for solid causal narratives.
In the traditional CA approach, showing that the intervention was a
contributory cause and hence made a difference—that is, contributed to an observed
impact and how it did so—requires demonstrating that
• the theory of change (the causal package) was sufficient, and
• the intervention activities were an essential part of the causal package,
and hence a causal factor in bringing about change.
Suciency is demonstrated by showing that each causal link in the theory of
change (ToC) with its assumptions was realized. Suciency was always a weak
point in the argument, and I would now say that data showing the ToC was real-
ized is not enough. One needs in addition to build credible causal narratives for
the ToC. is picks up a key point made by Pearl and Mackenzie (2018 ) in their
© 2019 CJPE 34.2, 171–191 doi: 10.3138/cjpe.68004
Revisiting Contribution Analysis 183
The Book of Why, namely that statistics alone are not enough to infer causality;
one also needs good explanatory causal theory. As mentioned previously, good
ToCs are often based on some social science theory and not just the thoughts of
a program team, so that they can provide the basis for solid causal explanations.
What is needed is good causal reasoning (Davidson, 2013).
Let me rst note again that CA is expected to be done on a reasonably robust
ToC, and many of the criteria for robustness are indeed criteria for inferring
causality, forming the elements of a credible causal narrative. Table 1 sets out
four tools for inferring causality, all of which are embedded in a robust ToC and
described in more detail below.
The evidence tools in Table 1 can be used to build credible causal narratives.
Causal narratives provide the argument and evidence related to how the causal
factors at work played a positive role in bringing about change. They explain
how a causal link worked, or the causal mechanisms at play. In Table 1, the
“Robust ToC #” values are references to the robustness criteria in Mayne (2017).
Causal Inference Evidence Tools
Checking that Change Occurred
1. Verifying the ToC. With a robust ToC, verifying that the pathway results and
associated assumptions were realized lays the basis for the plausibility of a
contribution claim. As Weiss (1995, p. 72) argues, “Tracking the micro-stages of the
effects as they evolve makes it more plausible that the results are due to program
activities and not to outside events or artifacts of the evaluation, and that the
results generalize to other programs of the same type.” Verifying the ToC provides
the empirical evidence on which causal narratives are built. If aspects of the ToC
cannot be verified, then causal claims cannot be made about those aspects.
The next three tools are hoop tests used in process tracing. If the verified ToC
does not reflect them, then causality is unlikely. However, confirming these three
tests does not confirm causality, as there may be other causal factors at work.
Hoop Tests for Confirming Plausibility
2. Logical and plausible time sequence of results and assumptions. The evidence
sought here is that
• the results along a pathway were realized in a logical time sequence (i.e.,
cause preceded effect along the causal chain);
• the assumptions for each causal link were realized after the preceding
result, i.e., were pre-events and conditions for the subsequent result; and
• the timing of when the results were realized was plausible and consistent
with the ToC timeline.
This may seem like an obvious criterion, but in practice it can prove quite useful.
Too often, for example, ToCs do not have a timeline and hence the third
component of the criterion cannot be applied. It can easily be the case that a result
Table 1. Evidence for inferring causality

Tools | References | Comment

Checking that change occurred
1. Verifying pathway and assumptions, including at-risk assumptions | Robust ToC #9; Contribution Analysis; Weiss (1995) | Are the pathway and assumptions verified? This forms the evidence base for making the contribution claim. Needed to explain causality.

Hoop tests for confirming plausibility
2. Logical and plausible time sequence | Robust ToC #3: timing; Robust ToC #4: logical; Davidson (2009) | Link: Are assumptions pre-events and conditions for the result? ToC: Is the sequence of results plausible? Is the timing of the occurrence of the results plausible?
3. Reasonable effort | Robust ToC #11: level of effort; Davidson (2009) | Is it reasonable that the level of effort expended will deliver the results?
4. Expect-to-see effects realized | Process tracing: hoop tests | If effects are not seen, causality is very unlikely. But the effects might have other causes.

Building the causal narrative
5. Causal packages are sufficient | Robust ToC #10: a sufficient set; Robust ToC #5: necessary or likely necessary | Is it reasonable that the collection of causal package factors is sufficient to bring about the result? Are the mechanisms at work identified? Have the barriers to change been addressed?

Confirming a causal factor
6. Some unique effects observed | Process tracing: smoking gun tests | Result only possible if the intervention is the cause.
was not realized because not enough time has elapsed. Conversely, if a result has
indeed appeared but earlier than expected, it may suggest something other than
the intervention at work. Furthermore, confirming that the assumptions were
realized in a timely fashion means that the basis for the causal narrative for the
link is sound. Taken together, the three points above argue that the causal link is
quite plausible.
3. Reasonable effort expended. Again this is a plausibility test. If, in implementation,
the size of the actual intervention, including the efforts of any partners,
appears quite small in comparison to the observed results, then a contribution
claim may seem implausible (Davidson, 2009).
4. Expect-to-see effects realized. This is the process-tracing hoop test (Punton &
Welle, 2015) whereby if the causal link has worked, then there are effects, often
secondary effects, that one would expect to see. If those effects are not realized,
then the causal link is doubtful.
Building the Causal Narrative
5. Causal packages are sufficient. This is the essential tool in building the causal
narrative. We are trying to build an argument that the causal link between one
result (R1) along an impact pathway—the cause—and its subsequent result (R2)—
the effect—worked. We would have shown that R1 and R2 did occur, as did the
associated causal link assumptions. The set of assumptions in particular sets out
the framework for the argument, for the causal “story.” In bringing about R2, one
can imagine various constraints or barriers to change. The assumptions are events
and conditions that are expected to overcome these barriers. This can be a useful
way to develop the causal narrative.
A related approach is using causal mechanisms. Realist evaluation (Westhorp,
2014) argues that causality can be inferred by identifying and explaining the
causal mechanisms at work and the contexts in which the intervention occurs.
In a ToC approach, the context and the mechanisms at work are captured by the
causal link assumptions.
Schmitt and Beach (2015) have claimed that ToCs “hide” the mechanism at
work. While the realist causal mechanisms are not explicit in many ToC models
and hence in CA, CA uses a different paradigm to conceptualize causality, namely
causal packages. Further, the causal mechanisms can often be readily identified
by working through the causal package at work. Delahais and Toulemonde (2017,
p. 385), in discussing their contribution analysis work, make this link:
In the process of translating the “framing pathway” into a “framing mechanism,” we may
consider that we have just refined the description of the causal package, i.e. deepened
the exercise without changing its nature. We have often had this impression while
reading illustrations of the concept of mechanism .... In fact the very change in the nature of
the exercise occurs when the mechanism is given a name and referred to the literature,
i.e. when we assume that it remains the same in different contexts and then acquires its
generalization potential.
That is, the advantage of using causal mechanisms is that they refer to more
general causal forces at work, as referred to in the literature, and hence provide
common-sense logical explanations of causality. Let me note in particular that the
social science research–based COM-B ToC model explicitly identifies the causal
mechanisms at work, namely capability, opportunity, and motivation to bring
about behaviour change.
The bottom line is to set out a sound and valid argument—a causal narrative—
of why the causal package at work did indeed contribute to R2.
Conrming a Causal Factor
6. Unique effects realized. This is the process-tracing smoking gun test for a causal
factor. Unique effects with respect to a specific causal factor are effects that can
be realized only if the causal factor was indeed part of the causal package
bringing about change (Befani & Mayne, 2014; Punton & Barnett, 2018). If they are
observed, then this is strong evidence that the causal factor played a positive
causal role in bringing about R2. But note that this test does not provide evidence
of how the change was brought about, that is, what the other factors in the causal
package are.
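The asymmetric logic of these process-tracing tests (a failed hoop test rules a causal factor out; an observed smoking gun strongly confirms it; passing hoop tests alone leaves the claim merely plausible) can be sketched as a simple decision rule. This is an illustrative sketch of my own, not part of the CA procedure; the function name and verdict labels are invented for the example.

```python
# Illustrative sketch only (not from the article): encoding the asymmetry of
# process-tracing tests. Hoop evidence is necessary but not sufficient for a
# causal-factor claim; smoking-gun evidence is sufficient but not necessary.

def assess_causal_factor(hoop_tests_passed: bool,
                         smoking_gun_observed: bool) -> str:
    """Return an illustrative verdict on a hypothesized causal factor."""
    if not hoop_tests_passed:
        # Failing a hoop test (e.g., the effect preceded the cause)
        # disconfirms the factor.
        return "rejected"
    if smoking_gun_observed:
        # A unique effect, realizable only if the factor was part of the
        # causal package, strongly supports the contribution claim.
        return "strongly supported"
    # Hoop tests alone establish plausibility, not confirmation.
    return "plausible but not confirmed"

print(assess_causal_factor(True, False))  # plausible but not confirmed
```

Note that, as in the text, even a "strongly supported" verdict says nothing about the other factors in the causal package; that is what the causal narrative must supply.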
Generalizing CA Findings
Contribution analysis shows that an intervention in a specific location contributed
to an observed result and how it did so. What might be said about the
intervention implemented in a different location? This is the issue of external validity or
generalization of CA findings.
If the intervention ToC worked in the new location, would it play the same
positive causal role there? To conclude this, one would need to show that it was
likely that in the new location
• the intervention could deliver the same (or quite similar) outputs,
• the causal link assumptions would be realized, and
• the causal narratives would remain valid.
The likelihood of each of these could be assessed. To the extent that higher-level
causal assumptions have been used in the ToC, such as when causal mechanisms
have been identified, then the argument that the causal narratives remain valid
will be stronger. In the nutrition example mentioned earlier, a key assumption
needed was that mothers control food distribution in the household (an
assumption that was missed initially). However, the more general causal assumption is
that there is a need to educate the person(s) in power in the household—which
might not be the mother—a higher-level assumption.
One would need to carefully assess the conditions outlined above to produce a
finding about the generalizability of an intervention. Cartwright and Hardie (2012,
p. 7) argue that generalizing follows if, in a new location, the intervention plays the
same causal role and the support factors (the causal link assumptions) are in place.
This is the same rationale as the CA argument above, using slightly different terms.
Clearly, if there is something unique about the original location reflected in some
causal link assumptions, then generalizing is unlikely to be possible.
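The joint assessment of the three conditions above can be illustrated with a toy calculation. This sketch is my own construction, not part of CA; the numeric scores and the weakest-link rule are assumptions made purely for illustration.

```python
# Toy illustration (an assumption, not from the article): score the assessed
# likelihood of each generalization condition in [0, 1] and treat the overall
# case as no stronger than its weakest condition.

def generalization_plausibility(similar_outputs: float,
                                assumptions_realized: float,
                                narratives_valid: float) -> float:
    """Overall plausibility that CA findings generalize to a new location."""
    scores = (similar_outputs, assumptions_realized, narratives_valid)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("scores must lie in [0, 1]")
    # Weakest-link rule: one unique feature of the original location
    # (a causal link assumption that fails elsewhere) undermines the
    # whole generalization claim.
    return min(scores)

print(generalization_plausibility(0.9, 0.4, 0.8))  # 0.4
```

The weakest-link rule mirrors the closing point of this section: a single location-specific assumption is enough to block generalization, however strong the other conditions are.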
Conclusions
Contribution analysis was set out some years ago as a set of general steps to take
in addressing causality. As such, over a number of years it led to a variety of ways
of operationalizing the concepts and principles, with numerous suggestions
being made for applying CA in specific cases. This has all been for the good. There
have also been numerous articles raising legitimate questions about CA and its
application. In this article I have tried to look back at how CA has been applied
and consider the concerns that have been expressed.
In the last few years, I have seen a significant rise in applications of CA,
particularly as applied to complex settings, which are becoming more common. And
indeed, given that it assumes that multiple causal factors and interventions can
play a contributory role, it can be well suited to address causality in those settings,
especially using nested ToCs. I expect to see more and more applications of CA
in a variety of settings. But there is a need to be clear about what contribution
analysis can and cannot do.
Contribution analysis is not a quick-and-dirty approach to addressing causality.
On the downside, (1) it often does require a substantial amount of data, along with
rigorous thinking; (2) it requires reasonably robust theories of change; and (3) it
cannot determine how much of an outcome result can be attributed to an intervention.
On the other hand, it offers several advantages: (1) it can be used to make
causal inferences when experimental and quasi-experimental designs are not
possible, or not needed or desired; (2) it explores why and how an intervention has
influenced change, and for whom; (3) it can be part of a mixed-method approach
to an evaluation, such as when using comparison groups to assess how much
change has occurred; (4) it allows for making causal inferences about the
intervention without necessarily examining external causal factors; and (5) it addresses
cases where there are numerous causal factors at work by assessing contributory
causes leading to credible contribution claims.
Overall, CA has been found to be a practical way to explore causal relationships
and to better understand how changes have been brought about, and for whom.
Notes
1 For a discussion on different perspectives on causality, see Befani’s Appendix in Stern
et al. (2012).
2 “Likely necessary” allows for a probabilistic interpretation of an assumption (Mahoney,
2008, p. 421). See Mayne (2015, p. 126) for a discussion.
3 That is, they are an Insufficient but Necessary part of a condition that is itself
Unnecessary but Sufficient for the occurrence of the effect (Mackie, 1974). See Mayne (2012,
p. 276) for a discussion of these INUS conditions.
4 Realist evaluations use the concept of mechanisms to infer causality (Pawson & Tilley,
1997; Westhorp, 2014).
References
Befani, B., & Mayne, J. (2014). Process tracing and contribution analysis: A combined
approach to generative causal inference for impact evaluation. IDS Bulletin, 45(6),
Befani, B., & Stedman-Bryce, G. (2016). Process tracing and Bayesian updating for impact
evaluation. Evaluation, 23 (1), 42–60.
Buckley, A. P. (2016). Using Contribution Analysis to evaluate small & medium enterprise
support policy. Evaluation, 22 (2), 129–148.
Budhwani, S., & McDavid, J. C. (2017). Contribution analysis: Theoretical and practical
challenges and prospects for evaluators. Canadian Journal of Program Evaluation,
32(1), 1–24.
Buregeya, J. M., Brousselle, A., Nour, K., & Loignon, C. (2017). Comment évaluer les effets
des évaluations d’impact sur la santé : le potentiel de l’analyse de contribution.
Canadian Journal of Program Evaluation, 32(1), 25–45.
Byrne, D. (2013). Evaluating complex social interventions in a complex world. Evaluation,
19 (3), 217–228.
Cartwright, N., & Hardie, J. (2012). Evidence-based policy: Doing it better. A practical guide
to predicting if a policy will work for you. Oxford, England: Oxford University Press.
Davidson, E. J. (2009). Causation inference: Nuts and bolts. A Mini Workshop for the ANZEA
Wellington branch. Wellington, New Zealand. Retrieved from http://realevaluation.
Davidson, E. J. (2013). Understanding causes and outcomes of impacts. BetterEvaluation
coffee break webinars. Retrieved from
coffee_break_webinars_2013#webinarPart5
Davies, R. (2012). Criteria for assessing the evaluability of Theories of Change. Rick on the
road [blog]. Retrieved from
Delahais, T., & Toulemonde, J. (2012). Applying contribution analysis: Lessons from
five years of real life experience. Evaluation, 18(3), 281–293.
Delahais, T., & Toulemonde, J. (2017). Making rigorous causal claims in a real-life
context: Has research contributed to sustainable forest management? Evaluation, 23(4),
Douthwaite, B., Mayne, J., McDougall, C., & Paz-Ybarnegaray, R. (2017). Evaluating
complex interventions: A theory-driven realist-informed approach. Evaluation, 23(3),
Downes, A., Novicki, E., & Howard, J. (2018). Using the contribution analysis approach
to evaluate science impact: A case study of the National Institute for Occupational
Safety and Health. American Journal of Evaluation, 40(2), 177–189.
https://doi.org/10.1177/1098214018767046. Medline:30518992
Dybdal, L., Nielsen, S. B., & Lemire, S. (2010). Contribution analysis applied: Reflections
on scope and method. Canadian Journal of Program Evaluation, 25(2), 29–57.
Earl, S., Carden, F., & Smutylo, T. (2001). Outcome mapping. Ottawa, ON: International
Development Research Centre.
Freeman, T., Mayhew, S., Matsinhe, C., & Nazza, D. A. (2014). Evaluation of the Danish
strategy for the promotion of sexual and reproductive health and rights 2006–2013—
Pathways to change in SRHR: Synthesis report. Ministry of Foreign Affairs of Denmark.
© 2019 CJPE 34.2, 171–191 doi: 10.3138/cjpe.68004
Revisiting Contribution Analysis 189
Retrieved from
Gates, E., & Dyson, L. (2017). Implications of the changing conversation about cau-
sality for evaluators. American Journal of Evaluation, 38(1), 29–46. https://doi.
Gerrits, L., & Verweij, S. (2015). Taking stock of complexity in evaluation: A discus-
sion of three recent publications. Evaluation, 21(4), 481–491. https://doi.
Kane, R., Levine, C., Orians, C., & Reinelt, C. (2017). Contribution analysis in policy work:
Assessing advocacy’s influence. Centre for Evaluation Innovation. Retrieved from
Koleros, A., & Mayne, J. (2019). Using actor-based theories of change to conduct robust
contribution analysis in complex settings. Canadian Journal of Program Evaluation,
33 (3), 292–315.
Lemire, S. T., Nielsen, S. B., & Dybdal, L. (2012). Making contribution analysis work: A
practical framework for handling influencing factors and alternative explanations.
Evaluation, 18(3), 294–309.
Mackie, J. L. (1974). The cement of the universe: A study of causation. Oxford, England:
Oxford University Press.
Mahoney, J. (2008). Toward a unified theory of causality. Comparative Political Studies,
41(4/5), 412–436.
Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance
measures sensibly. Canadian Journal of Program Evaluation, 16(1), 1–24. Retrieved
from files/cjpe-entries/16-1-001.pdf
Mayne, J. (2011). Contribution analysis: Addressing cause and effect. In R. Schwartz, K.
Forss, & M. Marra (Eds.), Evaluating the complex (pp. 53–96). New Brunswick, NJ:
Transaction Publishers.
Mayne, J. (2012). Contribution analysis: Coming of age? Evaluation, 18 (3), 270–280.
Mayne, J. (2015). Useful theory of change models. Canadian Journal of Program Evaluation,
30 (2), 119–142.
Mayne, J. (2017). Theory of change analysis: Building robust theories of change. Canadian
Journal of Program Evaluation, 32(2), 155–173.
Mayne, J. (2018). The COM-B theory of change model. Retrieved from The_COMB_ToC_Model4
Mayne, J. (2019). Assessing the relative importance of causal factors. CDI Practice Paper
21. Brighton, England: IDS. Retrieved from
Mayne, J. (Forthcoming). Realistic commissioning of impact evaluations: Getting what you
ask for? In A. Paulson & M. Palenberg (Eds.), Evaluation and the pursuit of impact.
Abingdon, England: Taylor and Francis.
Mayne, J., & Johnson, N. (2015). Using theories of change in the CGIAR Research Program
on Agriculture for Nutrition and Health. Evaluation, 21(4), 407–428. https://doi.
Michie, S., Atkins, L., & West, R. (2014). The behaviour change wheel: A guide to designing
interventions. London, England: Silverback Publishing. Retrieved from http://www.
Noltze, M., Gaisbauer, F., Schwedersky, T., & Krapp, S. (2014). Contribution analysis as
an evaluation strategy in the context of a sector-wide approach: Performance-based
health financing in Rwanda. African Evaluation Journal, 2(1), 8 pp. Retrieved from
Palladium. (2015). Independent evaluation of the Security Sector Accountability and Police
Reform Programme: Final evaluation report. DFID. Retrieved from http://r4d.d
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, England: SAGE.
Paz-Ybarnegaray, R., & Douthwaite, B. (2016). Outcome evidencing: A method for
enabling and evaluating program intervention in complex systems. American Journal of
Evaluation, 38(2), 275–293.
Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. New
York, NY: Basic Books.
Punton, M., & Barnett, C. (2018). Contribution analysis and Bayesian confidence updating:
A brief introduction. Itad briefing paper. Retrieved from
Punton, M., & Welle, K. (2015). Straws-in-the-wind, hoops and smoking guns: What can
process tracing offer to impact evaluation? CDI Practice Paper, No. 10. Centre for
Development Impact. Retrieved from
Riley, B. L., Kernoghan, A., Stockton, L., Montague, S., Yessis, J., & Willis, C. D. (2018).
Using contribution analysis to evaluate the impacts of research on policy: Getting to
good enough. Research Evaluation, 27(1), 16–27. https://doi.org/10.1093/reseval/
rvx037. Retrieved from
Schmitt, J., & Beach, D. (2015). The contribution of process tracing to theory-based
evaluations of complex aid instruments. Evaluation, 21(4), 429–447. https://doi.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the
range of designs and methods for impact evaluations. DFID Working Paper, 38.
London, England: DFID. Retrieved from http://r4d.d
Ton, G. (2017). Contribution analysis of a Bolivian innovation grant fund: Mixing methods
to verify relevance, efficiency and effectiveness. Journal of Development Effectiveness,
9(1), 120–143.
Ton, G., Mayne, J., Delahais, T., Morell, J., Befani, B., Apgar, M., & O’Flynn, P. (2019).
Contribution analysis and estimating the size of effects: Can we reconcile the possible
with the impossible? CDI Practice Paper, Number 20. Centre for Development Impact.
Retrieved from
estimating-size-effects-can-we-reconcile-possible-impossible
Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based
evaluation for comprehensive community initiatives for children and families. In J. P.
Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating
community initiatives: Volume 1: Concepts, methods and contexts (pp. 65–92).
Washington, DC: The Aspen Institute.
Westhorp, G. (2014). Realist impact evaluation: An introduction. Methods Lab. Retrieved
White, H. (2009). Theory-based impact evaluation: Principles and practice. Working
Paper 3. International Initiative for Impact Evaluation (3ie). Retrieved from
https:// les/2017-11/Working_Paper_3.pdf
Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Retrieved from http://www.
John Mayne is an independent advisor on public sector performance. Over the past 13
years he has focused largely on international development evaluation and results-based
management work.
... Mayne sought to address some of these challenges and discussed these in his last major publication on CA (Mayne, 2019). In the intervening years he mainly concentrated on clarifying his stance on causation and key constructs therein, including mechanisms. ...
... As such building a causal narrative around the theory of change is necessary to create a plausible contribution story. If confirmed by empirical evidence one could infer probabilistic causation (Mayne, 2019). ...
Full-text available
In this concluding article, we take stock of the diverse and stimulating contributions comprising this special issue. Using concept mapping, we identify eight evaluation themes and concepts central to John Mayne’s collective work: evaluation utilization, results-based management, organizational learning, accountability, evaluation culture, contribution analysis, theory-based evaluation, and causation. The overarching contribution story is that John’s work served to bridge the gaps between evaluation practice and theory; to promote cross-disciplinary synergies across program evaluation, performance auditing, and monitoring; and to translate central themes in evaluation into a cogent system for using evaluative information more sensibly. In so doing, John left a significant institutional and academic legacy in evaluation and in results-based management.
... A significant portion of his thinking was dedicated to the shared spaces between these practices. In example, some of his most seminal work came to prominence while exploring the shared space between evaluation and monitoring in the forms of monitoring and evaluation systems (Mayne & Rist, 2006), results-based management (Mayne, 2007), and contribution analysis (Mayne 1999(Mayne , 2001(Mayne , 2011(Mayne , 2012(Mayne , 2019. Other themes arose from an enduring preoccupation with the use of evaluative knowledge to improve decision-making and organizational effectiveness (such as organizational learning, accountability, and performance reporting and stories). ...
... Die Erkenntnisse aus den ToC-Workshops wurden durch zahlreiche qualitative Interviews trianguliert. Dadurch ermöglicht die Kontributionsanalyse eine Bewertung der Kausalität von Interventionen(Mayne, 2019). Entlang der Theorien des Wandels können, ausgehend von den In-und den Outputs der Instrumente, robuste Schlussfolgerungen über (potenzielle) Outcomes (Effektivität) und Impacts (entwicklungspolitische Wirkungen) gezogen werden. ...
Full-text available
Durch den Klimawandel entstehen insbesondere in Entwicklungsländern als typische Folgen residualer Klimarisiken zunehmend hohe Schäden und Verluste. Residuale Klimarisiken sind solche Klimarisiken, die nach Risikoreduzierung durch Anpassung und Klimaschutz verbleiben. Um die Ziele für nachhaltige Entwicklung zu erreichen, ist ein effektiver Umgang mit residualen Klimarisiken erforderlich. Bisher gibt es nur vereinzelt Evidenz zur Wirksamkeit von Instrumenten zum Umgang mit residualen Klimarisiken. Vor diesem Hintergrund schließt dieses Evaluierungsmodul die Wissens- und Evaluierungslücke zur Relevanz und Wirksamkeit der bislang angewendeten Instrumente im Umgang mit residualen Klimarisiken. Dafür wurde ein theoriebasierter Ansatz gewählt, der qualitative und quantitative Analysemethoden integriert. Die betrachteten Instrumente wurden vier Instrumentengruppen konzeptionell zugeordnet und analysiert: Drittfinanzierte Risikofinanzierung, Risikopooling, Risikovorsorge und Transformatives Risikomanagement. Übergreifend zeigen die Ergebnisse, dass die Relevanz der Instrumente stark von ihrer Konzeption und Implementierung abhängt, der Anspruch eines umfassenden Umgangs mit residualen Klimarisiken teilweise erfüllt ist und die deutsche Entwicklungszusammenarbeit bereits vielfältige Erfahrungen mit der Implementierung von Instrumenten zum Umgang mit residualen Klimarisiken hat. Sie erweisen sich als effektiv, wenn Eingangshürden überwunden werden. Basierend auf den Ergebnissen spricht die Evaluierung Empfehlungen in Bezug auf den Instrumenteneinsatz, die Bedürfnisorientierung, umfassendes Risikomanagement, die Portfolioausweitung und die Wirkungsausrichtung aus.
... The findings from the ToC workshops were triangulated through numerous qualitative interviews. The contribution analysis thus enables analysis of the causality of interventions (Mayne, 2019). For the Theories of Change, robust conclusions can then be drawn concerning (potential) outcomes and impacts, based on the inputs and outputs of the instruments. ...
Full-text available
Climate change is causing increasingly high losses and damages, particularly in developing countries. Typically, this is a consequence of residual climate risks. 'Residual climate risks' are those climate risks that remain after risks have been reduced through mitigation and adaptation. To achieve the Sustainable Development Goals, residual climate risks need to be managed effectively. So far, only sporadic evidence is available on the effectiveness of instruments for managing these risks. Against this background, the present evaluation module report fills the knowledge and evaluation gap on the relevance and effectiveness of the instruments applied so far to manage residual climate risks. For this purpose a theory-based approach was selected that integrates qualitative and quantitative methods of analysis. The instruments considered were assigned to four instrument groups, and then analysed: third-party risk finance, risk pooling, risk preparedness and transformative risk management. Overall, the findings show that the relevance of the instruments depends strongly on their design and implementation. They also demonstrate that the benchmark of comprehensive residual climate risk management is partly met, and that German development cooperation already has a wide range of experience with implementing instruments for residual climate risk management. These prove to be effective, once the initial obstacles are overcome. Based on the findings, the evaluation makes recommendations concerning the use of instruments, needs orientation, comprehensive risk management, portfolio expansion and results orientation.
This paper focuses on three enduring themes in John Mayne’s work. They are causality; balancing learning and accountability as meta-objectives for evaluations; and program complexity. These themes are all central in his development and elaboration of contribution analysis. Although his work was aimed at practitioners, over time, the sophistication of his approach to evaluation raises challenges for practitioners, particularly given the structure of the evaluation field. The paper concludes with a suggestion to make contribution analysis more accessible, taking advantage of the work done by contributors to the Checklist Project at the University of Western Michigan.
This article discusses differences and similarities between (methodological) rules of thumb of contribution analysis, realist evaluation, and the policy-scientific approach to (program) evaluations. John Mayne’s work and his operating procedures are presented and structured. One of the conclusions is that the three approaches form a ‘family.’ This ‘family’ can substantially contribute to at least six of the 10 “declarations of the Program Theory Manifesto” presented in 2019.
This article is a tribute to John Mayne’s work on Contribution Analysis. It focuses on the causal claims Contribution Analysis aims to address, and on how these have evolved since the approach was first published by John in 1999. It first sets out four types of causality with relevance for Contribution Analysis: counterfactual, generative, INUS, and probabilistic causation. It then describes how John integrated the INUS condition and probabilistic elements into the Contribution Analysis approach, followed by how John’s thinking evolved regarding the question of whether the approach could—and should—also address counterfactual questions. The article concludes with observations on how Contribution Analysis can flexibly integrate elements from different causality types.
We discuss how explicit thinking about a variety of causal pathways, informed by a dynamic systems lens that is responsible for exacerbating and diminishing inequities as well as different types of complexities (related to program pathways), will further help develop theory-driven evaluation approaches such as Contribution Analysis. We argue that contribution claims associated with interventions focused on addressing inequities need to consider the multiple types of causal pathways by which a program can help reduce inequities.
The paper addresses the challenges of evaluating the impact of business coaching programmes with a varied portfolio of firms working across sectors and countries. Observable indicators of changes in business management practices are rarely relevant across sectors. Therefore, evaluators need to rely on the perceptions of the managers who have received coaching. We designed an online survey to compare the effectiveness of business coaching within a portfolio and across programmes. The survey was applied to the portfolio of two private sector development programmes. We derived so-called ‘contribution scores’ from individuals’ perceptions of how business management practices had changed and their perceptions of the role of business coaching in bringing about these changes. The survey included some features to reflect on response reliability. We show that the tool seems fairly reliable for comparative analysis and helped to identify the types of firms and contexts where business coaching support appears more effective.
The use of theories of change (ToCs) is a hallmark of sound evaluation practice. As interventions have become more complex, the development of ToCs that adequately unpack this complexity has become more challenging. Equally important is the development of evaluable ToCs, necessary for conducting robust theory-based evaluation approaches such as contribution analysis (CA). This article explores one approach to tackling these challenges through the use of nested actor-based ToCs using the case of an impact evaluation of a complex police-reform program in the Democratic Republic of Congo, describing how evaluable nested actor-based ToCs were built to structure the evaluation.
While contribution analysis provides a step-by-step approach to verify whether and why an intervention is a contributory factor to development impact, most contribution analysis studies do not quantify the 'share of contribution' that can be attributed to a particular support intervention. Commissioners of evaluations, however, often want to understand the size or importance of a contribution, not least for accountability purposes. The easy (and not necessarily incorrect) response to this question would be to say that it is impossible to do so. However, in this CDI Practice Paper written by Giel Ton, John Mayne, Thomas Delahais, Jonny Morell, Barbara Befani, Marina Apgar and Peter O'Flynn, we explore how contribution analysis can be stretched so that it can give some sense of the importance of a contribution in a quantitative manner. The first part of the paper introduces the approach of contribution analysis and presents ideas to capture the change process in theories of change and system maps. The second part presents research design elements that include ranking or quantitative measures of impact in the verification of the theory of change and resulting contribution story.
Assessing societal impacts of research is more difficult than assessing advances in knowledge. Methods to evaluate research impact on policy processes and outcomes are especially underdeveloped, and are needed to optimize the influence of research on policy for addressing complex issues such as chronic diseases. Contribution analysis (CA), a theory-based approach to evaluation, holds promise under these conditions of complexity. Yet applications of CA for this purpose are limited, and methods are needed to strengthen contribution claims and ensure CA is practical to implement. This article reports the experience of a public health research center in Canada that applied CA to evaluate the impacts of its research on policy changes. The main goal was to experiment with methods that were relevant to CA objectives, sufficiently rigorous for making credible claims, and feasible. Methods were 'good enough' if they achieved all three attributes. Three cases on government policy in tobacco control were examined: creation of smoke-free multiunit dwellings, creation of smoke-free outdoor spaces, and regulation of flavored tobacco products. Getting to 'good enough' required careful selection of nested theories of change; strategic use of social science theories, as well as quantitative and qualitative data from diverse sources; and complementary methods to assemble and analyze evidence for testing the nested theories of change. Some methods reinforced existing good practice standards for CA, and others were adaptations or extensions of them. Our experience may inform efforts to influence policy with research, evaluate research impacts on policy using CA, and apply CA more broadly.
The move to Sustainable Development Goals in 2015 reflects a wider shift towards more multifaceted and complex ambitions in international development. This trend poses new challenges to measuring impact. For example, how do we measure outcomes such as empowerment, or attribute policy changes to specific advocacy initiatives? The evaluation community is increasingly recognising the limits of classic impact evaluation methodologies based on counterfactual perspectives of causality (for example, randomised controlled trials), implying the need for methodological innovation in the field. Process tracing is a qualitative method that uses probability tests to assess the strength of evidence for specified causal relationships, within a single-case design and without a control group. It offers the potential to evaluate impact (including in ex post designs) through establishing confidence in how and why an effect occurred. This CDI Practice Paper explains the methodological and theoretical foundations of process tracing, and discusses its potential application in international development impact evaluations. The paper draws on two early applications of process tracing for assessing impact in international development interventions: Oxfam Great Britain (GB)'s contribution to advancing universal health care in Ghana, and the impact of the Hunger and Nutrition Commitment Index (HANCI) on policy change in Tanzania.
Models for theories of change vary widely, as does how they are used. What constitutes a good or robust theory of change has not been discussed much. This article sets out and discusses criteria for robust theories of change, as well as how these criteria can be used to undertake a rigorous assessment of a theory of change. A solid analysis of a theory of change can be extremely useful, both for designing or assessing the designs of an intervention and for the design of monitoring regimes and evaluations. The article concludes with a discussion about carrying out a theory of change analysis and an example.
This article reflects on an evaluation commissioned by the Centre for International Forestry Research, an international research centre working on tropical forests. In the Congo basin, it took from 10 to 20 years for research work to influence the sustainability of forest management through a complex web of interactions between timber companies, national governments, international organizations, development agencies, NGOs, and consultancies. By applying the contribution analysis approach, the evaluation was able to trace several causal pathways that percolated through this web of interactions and resulted in a number of contributions that were always indirect and marginal, but sometimes necessary. The article discusses how contribution claims were inferred from evidence, what the underlying logic of causal arguments was, how some contributions could be qualified as necessary ones, and how far the evaluation went on the way to generalization. The discussion bridges contribution analysis with process tracing and realist evaluation.
Interest from Congress, executive branch leadership, and various other stakeholders in greater accountability in government continues to gain momentum today with government-wide efforts. However, measuring the impact of research programs has proven particularly difficult: cause-and-effect linkages between research findings and changes in morbidity and mortality are difficult to prove. To address this challenge, National Institute for Occupational Safety and Health program evaluators used a modified version of contribution analysis (CA) to evaluate two research programs. CA proved to be a useful framework for assessing research impact, and both programs received valuable, actionable feedback. Although there is room to further refine our approach, this was a promising step toward moving beyond bibliometrics to more robust assessment of research impact.
There is a growing recognition that programs that seek to change people’s lives are intervening in complex systems, which puts a particular set of requirements on program monitoring and evaluation (M&E). Developing complexity-aware M&E systems within existing organizations is difficult because they challenge traditional orthodoxy. Little has been written about the practical experience of doing so. This article describes the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems. We outline the design and methods used, including trend lines, panel data, after-action reviews, building and testing theories of change, outcome evidencing, and realist synthesis. We identify and describe a set of design principles for developing complexity-aware M&E. Finally, we discuss important lessons and recommendations for other programs facing similar challenges. These include developing evaluation designs that meet both learning and accountability requirements; making evaluation part of a program’s overall approach to achieving impact; and ensuring evaluation cumulatively builds useful theory as to how different types of program trigger change in different contexts.
Health impact assessments (HIA) allow the potential impact of non-health-related actions (policy, project, program) on health to be analyzed. For example, municipal projects for urban renewal or urban planning have an impact on health drivers relating to the built environment, and thus on public health. In the province of Quebec, the government and affiliated organizations have to comment on potential impacts; they use HIAs to ensure health is factored into public actions. We know little about HIAs' capacity to influence public policies; every policy and program is different and is often implemented only once, greatly complicating evaluation. Our goal is to analyze the potential of contribution analysis for evaluating the impact of HIAs at the municipal level. To this end, we present the HIA process as implemented in Montérégie, identify evaluation challenges, and put forth, through contribution analysis, an evaluation strategy to analyze effects. © 2017 Canadian Journal of Program Evaluation/La Revue canadienne d'évaluation de programme.
Contribution analysis (CA) is a theory-based approach that has become widely used in recent years to conduct defensible evaluations of interventions for which determining attribution using existing methodologies can be problematic. This critical review of the literature explores contribution analysis in detail, discussing its methods, the evolution of its epistemological underpinnings for establishing causality, and some methodological challenges that arise when CA is applied in practice. The study highlights potential adaptations to CA that can improve rigour, and describes areas where further work can strengthen this useful evaluation approach.