Evaluation 18(3): 270–280
© The Author(s) 2012
DOI: 10.1177/1356389012451663
evi.sagepub.com
Contribution analysis: Coming of age?
John Mayne
Independent advisor, Canada
Abstract
In this introductory article, a brief history of and introduction to contribution analysis is provided to set the stage for the articles that follow. At the heart of contribution analysis is the aim to be
able to make credible causal claims about the contribution an intervention is making to observed
results. The key role that theories of change play is noted, and what a useful theory of change
ought to contain is discussed. The article then makes a link between the philosophical discussions
on causality and contribution analysis through a discussion of contributory causes. It is argued
that such causes, which on their own are neither necessary nor sufficient, represent the kind of
contribution role that many interventions play: where there are a number of other influencing
events and conditions at work in addition to the intervention of interest. Contribution analysis is
an approach to confirming that an intervention is a contributory cause.
Keywords
causality, causal packages, contribution analysis, contributory causes, theories of change
Introduction
This Special Issue focuses on contribution analysis (CA), a theory-based approach to evaluation
aimed at making credible causal claims about interventions and their results. Theory-based
approaches in evaluation have been discussed for many years (see Weiss, 1997a; Stame, 2004;
Rogers, 2007; White, 2009; Funnell and Rogers, 2011) and much has been written. Blamey and
Mackenzie (2007) make the useful distinction in theory-based approaches between realist evaluations (Pawson and Tilley, 1997) and those approaches that develop an explicit theory of change.
The latter include Chen’s (1990) theory-driven evaluation, Weiss’s (1995, 2000) theory-based
evaluation and Mayne’s (2001, 2008, 2011) contribution analysis.
One result of the widespread interest in theory-based approaches is that there is no agreement on the terms used, or even on some of the concepts. Nevertheless, there is consistency on the value of
theory-based approaches. They may be best thought of as a logic of enquiry for explaining
interventions that can complement and be used in combination with other designs and data collection
Corresponding author:
John Mayne, 654 Sherbourne Rd, Ottawa, ON K2A 3H3, Canada.
Email: john.mayne@rogers.com
techniques. Coryn et al. (2011) review practice with theory-based evaluation approaches over the past
decade, and White and Phillips (2012) review a number of ‘small n’ approaches to evaluation.
Most theory-based approaches rely on developing a theory of change, a logical model for an intervention showing a results chain of how outputs are expected to lead to a sequence of outcomes,¹ identifying successive levels of desired effects. They seek to show how the intervention is expected to work or
make a difference. The theory of change is usually developed based on initial policy intentions, informed
then by a range of stakeholder views and information sources, including prior evaluations and research.
The theory of change is then verified to the extent that it matches what is observed to have happened.
CA lies within these theory-of-change approaches. I introduced the approach in 2001 (Mayne, 2001) and amplified it more recently (Mayne, 2011); as noted in the 2011 publication, published applications of CA were few, despite many references to the approach. Interest in CA was evident at the 2010 EES Conference in Prague, where a workshop on CA was given and several
papers presented (Lemire, 2010; Toulemonde, 2010; Wimbush and Mulherin, 2010). Subsequently,
the possibility of this Special Issue was raised.
I first discussed CA in the context of results monitoring systems (Mayne, 2001). The question I
was considering was what could be said about causality of an intervention when only monitoring
data were available. CA, it seemed to me, offered a reasonable way to make evidence-based causal
claims rather than being unable to say anything about causality – or worse, leaving readers to make
their own assumptions.
As I became more familiar with the range of theory-based approaches in evaluation – such as
those by Connell and Kubisch (1998), Davidson (2006), Gysen et al. (2006), Patton (2008b), Pawson
et al. (2004), Reynolds (1998) and Weiss (1995, 1997b) – it was clear that many shared common
features with CA. What was distinctive about CA was that it offered a more systematic way to arrive at credible causal claims, and to improve the often weak evaluation practice when dealing with causality.
From an evaluation perspective, the issue was what could be done to make credible causal claims in
the absence of experimental approaches. Many evaluations seemed either to be silent on causality
or, perhaps worse, made causal claims based solely on the views of interviewees.
Contribution analysis: A quick overview
CA is based on the existence of, or more usually, the development of a postulated theory of change
for the intervention being examined. The analysis examines and tests this theory against logic and against the evidence available on the observed results and the various assumptions behind the theory of change; it also examines other influencing factors. The analysis either confirms – verifies – the postulated theory
of change or suggests revisions in the theory where the reality appears otherwise. The overall aim is
to reduce uncertainty about the contribution an intervention is making to observed results through an
increased understanding of why results did or did not occur and the roles played by the intervention
and other influencing factors.
One aspect of CA that has been noted is that it suggests a structured approach to the analysis
(White and Phillips, 2012). Six key steps are set out as shown in Table 1 (Mayne, 2001). These
steps can also be part of an iterative approach to building the logic and evidence for claiming that
the intervention made a contribution.
While Table 1 sets out a number of specific steps, as the articles in this Special Issue illustrate,
those who have made use of CA have usually modified these steps to best suit the circumstances they
face and the specific analytic methods they have used. CA is still a relatively new approach in evaluation. In my view, it is good practice that a variety of approaches are being developed and explored.
CA, then, argues that if one can verify or confirm a theory of change with empirical evidence,
and account for major external influencing factors, then it is reasonable to conclude that the
intervention in question has made a difference. The theory of change provides the basis for the
argument that the intervention is making a difference, identifies weaknesses in the argument and
hence where evidence for strengthening such claims is most needed. Causality is inferred from the
following logic and evidence:
1. The intervention is based on a reasoned theory of change: the chain of results, and the assumptions behind why the intervention is expected to work, are plausible, sound, informed by existing research and literature, and supported by key stakeholders.²
2. The activities of the intervention were implemented as outlined in the theory of change.
3. The theory of change is verified by evidence: the chain of expected results occurred, and the assumptions held.
4. External factors – context and rival explanations – influencing the intervention are assessed and are either shown not to have made a significant contribution or, if they did, their relative contribution is recognized.
Table 1. Key Steps in Contribution Analysis.
Step 1: Set out the cause-effect issue to be addressed
• Acknowledge the causal problem.
• Scope the problem: determine the specific causal question being addressed; determine the level of
confidence needed in answering the question
• Explore the nature and extent of the contribution expected
• Determine the other key influencing factors
• Assess the plausibility of the expected contribution given the intervention size and reach
Step 2: Develop the postulated theory of change and risks to it, including rival explanations
• Set out the postulated theory of change of the intervention, including identifying the risks, assumptions and links in the theory of change
• Identify the roles of the other influencing factors and rival explanations
• Determine how contested the postulated theory of change is
Step 3: Gather the existing evidence on the theory of change
• Assess the strengths and weaknesses of the links in the theory of change
• Gather the evidence that exists from previous measurement, past evaluations, and relevant
research (1) for the observed results, (2) for each of the links in the results chain, (3) for the other
influencing factors, and (4) for rival explanations.
Step 4: Assemble and assess the contribution claim, and challenges to it
• Set out the contribution ‘story’: the causal claim based on the analysis so far
• Assess the strengths and weaknesses in the postulated theory of change in light of the available
evidence, the relevance of the other influencing factors, and the evidence gathered to support rival
explanations
• If needed, refine or update the theory of change
Step 5: Seek out additional evidence
• Determine what kind of additional evidence is needed to enhance the credibility of the contribution
claim.
• Gather new evidence
Step 6: Revise and strengthen the contribution story
• Build the more credible contribution story
• Reassess its strengths and weaknesses
• Revisit Step 4
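Steps 4 to 6 are explicitly iterative. As a rough illustration of that loop, here is a minimal Python sketch; it is purely hypothetical (CA prescribes structured judgement, not an algorithm, and none of these names come from the CA literature), but it shows the cycle of assessing the story, finding the weak links, gathering evidence for them, and reassessing.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    description: str
    evidence_strength: float = 0.0  # 0 = unsupported, 1 = strongly supported

@dataclass
class ContributionStory:
    links: list[Link] = field(default_factory=list)

    def weak_links(self, needed: float) -> list[Link]:
        # Step 4: weaknesses in the story show where evidence is most needed.
        return [l for l in self.links if l.evidence_strength < needed]

def contribution_analysis(story: ContributionStory, needed: float,
                          max_rounds: int = 3) -> ContributionStory:
    for _ in range(max_rounds):
        weak = story.weak_links(needed)    # Step 4: assess the claim
        if not weak:
            break                          # story judged credible enough
        for link in weak:                  # Step 5: seek additional evidence
            link.evidence_strength += 0.4  # stand-in for real data gathering
        # Step 6: the strengthened story is reassessed on the next pass
    return story

story = contribution_analysis(
    ContributionStory([Link("outputs -> immediate outcomes", 0.6),
                       Link("immediate -> intermediate outcomes", 0.2)]),
    needed=0.7)
print([(l.description, round(l.evidence_strength, 1)) for l in story.links])
```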
In the end, a conclusion is reached – a contribution claim about whether the intervention made a
difference as expected. To summarize:
Contribution claim = a verified theory of change + other key influencing factors accounted for.
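Read as a predicate, the summary is a simple conjunction. A trivial sketch (my rendering, not a formula from the CA literature):

```python
def contribution_claim(theory_of_change_verified: bool,
                       other_factors_accounted_for: bool) -> bool:
    # Both conditions must hold: the theory of change is verified against
    # the evidence, and the other key influencing factors are accounted for.
    return theory_of_change_verified and other_factors_accounted_for
```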
What does a contribution claim look like? The result of a CA is rarely definitive proof.
Causality in relation to socio-economic interventions is usually of the probabilistic form: that the
intervention is most likely to have made a difference. CA provides an argument with evidence from which it is reasonable to conclude with confidence that the intervention has made a contribution, and why. It builds a compelling case – a warrant – about the contribution being made:
The aim is to get what Hendricks (1996) calls 'plausible association': whether a reasonable person would agree from the evidence and argument that the program has made an important contribution to the observed result. (Mayne, 2011: 62)
While the focus has often been on using CA to make causal claims, as implied in my earlier
articles, and as the articles in this Special Issue amply point out, CA also has other uses. Wimbush
et al. in this issue discuss using CA approaches as a participatory tool in planning for results and evaluation systems, to enhance learning and understanding about interventions being planned and reviewed. Leeuw in this issue discusses using CA approaches to assess the likelihood
that a proposed policy initiative will work.
The Special Issue is evidence of both a growing CA practice and a continuing discussion and
debate about making use of CA and related theory-based approaches.
Contribution rather than attribution
There has been discussion of, and possible confusion between, the terms attribution and contribution. Many authors make a useful distinction between these terms (Patton, 2008a; Stern et al., 2012). In much of the literature, attribution is used both to identify the cause of an effect and to estimate quantitatively how much of the effect is due to the intervention. The term contribution is used here in the following way: in light of the multiple factors influencing a result, has the intervention made a noticeable contribution to an observed result, and in what way? The authors in this Special Issue adhere to this usage.³
Useful theories of change
Critical to CA is the development of a well thought-out and credible theory of change. In my view,
a good theory of change goes well beyond a results chain or logical framework. I would argue that a complete theory of change is embedded in the context of the intervention, and is developed incorporating the perspectives of key stakeholders, beneficiaries and the existing relevant research.
Theories of change should include:
• a results (causal) chain showing the basic logic of the intervention;
• the underlying assumptions behind the links in the results chain;
• an elaboration of the risks to each of these links;
• identification of unintended effects; and
• identification of other key explanatory factors (rival explanations).
Figure 1 illustrates the various components of a theory of change.⁴ The theory of change is displayed deliberately as a quasi-linear process, but allows for feedback loops as needed. A 'sort of'
linear theory of change facilitates both arriving at causal claims and communicating the performance
story of the intervention. The assumption boxes can be used to reduce the number of explicit links
that might otherwise be needed in a theory of change. Other explanatory factors (rival explanations)
may be different for different links or may apply to the overall causal logic of the intervention. The vertical 'activities and outputs' box allows for an implementation theory to be shown (i.e. the activities and outputs that are going to be delivered, perhaps over time, to implement the intervention).
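Seen as a data structure, a theory of change with these components might be sketched as follows. This is a hypothetical Python rendering – the class and field names are mine, and the example entries are invented for illustration – intended only to show how the results chain, assumptions, risks, unintended effects and rival explanations hang together.

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One link in the results chain, e.g. immediate -> intermediate outcomes."""
    from_result: str
    to_result: str
    # Events and conditions that need to happen for the link to work:
    assumptions: list[str] = field(default_factory=list)
    # External events and conditions that could put the link at risk:
    risks: list[str] = field(default_factory=list)

@dataclass
class TheoryOfChange:
    activities_and_outputs: list[str]
    results_chain: list[CausalLink]
    unintended_effects: list[str] = field(default_factory=list)
    # Other explanatory factors; these can differ for different outcomes:
    rival_explanations: list[str] = field(default_factory=list)

toc = TheoryOfChange(
    activities_and_outputs=["training for teachers"],
    results_chain=[CausalLink(
        "outputs", "immediate outcomes",
        assumptions=["teachers want to acquire and apply new skills"],
        risks=["high teacher turnover"])],
    rival_explanations=["greater general investment in education"],
)
```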
Theories of change as causal packages
The logic used for making causal contribution claims outlined above was not related directly to the
literature on causality. There is a large and active literature on the issue of causation, and over
centuries now, a number of different perspectives have been developed to explain and understand
Figure 1. Displaying a theory of change. [The figure shows a results chain running from a vertical 'Activities and Outputs' box through Immediate, Intermediate and Final Outcomes. Each link carries an assumptions box (how are the results at one level expected to produce the next? what has to happen? what contextual factors influence these processes?) and a risks box (risks to the link not occurring). A further box covers Other Explanatory Factors/Rival Explanations (socio-economic factors; other interventions; these can differ for different outcomes), and another covers Unintended Results.]
Terms:
Assumptions are events and conditions that need to happen for the link to work. They are developed from a mix of stakeholder and social science theories and research.
Risks are external events and conditions that could put the causal link at risk.
Other explanatory factors are other factors or conditions that might help explain the occurrence of the observed result, other than the influence of the intervention.
Unintended effects are positive or – more usually – negative unanticipated effects that occur as a result of the intervention's activities and results.
causation, of which counterfactual approaches are only one. A recent study on impact evaluation that I participated in, commissioned by DFID (Stern et al., 2012), provides a discussion of these different bases for causal inference. I would like to discuss how CA relates to the causality literature, many ideas of which arose during the work on the DFID-funded project. This has the potential to suggest ways that we can further deepen and systematize CA within evaluation practice.
Causality involves relationships between events or conditions, and is often discussed in terms
of necessary and sufficient conditions. When we say that X 'caused' Y, we can mean that X is:
• Necessary but not sufficient. A person must be infected with HIV before they can develop AIDS. HIV is therefore a necessary cause of AIDS; however, since not every person with HIV contracts AIDS, it is not sufficient.
• Sufficient but not necessary. Decapitation is sufficient to cause death; however, people can die in many other ways.
• Both necessary and sufficient. The gene mutation associated with Tay-Sachs is both a necessary and a sufficient cause of the disease, since everyone with the mutation will eventually develop Tay-Sachs and no-one without the mutation will ever have it.
• Neither necessary nor sufficient – a contributory cause. Smoking heavily is a contributory cause of lung cancer: it is not a necessary cause, since there are other sources of lung cancer, nor is it a sufficient cause, since not all smokers suffer from lung cancer.
I would suggest that in ordinary discussions, 'X causes Y' is most often taken to mean sufficiency. We mean either that in this case the event X resulted in Y, or that generally the phenomenon X results in Y, recognizing in either case that there may be other events that could also produce Y. Necessity seems a less common use of the term 'cause', as it is quite demanding, requiring that whenever there is the event Y then there is X. Causes that are both necessary and sufficient are rarer still. On the other hand, contributory causes that are neither necessary nor sufficient on their own are quite common.
It is clear that many interventions do not act alone and that the desired outcomes will be the
result of a combination of causal factors, including other related interventions, and events and
conditions external to the intervention. Indeed, many interventions are designed to be part of such a 'causal package', and even if not so designed, their evaluation needs to take these other factors into account. Cartwright and Hardie (2012) call these supporting factors: other events and conditions that need to happen in order for the intervention to work, to make a difference.
In these instances, the key causal question becomes: was the causal package consisting of the
intervention plus supporting factors sufficient to produce the intended outcome? It is recognized
that there could be other ways that the desired outcome is brought about and hence the particular
causal package in question may not be necessary to achieve the desired outcome. In addition, we
would want to know if the intervention was a necessary part of the specific causal package. Perhaps
the desired outcome could be realized through the supporting factors without the intervention. I’ll
refer to the causal package with these two characteristics – sufficiency of the package and necessity
of the intervention as part of the package – as the intervention causal package.
If these conditions hold, then the intervention is a contributory cause and as such has ‘made a
difference’, as I would define it. That is, the intervention was a necessary element of the causal
package that produced the observed result. Box 1 sets out an example of a causal package related
to an intervention aiming to improve education outcomes for girls. It is clear in this example that there are other ways of improving the educational outcomes of girls, such as offering extracurricular help to girls.
What is of particular interest is that, as I have described them, theories of change are in fact postulated causal packages, and more. They identify the supporting factors as assumptions, identify the risks (the confounding factors) and, as well, set out the relationships between supporting factors and the intervention. The assumptions are the supporting factors needed for the theory of change to work. Overall, a theory of change is a model of how the intervention is expected to act as a contributory cause.
A theory of change can be constructed for the example in Box 1. Its outline would be something
like: teachers with more training and skills in educating girls would provide teaching that is more
attuned to girls’ needs and is of more interest to them, resulting in girls being more actively engaged
in studying and wanting an education. This will lead to better education outcomes for the girls
concerned. Among the assumptions here would be (1) that teachers want to help girls get a better education and hence work to acquire new skills, (2) that parents support their daughters' more active engagement in school, (3) that girls are able to attend schools, and (4) that girls are comfortable there.
In the philosophy literature, contributory causes are called INUS causes: an Insufficient but Necessary part of a condition that is itself Unnecessary but Sufficient for the occurrence of the effect (Mackie, 1974). There is a large literature on contributory causes and causal packages, which are often described as causal cakes or pies, showing the various components (slices) that make up the package (see Cartwright and Hardie, 2012, and Stern et al., 2012, for discussions).
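Mackie's definition can be written out compactly. In the sketch below (standard propositional shorthand, not notation from the article), X is the intervention, S the conjunction of supporting factors, and B some rival package that could also bring about the effect Y:

```latex
% X is an INUS cause of Y: an Insufficient but Necessary part of a
% condition (X \land S) that is itself Unnecessary but Sufficient for Y.
\begin{align*}
  X \land S &\Rightarrow Y
    && \text{(the causal package is sufficient)}\\
  X &\not\Rightarrow Y \quad\text{and}\quad S \not\Rightarrow Y
    && \text{(neither part suffices on its own)}\\
  Y &\not\Rightarrow X \land S
    && \text{(the package is unnecessary: some other package } B \text{ may also produce } Y\text{)}
\end{align*}
```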
The discussion of contributory causes has so far been in deterministic terms (i.e. a cause is either sufficient or it is not). However, as noted earlier, the discussion needs to reflect the probabilistic nature of
causality in socio-economic phenomena. Mahoney (2008: 421) argues that ‘a treatment is a cause when
its presence raises the probability of an outcome occurring in any given case’. He introduces the useful
ideas of probabilistically necessary causes – ‘factors that usually or almost always have to be present for
the outcome to occur’ – and probabilistically sufficient causes – ‘a cause that much of the time on its own
will produce the effect’ (pp. 425–6). For many interventions being evaluated, these are more realistic
interpretations of the necessary and sufficient conditions discussed earlier.
In terms of an intervention's causal package, I will use the terms likely necessary to describe the supporting factors, and likely sufficient to describe the sufficiency of the intervention causal package, meaning that in this case the causal package most likely produced the observed result.⁵
Box 1. Causal packages and making a difference
Consider an intervention aimed at improving the education outcomes for girls in a developing country, through raising the knowledge, skills and awareness of teachers in schools.
Other supporting factors here might be:
• the willingness of teachers to support the education of girls;
• the support of parents for their daughters to attend schools and study at home;
• the ability of girls to get to the schools;
• the adequacy of the schools to accommodate girls.
The causal package here is the training provided to teachers plus the supporting factors. An evaluation would want to know if this causal package worked, i.e. in this case were education outcomes for girls improved as a result of the causal package, and, since it is an evaluation of the intervention, was the intervention a needed part of the causal package?
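The two questions at the end of Box 1 can be put as predicates. The following Python fragment is a deliberately crude sketch – in a real CA each of these values is a reasoned judgement backed by evidence, not a boolean – with factor names taken from the box:

```python
def package_sufficient(intervention_delivered: bool,
                       supporting_factors: dict[str, bool]) -> bool:
    # Did the causal package work: did the training take place and did the
    # supporting factors hold?
    return intervention_delivered and all(supporting_factors.values())

def intervention_needed(outcome_observed: bool,
                        factors_alone_would_suffice: bool) -> bool:
    # Was the intervention a needed part of the package, or would the
    # supporting factors alone plausibly have produced the result?
    return outcome_observed and not factors_alone_would_suffice

factors = {
    "teachers willing to support girls' education": True,
    "parents support attendance and study at home": True,
    "girls able to get to the schools": True,
    "schools able to accommodate girls": True,
}
print(package_sufficient(True, factors))  # True: the package (likely) worked
print(intervention_needed(True, False))   # True: the training was a needed part
```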
To show that the intervention is a contributory cause is to show that the intervention's causal package is likely sufficient, and that the intervention is itself a necessary element of the sufficient package.
Indeed, in terms of causal packages, this is exactly what CA is able to do; i.e. confirming:
• that the expected result occurred;
• that the supporting factors – the assumptions for each link in the theory of change – have
occurred and together provide a reasonable explanation for the results that occur;
• that any other identified supporting factor that is present has been included in the causal
package, thereby potentially revising the theory of change; and
• that any plausible rival explanations⁶ – external causal factors – have been accounted for.
Given that the assumptions may be likely necessary conditions, in a specific case not all may
have occurred. In this case an assessment is needed of whether the collection of supporting factors
(assumptions) actually occurring provides a reasonable explanation for the observed result. This,
plus the assessment of rival explanations, allows for the causal inference to be made as to whether
the intervention causal package (for a link in a causal chain) was sufficient. If it was and all the other
links in the causal chain are also confirmed, then the theory of change itself has been confirmed.
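That per-link inference can itself be sketched. Again the rendering is hypothetical – in practice each input is a reasoned judgement rather than a boolean:

```python
def link_confirmed(assumptions_held: list[bool],
                   held_ones_explain_result: bool,
                   rivals_accounted_for: bool) -> bool:
    # Not every (likely necessary) assumption need have occurred; what
    # matters is whether those that did occur still provide a reasonable
    # explanation, and whether rival explanations are accounted for.
    return (all(assumptions_held) or held_ones_explain_result) \
           and rivals_accounted_for

def theory_confirmed(links: list[dict]) -> bool:
    # The theory of change holds only if every link in the chain is confirmed.
    return all(link_confirmed(**link) for link in links)

print(theory_confirmed([
    {"assumptions_held": [True, True, False],  # one assumption did not occur
     "held_ones_explain_result": True,         # but the rest still explain it
     "rivals_accounted_for": True},
]))  # True
```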
Data and evidence for the analysis come from drawing on logic, critical thinking and prior research, and from asking relevant stakeholders whether they believe there were other causal factors at work beyond the package. If other causal factors were believed to be at work, one would need to seek out supporting evidence. Note that the links in a theory of change should comprise relatively proximate causes and effects, making judgement and the use of logic and critical thinking easier.
Consider the intervention described in Box 1 about enhancing the skills of teachers so as to
improve education outcomes for girls. Assume the additional training of teachers occurred and
subsequently, an increase in education outcomes for girls was observed. Did the intervention make
a difference? Was the additional training a contributory cause? To answer, the steps and assumptions in the theory of change would be examined to see if they occurred as expected. Did teachers acquire and apply new skills? Were girls more engaged? Did parents support their daughters getting educated? Further, were any other factors at play outside the postulated theory of change; in particular, were there any plausible rival explanations for the enhanced education outcomes, and if so, what were they? Possible rival explanations might include greater investment generally in education, increasing teacher/student ratios, or the arrival of the internet, providing possibilities for self-learning. If the theory of change is confirmed and no significant rival explanations are found, then one could conclude with confidence that the intervention did indeed make a difference.⁷
I am arguing that the idea of contributory cause is a useful and relevant concept for many interventions. It offers a practical approach to confirming that an intervention contributed to an observed result and indeed made a difference. As such, it also offers one basis for future methodological development of CA as a form of theory-based evaluation, building on the concepts of causal packages.
Articles in the Special Issue
The articles in this Special Issue provide a broad overview and discussion of the current state of
CA. The first few articles provide specific cases of applied CA. Delahais and Toulemonde describe
their five years of experience of CA in their evaluation practice. Using several examples, they
discuss how they operationalized key aspects of CA and particularly the contribution story. They
discuss the real challenges faced in carrying out a CA and the resources required. The authors also
argue for the need for better ways of presenting contribution stories and for standards for assuring
the quality of CA.
Lemire et al. continue the discussion of applied CA, describing how they have operationalized the
steps involved in the analysis, focusing in particular on how to account for other influencing factors and rival explanations. They propose a tool to do this – the 'Relevant Explanation Finder'.
Wimbush et al. provide examples of the use of CA and CA concepts from Canada and
Scotland. They have used CA as a participatory process, which strengthens both conceptual and
practical understanding of planning/managing for outcomes and the related implementation and
change theories, thus helping to build collaborative capacity within and across participating
partner organizations.
After an insightful discussion of the debate in development evaluation about the use of experimental designs versus theory-based approaches, Vaessen and Raimondo discuss an evaluation of a UNESCO programme that is not amenable to experimentation. They discuss CA in relation to impact evaluation, pointing to both strengths and weaknesses, describe the formative evaluation they conducted using a theory of change, and discuss their planned use of CA for the summative evaluation. They also discuss the idea of contributory causes in CA as outlined in this introductory article.
Leeuw, in discussing theory-based and CA approaches, describes the application of several tools that are likely to be new to most evaluators, addressing three problematic situations: (1) building theories of change using software; (2) forecasting impacts by testing theories of change with look-alike interventions and examining past implementation failures; and (3) developing 'historical' counterfactuals. Leeuw argues that CA can be strengthened through the identification of the mechanisms at work that visualization software can foster, through the examination of implementation failures in look-alike mechanisms, and through testing contribution stories using historical counterfactuals and hypothetical question research.
The remaining two articles discuss various aspects of the concepts behind CA. Patton, after
outlining and discussing cases of how CA is used in evaluations, argues that it depends crucially on
critical thinking. CA requires careful and rational thinking about the factors and conditions behind
the links between interventions and their impacts. He presents a forceful argument that ‘rigorous
thinking supersedes rigorous methodology’ and discusses quality standards for rigorous thinking.
Sridharan and Nakaima end the Special Issue. They discuss a number of questions and challenges for those using theory-based and CA approaches that can help in the further development of theory-driven evaluation and CA approaches. Questions such as 'What is a "good-enough" programme theory? How does one build understanding and expectations about programme impact? What does causality mean for complex interventions? How does learning occur? How does the application of theory-driven evaluation approaches help generate an "ecology of evidence"? How can evaluation methods be integrated directly with CA?' are discussed within the context of an ongoing dance/physical activity programme for health promotion, and the questions set the stage for future work on CA.
Final comments
My intent with this Special Issue is to widen the interest in and share the experiences with using
CA as a way of making causal claims about interventions in a credible and rigorous manner.
The articles provide a wealth of ideas to further explore CA and many references that readers can
follow up. After a healthy gestation period, I do think that CA is coming of age.
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit
sectors.
Notes
1. I am using the term outcomes to cover the sequence of results (or effects) – immediate, intermediate and
final outcomes – following the delivery of outputs. Final outcomes here are meant to include impacts as
the term is used in the development evaluation world.
2. In some cases two or more theories of change may emerge. Hansen and Vedung (2010) discuss using
multiple stakeholder theories of change.
3. In commenting on this introduction, Jos Vaessen suggested that attribution/contribution can be usefully distinguished as follows. Attribution emphasizes the issues of whether or not, and how much of, a particular change can be attributed to an intervention. Contribution emphasizes the confluence of multiple causal factors in a particular change, and the issues of whether or not and how an intervention contributes to the change.
4. There is a wide variety of ways of depicting theories of change. Funnell and Rogers (2011) provide
numerous examples.
5. Note that I am distinguishing here between the sufficiency of a specific event such as an intervention, and
the sufficiency of a phenomenon.
6. Lemire et al. in this volume discuss a systematic way to explore other influencing factors and rival explanations in CA.
7. If one were trying to get at attribution in this case, the evaluation issue would be: how much of the
increase in education outcomes is due to the training provided to teachers?
References
Blamey A and Mackenzie M (2007) Theories of change and realistic evaluation. Evaluation 13(4): 439–55.
Cartwright N and Hardie J (2012) Evidence-based Policy: Doing it Better: A Practical Guide to Predicting if
a Policy Will Work for You. Oxford: Oxford University Press.
Chen H-T (1990) Theory-Driven Evaluations. Newbury Park, CA: SAGE.
Connell JP and Kubisch AC (1998) Applying a theory of change approach to the evaluation of comprehensive
community initiatives: progress, prospects, and problems. In: Fulbright-Anderson K, Kubisch AC and
Connell JP (eds) New Approaches to Evaluating Community Initiatives, Vol. 2: Theory, Measurement,
and Analysis. Washington, DC: Aspen Institute.
Coryn CLS, Noakes LA, Westine CD and Schroter DC (2011) A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation 32(2): 199–226.
Davidson EJ (2006) Causal inference nuts and bolts. American Evaluation Association Annual Conference,
Portland, OR. URL: http://realevaluation.co.nz/pres/causation-aea06.pdf.
Funnell S and Rogers P (2011) Purposeful Program Theory. San Francisco, CA: Jossey-Bass.
Gysen J, Bruyninckx H and Bachus K (2006) The modus narrandi: a methodology for evaluating effects of environmental policy. Evaluation 12(1): 95–118.
Hansen MB and Vedung E (2010) Theory-based stakeholder evaluation. American Journal of Evaluation 31(3): 295–313.
Hendricks M (1996) Performance monitoring: how to measure effectively the results of our efforts. American
Evaluation Association Annual Conference. Atlanta, GA.
Lemire S (2010) Contribution analysis: the promising new approach to causal claims. Paper presented at
the European Evaluation Society Annual Conference, Prague. URL: http://www.europeanevaluation.org/
images/file/Conference/Past_Conference/2010_Prague/FullPapers/3_Lemire_Sebastian.pdf.
by Mayne John on July 16, 2012evi.sagepub.comDownloaded from
280 Evaluation 18(3)
Mackie JL (1974) The Cement of the Universe: A Study of Causation. Oxford: Oxford University Press.
Mahoney J (2008) Toward a unified theory of causality. Comparative Political Studies 41(4/5): 412–36.
Mayne J (2001) Addressing attribution through contribution analysis: using performance measures sensibly.
Canadian Journal of Program Evaluation 16(1): 1–24.
Mayne J (2008) Contribution Analysis: An Approach to Exploring Cause and Effect. ILAC Brief No. 16. Rome: The Institutional Learning and Change Initiative. URL: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf.
Mayne J (2011) Contribution analysis: addressing cause and effect. In: Schwartz R, Forss K and Marra M
(eds) Evaluating the Complex. New Brunswick, NJ: Transaction Publishers, 53–96.
Patton MQ (2008a) Utilization-Focused Evaluation, 4th edn. Thousand Oaks, CA: SAGE.
Patton MQ (2008b) Advocacy impact evaluation. Journal of MultiDisciplinary Evaluation 5(9): 1–10. URL:
http://survey.ate.wmich.edu/jmde/index.php/jmde_1/article/view/159/181.
Pawson R and Tilley N (1997) Realistic Evaluation. Thousand Oaks, CA: SAGE.
Pawson R, Greenhalgh T, Harvey G and Walshe K (2004) Realist synthesis: an introduction. University of
Manchester. ESRC Research Methods Programme. URL: http://www.ccsr.ac.uk/methods/publications/
documents/RMPmethods2.pdf.
Reynolds A (1998) Confirmatory program evaluation: a method for strengthening causal inference. American
Journal of Evaluation 19(2): 203–21.
Rogers P (2007) Theory-based evaluations: reflections ten years on. New Directions for Evaluation 114:
63–7.
Stame N (2004) Theory-based evaluation and varieties of complexity. Evaluation 10(1): 58–76.
Stern E, Stame N, Mayne J, Forss K, Davies R and Befani B (2012) Broadening the range of designs and methods for impact evaluations. DFID Working Paper 38. London: DFID, vi + 92 pp. URL: http://www.dfid.gov.uk/R4D/Output/189575/Default.aspx.
Toulemonde J (2010) Evaluating impact through a contribution analysis. Workshop at the Prague
Conference of the European Evaluation Society. URL: http://www.ees2010prague.org/images/file/
W404_Evaluating%20Impact%20through%20a.pdf.
Weiss CH (1995) Nothing as practical as good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. In: Connell JP, Kubisch AC, Schorr LB and Weiss CH (eds) New Approaches to Evaluating Community Initiatives: Concepts, Methods and Contexts. Washington, DC: The Aspen Institute.
Weiss CH (1997a) Theory-based evaluation: past, present, and future. New Directions for Evaluation 76:
41–55.
Weiss CH (1997b) How can theory-based evaluation make greater headway? Evaluation Review 21: 501–24.
Weiss CH (2000) Which links in which theories shall we evaluate? New Directions for Evaluation 87: 35–45.
White H (2009) Theory-based impact evaluation: principles and practice. Working Paper 3. International Initiative for Impact Evaluation (3ie). URL: http://www.3ieimpact.org.
White H and Phillips D (2012) Addressing attribution of cause and effect in small n impact evaluations:
towards an integrated framework. Working Paper 15, International Initiative for Impact Evaluation (3ie).
URL: http://www.3ieimpact.org/3ie_working_papers.html.
Wimbush E and Mulherin T (2010) Applying contribution analysis to partnership contexts in Scotland.
European Evaluation Society Conference, Prague.
John Mayne is an independent advisor on public sector performance. He has been working with a number of
government, NGO and international organizations in various jurisdictions, on results management, evaluation
and accountability issues. He has authored numerous articles and reports, including several on contribution
analysis, and co-edited five books in the areas of programme evaluation, public administration and performance
monitoring. In 1989 and in 1995, he was awarded the Canadian Evaluation Society Award for Contribution to
Evaluation in Canada. In 2006, he was made a Canadian Evaluation Society Fellow.