Evaluation 18(3): 270–280
© The Author(s) 2012
DOI: 10.1177/1356389012451663
Contribution analysis: Coming of age?
John Mayne
Independent advisor, Canada
Abstract
In this introductory article, a brief history and introduction to contribution analysis is provided
to set the stage for the articles that follow. At the heart of contribution analysis is the aim to be
able to make credible causal claims about the contribution an intervention is making to observed
results. The key role that theories of change play is noted, and what a useful theory of change
ought to contain is discussed. The article then makes a link between the philosophical discussions
on causality and contribution analysis through a discussion of contributory causes. It is argued
that such causes, which on their own are neither necessary nor sufficient, represent the kind of
contribution role that many interventions play: where there are a number of other influencing
events and conditions at work in addition to the intervention of interest. Contribution analysis is
an approach to confirming that an intervention is a contributory cause.
Keywords
causality, causal packages, contribution analysis, contributory causes, theories of change
This Special Issue focuses on contribution analysis (CA), a theory-based approach to evaluation
aimed at making credible causal claims about interventions and their results. Theory-based
approaches in evaluation have been discussed for many years (see Weiss, 1997a; Stame, 2004;
Rogers, 2007; White, 2009; Funnell and Rogers, 2011) and much has been written. Blamey and
Mackenzie (2007) make the useful distinction in theory-based approaches between realist evalua-
tions (Pawson and Tilley, 1997) and those approaches that develop an explicit theory of change.
The latter include Chen’s (1990) theory-driven evaluation, Weiss’s (1995, 2000) theory-based
evaluation and Mayne’s (2001, 2008, 2011) contribution analysis.
One result of the widespread interest in theory-based approaches is that there is no agreement on
the terms used and even some of the concepts. Nevertheless, there is consistency on the value of
theory-based approaches. They may be best thought of as a logic of enquiry for explaining
interventions that can complement and be used in combination with other designs and data collection
techniques. Coryn et al. (2011) review practice with theory-based evaluation approaches over the past
decade, and White and Phillips (2012) review a number of ‘small n’ approaches to evaluation.
Most theory-based approaches rely on developing a theory of change, a logical model for an intervention showing a results chain of how outputs are expected to lead to a sequence of outcomes, reflecting successive levels of desired effects. They seek to show how the intervention is expected to work or
make a difference. The theory of change is usually developed based on initial policy intentions, informed
then by a range of stakeholder views and information sources, including prior evaluations and research.
The theory of change is then verified to the extent that it matches what is observed to have happened.
CA lies within these theory-of-change approaches. I introduced the approach in 2001 (Mayne,
2001) and amplified it more recently (Mayne, 2011), and as noted in the 2011 publication, pub-
lished applications of CA were few, despite many references to the approach. Interest in CA was
evident at the 2010 EES Conference in Prague where a workshop on CA was given and several
papers presented (Lemire, 2010; Toulemonde, 2010; Wimbush and Mulherin, 2010). Subsequently,
the possibility of this Special Issue was raised.
I first discussed CA in the context of results monitoring systems (Mayne, 2001). The question I
was considering was what could be said about causality of an intervention when only monitoring
data were available. CA, it seemed to me, offered a reasonable way to make evidence-based causal
claims rather than being unable to say anything about causality – or worse, leaving readers to make
their own assumptions.
As I became more familiar with the range of theory-based approaches in evaluation – such as
those by Connell and Kubisch (1998), Davidson (2006), Gysen et al. (2006), Patton (2008b), Pawson
et al. (2004), Reynolds (1998) and Weiss (1995, 1997b) – it was clear that many shared common
features with CA. What was distinctive about CA was that it offered a more systematic way to arrive
at credible causal claims, and improve often weak evaluation practice when dealing with causality.
From an evaluation perspective, the issue was what could be done to make credible causal claims in
the absence of experimental approaches. Many evaluations seemed either to be silent on causality
or, perhaps worse, made causal claims based solely on the views of interviewees.
Contribution analysis: A quick overview
CA is based on the existence of, or more usually, the development of a postulated theory of change
for the intervention being examined. The analysis examines and tests this theory against logic and the
evidence available from results observed and the various assumptions behind the theory of change,
and examines other influencing factors. The analysis either confirms – verifies – the postulated theory
of change or suggests revisions in the theory where the reality appears otherwise. The overall aim is
to reduce uncertainty about the contribution an intervention is making to observed results through an
increased understanding of why results did or did not occur and the roles played by the intervention
and other influencing factors.
One aspect of CA that has been noted is that it suggests a structured approach to the analysis
(White and Phillips, 2012). Six key steps are set out as shown in Table 1 (Mayne, 2001). These
steps can also be part of an iterative approach to building the logic and evidence for claiming that
the intervention made a contribution.
While Table 1 sets out a number of specific steps, as the articles in this Special Issue illustrate,
those who have made use of CA have usually modified these steps to best suit the circumstances they
face and the specific analytic methods they have used. CA is still a relatively new approach in evalu-
ation. In my view, it is good practice that a variety of approaches are being developed and explored.
CA, then, argues that if one can verify or confirm a theory of change with empirical evidence,
and account for major external influencing factors, then it is reasonable to conclude that the
intervention in question has made a difference. The theory of change provides the basis for the
argument that the intervention is making a difference, identifies weaknesses in the argument and
hence where evidence for strengthening such claims is most needed. Causality is inferred from the
following logic and evidence:
1. The intervention is based on a reasoned theory of change: the chain of results, and the
assumptions behind why the intervention is expected to work are plausible, sound, informed
by existing research and literature and supported by key stakeholders.
2. The activities of the intervention were implemented as outlined in the theory of change.
3. The theory of change is verified by evidence: the chain of expected results occurred, and
the assumptions held.
4. External factors – context and rival explanations – influencing the intervention are assessed and are either shown not to have made a significant contribution or, if they did, their relative contribution is recognized.

Table 1. Key steps in contribution analysis.

Step 1: Set out the cause-effect issue to be addressed
•	Acknowledge the causal problem.
•	Scope the problem: determine the specific causal question being addressed and the level of confidence needed in answering it.
•	Explore the nature and extent of the contribution expected.
•	Determine the other key influencing factors.
•	Assess the plausibility of the expected contribution given the intervention's size and reach.

Step 2: Develop the postulated theory of change and the risks to it, including rival explanations
•	Set out the postulated theory of change of the intervention, identifying the risks and assumptions behind the links in the theory of change.
•	Identify the roles of the other influencing factors and rival explanations.
•	Determine how contested the postulated theory of change is.

Step 3: Gather the existing evidence on the theory of change
•	Assess the strengths and weaknesses of the links in the theory of change.
•	Gather the evidence that exists from previous measurement, past evaluations and relevant research (1) for the observed results, (2) for each of the links in the results chain, (3) for the other influencing factors, and (4) for rival explanations.

Step 4: Assemble and assess the contribution claim, and challenges to it
•	Set out the contribution 'story': the causal claim based on the analysis so far.
•	Assess the strengths and weaknesses in the postulated theory of change in light of the available evidence, the relevance of the other influencing factors, and the evidence gathered to support rival explanations.
•	If needed, refine or update the theory of change.

Step 5: Seek out additional evidence
•	Determine what kind of additional evidence is needed to enhance the credibility of the contribution story.
•	Gather the new evidence.

Step 6: Revise and strengthen the contribution story
•	Build the more credible contribution story.
•	Reassess its strengths and weaknesses.
•	Revisit Step 4.
In the end, a conclusion is reached – a contribution claim about whether the intervention made a
difference as expected. To summarize:
Contribution claim = a verified theory of change + other key influencing factors accounted for.
What does a contribution claim look like? The result of a CA is rarely definitive proof.
Causality in relation to socio-economic interventions is usually of the probabilistic form: that the
intervention is most likely to have made a difference. CA provides an argument with evidence
from which it is reasonable to conclude with confidence that the intervention has made a contri-
bution and why. It builds a compelling case – a warrant – about the contribution being made:
The aim is to get what Hendricks (1996) calls 'plausible association': whether a reasonable person would
agree from the evidence and argument that the program has made an important contribution to the observed
result. (Mayne, 2011: 62)
While the focus has often been on using CA to make causal claims, as implied in my earlier
articles, and as the articles in this Special Issue amply point out, CA also has other uses. Wimbush
et al. in this issue discuss using CA approaches as a participatory tool in planning for results and
evaluation systems to enhance learning and understanding about the interventions being planned and
reviewed. Leeuw in this issue discusses using CA approaches to be able to assess the likelihood
that a proposed policy initiative will work.
The Special Issue is evidence of both a growing CA practice and a continuing discussion and
debate about making use of CA and related theory-based approaches.
Contribution rather than attribution
There has been discussion and possible confusion between the terms attribution and contribution. Many
authors make a useful distinction between these terms (Patton, 2008a; Stern et al., 2012). In much of the
literature, attribution is used both to refer to finding the cause of an effect and to estimating quantitatively how much of the effect is due to the intervention. The term contribution is used here in the following way: in light of the multiple factors influencing a result, has the intervention made a noticeable
contribution to an observed result and in what way? The authors in this Special Issue adhere to this usage.
Useful theories of change
Critical to CA is the development of a well thought-out and credible theory of change. In my view,
a good theory of change goes well beyond a results chain or logical framework. I would argue that
a complete theory of change is embedded in the context of the intervention, and is developed incor-
porating the perspectives of key stakeholders, beneficiaries and the existing relevant research.
Theories of change should include:
• a results (causal) chain showing the basic logic of the intervention;
• the underlying assumptions behind the links in the results chain;
• an elaboration of the risks to each of these links;
• identification of unintended effects; and
• identification of other key explanatory factors (rival explanations).
Figure 1 illustrates the various components of a theory of change. The theory of change is displayed deliberately as a quasi-linear process, but allows for feedback loops as needed. A 'sort of linear' theory of change facilitates both arriving at causal claims and communicating the performance story of the intervention. The assumption boxes can be used to reduce the number of explicit links that might otherwise be needed in a theory of change. Other explanatory factors (rival explanations) may be different for different links or may apply to the overall causal logic of the intervention. The vertical 'activities and outputs' box allows for an implementation theory to be shown (i.e. the activities and outputs that are going to be delivered, perhaps over time, to implement the intervention).

Figure 1. Displaying a theory of change.
The figure shows the results chain of the intervention: a vertical 'activities and outputs' box leading through immediate and intermediate outcomes to final outcomes. Attached to each link is an assumptions box (how the preceding results are expected to produce the next level of outcomes, what has to happen, and what contextual factors influence these processes) and the risks to that link not occurring, along with a box of other explanatory factors/rival explanations (e.g. socio-economic factors and other interventions), which can differ for different links.
Notes: Assumptions are events and conditions that need to happen for the link to work; they are developed from a mix of stakeholder and social science theories and research. Risks are external events and conditions that could put the causal link at risk. Other explanatory factors are other factors or conditions that might help explain the occurrence of the observed result, other than the influence of the intervention. Unintended effects are positive or – more usually – negative unanticipated effects that occur as a result of the intervention's activities and results.
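To make these components more concrete, the sketch below shows one possible way of recording such a theory of change in a structured form. It is purely illustrative: the class names, fields and example entries (loosely based on the girls' education example in Box 1 below) are hypothetical, and nothing in CA requires this kind of formalization. The point is simply that each link carries its own assumptions and risks, while other explanatory factors and unintended effects attach to the theory of change as a whole.

```python
# Purely illustrative sketch: one minimal way to record the components of a
# theory of change (results chain, assumptions, risks, other explanatory
# factors, unintended effects). All names, fields and entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Link:
    """One link in the results chain (e.g. outputs -> immediate outcomes)."""
    from_result: str
    to_result: str
    assumptions: List[str] = field(default_factory=list)  # events/conditions needed for the link to work
    risks: List[str] = field(default_factory=list)        # external events/conditions that could put the link at risk

@dataclass
class TheoryOfChange:
    intervention: str
    links: List[Link] = field(default_factory=list)
    other_explanatory_factors: List[str] = field(default_factory=list)  # rival explanations
    unintended_effects: List[str] = field(default_factory=list)

toc = TheoryOfChange(
    intervention="Training teachers to better educate girls",
    links=[
        Link("Teachers trained", "Teaching more attuned to girls' needs",
             assumptions=["Teachers want to help girls and apply the new skills"],
             risks=["High teacher turnover"]),
        Link("Teaching more attuned to girls' needs", "Girls more engaged in studying",
             assumptions=["Parents support their daughters' engagement",
                          "Girls can attend school and are comfortable there"]),
        Link("Girls more engaged in studying", "Better education outcomes for girls"),
    ],
    other_explanatory_factors=["Greater general investment in education",
                               "Internet access enabling self-learning"],
)

for link in toc.links:
    print(f"{link.from_result} -> {link.to_result}: "
          f"{len(link.assumptions)} assumption(s), {len(link.risks)} risk(s)")
```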
Theories of change as causal packages

The logic used for making causal contribution claims outlined above was not related directly to the literature on causality. There is a large and active literature on the issue of causation and, over the centuries, a number of different perspectives have been developed to explain and understand causation, of which counterfactual approaches are only one. A recent study on impact evaluation that I participated in, commissioned by DFID (Stern et al., 2012), provides a discussion of these different bases for causal inference. I would like to discuss how CA relates to the causality literature; many of these ideas arose during the work on the DFID-funded project. This has the potential to suggest ways that we can further deepen and systematize CA within evaluation practice.
Causality involves relationships between events or conditions, and is often discussed in terms
of necessary and sufficient conditions. When we say that X ‘caused’ Y, we can mean that X is:
• Necessary but not sufficient.
A person must be infected with HIV before they can develop AIDS. HIV is therefore a
necessary cause of AIDS; however, since not every person with HIV contracts AIDS, it is not sufficient.
• Sufficient but not necessary
Decapitation is sufficient to cause death; however, people can die in many other ways.
• Both necessary and sufficient
A gene mutation associated with Tay-Sachs is both a necessary and sufficient cause for
the development of the disease, since everyone with the mutation will eventually
develop Tay-Sachs and no-one without the mutation will ever have it.
• Neither necessary nor sufficient – a contributory cause
Smoking heavily is a contributory cause of lung cancer – it is not a necessary cause
since there are other sources of lung cancer, nor is it a sufficient cause since not all
smokers suffer from lung cancer.
I would suggest that in ordinary discussions, 'X causes Y' is most often taken to mean sufficiency. We mean either that in this case the event X resulted in Y, or that generally the phenomenon X results in Y, recognizing in either case that there may be other events that also could produce Y. Necessity seems a less common use of the term 'cause' as it is quite demanding, requiring that whenever there is the event Y then there is X. Causes that are both necessary and sufficient are even
more rare. On the other hand, contributory causes that are neither necessary nor sufficient on their
own are quite common.
It is clear that many interventions do not act alone and that the desired outcomes will be the
result of a combination of causal factors, including other related interventions, and events and
conditions external to the intervention. Indeed, many interventions are designed to be part of such
a ‘causal package’, and even if not so designed, their evaluation needs to take these other factors
into account. Cartwright and Hardie (2012) call these supporting factors: other events and conditions that need to happen in order for the intervention to work and make a difference.
In these instances, the key causal question becomes: was the causal package consisting of the
intervention plus supporting factors sufficient to produce the intended outcome? It is recognized
that there could be other ways that the desired outcome is brought about and hence the particular
causal package in question may not be necessary to achieve the desired outcome. In addition, we
would want to know if the intervention was a necessary part of the specific causal package. Perhaps
the desired outcome could be realized through the supporting factors without the intervention. I’ll
refer to the causal package with these two characteristics – sufficiency of the package and necessity
of the intervention as part of the package – as the intervention causal package.
If these conditions hold, then the intervention is a contributory cause and as such has ‘made a
difference’, as I would define it. That is, the intervention was a necessary element of the causal
package that produced the observed result. Box 1 sets out an example of a causal package related
to an intervention aiming to improve education outcomes for girls. It is clear in this example that
there are other ways of improving educational outcomes of girls, such as through offering extracur-
ricular help to girls.
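To make the two characteristics of an intervention causal package concrete, the following deliberately simplified sketch checks them against a small set of hypothetical cases. It treats sufficiency deterministically for the moment (the probabilistic reading is discussed below), and all the case data are invented for illustration only.

```python
# Purely illustrative: a deterministic check of the two conditions for a
# contributory cause, using hypothetical case records. Each record notes
# whether the intervention (e.g. teacher training) and the supporting factors
# were present, and whether the outcome (better education outcomes) occurred.

cases = [
    {"intervention": True,  "supporting_factors": True,  "outcome": True},
    {"intervention": True,  "supporting_factors": True,  "outcome": True},
    {"intervention": False, "supporting_factors": True,  "outcome": False},  # factors alone were not enough
    {"intervention": True,  "supporting_factors": False, "outcome": False},  # the intervention alone was not enough
    {"intervention": False, "supporting_factors": False, "outcome": False},
]

def package_sufficient(cases):
    """The full package (intervention plus supporting factors) always produced the outcome."""
    with_package = [c for c in cases if c["intervention"] and c["supporting_factors"]]
    return bool(with_package) and all(c["outcome"] for c in with_package)

def intervention_necessary_within_package(cases):
    """The supporting factors without the intervention did not produce the outcome."""
    without_intervention = [c for c in cases if c["supporting_factors"] and not c["intervention"]]
    return all(not c["outcome"] for c in without_intervention)

if package_sufficient(cases) and intervention_necessary_within_package(cases):
    print("In these hypothetical cases, the intervention behaves as a contributory cause.")
else:
    print("These cases do not support the contribution claim.")
```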
What is of particular interest is that, as I have described them, theories of change are in fact
postulated causal packages, and more. They identify the supporting factors as assumptions and
identify the risks (the confounding factors) and, as well, set out the relationships between support-
ing factors and the intervention. The assumptions are the supporting factors needed for the theory
of change to work. Overall, a theory of change is a model of how the intervention is expected to act
as a contributing cause.
A theory of change can be constructed for the example in Box 1. Its outline would be something
like: teachers with more training and skills in educating girls would provide teaching that is more
attuned to girls’ needs and is of more interest to them, resulting in girls being more actively engaged
in studying and wanting an education. This will lead to better education outcomes for the girls
concerned. Among the assumptions here would be (1) that teachers want to help girls get a better
education and hence work to acquire new skills, (2) parents support their daughters' more active engagement in school, (3) girls are able to attend schools, and (4) girls are comfortable there.
In the philosophy literature, contributory causes are called INUS causes: an Insufficient but
Necessary part of a condition that is itself Unnecessary but Sufficient for the occurrence of the
effect (Mackie, 1974). There is a large literature on contributory causes and causal packages,
which are often described as causal cakes or pies, showing the various components (slices) that
make up the package (see Cartwright and Hardie, 2012 and Stern et al., 2012, for discussions).
The discussion of contributory causes has here so far been in deterministic terms (i.e. a cause is either
sufficient or it is not). However, as noted earlier, the discussion needs to reflect the probabilistic nature of
causality in socio-economic phenomena. Mahoney (2008: 421) argues that ‘a treatment is a cause when
its presence raises the probability of an outcome occurring in any given case’. He introduces the useful
ideas of probabilistically necessary causes – ‘factors that usually or almost always have to be present for
the outcome to occur' – and probabilistically sufficient causes – 'a cause that much of the time on its own
will produce the effect’ (pp. 425–6). For many interventions being evaluated, these are more realistic
interpretations of the necessary and sufficient conditions discussed earlier.
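One illustrative way of reading these probabilistic notions – my gloss rather than Mahoney's – is as frequencies across comparable cases, as in the following sketch. The case data and the threshold are hypothetical placeholders.

```python
# Purely illustrative: rough empirical readings of probabilistic necessity and
# sufficiency across hypothetical comparable cases. The 0.8 threshold is an
# arbitrary placeholder, not a value taken from Mahoney or from this article.

cases = [
    {"factor_present": True,  "outcome": True},
    {"factor_present": True,  "outcome": True},
    {"factor_present": True,  "outcome": False},
    {"factor_present": False, "outcome": False},
    {"factor_present": False, "outcome": True},   # the outcome occasionally occurs without the factor
]

def prob_necessary(cases):
    """Share of cases with the outcome in which the factor was also present."""
    with_outcome = [c for c in cases if c["outcome"]]
    return sum(c["factor_present"] for c in with_outcome) / len(with_outcome) if with_outcome else 0.0

def prob_sufficient(cases):
    """Share of cases with the factor present in which the outcome occurred."""
    with_factor = [c for c in cases if c["factor_present"]]
    return sum(c["outcome"] for c in with_factor) / len(with_factor) if with_factor else 0.0

THRESHOLD = 0.8  # illustrative cut-off for 'usually or almost always'
print(f"probabilistically necessary? {prob_necessary(cases):.2f} (threshold {THRESHOLD})")
print(f"probabilistically sufficient? {prob_sufficient(cases):.2f} (threshold {THRESHOLD})")
```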
In term of an intervention’s causal package, I will use the terms likely necessary to describe the
supporting factors, and likely sufficient to describe the sufficiency of the intervention causal
Consider an intervention aimed at improving the education outcomes for girls in a developing country,
through raising the knowledge, skills and awareness of teachers in schools.
Other supporting factors here might be:
the willingness of teachers to support the education of girls;
the support of parents for their daughters to attend schools and study at home;
the ability of girls to get to the schools;
the adequacy of the schools to accommodate girls.
The causal package here is the training provided to teachers plus the supporting factors. An evaluation
would want to know if this causal package worked, i.e. in this case were education outcomes for girls
improved as a result of the causal package, and since it is an evaluation of the intervention, was the
intervention a needed part of the causal package?
Box 1. Causal packages and making a difference
by Mayne John on July 16, 2012evi.sagepub.comDownloaded from
Mayne: Contribution analysis: Coming of age? 277
package, meaning that in this case, the causal package most likely produced the observed result.
To show that the intervention is a contributory cause is to show that the intervention's causal package is likely sufficient, and that the intervention is itself a necessary element of that sufficient package. Indeed, in terms of causal packages, this is exactly what CA is able to do; i.e. confirming:
• that the expected result occurred;
• that the supporting factors – the assumptions for each link in the theory of change – have
occurred and together provide a reasonable explanation for the results that occur;
• that any other identified supporting factor that is present has been included in the causal
package, thereby potentially revising the theory of change; and
• that any plausible rival explanations – external causal factors – have been accounted for.
Given that the assumptions may be likely necessary conditions, in a specific case not all may
have occurred. In this case an assessment is needed of whether the collection of supporting factors
(assumptions) actually occurring provides a reasonable explanation for the observed result. This,
plus the assessment of rival explanations, allows for the causal inference to be made as to whether
the intervention causal package (for a link in a causal chain) was sufficient. If it was and all the other
links in the causal chain are also confirmed, then the theory of change itself has been confirmed.
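As a purely illustrative sketch of this link-by-link logic, the confirmation judgements could be recorded along the following lines; the links, assumptions and ratings shown are hypothetical, and the judgements themselves remain qualitative ones made by the evaluator.

```python
# Purely illustrative: recording link-by-link judgements when confirming a
# theory of change. The links, assumptions and ratings below are hypothetical
# evaluator assessments, not outputs that CA produces automatically.

links = [
    {"link": "Teachers trained -> teaching more attuned to girls' needs",
     "result_observed": True,
     "assumptions_holding": ["teachers applied the new skills"],
     "assumptions_not_holding": [],
     "occurring_factors_explain_result": True,   # judgement: the factors that did occur explain the result
     "rival_explanations_accounted_for": True},
    {"link": "Teaching more attuned to girls' needs -> better education outcomes",
     "result_observed": True,
     "assumptions_holding": ["parents supported their daughters' schooling"],
     "assumptions_not_holding": ["all schools could comfortably accommodate the girls"],
     "occurring_factors_explain_result": True,
     "rival_explanations_accounted_for": True},
]

def link_confirmed(link):
    """A link is treated as confirmed when the expected result occurred, the
    supporting factors that did occur reasonably explain it, and rival
    explanations have been accounted for."""
    return (link["result_observed"]
            and link["occurring_factors_explain_result"]
            and link["rival_explanations_accounted_for"])

print("Theory of change confirmed:", all(link_confirmed(l) for l in links))
```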
Data and evidence for the analysis come from drawing on logic, critical thinking and prior research, and from asking relevant stakeholders whether they believe there were other causal factors beyond the package at work. If other causal factors were believed to be at work, one would need to seek out supporting evidence. Note that the links in a theory of change should comprise relatively proximate causes and effects, making judgement and the use of logic and critical thinking more straightforward.
Consider the intervention described in Box 1 about enhancing the skills of teachers so as to
improve education outcomes for girls. Assume the additional training of teachers occurred and
subsequently, an increase in education outcomes for girls was observed. Did the intervention make
a difference? Was the additional training a contributory cause? To answer, the steps and assump-
tions in the theory of change would be examined to see if they occurred as expected. Did teachers
acquire and apply new skills? Were girls more engaged? Did parents support their daughters get-
ting educated? Further, were any other factors at play outside the postulated theory of change, in
particular were there any plausible rival explanations for the enhanced education outcomes, and if
so what were they? Possible rival explanations might include greater investment generally in edu-
cation, increasing teacher/student ratios or the arrival of the internet, providing possibilities for self-learning. If the theory of change is confirmed and no significant rival explanations are found, then one
could conclude with confidence that the intervention did indeed make a difference.
I am arguing that the idea of contributory cause is a useful and relevant concept for many interven-
tions. It offers a practical approach to confirming that an intervention contributed to an observed
result and indeed made a difference. As such, it also offers one basis for future methodological devel-
opment of CA as a form of theory-based evaluation, building on the concepts of causal packages.
Articles in the Special Issue
The articles in this Special Issue provide a broad overview and discussion of the current state of
CA. The first few articles provide specific cases of applied CA. Delahais and Toulemonde describe
their five years of experience of CA in their evaluation practice. Using several examples, they
discuss how they operationalized key aspects of CA and particularly the contribution story. They
discuss the real challenges faced in carrying out a CA and the resources required. The authors also
argue for the need for better ways of presenting contribution stories and for standards for assuring
the quality of CA.
Lemire et al. continue the discussion of applied CA, describing how they have operationalized the
steps involved in the analysis, focusing in particular on how to account for other influencing fac-
tors and rival explanations. They propose a tool to do this – the ‘Relevant Explanation Finder’.
Wimbush et al. provide examples of the use of CA and CA concepts from Canada and
Scotland. They have used CA as a participatory process, which strengthens both conceptual and
practical understanding of planning/managing for outcomes and the related implementation and
change theories, thus helping to build collaborative capacity within and across participating
partner organizations.
After an insightful discussion of the debate in development evaluation about the use of experi-
mental designs versus theory-based approaches, Vaessen and Raimondo discuss an evaluation of a
UNESCO programme that is not amenable to experimentation. They discuss CA in relation to
impact evaluation, pointing to both strengths and weaknesses, describe the formative evaluation
they conducted using a theory of change, and discuss their planned use of CA for the summative
evaluation. They also discuss the idea of contributory causes in CA as outlined in this introductory article.
Leeuw, in discussing theory-based and CA approaches, describes the application of several
tools that are likely to be new to most evaluators, addressing three problematic situations: (1)
building theories of change using software; (2) forecasting impacts by testing theories of change
with look-alike interventions and examining past implementation failures; and (3) developing ‘his-
torical’ counterfactuals. Leeuw argues that CA can be strengthened through the identification of the
mechanisms at work that visualization software can foster, through the examination of implemen-
tation failures in look-alike mechanisms, and through testing contribution stories using historical
counterfactuals and hypothetical question research.
The remaining two articles discuss various aspects of the concepts behind CA. Patton, after
outlining and discussing cases of how CA is used in evaluations, argues that it depends crucially on
critical thinking. CA requires careful and rational thinking about the factors and conditions behind
the links between interventions and their impacts. He presents a forceful argument that ‘rigorous
thinking supersedes rigorous methodology’ and discusses quality standards for rigorous thinking.
Sridharan and Nakaima end the Special Issue. They discuss a number of questions and chal-
lenges to those using theory-based and CA approaches that can help in the further development of
theory-driven evaluation and CA approaches. Questions such as: ‘What is a “good-enough” pro-
gramme theory? How does one build understanding and expectations about programme impact?
What does causality mean for complex interventions? How does learning occur? How does the
application of theory-driven evaluation approaches help generate an “ecology of evidence”? How
can evaluation methods be integrated directly with CA?’ are discussed within the context of an
ongoing dance/physical activity programme for health promotion, and the questions set the stage
for future work on CA.
Final comments
My intent with this Special Issue is to widen the interest in and share the experiences with using
CA as a way of making causal claims about interventions in a credible and rigorous manner.
The articles provide a wealth of ideas to further explore CA and many references that readers can
follow up. After a healthy gestation period, I do think that CA is coming of age.
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Notes
1. I am using the term outcomes to cover the sequence of results (or effects) – immediate, intermediate and
final outcomes – following the delivery of outputs. Final outcomes here are meant to include impacts as
the term is used in the development evaluation world.
2. In some cases two or more theories of change may emerge. Hansen and Vedung (2010) discuss using
multiple stakeholder theories of change.
3. In commenting on this introduction, Jos Vaessen suggested that attribution/contribution can be usefully
distinguished as follows. Attribution emphasizes the issues of whether or not and how much of a par-
ticular change can be attributed to an intervention. Contribution emphasizes the confluence of multiple
causal factors to a particular change and emphasizes the issues of whether or not and how an intervention
contributes to the change.
4. There is a wide variety of ways of depicting theories of change. Funnell and Rogers (2011) provide
numerous examples.
5. Note that I am distinguishing here between the sufficiency of a specific event such as an intervention, and
the sufficiency of a phenomenon.
6. Lemire et al. in this volume discuss a systematic way to explore other influencing factors and rival expla-
nations in CA.
7. If one were trying to get at attribution in this case, the evaluation issue would be: how much of the
increase in education outcomes is due to the training provided to teachers?
References

Blamey A and Mackenzie M (2007) Theories of change and realistic evaluation. Evaluation 13(4): 439–55.
Cartwright N and Hardie J (2012) Evidence-based Policy: Doing it Better: A Practical Guide to Predicting if
a Policy Will Work for You. Oxford: Oxford University Press.
Chen H-T (1990) Theory-Driven Evaluations. Newbury Park, CA: SAGE.
Connell JP and Kubisch AC (1998) Applying a theory of change approach to the evaluation of comprehensive
community initiatives: progress, prospects, and problems. In: Fulbright-Anderson K, Kubisch AC and
Connell JP (eds) New Approaches to Evaluating Community Initiatives, Vol. 2: Theory, Measurement,
and Analysis. Washington, DC: Aspen Institute.
Coryn CLS, Noakes LA, Westine CD and Schroter DC (2011) A systematic review of theory-driven evalua-
tion practice from 1990 to 2009. American Journal of Evaluation 32(2): 199–226.
Davidson EJ (2006) Causal inference nuts and bolts. American Evaluation Association Annual Conference,
Portland, OR.
Funnell S and Rogers P (2011) Purposeful Program Theory. San Francisco, CA: Jossey-Bass.
Gysen J, Bruyninckx H and Bachus K (2006) The modus narrandi: a methodology for evaluating effects of
environmental policy. Evaluation 12(1): 95–118.
Hansen MB and Vedung E (2010) Theory-based stakeholder evaluation. American Journal of Evaluation 31(3): 295–313.
Hendricks M (1996) Performance monitoring: how to measure effectively the results of our efforts. American
Evaluation Association Annual Conference. Atlanta, GA.
Lemire S (2010) Contribution analysis: the promising new approach to causal claims. Paper presented at
the European Evaluation Society Annual Conference, Prague.
Mackie JL (1974) The Cement of the Universe: A Study of Causation. Oxford: Oxford University Press.
Mahoney J (2008) Toward a unified theory of causality. Comparative Political Studies 41(4/5): 412–36.
Mayne J (2001) Addressing attribution through contribution analysis: using performance measures sensibly.
Canadian Journal of Program Evaluation 16(1): 1–24.
Mayne J (2008) Contribution Analysis: An Approach to Exploring Cause and Effect. ILAC Brief No. 16. Rome: The Institutional Learning and Change Initiative.
Mayne J (2011) Contribution analysis: addressing cause and effect. In: Schwartz R, Forss K and Marra M
(eds) Evaluating the Complex. New Brunswick, NJ: Transaction Publishers, 53–96.
Patton MQ (2008a) Utilization-Focused Evaluation, 4th edn. Thousand Oaks, CA: SAGE.
Patton MQ (2008b) Advocacy impact evaluation. Journal of MultiDisciplinary Evaluation 5(9): 1–10.
Pawson R and Tilley N (1997) Realistic Evaluation. Thousand Oaks, CA: SAGE.
Pawson R, Greenhalgh T, Harvey G and Walshe K (2004) Realist synthesis: an introduction. University of
Manchester. ESRC Research Methods Programme.
Reynolds A (1998) Confirmatory program evaluation: a method for strengthening causal inference. American
Journal of Evaluation 19(2): 203–21.
Rogers P (2007) Theory-based evaluations: reflections ten years on. New Directions for Evaluation 114.
Stame N (2004) Theory-based evaluation and varieties of complexity. Evaluation 10(1): 58–76.
Stern E, Stame N, Mayne J, Forss K, Davies R and Befani B (2012) Broadening the Range of Designs and Methods for Impact Evaluations. DFID Working Paper 38. London: DFID, pp. vi + 92.
Toulemonde J (2010) Evaluating impact through a contribution analysis. Workshop at the Prague
Conference of the European Evaluation Society.
Weiss CH (1995) Nothing as practical as good theory: exploring theory-based evaluation for comprehen-
sive community initiatives for children and families. In: Connell JP, Kubisch AC, Schorr LB and Weiss
CH (eds) New Approaches to Evaluating Community Initiatives: Concepts, Methods and Contexts.
Washington, DC: The Aspen Institute.
Weiss CH (1997a) Theory-based evaluation: past, present, and future. New Directions for Evaluation 76.
Weiss CH (1997b) How can theory-based evaluation make greater headway? Evaluation Review 21: 501–24.
Weiss CH (2000) Which links in which theories shall we evaluate? New Directions for Evaluation 87: 35–45.
White H (2009) Theory-based impact evaluation: principles and practice. Working Paper 3, International Initiative for Impact Evaluation (3ie).
White H and Phillips D (2012) Addressing attribution of cause and effect in small n impact evaluations:
towards an integrated framework. Working Paper 15, International Initiative for Impact Evaluation (3ie).
Wimbush E and Mulherin T (2010) Applying contribution analysis to partnership contexts in Scotland.
European Evaluation Society Conference, Prague.
John Mayne is an independent advisor on public sector performance. He has been working with a number of
government, NGO and international organizations in various jurisdictions, on results management, evaluation
and accountability issues. He has authored numerous articles and reports, including several on contribution
analysis, and co-edited five books in the areas of programme evaluation, public administration and performance
monitoring. In 1989 and in 1995, he was awarded the Canadian Evaluation Society Award for Contribution to
Evaluation in Canada. In 2006, he was made a Canadian Evaluation Society Fellow.
by Mayne John on July 16, 2012evi.sagepub.comDownloaded from
... We point to contribution style approaches (e.g., Kok & Schuit, 2012;Morton, 2015) as a potential way in which evaluators can address issues of attribution in future evaluation studies. Contribution analysis is a theory-based evaluation approach that provides a systematic way to arrive at credible causal claims about a program's contribution to change (Mayne, 2008;2012). The approach involves developing and assessing the evidence for a logic model to explore the program's contribution to observed outcomes. ...
... The approach involves developing and assessing the evidence for a logic model to explore the program's contribution to observed outcomes. The approach is particularly useful in situations where an experimental (i.e., twogroup) design is not feasible (Mayne, 2008;2012). The findings from a contributions analysis do not provide definitive proof that a program attributed to outcomes but allows evaluators to draw a plausible conclusion that the program has contributed to documented results (Mayne, 2008;2012). ...
... The approach is particularly useful in situations where an experimental (i.e., twogroup) design is not feasible (Mayne, 2008;2012). The findings from a contributions analysis do not provide definitive proof that a program attributed to outcomes but allows evaluators to draw a plausible conclusion that the program has contributed to documented results (Mayne, 2008;2012). ...
Full-text available
This paper examines how frequently K* training programs have been evaluated, synthesizes information on the methods and outcome indicators used, and identifies potential future approaches for evaluation. We conducted a systematic scoping review of publications evaluating K* training programs, including formal and informal training programs targeted toward knowledge brokers, researchers, policymakers, practitioners, and community members. Using broad inclusion criteria, eight electronic databases and Google Scholar were systematically searched using Boolean queries. After independent screening, scientometric and content analysis was conducted to map the literature and provide in-depth insights related to the methodological characteristics, outcomes assessed, and future evaluation approaches proposed by the authors of the included studies. The Kirkpatrick four-level training evaluation model was used to categorize training outcomes. Of the 824 unique resources identified, 47 were eligible for inclusion in the analysis. The number of published articles increased after 2014, with most conducted in the United States and Canada. Many training evaluations were designed to capture process and outcome variables. We found that surveys and interviews of trainees were the most used data collection techniques. Downstream organizational impacts that occurred because of the training were evaluated less frequently. Authors of the included studies cited limitations such as the use of simple evaluative designs, small cohorts/sample sizes, lack of long-term follow-up, and an absence of curriculum evaluation activities. This study found that many evaluations of K* training programs were weak, even though the number of training programs (and the evaluations thereof) have increased steadily since 2014. We found a limited number of studies on K* training outside of the field of health and few studies that assessed the long-term impacts of training. More evidence from well-designed K* training evaluations are needed and we encourage future evaluators and program staff to carefully consider their evaluation design and outcomes to pursue.
... Contribution analysis (CA) aims to explore how and why various elements of a program contribute to the outcomes of interest (10), e.g., how the professional accreditation standards contribute to the observed outcomes. By collecting information from multiple sources (e.g., documents and interviews), CA uses an expert-derived theory of change (14,15) to explore the interactions between program and curricular activities and connect their relationship to proximal (program-related outcomes) and distal outcomes (system-level outcomes), and the assumptions informing these connections (10). CA aligns with contemporary recommendations of health professions education program evaluation to comprehensively capture contributing factors and the emergent processes toward development of the outcomes of interest (1). ...
... This study applied Mayne's six-step contribution analysis (15,18) to evaluate the outcomes of a convenience sample of six health professions/health science programs offered at a large Australian university (name removed for peer review). In our study, we applied CA to identify relevant health professions graduate outcomes and develop a theory of how and why factors that have contributed to this outcome (10,14,15). ...
... This study applied Mayne's six-step contribution analysis (15,18) to evaluate the outcomes of a convenience sample of six health professions/health science programs offered at a large Australian university (name removed for peer review). In our study, we applied CA to identify relevant health professions graduate outcomes and develop a theory of how and why factors that have contributed to this outcome (10,14,15). Utilizing CA approach allowed us to describe the complex pathways learners experience as they move toward these outcomes and explore the relationships between the different contributing factors to the outcomes (10). ...
Full-text available
Introduction/background Course evaluation in health education is a common practice yet few comprehensive evaluations of health education exist that measure the impact and outcomes these programs have on developing health graduate capabilities. Aim/objectives To explore how curricula contribute to health graduate capabilities and what factors contribute to the development of these capabilities. Methods Using contribution analysis evaluation, a six-step iterative process, key stakeholders in the six selected courses were engaged in an iterative theory-driven evaluation. The researchers collectively developed a postulated theory-of-change. Then evidence from existing relevant documents were extracted using documentary analysis. Collated findings were presented to academic staff, industry representatives and graduates, where additional data was sought through focus group discussions - one for each discipline. The focus group data were used to validate the theory-of-change. Data analysis was conducted iteratively, refining the theory of change from one course to the next. Results The complexity in teaching and learning, contributed by human, organizational and curriculum factors was highlighted. Advances in knowledge, skills, attitudes and graduate capabilities are non-linear and integrated into curriculum. Work integrated learning significantly contributes to knowledge consolidation and forming professional identities for health professional courses. Workplace culture and educators’ passion impact on the quality of teaching and learning yet are rarely considered as evidence of impact. Discussion Capturing the episodic and contextual learning moments is important to describe success and for reflection for improvement. Evidence of impact of elements of courses on future graduate capabilities was limited with the focus of evaluation data on satisfaction. Conclusion Contribution analysis has been a useful evaluation method to explore the complexity of the factors in learning and teaching that influence graduate capabilities in health-related courses.
... Even so, it may be unlikely to attribute impact to certain research projects as impact processes are complex, diffused and fuzzy (Meagher et al., 2008). Some scholars have suggested to focus on contributions rather than attributions of research to change or impact (e.g., Mayne, 2012). As such, it may be through understanding or leveraging the process through which research can lead to impacts, such as the process of KE, in order to promote research impact. ...
... In this case, the CFS KE practitioners could evaluate the extent of activities in their KE process (e.g., using dimensions or indicators relevant to their cyclical KE process in Fig. 2 such as number of correspondences, amount of time spent on a respective process; see Maag et al., 2018 andPosner andCvitanovic, 2019 for more details on indicators and impacts of knowledge brokers and boundary spanner). Other intangible results (called attributable results indicators) such as team cohesion, group learning or alignment of objectives (common ground), increased trust, stronger and diverse social networks may be used to measure knowledge brokering effectiveness (Maag et al., 2018;Posner and Cvitanovic, 2019) and substantiate 'contribution stories' for evaluating research or intervention impacts (Mayne, 2012). Although identifying indicators is beyond the scope of our work, leveraging intermediary individuals and their process of KE or knowledge brokering may be an alternative worth exploring. ...
Full-text available
While there is a growing body of work on the barriers to knowledge exchange (KE) and the development of actionable science, what remains more elusive is an understanding of what strategies and conditions lead to effective KE, how it is operationalized, or how different practitioners define successful exchange of scientific knowledge. We interviewed nine KE practitioners at the Canadian Forest Service (CFS), a national agency, to understand: (1) who at CFS is involved in KE and how they perceive their roles, (2) the strategies for KE used in the CFS and its distribution in a KE typology framework, (3) how KE practitioners define a "successful" exchange of knowledge and KE bright spots, and (4) what conditions enable KE within the CFS. We identified CFS KE practitioners roles as knowledge brokers. They use a cyclical KE strategy that integrates concepts of co-design in operationalizing KE. The CFS KE practitioners engage in a variety of KE activities, but outreach was the most frequently cited. We suggest organizations work closely with intermediary individuals as they hold unique positions of building and maintaining relationships with knowledge users. They can also provide valuable insights in evaluating research impacts such as through contribution stories. The KE typology was a useful tool to inform decisions about KE strategies. Finally, our study emphasizes the need for organizations to adopt more qualitative evaluations to assess the full scope and impact of KE work, and recognizes the integral role of relationships and trust in all aspects of KE work.
... Before the midline and endline surveys, other organizations and institutions also promoted some interventions similar to the ones disseminated by the CORIGAP program, which resulted in the contamination of the initial grouping in our survey design. Since development does not happen in isolation, outcome and impact assessments need to take these facts into account and opt for different methodologies, such as contribution analysis (Apgar et al. 2020;Mayne 2012) or process tracing (Ton 2012) which have been shown to be effective methods to account for project contributions on development issues. Nevertheless, based on the analysis, the study indicated that the CORIGAP interventions contributed to the observed changes in farm management practices and related outcomes in the focus countries. ...
Full-text available
In this chapter, we propose a framework of market-based incentive mechanisms for the adoption and scaling of sustainable production standards throughout rice value chains and review evidence of two mechanisms that have been piloted in Vietnam: “internalizing” and “embodying.” The evidence suggests that sustainable production standards can be successfully “internalized” in rice value chains through policies (public governance) that provide an enabling environment for vertical coordination and private governance of standards (e.g., through contract farming). However, the major challenge policymakers and value chain actors face for this mechanism to succeed is to reconcile differences in contract preferences between contracting parties and solve trust and coordination issues (e.g., contract breach and side-selling). Market evidence suggests that sustainable production standards can be successfully “embodied” in rice products through certification and labeling. Vietnamese consumers were found to put significant price premiums on sustainable production certification and even more so if supplemental information is provided on certification and traceability. Both examples highlight the role policymakers can play in the adoption and scaling of sustainable production standards throughout rice value chains by creating an enabling environment for vertical coordination and private sector investment in certification and information campaigns. We conclude by discussing how policymakers can overcome the challenges for these mechanisms to succeed and identifying areas for future research. Furthermore, we provide a detailed description of the monitoring and evaluation process of CORIGAP activities. We explain the development from paper-based to computer-assisted survey tools, the evaluation of changes that farmers perceive and provide a case study on impact evaluation using econometric analysis. It becomes clear that a multidimensional project like CORIGAP needs a variety of means to assess the changes on different levels. We found that farmers in all CORIGAP countries perceive positive changes. Their yields and profits have increased, and the project has exceeded its target reach in all countries. This was also due to other funding schemes that supported CORIGAP technologies and practices, such as the rollout of 1M5R in Vietnam and the 3CT in China. The project used a variety of dissemination strategies to communicate the outputs and outcomes to a plethora of different stakeholders. Among the most successful were social media campaigns, including informative videos about CORIGAP technologies and practices. The chapter closes with some anecdotal evidence of how, especially postharvest technologies, influenced policies in the CORIGAP countries. We provide lessons learned from the project to be taken care of in future projects that aim to introduce sustainable agricultural practices and technologies to improve natural resource management.
... A plausibility analysis called contribution analysis was applied to the case of Burkina Faso during the first assessment. It is a theory-driven evaluative approach that was first articulated by Mayne in 1999, and has evolved into a well-recognized methodology [13]. Contribution analysis is used when more conventional methods cannot be used, especially when complex systems are involved. ...
Full-text available
Background The practice of giving water before 6 mo of age is the biggest barrier to exclusive breastfeeding in West and Central Africa. To address this challenge, a regional initiative, “Stronger with Breastmilk Only” (SWBO), was rolled out at country level in several countries of the region. Objective We examined the implementation process of the SWBO initiative and the contribution of its advocacy component to a more supportive environment for breastfeeding policies and programs. Methods This study was based on 2 assessments at the national level carried out in 5 countries (Burkina Faso, Chad, Democratic Republic of the Congo, Senegal, and Sierra Leone) using qualitative methods. We combined 2 evaluative approaches (contribution analysis and outcome harvesting) and applied 2 theoretical lenses (Breastfeeding Gear Model and Consolidated Framework for Implementation Research) to examine the implementation process and the enabling environment for breastfeeding. Data sources included ∼300 documents related to the initiative and 43 key informant interviews collected between early 2021 and mid-2022. Results First, we show how a broad initiative composed of a set of combined interventions targeting multiple levels of determinants of breastfeeding was set up and implemented. All countries went through a similar pattern of activities for the implementation process. Second, we illustrate that the initiative was able to foster an enabling environment for breastfeeding. Progress was achieved notably on legislation and policies, coordination, funding, training and program delivery, and research and evaluation. Third, through a detailed contribution story of the case of Burkina Faso, we illustrate more precisely how the initiative, specifically its advocacy component, contributed to this progress. Conclusion This study shed light on how an initiative combining a set of interventions to address determinants of breastfeeding at multiple levels can be implemented regionally and contributes to fostering an enabling environment for breastfeeding at scale.
... "This comparison of the data generates theoretical properties of the category...Thus the process of constant comparison stimulates thought that leads to both descriptive and explanatory categories" (Lincoln & Guba, 1985, p. 341). We will draw upon the principles of contribution analysis (Mayne, 2012) and participatory impact pathway analysis (Douthwaite et al., 2007) for a participatory construction and validation of the impact pathway. ...
Finding Disciplinary Literacy Capacities Between Cultures: An Inquiry in United States Secondary School Agriscience Education Classrooms Between English Language Learners and Native English Speakers
This article argues for the importance of theory and theorizing for evaluation in the form of a process theory of change. A process theory of change centers its theoretical attention on key episodes that explain how things worked, in which the causal linkages are unpacked. The key lies in answering why actors do what they do (and thus whether these actions can be traced back to the intervention). This theorization has three steps: (1) defining the intervention and its potential contribution; (2) theorizing the potential contribution pathways; and (3) unpacking the process. The procedure is illustrated with a hypothetical example.
There is a need to mitigate the upstream factors that contribute to workplace bullying in order to prevent its far-reaching consequences for individuals, teams, and organizations. In this chapter, we review the literature on interventions designed to prevent workplace bullying that target groups of employees or organizations as a whole, and on strategies to support their implementation. We identified a number of prevention strategies at the team, organizational, and societal levels; however, there are substantial gaps in the available research evidence on effective prevention strategies in this area. Recommendations are provided on the development of comprehensive organizational strategies to address workplace bullying, involving a participatory review and development process that breaks down the existing processes and conditions that support, precipitate, or enable workplace bullying. We also point to opportunities for further research through the validation of existing measurement instruments and the incorporation of recent advances in the evaluation of complex interventions.
Given the urgency of ecological, economic, and social transformation processes in German companies, effective personnel and organizational development is becoming increasingly important. This contribution in the journal Gruppe. Interaktion. Organisation. (GIO) introduces impact models as an instrument for clarifying the effects of transformative personnel and organizational development measures. With the help of such theory-based evaluation approaches, the workings of complex measures can be systematically developed and assessed. The evaluation also makes it possible to derive a contribution to shaping transformation processes at the employee level, within the company, and in society. Using a case example (the continuing education format VeränderungsMacher), an impact model for an integrated personnel and organizational development measure was constructed and tested in a pilot project with 16 professionals. The training was evaluated through inductive and deductive analyses of interviews with managers (N = 13) in order to determine the target and actual state of the professionals’ learning and competence gains and to examine the developed impact model structure. Descriptive data from participating professionals (N = 24) confirm the assumed impact mechanisms of this transformative training. The results are discussed with regard to the usefulness of impact models and contribution analysis for designing and evaluating transformation processes. Practical implications for the application of program-theoretical methods in companies are also presented.
The changing culture of public administration involves accountability for results and outcomes, which raises the question of how much of an observed outcome can be attributed to a program. This article suggests that performance measurement can address such attribution questions. Contribution analysis has a major role to play in helping managers, researchers, and policymakers to arrive at conclusions about the contribution their program has made to particular outcomes. The article describes the steps necessary to produce a credible contribution story.
Theory-based evaluations have helped open the ‘black box’ of programmes. An account is offered of the evolution of this persuasion through the works of Chen and Rossi, Weiss, and Pawson and Tilley. In the same way as the ‘theory of change’ approach to evaluation has tackled the complexity of integrated and comprehensive programmes at the community level, it is suggested that a theory-oriented approach based on the practice of realistic cumulation be developed for dealing with the vertical complexity of multi-level governance.
Over the last twenty or so years, it has become standard to require policy makers to base their recommendations on evidence. That is now uncontroversial to the point of triviality: of course, policy should be based on the facts. But are the methods that policy makers rely on to gather and analyze evidence the right ones? In Evidence-Based Policy, Nancy Cartwright, an eminent scholar, and Jeremy Hardie, who has had a long and successful career in both business and economics, explain that the dominant methods now in use (broadly speaking, methods that imitate standard practices in medicine, such as randomized controlled trials) do not work. They fail, Cartwright and Hardie contend, because they do not enhance our ability to predict whether policies will be effective. The prevailing methods fall short not just because social science, which operates within the domain of real-world politics and deals with people, differs so much from the natural science milieu of the lab. Rather, there are principled reasons why the advice for crafting and implementing policy now on offer will lead to bad results. Current guides tend to rank scientific methods according to the degree of trustworthiness of the evidence they produce. That is valuable in certain respects, but such approaches offer little advice about how to think about putting such evidence to use. Evidence-Based Policy focuses on showing policymakers how to use evidence effectively. It also explains what types of information are most necessary for making reliable policy and offers lessons on how to organize that information.
In this book, J. L. Mackie makes a careful study of several philosophical issues involved in his account of causation. Mackie follows Hume’s distinction between causation as a concept and causation as it is ‘in the objects’ and attempts to provide an account of both aspects. He examines the treatment of causation by philosophers such as Hume, Kant, Mill, Russell, Ducasse, Kneale, Hart and Honoré, and von Wright. Mackie’s own account involves an analysis of causal statements in terms of counterfactual conditionals, though these are judged to be incapable of giving a complete account of causation. Mackie argues that regularity theory, too, can offer only an incomplete picture of the nature of causation. In the course of his analysis, Mackie critically examines the account of causation offered by Kant, as well as contemporary Kantian approaches offered by philosophers such as Bennett and Strawson. Also addressed are issues such as the direction of causation, the relation of statistical laws and functional laws, the role of causal statements in legal contexts, and the understanding of causes both as ‘facts’ and as ‘events’. Throughout the discussion of these topics, Mackie develops his own complex account of the nature of causation, finally bringing his analysis to bear on the topic of teleology and the question of whether final causes can justifiably be reduced to efficient causes.