
Contribution analysis: An approach to exploring cause and effect

Author: John Mayne

Abstract

Questions of cause and effect are critical to assessing the performance of programmes and projects. When it is not practical to design an experiment to assess performance, contribution analysis can provide credible assessments of cause and effect. Verifying the theory of change that the programme is based on, and paying attention to other factors that may influence the outcomes, provides reasonable evidence about the contribution being made by the programme.
Introduction
A key question in the assessment of programmes and projects
is that of attribution: to what extent are observed results due
to programme activities rather than other factors? What we
want to know is whether or not the programme has made a
difference: whether or not it has added value. Experimental or
quasi-experimental designs that might answer these questions
are often not feasible or not practical. In such cases,
contribution analysis can help managers come to reasonably
robust conclusions about the contribution being made by
programmes to observed results.
Contribution analysis explores attribution through
assessing the contribution a programme is making to observed
results. It sets out to verify the theory of change behind a
programme and, at the same time, takes into consideration
other influencing factors. Causality is inferred from the
following evidence:
1. The programme is based on a reasoned theory of
change: the assumptions behind why the programme is
expected to work are sound, are plausible, and are
agreed upon by at least some of the key players.
2. The activities of the programme were implemented.
3. The theory of change is verified by evidence: the chain
of expected results occurred.
4. Other factors influencing the programme were assessed
and were either shown not to have made a significant
contribution or, if they did, the relative contribution
was recognised.
Contribution analysis is useful in situations where the
programme is not experimental (there is little or no scope for varying how the programme is implemented) and the
programme has been funded on the basis of a theory of
change. Many managers and evaluators assessing the
performance of programmes face this situation. Kotvojs (2006)
describes one way of using contribution analysis in a
development context, "as a means to consider progress
towards outputs and intermediate and end outcomes" (p. 1).
Conducting a contribution analysis
There are six iterative steps in contribution analysis (Box 1),
each step building the contribution story and addressing
weaknesses identified in the previous stage. If appropriate,
many of the steps can be undertaken in a participatory mode.
Step 1: Set out the attribution problem to be addressed
Acknowledge the attribution problem. Too often the question of
attribution is ignored in programme evaluations. Observed
results are reported with no discussion as to whether they
were the result of the programme's activities. At the outset, it
should be acknowledged that there are legitimate questions
about the extent to which the programme has brought about
the results observed.
Determine the specific cause-effect question being addressed.
A variety of questions about causes and effects can
be asked about most programmes. These range from traditional
causality questions, such as
To what extent has the programme caused the outcome?
to more managerial questions, such as
Is it reasonable to conclude that the programme has made
a difference to the problem?
Care is needed to determine the relevant cause-effect question
in any specific context, and whether or not the question is
reasonable. In many cases the traditional causality question
may be impossible to answer, or the answer may simply lack
any real meaning given the numerous factors influencing a
result. However, managerial-type cause-effect questions are
generally amenable to contribution analysis.
Determine the level of confidence required.
The level of
proof required needs to be determined. Issues that need to be
considered are, for example: What is to be done with the
findings? What kinds of decisions will be based on the
findings? The evidence sought needs to fit the purpose.
Explore the type of contribution expected.
It is worth
exploring the nature and extent of the contribution expected
from the programme. This means asking questions such as:
What do we know about the nature and extent of the
contribution expected? What would show that the programme
made an important contribution? What would show that the
programme 'made a difference'? What kind of evidence would
we (or the funders or other stakeholders) accept?
Determine the other key influencing factors.
In
determining the nature of the expected contribution from the
programme, the other factors that will influence the outcomes
will also need to be identified and explored, and their
significance judged.
ILAC Brief 16 May 2008
Box 1. Contribution Analysis
Step 1: Set out the attribution problem to be addressed
Step 2: Develop a theory of change and risks to it
Step 3: Gather the existing evidence on the theory of change
Step 4: Assemble and assess the contribution story, and challenges to it
Step 5: Seek out additional evidence
Step 6: Revise and strengthen the contribution story
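Read as a process, Box 1 describes one pass through Steps 1-3 followed by a loop over Steps 4-6 that repeats until the contribution story is judged credible enough for its intended use. A minimal sketch of that iteration (illustrative only; the step names follow Box 1, and the `assess` callback is a hypothetical stand-in for the Step 4 review):

```python
# Illustrative sketch of the iterative contribution analysis process (Box 1).
STEPS = [
    "Set out the attribution problem to be addressed",
    "Develop a theory of change and risks to it",
    "Gather the existing evidence on the theory of change",
    "Assemble and assess the contribution story, and challenges to it",
    "Seek out additional evidence",
    "Revise and strengthen the contribution story",
]

def contribution_analysis(assess, max_rounds=3):
    """Run steps 1-3 once, then iterate steps 4-6 until the story is
    judged credible or the evidence-gathering budget runs out."""
    log = STEPS[:3]                 # steps 1-3: set-up and existing evidence
    for round_no in range(max_rounds):
        log.append(STEPS[3])        # step 4: assemble and assess the story
        if assess(round_no):        # credible enough for the purpose at hand?
            break
        log.extend(STEPS[4:6])      # steps 5-6: seek evidence, revise story
    return log

# Example: the story is judged credible on the second assessment.
trace = contribution_analysis(lambda round_no: round_no >= 1)
```

The point of the sketch is simply that Step 4 is revisited after each round of additional evidence, which is how the brief describes strengthening the story.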
Assess the plausibility of the expected contribution in relation to the size of the programme.
Is the expected contribution of the programme
plausible? Assessing this means asking questions such as: Is the
problem being addressed well understood? Are there baseline data?
Given the size of the programme intervention, the magnitude and
nature of the problem and the other influencing factors, is an important
contribution by the programme really likely? If a significant contribution
by the programme is not plausible, the value of further work on causes
and effects needs to be reassessed.
Step 2: Develop the theory of change and the risks to it
Build a theory of change and a results chain.
The key tools of contribution
analysis are theories of change and results chains. With these tools the
contribution story can be built. Theories of change (Weiss, 1997)
explain how the programme is expected to bring about the desired
results: the outputs, and subsequent chain of outcomes and impacts
(impact pathways of Douthwaite et al., 2007). In development aid, a
logframe is often used to set out funders' and/or managers' expectations
as to what will happen as the programme is implemented. The theory
of change, as well as simply identifying the steps in the results chain,
should identify the assumptions behind the various links in the chain
and the risks to those assumptions. One way of representing a theory
of change including its assumptions and risks is shown in Figure 1.
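Structurally, a theory of change of this kind is a chain of result levels, each link carrying the assumptions that connect it to the next level and the risks to those assumptions. A hypothetical sketch of that structure (the class and entries are illustrative, with example text drawn from Figure 1, not an instrument from the brief):

```python
# Sketch: a results chain as an ordered list of levels, each carrying the
# assumptions and risks for the link up to the next level (cf. Figure 1).
from dataclasses import dataclass, field

@dataclass
class Link:
    level: str                  # e.g. "Outputs", "Immediate outcomes"
    result: str                 # the expected result at this level
    assumptions: list = field(default_factory=list)  # why this leads upward
    risks: list = field(default_factory=list)        # what could break the link

chain = [
    Link("Outputs",
         "Information, training and workshops, facilitation of change",
         assumptions=["Intended target audience received the outputs"],
         risks=["Intended reach not met"]),
    Link("Immediate outcomes",
         "Enhanced planning, monitoring and evaluation processes",
         assumptions=["AROs integrate the new approaches into daily business"],
         risks=["PM&E systems sidelined"]),
    Link("Final outcomes (impacts)",
         "More effective, efficient and relevant agricultural programmes"),
]

# Walking the chain surfaces every assumption that evidence must support
# (Step 3) and every risk that needs watching.
all_assumptions = [a for link in chain for a in link.assumptions]
```

Keeping assumptions and risks attached to their links, rather than in a separate list, mirrors the brief's advice that each link in the chain be examined on its own evidence.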
Determine the level of detail.
Logic models/results
chains/theories of change can be shown at almost any level of detail.
Contribution analysis needs reasonably straightforward, not overly
detailed logic, especially at the outset. Refinements may be needed but
can be added later.
Determine the expected contribution of the programme.
Making
statements about the contribution of programmes to outputs is quite
straightforward, but it is considerably more challenging to make
statements about the contribution that programmes make to final
outcomes (impacts). Three 'circles of influence' (Montague et al., 2003)
are useful here:
- direct control: where the programme has fairly direct control of the results, typically at the output level;
- direct influence: where the programme has a direct influence on the expected results, such as the reactions and behaviours of its clients through direct contact, typically the immediate outcomes and perhaps some intermediate outcomes; and
- indirect influence: where the programme can exert significantly less influence on the expected results due to its lack of direct contact with those involved and/or the significant influence of other factors.
The theory of change is probably much better developed and understood, and expectations are clearer, at the direct control and direct influence levels than at the level of indirect influence.
List the assumptions underlying the theory of change.
Typical logic
models focus on the results expected at different levels, i.e., the boxes
in the results chain in Figure 1. But a theory of change needs to spell
out the assumptions behind the theory, for example to explain what
conditions have to exist for A to lead to B, and what key risks there are
to that condition. Leeuw (2003) discusses different ways of eliciting and
illustrating these behind-the-scenes assumptions.
Include consideration of other factors that may influence outcomes. A well thought out theory of change not only shows the
results chain of a programme but also how external factors may affect
the results. In Figure 1, other influences (not shown) might be pressure
from donors and/or a government-wide initiative to improve PM&E.
Although it is not realistic to do primary research on external factors
that may affect results, reasonable efforts should be made to gather
available information and opinions on the contribution they might have.
Determine how much the theory of change is contested.
Views
may differ about how a programme is supposed to work. If many players
contest the theory of change, this may suggest that overall
understanding of how the programme is supposed to work is weak. If,
Figure 1. A Theory of Change for Enhancing Planning, Monitoring and Evaluation (PM&E) Capacity in Agricultural Research Organisations (AROs). Adapted from Horton et al. (2000).

Results chain (read from outputs upward), with the assumptions and risks attached to each link:

Outputs: information; training and workshops; facilitation of organisational change.
Assumptions: Intended target audience received the outputs. With hands-on, participatory assistance and training, AROs will try enhanced planning, monitoring and evaluation approaches.
Risks: Intended reach not met; training and information not convincing enough for AROs to make the investment; only partially adopted to show interest to donors.

Immediate Outcomes: Enhanced planning processes, evaluation systems, monitoring systems, and professional PM&E capacities.
Assumptions: Over time and with continued participatory assistance, AROs will integrate these new approaches into how they do business. The project's activities complement other influencing factors.
Risks: Trial efforts do not demonstrate their worth; pressures for greater accountability dissipate; PM&E systems sidelined.

Intermediate Outcomes: Institutionalisation of integrated PM&E systems and strategic management principles.
Assumptions: The new planning, monitoring and evaluation approaches will enhance the capacity of the AROs to better manage their resources.
Risks: Management becomes too complicated; PM&E systems become a burden; information overload; evidence not really valued for managing.

Intermediate Outcomes: Strengthened management of agricultural research.
Assumptions: Better management will result in more effective, efficient and relevant agricultural programmes.
Risks: New approaches do not deliver (great plans but poor delivery); resource cut-backs affect PM&E first; weak utilisation of evaluation information.

Final Outcomes (impacts): More effective, efficient and relevant agricultural programmes.
after discussion and debate, key players cling to alternative theories of
change, then it may be necessary to assess each of these, specifically
the links in the results chain where the theories of change differ. The
process of gathering evidence to confirm or discard alternative theories
of change should help decide which theory better fits reality.
Step 3: Gather existing evidence on the theory of change
Assess the logic of the links in the theory of change.
Reviewing the
strengths and weaknesses of the logic, the plausibility of the various
assumptions in the theory and the extent to which they are contested,
will give a good indication of where concrete evidence is most needed.
Gather the evidence.
Evidence to validate the theory of change is
needed in three areas: observed results, assumptions about the theory
of change, and other influencing factors.
Evidence on results and activities
Evidence on the occurrence or not of key results (outputs, and
immediate, intermediate and final outcomes/impacts) is a first step for
analysing the contribution the programme made to those results.
Additionally, there must be evidence that the programme was
implemented as planned. Were the activities that were undertaken, and the outputs of these activities, the same as those that were set out in
the theory of change? If not, the theory of change needs to be revised.
Evidence on assumptions
Evidence is also needed to demonstrate that the various assumptions in
the theory of change are valid, or at least reasonably so. Are there
research findings that support the assumptions? Many interventions in
the public and not-for-profit sectors have already been evaluated.
Mayne and Rist (2006) discuss the growing importance of synthesising
existing information from evaluations and research. Considering and
synthesising evidence on the assumptions underlying the theory of
change will either start to confirm or call into question how programme
actions are likely to contribute to the expected results.
Evidence on other influencing factors
Finally, there is a need to examine other significant factors that may
have an influence. Possible sources of information on these are other
evaluations, research, and commentary. What is needed is some idea of
how influential these other factors may be.
Gathering evidence can be an iterative process, first gathering
and assembling all readily available material, leaving more exhaustive
investigation until later.
Step 4: Assemble and assess the contribution story, and
challenges to it
The contribution story, as developed so far, can now be assembled and
assessed critically. Questions to ask at this stage are:
- Which links in the results chain are strong (good evidence available, strong logic, low risk, and/or wide acceptance) and which are weak (little evidence available, weak logic, high risk, and/or little agreement among stakeholders)?
- How credible is the story overall? Does the pattern of results and links validate the results chain?
- Do stakeholders agree with the story? Given the available evidence, do they agree that the programme has made an important contribution (or not) to the observed results?
- Where are the main weaknesses in the story? For example: Is it clear what results have been achieved? Are key assumptions validated? Are the impacts of other influencing factors clearly understood? Any weaknesses point to where additional data or information would be useful.
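These questions amount to grading each link in the chain. A crude illustrative scoring along the four criteria named above; the numeric scale and the 0.6 threshold are hypothetical, not from the brief:

```python
# Sketch: classify a results-chain link as 'strong' or 'weak' from the four
# Step 4 criteria: evidence, logic, risk, and stakeholder acceptance.
def link_strength(evidence, logic, low_risk, acceptance):
    """Each argument is a judgement in [0, 1]; a link counts as 'strong'
    when most criteria are favourable (threshold is illustrative)."""
    score = (evidence + logic + low_risk + acceptance) / 4
    return "strong" if score >= 0.6 else "weak"

# Good evidence, sound logic, low risk, wide acceptance -> a strong link.
assert link_strength(0.9, 0.8, 0.7, 0.8) == "strong"
# Little evidence and contested logic -> a weak link, a target for Step 5.
assert link_strength(0.2, 0.4, 0.5, 0.3) == "weak"
```

Weak links identified this way are exactly where Step 5's additional evidence-gathering should concentrate.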
So far, no 'new' data has been gathered other than from discussions with
programme individuals and maybe experts, and perhaps a literature
search. At this point, the robustness of the contribution story, with
respect to the attribution question(s) raised at the outset, is known and
will guide further efforts.
Step 5: Seek out additional evidence
Identify what new data is needed.
Based on the assessment of the
robustness of the contribution story in Step 4, the information needed
to address challenges to its credibility can now be identified, for
example, evidence regarding observed results, the strengths of certain
assumptions, and/or the roles of other influencing factors.
Adjust the theory of change.
It may be useful at this point to
review and update the theory of change, or to examine more closely
certain elements of the theory. To do this, the elements of the theory
may need to be disaggregated so as to understand them in greater
detail.
Gather more evidence.
Having identified where more evidence is
needed, it can then be gathered. Multiple approaches to assessing
performance, such as triangulation, are now generally recognised as
useful and important in building credibility. Some standard approaches
to gathering additional evidence for contribution analysis (Mayne,
2001) are:
- Surveys of, for example, subject matter experts, programme managers, beneficiaries, and those involved in other programmes that are influencing the programme in question.
- Case studies, which might suggest where the theory of change could be amended.
- Tracking variations in programme implementation, such as over time and between locations.
- Conducting a component evaluation on an issue or area where performance information is weak.
- Synthesising research and evaluation findings, for example using cluster evaluation and integrative reviews, and synthesising existing studies.
Step 6: Revise and strengthen the contribution story
New evidence will build a more credible contribution story, buttressing
the weaker parts of the earlier version or suggesting modifications to
the theory of change. It is unlikely that the revised story will be
foolproof, but it will be stronger and more credible.
Contribution analysis works best as an iterative process. Thus,
at this point the analysis may return to Step 4 (Box 1) and reassess the
strengths and weaknesses of the contribution story.
Box 2 illustrates some of the steps in contribution analysis in
one evaluation and makes suggestions about what else could have been
done.
Levels of contribution analysis
Three levels of contribution analysis lead to different degrees of
robustness in statements of contribution:
Minimalist contribution analysis.
At this level, the analysis (1)
develops the theory of change, and (2) confirms that the expected
outputs were delivered. Statements of contribution are based on the
inherent strength of the theory of change and on evidence that the
expected outputs were delivered. For example, in a vaccination
programme, if the outputs (vaccinations) are delivered, then the
outcome of immunisation can be assumed based on the results of
previous vaccination programmes. The weaknesses of this level of
analysis are any perceived weaknesses in the theory of change.
Contribution analysis of direct influence.
This level of analysis
starts with minimalist analysis and gathers and builds evidence that (1)
the expected results in areas of direct influence of the theory of change
were observed, and (2) the programme was influential in bringing about
those results, taking other influencing factors into consideration.
Statements of contribution are based on (1) observed results, (2)
confirmation that the assumptions about direct influence are supported
by factual evidence, and (3) the inherent strength of the theory of
change in areas of indirect influence. An example of where this level of
analysis would be appropriate is an intervention to get an agricultural
research organisation to work collaboratively to solve complex
problems (an approach, say, that has proven effective elsewhere). If
there is evidence that the research organisation has indeed adopted the
new approach (the desired behavioural change) as a result of the
intervention, the subsequent benefits may not have to be
demonstrated, as they will have already been established from previous
research.
Contribution analysis of indirect influence.
This level extends the
analysis into the more challenging area of indirect influence. It measures
the intermediate and final outcomes/impacts (or some of them) and
gathers evidence that the assumptions (or some of them) in the theory
of change in the areas of indirect influence were borne out. Statements
of contribution at this level attempt to provide factual evidence for at
least the key parts of the entire theory of change.
Further reading
Douthwaite, B., Schulz, S., Olanrewaju, A.S. and Ellis-Jones, J. 2007.
Impact pathway evaluation of an integrated Striga hermonthica
control project in Northern Nigeria. Agricultural Systems 92:
201-222.
Horton, D., Mackay, R., Andersen, A. and Dupleich, L. 2000.
Evaluating capacity development in planning, monitoring, and
evaluation: a case from agricultural research. Research Report no.
17. Available at
http://www.ifpri.org/isnararchive/Publicat/PDF/rr-17.pdf
Kotvojs, F. 2006. Contribution analysis: a new approach to evaluation
in international development. Paper presented at the Australian
Evaluation Society 2006 International Conference, Darwin.
Available at http://www.aes.asn.au/conferences/2006/papers/
022%20Fiona%20Kotvojs.pdf.
Leeuw, F.L. 2003. Reconstructing program theories: methods available
and problems to be solved. American Journal of Evaluation 24:
5-20.
Mayne, J. 2001. Addressing attribution through contribution analysis:
using performance measures sensibly. Canadian Journal of
Program Evaluation 16: 1-24. Earlier version available at
http://www.oagbvg.gc.ca/domino/other.nsf/html/99dp1_e.html/
$file/99dp1_e.pdf
Mayne, J. and Rist, R.C. 2006. Studies are not enough: the necessary
transformation of evaluation. Canadian Journal of Program
Evaluation 21: 93-120.
Montague, S., Young, G. and Montague, C. 2003. Using circles to tell
the performance story. Canadian Government Executive 2: 12-16.
Available at
http://pmn.net/library/usingcirclestotelltheperformancestory.htm
Weiss, C.H. 1997. Theory-based evaluation: past, present, and future.
New Directions for Evaluation 76(Winter): 41-55.
About the author
John Mayne (john.mayne@rogers.com) is an independent advisor on
public sector performance. Previously, he was with the Office of the
Auditor General of Canada and the Treasury Board Secretariat.
Recent briefs
7. Outcome mapping
8. Learning alliances
9. The Sub-Saharan Africa Challenge Program
10. Making the most of meetings
11. Human resources management
12. Linking diversity to organizational effectiveness
13. Horizontal evaluation
14. Engaging scientists through institutional histories
15. Evaluación horizontal: Estimulando el aprendizaje
social entre "pares"
The Institutional Learning and Change (ILAC) Initiative (www.cgiar-ilac.org), hosted by Bioversity
International, seeks to increase the contributions of agricultural research to sustainable reductions in
poverty. The ILAC Initiative is currently supported by the Netherlands Ministry of Foreign Affairs.
ILAC Briefs aim to stimulate dialogue and to disseminate ideas and experiences that researchers
and managers can use to strengthen organizational learning and performance. An ILAC brief may
introduce a concept, approach or tool; it may summarize results of a study; or it may highlight an event
and its significance. To request copies, write to ilac@cgiar.org. The ILAC Initiative encourages fair use of
the information in its Briefs and requests feedback from readers on how and by whom the publications
were used.
Box 2. Contribution Analysis in Evaluating Capacity
Development in Planning, Monitoring and Evaluation
In the evaluation of the project on evaluating capacity
development in planning, monitoring and evaluation (Figure
1) outlined and described by Horton et al. (2000), a number
of steps in contribution analysis were undertaken:
- A theory of change was developed.
- There was clear recognition that the project activities were not the only influences on adoption of PM&E approaches; other influencing factors were identified, such as the general pressure for public sector reform and pressure from donors.
- Surveys asked explicitly for views on the nature and extent of the project's contribution to enhanced capacity, and attempts were made to triangulate the findings.
- The lessons learned on how future projects could enhance their contribution represent de facto refinements of the theory of change.
Additional contribution analysis steps that might
have been useful include:
- A more structured approach to assessing contribution from the outset.
- More analysis of the other influencing factors, perhaps through clearer articulation up front, comparisons with similar organisations not part of the project, and through asking about the relative contribution of the project efforts.
- More attention to the risks facing the project.
... The Contribution Analysis is a methodology used to explore and determine the relative contributions of a program; it is used to provide an acceptable evidence-based narrative about the contributions a programme is making as well as to evaluate the impacts of these programmes/policies through attribution using observed results. Mayne, J. (2008). Therefore, the contribution analysis technique will be used to reveal effectiveness, impact and sustainability of the CYP program to member states, stakeholders and young people within the Commonwealth member states by carrying out an in-depth assessment which will evaluate the extent to which the Commonwealth member states and stakeholders may have benefited from the CYP work and identify sustainable impacts (if any), the CYP has had on the lives of young people in the member countries through attribution using observed results. ...
... The technique will review the Commonwealth Secretariats strategic plan in relation to observed results from the member countries and stakeholders who benefit from these programmes to determine impact and link outcomes (positive or negative) to the YTH strategic plan/overall programme output to reveal effectiveness and sustainability. It will evaluate the YTH strategy and programmes delivered to member states and stakeholders in the strategic period under review to determine benefits, likely impacts and unintended side effects, the desired changes will also be determined The CA will be conducted by following 6 basic steps Mayne, J. (2008). The first step is to set out the attribution problem to be addressed using cause and effect questions such as "Are the observed outcomes in the lives of young people as a result of the CYP programmes"? ...
... The second step will be to develop a theory of change and the risks to it. Mayne, J. (2008) suggests the use of the theory of change and the result chains to build the contribution story on the desired results the CYP is expected to produce -the outputs and the chain of impact after the programme has been implemented (impact-outcome) one of which is "Member states engage with and benefit from strengthened Good Offices of the Secretary-General." ...
Thesis
Full-text available
Reflection and communication are drivers of change and restructuring which is fundamental to the continued survival of organizations. Changes are driven by evidence which is a product of evaluation, therefore, for organizations to demonstrate impact, program monitoring and evaluation is necessary towards determining the effectiveness, value and impact of programs and to generate results that would be applied towards addressing problematic areas, design and improve program implementation for the continued success of these programs. The Commonwealth Secretariat through the Strategic Planning and Evaluation Division (SPED-the client) has a need to review the CYP program and track the progress and contributions of work done in the areas of youth led initiatives, youth empowerment, the strategic and intermediate outcomes as detailed in the Revised Strategic Plan for the period of 2013/14-2016/17 to ascertain the relevance, efficiency, effectiveness, impact and sustainability of the support provided by YTH in advancing CYP as well as to garner recommendations in areas such as the strategic and operational perspective to optimize the utilization of resources in achieving sustainable impact. This report bid articulates steps that will be taken to achieve the client's objectives using proven business methodologies to conduct the evaluation study which will provide outcomes such as revealing the relevance of the support of YTH to member states, program efficiency and effectiveness, benefits of the CYP to member states, sustainability and impact of the CYP in member countries, program strategy and structure, challenges and lessons learnt of the current program. It further proffers recommendations on how to improve the CYP, program design, management and the implementation of strategic plans to maximise impact as well as insights to help increase the utilization of resources towards achieving sustainable impact. 2 74108838 OBIANUJU BARBARA AKAGBUSI
... For this, a contribution analysis (CA) based on the Theory of Change (ToC) of the Gender and Social Inclusion (GSI) Strategy, as presented in 2016, was carried out. This methodology allows to find reasonable evidence about the contribution that any development intervention has made to a specific change (Mayne, 2008(Mayne, , 2012. We collected the evidence through deep dives in the Outcome Impact Case Reports (OICRs) and interviews with key stakeholders working in each region where CCAFS has developed activities related with gender and policies. ...
... A contribution analysis (CA) is a methodology used to identify reasonable evidence about the contribution that any development intervention has made to a specific change or set of changes (Mayne, 2008(Mayne, , 2012. Due to the complexity of large social changes, the aim is to produce a credible, evidence-based narrative of the contribution that a reasonable person would be likely to agree with, rather than to produce conclusive proof. ...
Technical Report
Full-text available
Women and girls are disproportionately affected by global crises such as climate change and environmental degradation. Moreover, the key role of women in agriculture and in sustaining the livelihoods and food security of their households in low-income countries, emphasises the need to address the gender gap. Therefore, gender transformative research that informs policymakers and improves the design of innovative and equitable climate laws and policies and adaptation and mitigation strategies is needed. This document presents a synthesis of the work of CCAFS in integrating a gender perspective into climate change policies and agreements at global, national and subnational levels in the last ten years (2010-2020). A contribution analysis (CA) based on the Theory of Change (ToC) of the Gender and Social Inclusion (GSI) Strategy was carried out. We collected the evidence through deep dives in the Outcome Impact Case Reports (OICRs) and interviews with key stakeholders working in each region where CCAFS has developed activities related to gender and policies. Our preliminary results show that, by using a multilevel governance approach to policy processes, the CCAFS program has contributed to anticipated outcomes and that it has played a key role in raising awareness about GSI and gender-transformative approaches in agriculture and climate policy agenda. However, all the efforts have proved insufficient to achieve the transformation that women and girls throughout the world need to see in international and national debates, policies, and practices concerning climate crisis. Therefore, our suggestion is to involve civil organizations and invest more in strengthening institutions for gender-transformative societies.
... However, in instances involving small "n" cases, experimental or quasi-experimental designs are not 4 The Normal Lights Volume 16, No. 1 (2022) practical and not feasible to permit statistical inferences on the magnitude of impacts created by the intervention (White & Phillips, 2012). Contribution analysis thus emerged as an alternative tool to document plausible conclusions about the impacts contributed by the intervention (Mayne, 2008). Causality is then inferred from the narrative statements of program beneficiaries, taking into consideration the multiple sources that might have influenced the changes reported. ...
Article
Full-text available
While the teacher-training program is one of the prominent extension modalities in Higher Education Institutions, a closer look at the literature has revealed several gaps in evaluating the long-term impacts of training programs. The present study addressed limitations related to the absence of baseline data and of an evaluation framework by using Kirkpatrick's four levels of evaluation to assess a teacher-training extension program at a university in Bicol region, Philippines. Findings suggest that the teacher-training was successfully implemented at the reaction level but failed to assess changes in learning and behavior after the training was conducted. The results level was measured using the Qualitative Impact Assessment Protocol (QuIP). Causal statements from the key informants in QuIP revealed positive changes during the evaluation period but without explicit reference to the training program. Based on the evaluation results, lessons learned were documented concerning the extension program's timing, duration, and monitoring. Consequently, the study recommended the appraisal of existing practices and extension policies on teacher-training programs and other similar undertakings against evaluability criteria and standards.
... Furthermore, we followed the precepts of contribution analysis to assess whether any changes in open research behaviours during the pandemic could be attributed to the Joint Statement. In particular, we sought to answer the following questions, with a particular focus on signatory organisations: ...
Technical Report
Full-text available
This report explores the impact of calls to rapidly and openly share COVID-19 research findings to inform the public health response, as recommended in our 2020 Joint Statement. It reflects on how open and rapid sharing shaped the global pandemic response and behaviour of the research community, and sets out recommendations for organisations who may wish to develop statements as a policy tool.
... While attribution indicates how much change is caused by (attributed to) an organization's specific effort, contribution assesses how much the organization contributed to the outcomes of change, without explicit indication of the share of effects produced and the amount of change created (Budhwani & McDavid, 2017). Advocate organizations can demonstrate how they contributed to policy success rather than how the policy change was attributed exclusively to their efforts (Devlin-Foltz et al., 2012; Mayne, 2008). Contribution analysis has much to offer the theory-based evaluation landscape, as it bypasses the methodological complexity of establishing causality without actually defying its importance (Budhwani & McDavid, 2017; Dybdal et al., 2011; Kotvojs & Shrimpton, 2007). ...
Article
Advocacy is an intentional act of influencing government and an important precondition for successful policy change in society. Drawing from an existing framework on policy influence, we propose an approach to quantifying the impact of policy influence efforts, specifically within the context of European Public Health (EPH) advocacy. The analysis hinges on the article "Moving from tokenism", which provides a starting point for conceptualizing strategies to quantify impact. An exploratory case study approach allowed us to integrate the literature on advocacy evaluation in parallel with the internal documentation of an EPH advocacy organization. We provide recommendations to advocacy organizations that aim to create an infrastructure for quantifying the impact of their efforts. The framework is mostly tailored to the needs of EPH advocacy, but it can also have resonance beyond the scope of a specific sector.
... Sometimes co-creation processes can result in stakeholder fatigue, which may limit the quality of the results. Additionally, it is not clear whether co-creation processes are effective for achieving better results (Mayne, 2008, 2015). Thus, more evidence is needed to assess the impact and effectiveness of different aspects of co-creation processes (Durose et al., 2018). ...
Article
Full-text available
Stockholm Environment Institute is an international non-profit research and policy organization that tackles environment and development challenges. We connect science and decision-making to develop solutions for a sustainable future for all. Our approach is highly collaborative: stakeholder involvement is at the heart of our efforts to build capacity, strengthen institutions, and equip partners for the long term. Our work spans climate, water, air, and land-use issues, and integrates evidence and perspectives on governance, the economy, gender and human health. Across our eight centres in Europe, Asia, Africa and the Americas, we engage with policy processes, development action and business practice throughout the world.
... They also understood that in complex situations, it is often not possible, or at least extremely difficult, to establish direct causal links between activities and results. However, according to Mayne (2008), by carefully delineating the intentions and associated activities of an intervention and linking these to the intended outcomes, it is possible to establish a theory of change that is at least plausible. Such a theory can be tested in the realities of practice, subject to appropriate methods of evaluation or investigation, to identify the contribution made by the intervention amongst other factors that affect results. ...
Book
Full-text available
Citation: This publication can be cited as: Fullerton, D., Bamber, J. and Redmond, S. (2021) Developing effective relationships between youth justice workers and young people: a synthesis of the evidence, REPPP Review, University of Limerick.
Article
Modern power systems require advanced, intelligent sensor-based protection such as the Phasor Measurement Unit, which can provide faster, more accurate, real-time data acquisition. The aim is to enable accurate, action-oriented monitoring of transmission lines so that rapid action can be taken during abnormal circumstances before a blackout occurs. Among different algorithms, this study focuses on modelling the non-recursive phasor estimation method in a power Simulink environment for a standard test system equipped with a developed algorithm to detect the fault zone. The algorithm includes an index for faulty-bus classification based on the positive-sequence voltage measurements of the pre-fault and post-fault conditions, where the bus with the maximum differential percentage is identified as the faulted bus. An important differentiation of this work is that the proposed algorithm can coordinate with all phasor measurement units to accurately determine the faulty line using an index of unwrapped dynamic phase angles. Furthermore, the robustness of the indices is analyzed in the presence of sudden load change, measurement noise, and nonlinear high-impedance faults. The performance of the comprehensive algorithm is investigated on the IEEE 9-bus and 39-bus standard test systems by applying different fault scenarios, considering several factors such as fault inception angle, line-fault resistance, and ground-fault resistance. The comparative studies have shown that the proposed indices can play a significant role in segregating fault and non-fault conditions, as needed to supervise the appropriate relays for enhancing the overall security of the power grid.
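The faulted-bus index described in this abstract can be illustrated with a minimal sketch: for each bus, take the percentage drop in positive-sequence voltage magnitude between the pre-fault and post-fault conditions, and flag the bus with the largest drop. This is a simplified reading of the paper's index, not its exact implementation; the function name and the per-unit voltage values are illustrative assumptions.

```python
def faulted_bus(pre_fault, post_fault):
    """Identify the faulted bus as the one with the maximum
    percentage drop in positive-sequence voltage magnitude
    between pre-fault and post-fault conditions."""
    drops = {
        bus: 100.0 * (pre_fault[bus] - post_fault[bus]) / pre_fault[bus]
        for bus in pre_fault
    }
    return max(drops, key=drops.get)

# Hypothetical per-unit voltage magnitudes at three buses:
pre = {"bus1": 1.00, "bus2": 1.02, "bus3": 0.99}
post = {"bus1": 0.95, "bus2": 0.60, "bus3": 0.97}
print(faulted_bus(pre, post))  # bus2 shows the largest drop
```

In a real scheme, the voltage magnitudes would come from synchronized PMU measurements across the network rather than a static dictionary, and the index would be evaluated against a threshold to distinguish fault from non-fault conditions.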
Article
Capacity development is increasingly recognized as central to conservation goals. Efforts to develop individual, organizational and societal capacity underpin direct investments in biodiversity conservation and natural resource management, and sustain their impact over time. In the face of urgent needs and increasingly complex contexts for conservation, the sector not only needs more capacity development; it needs new approaches to capacity development. The sector is embracing the dynamic relationships between the ecological, political, social and economic dimensions of conservation. Capacity development practitioners should ensure that individuals, organizations and communities are prepared to work effectively in these complex environments of constant change to transform the systems that drive biodiversity loss and unsustainable, inequitable resource use. Here we advocate for a systems view of capacity development. We propose a conceptual framework that aligns capacity development components with all stages of conservation efforts, fosters attention to context, and coordinates with parallel efforts to engage across practitioners and sectors for more systemic impact. Furthermore, we highlight a need for practitioners to target, measure and support vital elements of capacity that have traditionally received less attention, such as values and motivation, leadership and organizational culture, and governance and participation, by using approaches from psychology, the social sciences and systems thinking. Drawing from conservation and other sectors, we highlight examples of approaches that can support reflective practice, so capacity development practitioners can better understand the factors that favour or hinder the effectiveness of interventions and influence system-wide change.
Technical Report
Full-text available
This evaluation assesses whether the GCF’s approach and investments have been relevant and effective in reducing the vulnerability of local communities in LDCs and of their livelihoods to the effects of climate change, and whether these impacts are likely to be sustained. It examines how and to what extent the GCF’s approach, mechanisms and financial modalities respond to the conditions facing LDCs. Moreover, the evaluation assesses the key enabling conditions for the GCF to support a paradigm shift towards low emission and climate resilient development pathways in LDCs. The evaluation team has structured the main findings and recommendations according to the core chapters of the report: responsiveness and relevance of the GCF to LDCs, coherence and complementarity of the GCF with other climate funds, country ownership and capacity development in LDCs, performance of the GCF’s business model and processes in LDCs, and results and impacts.
Article
Full-text available
The changing culture of public administration involves accountability for results and outcomes. This article suggests that performance measurement can address such attribution questions. Contribution analysis has a major role to play in helping managers, researchers, and policymakers to arrive at conclusions about the contribution their program has made to particular outcomes. The article describes the steps necessary to produce a credible contribution story.
Article
Full-text available
The Problem – Clarity versus Complexity: Public sector managers face increasing pressure from all sides to reduce costs, improve service levels, make progress towards the achievement of priority outcomes, and increase accountability. In order to accomplish these things, a strong vision of success is vital. Many current management environments combine policy, subsidy, intergovernmental jurisdiction, operations, research and development, science, regulatory oversight and new-economy services, leading to difficulties in planning, measuring, and reporting performance. Results can often be abstract, subject to a wide range of factors and take place over considerable periods of time with a diverse set of groups. For this reason, it is in fact more important to articulate a clear vision than it would be for less complex programming. A 'system' for performance management which addresses complexities should provide for a precise description of a limited number of priority results with an emphasis on addressing target group needs. The approach should also allow all key delivery participants to 'own' the system, i.e. the people delivering the services should believe that the performance system appropriately articulates their results, goals, and values.
Article
Full-text available
This paper discusses methods for reconstructing the theories underlying programs and policies. It describes three approaches. One is empirical-analytical in nature and focuses on interviews, documents and argumentational analysis. The second has strategic assessment, group dynamics, and dialogue as its core. The third has cognitive and organizational psychology as its foundation. For each of the three approaches, case illustrations are given. These approaches can help to make the process of reconstructing underlying program theories more open to scrutiny. This is important because mis-reconstruction of policy and program theories is dangerous. All three approaches have a number of weaknesses to be remedied. The paper discusses these shortcomings and presents some suggestions for overcoming the limitations.
Article
This article examines AusAID's shift to contribution analysis and a system of outcome-based monitoring and evaluation. It looks at the method of contribution analysis, its implementation in the Fiji Education Sector Program, an assessment of its use, and the challenges faced in its application.
Article
Theory-based evaluation examines conditions of program implementation and mechanisms that mediate between processes and outcomes as a means to understand when and how programs work.
Article
This paper evaluates a project that developed and introduced integrated Striga control (ISC) in Northern Nigeria. Adoption of ISC increased from 44 participating farmers in four pilot areas to more than 500 farmers in 16 villages and hamlets in three seasons. On average, farmers adopted 3.25 different Striga control options from a basket of six “best bets”. Resource-poor and -medium farmers were more likely to adopt than resource-rich ones. Adopting farmers enjoyed livelihood improvements, largely through selling ISC soybean. Women in most adopting households benefited through selling food products based on soybean. Adoption of ISC can be attributed to four factors: (1) farmer-field-school-type training that explained how the technologies worked; (2) incorporation of at least one technology in the ISC package that gave quick benefits to sustain farmer interest in adopting and learning other components whose effects took longer to become evident; (3) allowance for farmer experimentation and adaptation to local conditions; and, (4) use of a monitoring and evaluation component that identified and incorporated farmer modifications to continually improve the ISC package. These principles are likely to be valid for research and extension approaches for similar integrated natural resource management (INRM). Impact pathway evaluation methodology used for the evaluation helped give the project a greater impact focus; helped design and reporting of the evaluation; and, by identifying early adoption pathways, has provided a firm basis for any future ex post impact assessment of ISC in Northern Nigeria.