The Politics and Consequences of Including Stakeholders in International Development Evaluation

Anne E. Cullen¹, Chris L. S. Coryn¹, and Jim Rugh²
Abstract
Participatory evaluation approaches have a relatively long history of advocacy and application in
the international development evaluation community. Despite widespread use and apparent
resonance with practitioners and donors alike, very little empirical research exists on why and
how participatory evaluation approaches are used in international development settings. In this
article, we present results derived from a mixed method investigation of a sample of practicing
international development evaluators regarding their perceptions of how and why stakeholders are
included in international development evaluations. Findings suggest that participatory evaluation
approaches are interpreted and practiced in widely differing ways. Implications for international
development evaluation practice and future research are discussed.
Keywords
participatory evaluation, international development evaluation, evaluation politics, evaluation
consequences
Participatory approaches to evaluations of international development and aid programs first came to
prominence in the late 1970s and early 1980s as a direct response to international development pro-
grams that were seemingly mismatched to the needs of their intended beneficiaries (Chambers,
1992; Townsley, 1996). Including various stakeholder groups in the planning and evaluation process
was believed to create development programs that were both better suited to these groups’ needs and
more effective. Thus, stakeholders were not viewed exclusively as sources of evaluation data
but also as important collaborators in the evaluation process. The adoption and recognition of par-
ticipatory evaluation methods in international development represented a clear shift from what had
previously been an almost exclusive focus on donor priorities to an expanded focus that included the
¹ The Evaluation Center, Western Michigan University, Kalamazoo, MI, USA
² Sevierville, TN, USA
Corresponding Author:
Anne E. Cullen, The Evaluation Center, Western Michigan University, 1903 West Michigan Avenue, 4405 Ellsworth Hall,
Kalamazoo, MI 49008, USA
Email: anne.cullen@wmich.edu
American Journal of Evaluation, 32(3), 345-361
© The Author(s) 2011
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098214010396076
http://aje.sagepub.com
views and values of both direct and indirect program beneficiaries, managers, service providers, and
other relevant stakeholder groups.
Participatory evaluation approaches quickly flourished, and donors, international nongovernmental
agencies, and international aid organizations such as Catholic Relief Services (CRS), the Food and
Agriculture Organization (FAO), Heifer Project International (HPI), Peace Corps (PC), the United
Nations (UN), the United States Agency for International Development (USAID), and the World
Bank (WB), among many others, soon advocated for and adopted their use. Many of these same
organizations also developed detailed manuals and guides for evaluators that described how to
design and execute participatory evaluation approaches and strategies (Aaker & Shumaker,
1994; Aubel, 1994; Chambers, 1992, 1994; Feuerstein, 1986; Hall, 1981; Park, 1992; Rugh,
1986; Scrimshaw & Gleason, 1992; World Bank, 1996). Participatory rural appraisal, participatory
action research, community-based participatory research, and asset-based community development
are but a few participatory approaches that ultimately were developed to evaluate international
development programs. (A complete comparison of the many synonyms sometimes used to describe
participatory forms of evaluation exceeds the scope of this article. For a more complete comparison,
interested readers are referred to Cullen, 2009.)
Presently, such approaches to and forms of evaluation are widely used in international develop-
ment settings (Blue, Clapp-Wincek, & Benner, 2009). Despite their prevalence, however, there have
been few empirical investigations that have documented the reasons for and the politics and conse-
quences of including stakeholders in international development evaluations. Even so, there have
been a number of studies on participatory evaluation approaches, largely led by Cousins (e.g.,
Cousins, Donohue, & Bloom, 1996; Cousins & Earl, 1992; Cousins & Whitmore, 1998), the major-
ity of which have been limited in scope to North America and have dealt primarily
with evaluations of educational programs (Brandon, 1998). What is more, participatory evaluation
approaches are not uncontroversial, and supporters and detractors have widely differing views and
opinions about their merits. Therefore, there is a clear need for research on participatory approaches
to international development evaluations to either justify or repudiate their use or recommend ways
for them to be improved.
Defining Participatory Evaluation
There is strikingly little consensus on what is meant by participatory evaluation, and the range of
approaches or methods that are classified as participatory varies widely. For some, participatory
evaluation methods are those involving any type of consultation or interaction with stakeholders.
For others, an evaluation is not truly participatory unless key stakeholders are actively involved
in all stages of the evaluation. On a deeper level, participatory methods can be seen as both an expan-
sion of decision making and, in some circumstances, an opportunity to shift power dynamics and
promote social change (Cousins & Whitmore, 1998). Given this ambiguity, there is a pressing need
to clearly define participatory evaluation.
Cousins is one of the most prolific and frequently cited writers on participatory evaluation, and in
his early work he defines participatory evaluation as ‘applied social research that involves a part-
nership between trained evaluation personnel and practice-based decision makers, organization members with program
responsibility, or people with a vital interest in the program’ (Cousins & Earl, 1992, p. 399). From
this view, participatory evaluation is premised on members of different professional communities
working in partnership or a partnership between someone who is trained in evaluation methodology
and those who are not. Others describe participatory evaluation as an overarching term for ‘any eva-
luation that involves program staff or participants actively in decision making and other activities
related to the planning and implementation of evaluation studies’ (King, 2005, p. 241). In both cases
the definitions are so broad and operationally vague that specific stakeholder groups are not
identified nor are specific evaluation tasks detailed. In short, there is a lack of conceptual and
operational specificity (Miller, 2010) as regards what represents ‘participatory evaluation’ and what
does not.
A Framework for Studying Participatory Evaluation
Given the prevalence of so many similar participatory evaluation approaches, having a reliable
means by which to distinguish approaches seems necessary. Cousins, Donohue, and Bloom
(1996) developed a widely cited framework for differentiating between types of participatory
approaches that was subsequently expanded by Cousins and Whitmore (1998) and later refined
by Weaver and Cousins (2004). According to the original framework, all forms of participatory
evaluation can be classified along three dimensions: (a) control of the evaluation process,
(b) stakeholder selection for participation (i.e., which stakeholders are included in the evaluation),
and (c) depth of participation (i.e., in what capacity do stakeholders participate?). Accordingly,
participatory evaluation approaches fall somewhere on a continuum for each of these dimensions.
The current investigation used a three-dimensioned framework for classifying participatory
approaches that examines which stakeholders participate, in what capacity (i.e., how and to what
extent), and in which phases of evaluation they participate. The first two dimensions are directly
derived from Cousins and Whitmore (1998). Specifically, the first dimension directly addresses who
holds technical control of the decision-making process (i.e., the evaluator, stakeholders, or some
combination thereof). The second dimension describes the extent of stakeholder participation from
consultation to extensive participation. The third dimension differs from Cousins and Whitmore in
that depth of participation is described according to the principal evaluation phases in which differ-
ent stakeholder groups participate. Here, this dimension has been decomposed into what are consid-
ered the most important, discrete facets related to the primary activities necessary to execute most
evaluations (i.e., evaluation design, data collection, data analysis, developing recommendations,
reporting of findings, and dissemination) in an attempt to more fully operationalize Cousins’ original
dimension (see Cullen, 2009). (In retrospect, this operational definition also should have included
‘interpretation of findings’ given stakeholders’ knowledge of local context that most evaluators are
not privy to.) An oversimplified, conceptual rendering of these dimensions is illustrated in Figure 1
and described more fully elsewhere in this article.
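To make this framework concrete, it can be expressed as a simple data structure. The following sketch, written in Python, is purely illustrative: the class, field, and enumeration names are ours rather than part of the published framework, and the 0-to-1 endpoints are arbitrary conventions for the two continua.

from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    # The six evaluation phases used here to operationalize depth of participation.
    DESIGN = 'evaluation design'
    DATA_COLLECTION = 'data collection'
    DATA_ANALYSIS = 'data analysis'
    RECOMMENDATIONS = 'developing recommendations'
    REPORTING = 'reporting of findings'
    DISSEMINATION = 'dissemination of findings'

@dataclass
class ParticipatoryProfile:
    # Dimension 1: technical control, from 0.0 (evaluator controlled)
    # to 1.0 (stakeholder controlled).
    technical_control: float
    # Dimension 2: extent of participation, from 0.0 (consultation only)
    # to 1.0 (extensive participation).
    extent_of_participation: float
    # Dimension 3: which stakeholder groups participate, and in which phases.
    participation_by_group: dict[str, set[Phase]] = field(default_factory=dict)

# A hypothetical evaluator-controlled evaluation in which program staff and
# recipients participate mainly in data collection:
example = ParticipatoryProfile(
    technical_control=0.2,
    extent_of_participation=0.6,
    participation_by_group={
        'program staff': {Phase.DATA_COLLECTION, Phase.RECOMMENDATIONS},
        'recipients': {Phase.DATA_COLLECTION},
    },
)

Under such an encoding, the survey results reported later in this article (evaluators largely retaining technical control, with participation concentrated in data collection) would correspond to profiles with low technical control values and participation sets dominated by the data collection phase.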
Consequences of Using Participatory Evaluation Approaches
The merits of participatory evaluation are debated. Morra Imas and Rist (2009) suggest that there are
two primary objectives to participation and participatory approaches: (a) participation as product,
where the act of participation is an objective and is one of the indicators of success, and (b) partic-
ipation as a process by which to achieve a stated objective. Most of the disagreement regarding
participatory evaluation approaches stems from evaluations with the former objective. In other
words, criticisms arise when an evaluation has an objective other than determining the merit or
worth of something (Stufflebeam, 2001; Stufflebeam & Shinkfield, 2007). In this article, the pros
and cons of participatory evaluation approaches are described in terms of perceptions regarding
positive and negative consequences of their use.
Positive Consequences
Weaver and Cousins (2004) argue that there are three main goals (which also can be viewed as
positive consequences) of participatory evaluation approaches: pragmatic (because stakeholders are
included in the evaluation process, evaluation findings will be more useful), political (including sta-
keholders improves the fairness of an evaluation), and epistemological (stakeholders have unique
perspectives and their inclusion improves the validity of an evaluation). Some argue that the
inclusion of a broader range of stakeholders in the evaluation process increases the use of evaluation
findings (Brandon, 1998, 1999; Cousins, 2003; Patton, 2008; Ryan, Greene, Lincoln, Mathison,
& Mertens, 1998; Weiss, 1986). It is held that increased use occurs, in part, because upstream
stakeholders (see Davidson, 2005, for descriptions of upstream, downstream, and other stakeholder
groups) are more likely to follow evaluation conclusions because their staff were actively involved
in the evaluation process (Brandon, 1998) and because all stakeholders will be more committed to
using findings because they have had a voice in the evaluation process (Weiss, 1986).
The second type of positive consequence of participatory evaluation approaches is increased
fairness. As participatory approaches include more diverse stakeholder groups, such evaluations include
the priorities of a larger group of individuals. This, in turn, leads to a more democratic evaluation process
(Weaver & Cousins, 2004; Weiss, 1986). Thus, participatory evaluation approaches are considered more
balanced and fair because the evaluation addresses the concerns of more stakeholder groups.
The third justification for participatory evaluation approaches, epistemological, is one of the most
frequently cited reasons for their use. Namely, many evaluators believe that the use of participatory
evaluation approaches greatly enhances the validity and credibility of an evaluation. Program stake-
holders are aware of contextual considerations of which evaluators are not. Therefore, by including
stakeholders in the evaluation process, the evaluation is more likely to identify important problems
of concern (Brandon, Lindberg, & Wang, 1993; Stake & Abma, 2005).
[Figure 1 comprises four panels labeled Dimension 1: Technical Control; Dimension 2: Extent of Participation; Dimension 3, Subdimension: Stakeholder Group; and Dimension 3, Subdimension: Evaluation Phase.]
Figure 1. Framework describing participatory evaluation.
Negative Consequences
Despite these positive consequences of participatory evaluation approaches, there are potential
negative consequences that merit attention. Examples of such problems include increased time and
resource demands, difficulty managing multiple stakeholders, lack of stakeholder qualifications,
stakeholder bias, and intervention disguised as evaluation. Including stakeholders in evaluations, for
example, introduces the risk that stakeholder bias may reduce the validity of the evaluation and its
findings. In other words, stakeholders’ views of programs will drive the evaluation. If stakeholders
have roles in the evaluation, their opinions, views, and personal motivations could influence how the
evaluation is designed, implemented, reported, and disseminated. Such hidden objectives on the part
of stakeholders potentially could jeopardize the validity of the evaluation. Chelimsky (2008) warns
that stakeholders can introduce ‘loaded evaluation questions’ wherein sponsors (upstream stake-
holders) try to influence the focus of the evaluation. In such cases, evaluation findings are sometimes
determined even before the evaluation is undertaken, thereby reducing the validity of the evaluation
and its findings. Moreover, some critics argue that participatory approaches are essentially program
interventions rather than evaluations (Brisolara, 1998).
Critics of participatory evaluation approaches contend that including and managing multiple stakeholder
groups might result in increased logistical problems. In these instances, the evaluation is hindered by
‘too many cooks in the kitchen’ or, in other words, too many evaluation team members (program
staff and nonprogram evaluation team members), which could result in personnel management
difficulties. Other critics argue that participatory methods, through the inclusion of multiple stake-
holder groups, result in increased time and financial burdens (Stufflebeam & Shinkfield, 2007).
Questions Investigated in the Study
The following specific questions were posed to investigate practicing evaluators’ perceptions of the
politics and consequences of stakeholder participation in international development evaluations:
1. Why do evaluators include stakeholders in the evaluation process?
2. How have stakeholders typically been included in international development evaluations?
3. What are the perceived consequences of stakeholder inclusion in international development
evaluations?
Questions #1 and #2 were intended to provide information on evaluators’ reasons for including
stakeholders in the evaluation process. Question #3 directly relates to the evaluation framework
presented in this study. In other words, the role of stakeholders in the evaluation process is analyzed
based on three dimensions: technical control of the decision-making process, extent of participation,
and depth of participation by evaluation phase. The reader should note that the proposed participa-
tory evaluation framework was not empirically tested. Rather, the framework helped guide the
study, particularly how participatory evaluation was operationalized.
Method
Design
A mixed method sequential design, giving equal priority to both quantitative and qualitative
methods, was used to investigate the primary research questions (Creswell, Plano Clark, Gutmann,
& Hanson, 2003). The design was sequential in that the study’s quantitative methods preceded and
informed the subsequent qualitative methods (Morse, 2003). It was equal priority in that both the
quantitative and qualitative methods and their results were assigned equal weight in the
interpretation of findings (Creswell, 2009). Ideally, mixed method designs reduce mono-method
biases and provide greater insight into a phenomenon than a single method alone. In this study, the
quantitative method consisted of a questionnaire administered to a nonprobability sample of international devel-
opment evaluators and the qualitative method consisted of in-depth, semi-structured interviews with
a subsample of questionnaire respondents.
(The study also included a systematic review and synthesis of a large sample of recently issued
international development evaluation reports. However, this aspect of the study exceeds the scope of
this article. Additionally, and given the nature and scope of the study, the results presented only par-
tially reflect the larger findings; in particular those derived from the study’s qualitative methods.
Interested readers are referred to Cullen, 2009.)
Sample
For the study’s quantitative phase, a nonprobability snowball sampling procedure (Patton, 2002) was
used to gather information from practicing international development evaluators. Because an
accurate and complete sampling frame consisting of all international development evaluators cannot
be reliably constructed, identifying a known list of units for simple random or other probability
sampling methods was not possible.
A total of 186 individuals completed a web-based questionnaire. The first item of the questionnaire
was used to screen respondents, and asked them to indicate whether they had experience conducting
international development evaluations. In all, 166 respondents (89%) responded affirmatively and the
remaining 20 (11%) responded negatively and were directed to the final page of the questionnaire
using a skip pattern and informed that their participation was not necessary.
Collectively, the 166 respondents who indicated experience in development evaluation had 1,357
(M = 9.8, SD = 7.6, median = 8) years of combined experience conducting international develop-
ment evaluations and had conducted a total of 1,412 (M = 11.0, SD = 13.0, median = 5) unique
international development evaluations. With regard to their country of origin, respondents were
from 55 countries on 6 continents.
For the study’s qualitative phase, a criterion sampling method (Patton, 2002) was used to identify
interviewees for more in-depth data gathering than was possible with the survey questionnaire alone.
In all, 15 interviewees were selected from questionnaire respondents based on their demographic
characteristics, including (a) country of origin and (b) experience conducting international develop-
ment evaluations as well as their (c) experience including stakeholders in the evaluation process.
Collectively, these interviewees had a total of 201 years of experience (M = 14.4, SD = 9.9, median = 14)
conducting international development evaluations and had conducted a total of 188 (M = 15.7, SD = 13.2,
median = 12) international development evaluations.
Instrumentation
The questionnaire consisted of 24 items designed to gather information about international development
evaluators’ perceptions of the politics and consequences of stakeholder participation in development
evaluations. The instrument consisted of both open- and close-ended response sets, including, but not
limited to, ‘select all that apply’ items, semantic differential scales, and dichotomous items.
The questionnaire was divided into three main sections. The first section asked respondents how
stakeholders typically participate in their international development evaluations, which stakeholders
typically participate, and in what phase of the evaluation they typically participate. The second sec-
tion of the questionnaire probed respondents about their familiarity and experience with participa-
tory approaches to international development evaluations. Respondents with experience utilizing
participatory approaches were asked to describe their experiences in detail, indicate what specific
method/methods or participatory approach/approaches they utilized, detail perceived consequences
of their use, identify challenges encountered, present strategies for mitigating problems, and
describe in which circumstances participatory approaches worked best. Finally, in the third section,
respondents were asked to provide demographic information on their years of evaluation experience,
regional, content area, and organizational experience, and country of origin.
Although the instrument was designed to gather information about participatory approaches in
international development evaluation, the introduction to the instrument did not indicate so. Instead,
the introduction to the questionnaire stated that the purpose was to study current practice in interna-
tional development evaluation. Participatory evaluation approaches were specifically omitted in the
introduction so as to reduce the number of respondents who would self-select out of the question-
naire based on their experience, or lack thereof, with participatory evaluation approaches.
A semi-structured interview protocol was developed following completion of the study’s
quantitative phase and analysis to more fully investigate emergent themes and probe issues identi-
fied from the results of the questionnaire. Semi-structured interviewing was used because, while a
structured interview has a formalized, limited set of questions, a semi-structured method is more
flexible, allowing new questions to be raised during the interview in response to what the interviewee
says.
Procedure
To recruit participants for the questionnaire, an e-mail invitation was sent to four professional
listservs targeting international development evaluators: MandENEWS, XCEval, IDEAS, and
EVALTALK. The snowball aspect of the sampling procedure was accomplished by contacting mon-
itoring and evaluation departments of international development agencies (both donors and private
voluntary organizations) asking them to recommend evaluators with whom they currently collabo-
rate or those with whom they previously collaborated. In some cases, the agencies forwarded infor-
mation regarding the questionnaire directly to collaborating evaluators. In others, the agencies
provided the names and contact information of collaborating evaluators. The snowball strategy also
was enhanced by including an item in the questionnaire asking respondents to refer other development
evaluators to the questionnaire.
Data Processing and Analysis
Information obtained from the questionnaire was in both qualitative (from open-ended items) and
quantitative (from close-ended items) forms. Close-ended, quantitative data from the questionnaire
were analyzed using traditional statistical techniques in the form of central tendency and variability
rather than inferential statistics and null hypothesis significance testing due to the nature of the focal
research questions, which were predominately descriptive. For qualitative data, text segments from
open-ended questionnaire items and interview transcripts were analyzed using open and axial coding
methods in order to estimate the prevalence of codes, assess similarities and differences in related
codes, and compare relationships between codes and other relevant information (Miles & Huberman,
1994).
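As a minimal illustration of these analyses, the sketch below computes central tendency and variability for a close-ended item and tallies code prevalence for coded text segments. It is written in Python; the variable names and sample values are hypothetical and show only the form of the computations, not the study’s actual data.

import statistics
from collections import Counter

# Hypothetical close-ended data: years of evaluation experience per respondent.
years_experience = [2, 5, 8, 8, 12, 15, 23]
print(f'M = {statistics.mean(years_experience):.1f}, '
      f'SD = {statistics.stdev(years_experience):.1f}, '
      f'median = {statistics.median(years_experience)}')

# Hypothetical codes assigned to text segments during open and axial coding;
# prevalence is simply the frequency of each code.
assigned_codes = [
    'time constraints', 'stakeholder buy-in', 'time constraints',
    'power issues', 'time constraints', 'power issues',
]
for code, count in Counter(assigned_codes).most_common():
    print(f'{code}: {count}')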
Results
Why Stakeholder Inclusion?
Almost half (45%) of all questionnaire respondents reported that using a participatory approach was
most appropriate for the contexts in which they work. More than one third (34%) reported that
they always use participatory evaluation approaches. Finally, 17% of respondents indicated that their
client specifically requested the use of a participatory evaluation approach and 4% did not know why
it was used.
Questionnaire and interview respondents were asked whether there are particular circumstances
in which participatory evaluation approaches appear to work best. There was a wide range of
responses to this item that reveal the diversity in thinking and practice with participatory evaluation
approaches. The predominant themes that emerged included:
Stakeholders included in evaluation process from the beginning. It is difficult to conduct a truly
participatory evaluation when stakeholders are only brought on board at the later stages of an
evaluation, such as data collection. If stakeholders have an active role in everything from determin-
ing what questions are asked to how the data are analyzed and interpreted, the evaluation
findings will have more meaning for them and they will be more likely to use the findings.
Stakeholders involved in project being evaluated. If the program being evaluated is participatory
in nature, a participatory evaluation approach would work well.
Donor support of participatory process. Having donor support, in terms of financial and time
resources, logistical support, and commitments to the participatory process, is critical to the
success of a participatory evaluation approach.
Environment conducive to participatory evaluations. For several respondents, this meant that
stakeholder groups were on good terms and that there were no explicit, identifiable conflicts.
Sufficient time. This is particularly true for those respondents who reported that stakeholders need
to be included in the evaluation from the beginning.
Participatory evaluation approaches are always appropriate. These respondents reported that
stakeholder participation is not something that should be reserved for particular cases, but
rather something that should be incorporated in all evaluations.
Flexibility. Including additional stakeholders in the evaluation process opens the door for com-
plications. Some respondents commented on the need to have flexibility to respond appropri-
ately to both problems and the additional complications of including more stakeholders.
Evaluation that is formative in nature. Several respondents reported that the stakes in summative
evaluations often are too high to apply a participatory evaluation approach and that partici-
patory approaches are better suited to formative tasks.
How Stakeholders Are Included
Questionnaire respondents reported that the following stakeholder groups typically participate in
international development evaluations: program staff (82%), recipients (77%), funding agency staff
(67%), government (53%), nonrecipients who were positively impacted (30%), and nonrecipients
who were negatively impacted (28%). Respondents also reported that program staff is the stake-
holder group most frequently included in the evaluation process, and data collection is the evaluation
phase with the greatest stakeholder participation. Conversely, nonrecipients who were negatively
impacted were the stakeholder group least included in the evaluation process, and data analysis has
the least amount of stakeholder participation.
Questionnaire respondents were asked to rate technical control of the evaluation process on a
5-point semantic differential scale with stakeholders at one end of the continuum and evaluators
at the other. Respondents largely (68%) reported that evaluators have control of the decision-
making process. Only 10% of respondents reported that stakeholders have technical control of the
evaluation process.
Questionnaire respondents also were asked to indicate the extent of stakeholder participation in
each evaluation phase using the example of their most recent participatory evaluation (see Table 1).
As shown in the table, data collection had the highest average rating (3.82), followed by dissemina-
tion of findings (3.52) and developing recommendations (3.31). On the other end of the participatory
spectrum, data analysis (2.57) and evaluation design (2.88) had the lowest average ratings.
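The average ratings in Table 1 are weighted means of the 5-point response distributions. As an arithmetic check, the short Python sketch below reconstructs the mean for the data collection phase from the published percentages; because those percentages are rounded, the reconstruction only approximates the reported values.

# Response distribution for the data collection phase (Table 1), scale points
# 1 (no participation/consultation only) through 5 (extensive participation).
distribution = {1: 7, 2: 7, 3: 21, 4: 26, 5: 38}  # rounded percentages

total = sum(distribution.values())  # sums to 99 rather than 100 due to rounding
weighted_mean = sum(point * pct for point, pct in distribution.items()) / total
print(f'M = {weighted_mean:.2f}')  # 3.82, matching the reported mean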
Consequences of Stakeholder Inclusion
Questionnaire respondents were asked to report their perceptions of the positive and negative
consequences of the use of participatory evaluation approaches for development evaluations
(see Table 2). Interviews probed further into themes identified by questionnaire respondents.
Positive Consequences
According to the questionnaire responses, the highest rated positive consequences of participatory
evaluation approaches included increased usefulness (e.g., practicality/relevance) of evaluation
findings (93%), increased evaluation use (e.g., findings are acted upon/used; 88%), increased
empowerment of stakeholders (88%), and increased stakeholder buy in (87%). However, in inter-
views different positive consequences were identified.
In the interviews, by far the most frequently cited positive impact of participatory evaluation
approaches was the perception that they increased validity. According to interviewees, stakeholder
participation helps ensure that the evaluation uses relevant data and accurately reflects the needs of
stakeholders, which some argue constitutes one facet of validity (see Messick, 1989):
We usually need to use participatory approaches in international development evaluation because most
of the time, the programme design was done by technicians alone in their corner without having taken
into account stakeholders’ views. An evaluation is usually conducted either at the mid-course or at the
Table 1. Extent of Stakeholder Participation in Each Evaluation Phase
(Scale: 1 = no participation/consultation only; 5 = extensive participation)

Evaluation phase              1     2     3     4     5     M     SD
Evaluation design            22%   23%   21%   15%   20%   2.88  1.43
Data collection               7%    7%   21%   26%   38%   3.82  1.22
Data analysis                30%   20%   24%   14%   11%   2.57  1.35
Developing recommendations    8%   20%   23%   31%   18%   3.31  1.21
Reporting of findings        18%   14%   24%   28%   16%   3.10  1.34
Dissemination of findings    12%   12%   20%   25%   32%   3.52  1.36
Table 2. Perceived Consequences of Participatory Evaluation Approaches

Consequence                                Decreased   No Change   Increased   Don’t Know
Usefulness of evaluation findings              1%          4%         93%          3%
Empowerment of stakeholders                    1%          7%         88%          4%
Use of evaluation findings                     0%          8%         88%          4%
Buy in                                         0%          5%         87%          8%
Fairness                                       5%         12%         75%          9%
Stakeholders’ technical research skills        3%         14%         74%          9%
Validity                                       9%         17%         71%          3%
Time constraints                              10%         17%         69%          4%
Social change                                  1%         12%         64%         24%
Otherᵃ                                         5%         11%         63%         21%
Financial constraints                         13%         24%         58%          6%

ᵃ Other responses included development of a culture of evaluation, accountability and transparency, and ownership, for example.
end of a programme so stakeholders’ views are more than important in order to know what did really
happen and why it happened that way according to those who lived the programme from inside.
Understanding the local context and stakeholders was identified as very beneficial. Taking the time
to learn about local traditions and practices will help facilitate the evaluation process and foster
greater stakeholder buy in:
There is an ancient Chinese proverb—When you are in the community do as the community. Eat like
them, behave like them. When you go in consider it a learning process. Don’t go and tell them what
degrees you have. Don’t act like you are smarter than them. Go thinking you will learn from them. Local
people can teach you too. Give and take of knowledge.
Another positive impact frequently mentioned by interviewees was facilitation of the evaluation
process. Interviewees reported that including relevant stakeholders often facilitated data collection
and access to data, use of and access to local resources, and reduced dependence on hiring external
evaluation consultants:
If you bring people into the evaluation process the evaluation process will be greatly facilitated. There
will be better data ... in that it reflects what they think, more complete because they have a stake in the
evaluation process. So, there will be less time spent in data management.
Evaluation capacity building was another positive impact frequently reported. Many interviewees
indicated that participatory evaluation approaches help develop stakeholders’ evaluation skills.
Indeed, some interviewees reported that it was one of their objectives to help build capacity and that
they did not care if this crossed the line from evaluation into intervention:
Participation enables stakeholders to assess the program results with various viewpoints and criteria.
They see and hear the same things the evaluator is seeing and hearing which helps them come to the same
conclusions and act upon the recommendations. Perhaps, more importantly, they learn how and why to
do evaluations.
For a minority, participatory evaluation approaches also help address concerns such as fairness and
transparency. They also indicated that participatory evaluation approaches directly and indirectly
contribute to various forms of empowerment. For these interviewees, stakeholder participation in
the evaluation process constitutes an ideological commitment:
It is a basic human right to be much more than a subject in evaluations which affect the target popula-
tion’s welfare. They live with the product. And it enhances validity, as well as their incidence in defining
their own future.
Negative Consequences
The negative consequences of participatory evaluation approaches were the same for questionnaire
respondents and interviewees. Interviewees reported that it takes time to bring all relevant stakeholders
together and, in particular, to come to a consensus. However, such constraints were seen not only as neg-
ative impacts of doing participatory evaluation but also were regarded by some as actual barriers that
precluded doing participatory evaluation in the first place. Indeed, several interviewees reported that
time and financial constraints precluded the use of participatory evaluation approaches:
Many evaluations are slapdash and are usually put together as an afterthought. People just don’t think
about evaluation beforehand. They try to do too much in too short of a time period. The amount of time
in the field is almost laughable. It is impossible to think that you could have any genuine participation of
stakeholders in the evaluation process. The more people you include in evaluations, the more compli-
cated it becomes. If you involve everyone in the process it takes more time and money.
Although increased validity was reported as a positive impact, in some instances respondents
indicated that participatory evaluation approaches could instead reduce validity:
I do experience some criticism from people who are worried about reduced validity. But I try to go over
the evaluation process with them so that they understand. Even if validity is reduced I think that it is
worth the risk.
As shown in Table 3, even those evaluators who are strong advocates of and consistently use
participatory evaluation approaches encounter problems when using them. More than one third of
questionnaire respondents indicated that the time-consuming nature of participatory approaches was
very challenging and an additional 39% reported that they were often challenging. As shown in the
table, reconciling power issues also was considered challenging.
In both the survey and interviews, respondents reported that donors and clients sometimes impede
the use of participatory evaluation approaches in that they try to control the evaluation by ‘cherry
picking’ stakeholders to participate, trying to stifle negative findings, and ‘overpowering weak
stakeholders’:
Project managers and partners deliberately selecting community members and other stakeholders who
have had favorable experiences with the project and will only say favorable things. Field coordinators
not understanding or disregarding guidelines and not planning or implementing activities as requested
(because participatory approaches are more complex than simply passing out surveys). They take short-
cuts so as to simplify the process and compromise the integrity and validity of the evaluation.
As for time and financial constraints, participatory evaluation approaches reportedly require significant
time and financial resources in order to bring stakeholders together:
Harmonizing and aligning the different perspectives of a range of stakeholders is very time consuming—
calling for patience, working within the ever changing schedules of various stakeholders; this sometimes
has a cost implication.
As mentioned, power issues were another reported challenge of participatory evaluation approaches.
As one interviewee reported, ‘I almost always have problems with power issues. In any culture,
poor people do not hob knob with ministry people and literate people as they do in participatory eva-
luation approaches.’ Trying to get stakeholders from different socioeconomic groups to participate
collaboratively can be extremely challenging:
Table 3. Challenges Encountered in Using Participatory Evaluation Approaches

Challenge                                        Not at All    Somewhat      Often         Always
                                                 Challenging   Challenging   Challenging   Challenging
Determining which stakeholders to include            21%           47%           24%            8%
Determining how stakeholders will participate        14%           40%           40%            6%
Power issues                                          6%           20%           49%           25%
Lack of stakeholder expertise                        12%           31%           38%           20%
Time consuming                                        9%           17%           39%           35%
Another problem I experience is with senior and more experienced people dominating. Younger people
without power tend to keep quiet as they are afraid to participate. Power issues are problems for all eva-
luations but they are particularly problematic for participatory evaluations. This is because participatory
evaluations tend to bring all stakeholders together to discuss issues. In regular evaluations, stakeholders
can be met with one on one to get their perspective. When I hold workshops, I like to have a mix of sta-
keholders present, that is, from all stakeholder groups. But I tend to group program staff with program
staff, beneficiaries with beneficiaries, and so forth.
Discussion
Perhaps the most significant finding of this study is the confirmation that participatory evaluation
approaches are interpreted and practiced in widely differing ways. On the surface that finding might
not seem very substantial. However, given the continuing debates over the use of participatory
evaluation approaches, it presents potentially interesting implications. For example, without a
common understanding of what is meant by participatory evaluation (i.e., operational specificity;
Miller, 2010), how can the merits or shortcomings of such approaches be legitimately debated?
That being said, and as the debates over the consequences of participatory evaluation rage on, the
question of the relevance of these debates emerges. One of the biggest complaints lodged by critics is
that participatory evaluation approaches sacrifice objectivity and, even more troubling, validity via
the inclusion of stakeholders and their potential interests in predetermined results. However, accord-
ing to the majority of international development evaluators who participated in this study, evaluators
largely maintain control of the evaluation process. Even so, another argument is that participatory
evaluation approaches cross the line into intervention when empowerment and capacity building
become objectives. Only a few respondents listed empowerment and capacity building as explicit
objectives of using a participatory approach. If empowerment and capacity building are side effects
that result from participatory evaluation approaches, should that be considered problematic?
Most intriguing is the rather common practice of evaluators referring to interviewing stakeholders
(e.g., recipients, government officials, implementing partners), or otherwise gathering or eliciting
information from them, as a legitimate form of participation. This view treats the notion of participation
as essentially using stakeholders as sources of information or data (i.e., they become informants
rather than true participants). This phenomenon emerged across all of the methods employed for this
study (i.e., in the systematic review of evaluation reports, questionnaires, and follow-up interviews
with evaluators).
Throughout the course of this study it became clear that the lack of a common and shared under-
standing of participatory evaluation was problematic. For example, numerous respondents reported
that donors call for participatory evaluations but provide no explanation of what specific activities
that entails. In the systematic review of a sample of international development evaluation reports,
several evaluation reports clearly stated that they used a participatory evaluation approach yet pro-
vided no evidence to support such claims. In several instances, it seemed that stakeholders were only
included as data sources, yet, even so, the evaluation was labeled participatory. Has participation
become a catchphrase that evaluators are eager to assign to their evaluations but that, in reality, has
no significance?
The most frequently cited problem associated with the use of participatory evaluation approaches
was increased time constraints. Respondents reported that the participation of stakeholders signifi-
cantly increased the amount of time it took to conduct evaluations. From the new logistical
constraints introduced by additional individuals to the need to reconcile differing stakeholder priorities,
participatory evaluation approaches are time consuming. However, even though donors frequently
call for the use of participatory evaluation approaches, they seemingly do not recognize the
additional time and other demands required for such approaches. Numerous respondents reported
that the terms of reference (TOR) and scope/scopes of work (SOW) with their corresponding pre-
determined questions and methods issued by donors often do not allow for participatory evaluation
approaches.
Donor dominance of the evaluation process was another important finding of this study. Survey
and interview respondents reported that the prescribed SOWs and TORs for international develop-
ment evaluations do not allow for flexibility in the evaluation process. More troubling are the reports
of donors trying to interfere with evaluation findings, from ‘cherry picking’ stakeholders with positive
impacts, to trying to dominate less powerful stakeholders, to, most troubling, trying to stifle negative
findings. Such environments or perspectives are not at all conducive to conducting any evaluation
with integrity, regardless of the level of participation of stakeholders.
The findings of this study underscore the importance of clarity and the need for details when
discussing participatory evaluation approaches. Evaluators proposing to engage in a participatory
evaluation approach should be prepared to answer the following questions: Which stakeholders
will be included in the evaluation? In what capacity will they participate? In what evaluation phases
will they participate? Who will maintain technical control over the decision-making process?
The answers to these questions will help ensure that both evaluators and clients have a shared
understanding of the nature of participation.
Finally, the findings from this study demonstrate that the vast majority of participatory
approaches to international development evaluation tend to be evaluator driven. While much of the
debate surrounding participatory evaluation focuses on stakeholder-driven approaches such as
empowerment evaluation, this study shows that those types of approaches are the exception in inter-
national development evaluation. This study underscores the importance of precision and specificity
in detailing how participatory evaluation approaches are operationalized and implemented in order
to accurately discuss their merits and demerits.
Limitations
The most serious limitation of this study is simply the lack of certainty that respondents are repre-
sentative of the population of international development evaluators. Great effort was made to ensure
that news about the study was distributed to as wide an audience as possible in order to increase the
diversity of respondents as well as to maximize the number of respondents. At worst, findings from
the survey questionnaire are likely only generalizable to those with similar characteristics as respon-
dents. An additional, albeit important, limitation is that the findings derived from the study reflect
respondents’ perceptions, perspectives, experiences, and opinions, not necessarily the actual reasons
for and the politics underlying participatory evaluation in international development contexts. While
understanding how international development evaluators perceive participatory evaluation
approaches is important, such perceptions do not take the place of empirical studies that examine the
true impact of participatory evaluation approaches.
Future Research
Some of the limitations and lessons learned from this study gave rise to ideas for improving future
research into participatory approaches to international development evaluation and participatory
forms of evaluation more generally. First, and before a similar study is undertaken, the classification
framework for assessing participatory evaluation approaches should be revised. At a minimum, the
framework should include a screening criterion for determining whether an evaluation is truly par-
ticipatory: Are stakeholders included as more than a data source? If the answer to this question is
negative, there is no need to continue assessing the ways in which and the extent to which the evaluation was
truly participatory. Second, a worthwhile and potentially interesting study would be an investigation
into the actual impacts of participatory approaches to international development evaluation (e.g.,
contrasting a participatory evaluation approach with that of a nonparticipatory approach). Such a
study would move beyond perceived impacts and would scientifically document real impacts.
Finally, another recommendation for further research would be to investigate the frequency with
which and the reasons why donors often request participatory evaluation approaches, which numer-
ous respondents reported to be one of the main reasons that they use participatory evaluation
approaches. Understanding how often and why donors or evaluation sponsors call for participatory
methods to be used will help put together another piece of the puzzle.
Final Remarks
The findings from this study compare favorably with Cousins et al.’s (1996) study on participatory
evaluation in Canada and the United States. While many of the specific questions in the latter differ
from the present study, there are, nonetheless, several worthwhile points of comparison. Findings
from both studies suggest that evaluators largely maintain technical control of the evaluation
decision-making process. In the present study, program staff were identified as the stakeholder
group with the highest reported rate of participation in the evaluation process. In the Cousins
et al. (1996) study, such fine distinctions were not made. Rather, that study reported that those con-
nected to the program—developers, managers, funders, and implementers—had the highest reported
participation. Finally, both studies found high levels of stakeholder participation in the data collec-
tion phase.
The findings from this study also comport well with Rebien (1996), who asserts that one of the
necessary criteria for an evaluation to be considered participatory is that stakeholders are included
as more than a data source. Information gathered from the systematic review of evaluation reports,
questionnaires, and interviews demonstrated that many evaluators classify evaluations as participa-
tory even if stakeholders have had a limited role. Classifying these types as participatory seems to be
contradictory to the true intent of participatory evaluations. Finally, Daigneault and Jacob’s (2009)
participatory measurement instrument is promising and certainly a much-needed addition. However,
the findings from this study raise some questions as regards its likely reliability and, subsequently,
its validity given that the latter is dependent upon the former.
With the exception of empowerment evaluation (Miller & Campbell, 2006), evaluation use
(Brandon & Singh, 2009; Johnson, Greenseid, Toal, King, Lawrenz, & Volkov, 2009; Shulha &
Cousins, 1997), theory-driven evaluation (Coryn, Noakes, Westine, & Schröter, 2010), and a few oth-
ers, very little empirical evidence exists to buttress the numerous theoretical postulations and pre-
scriptions about most evaluation approaches and their perceived benefits. Yet, for many years
evaluation scholars have urged the evaluation community to carry out empirical studies to scrutinize
such assumptions and test specific hypotheses about evaluation practice (Alkin & Christie, 2005;
Christie, 2003, 2009; Henry & Mark, 2003; Mark, 2007; Shadish, Cook, & Leviton, 1991; Smith,
1993; Stufflebeam & Shinkfield, 2007). Ultimately, each investigation makes incremental contribu-
tions to the field and taken together they should assist the broader evaluation community of scholars
and practitioners in understanding and improving the theory, method, and practice of evaluation.
Acknowledgments
The authors would like to thank James Sanders for his contribution to this work. We are also grateful
to the three blind peer reviewers for their comments and suggestions on earlier drafts of this article.
This review benefited greatly from their individual and collective wisdom.
Declaration of Conflicting Interests
The author(s) declared no conflicts of interest with respect to the authorship and/or publication of
this article.
Funding
The author(s) received no financial support for the research and/or authorship of this article.
References
Aaker, J., & Shumaker, J. (1994). Looking back and looking forward: Participatory monitoring and evaluation.
Little Rock, AR: Heifer International.
Alkin, M. C., & Christie, C. A. (Eds.). (2005). Theorists’ models in action. New Directions for Evaluation, No.
106. San Francisco, CA: Jossey-Bass.
Aubel, J. (1994). Participatory program evaluation: A manual for involving program stakeholders in the eva-
luation process. New York, NY: Catholic Relief Services.
Blue, R., Clapp-Wincek, C., & Benner, H. (2009). Beyond success stories: Monitoring and evaluation for foreign
assistance results. Retrieved July 20, 2009, from http://www.usaid.gov/about_usaid/acvfa/060909_beyond_success_stories.pdf
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging
the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19,
325-337.
Brandon, P. R. (1999). Involving program stakeholders in reviews of evaluators’ recommendations for program
revisions. Evaluation and Program Planning, 22, 363-372.
Brandon, P. R., Lindberg, M. A., & Wang, Z. (1993). Involving program beneficiaries in the early stages of
evaluation: Issues of consequential validity and influence. Educational Evaluation and Policy Analysis,
15, 420-428.
Brandon, P. R., & Singh, J. M. (2009). The strength of the methodological warrants for the findings of research
on program evaluation use. American Journal of Evaluation, 30, 123-157.
Brisolara, S. (1998). The history of participatory evaluation and current debates in the field. In E. Whitmore
(Ed.), Understanding and practicing participatory evaluation (pp. 25–41). New Directions for Evaluation,
No. 80. San Francisco, CA: Jossey-Bass.
Chambers, R. (1992). Rapid but relaxed and participatory rural appraisal: Towards applications in health and
nutrition. In N. S. Scrimshaw & G. R. Gleason (Eds.), Rapid assessment procedures–Qualitative methodologies
for planning and evaluation of health related programmes (Section III, Chap. 24). Boston, MA: International
Nutrition Foundation for Developing Countries (INFDC).
Chambers, R. (1994). The origins and practice of participatory rural appraisal. World Development, 22,
953-969.
Chelimsky, E. (2008). A clash of cultures: Improving the ‘fit’ between evaluative independence and the
political requirements of a democratic society. American Journal of Evaluation, 29, 400-415.
Christie, C. A. (2003). What guides evaluation practice? A study of how evaluation practice maps onto
evaluation theory. In C. A. Christie (Ed.), The practice-theory relationship in evaluation (pp. 7–36).
New Directions for Evaluation, No. 97. San Francisco, CA: Jossey-Bass.
Christie, C. A. (2009). Analyzing the practice of evaluation: What do these cases tell us about theory? In
J. Fitzpatrick, C. A. Christie, & M. M. Mark (Eds.), Evaluation in action: Interviews with expert evaluators
(pp. 393-436). Thousand Oaks, CA: Sage.
Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2010). A systematic review of theory-driven
evaluation practice from 1990 to 2009. American Journal of Evaluation. Advance online publication. doi:
10.1177/1098214010389321.
Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kellaghan & D. L. Stufflebeam (Eds.),
International handbook of educational evaluation (pp. 245-266). Dordrecht, The Netherlands: Kluwer.
Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators’
self-reported opinions, practices and consequences. Evaluation Practice, 17, 207-226.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy
Analysis, 14, 397-418.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore (Ed.), Understanding
and practicing participatory evaluation (pp. 5–23). New Directions for Evaluation, No. 80. San Francisco,
CA: Jossey-Bass.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.).
Thousand Oaks, CA: Sage.
Creswell, J. W., Plano Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods
research designs. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and beha-
vioral research (pp. 209-240). Thousand Oaks, CA: Sage.
Cullen, A. (2009). The politics and consequences of stakeholder participation in international development
evaluation. Unpublished doctoral dissertation, Western Michigan University, Kalamazoo.
Daigneault, P.-M., & Jacob, S. (2009). Toward accurate measurement of participation: Rethinking the conceptualization and operationalization of participatory evaluation. American Journal of Evaluation, 30, 330-348.
Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand
Oaks, CA: Sage.
Feuerstein, M. T. (1986). Partners in evaluation: Evaluating development and community programmes with
participants. London, UK: Macmillan Publishers Ltd.
Hall, B. L. (1981). Participatory research, popular knowledge and power: A personal reflection. Convergence:
An International Journal of Adult Education, 14, 6-19.
Henry, G. T., & Mark, M. M. (2003). Toward an agenda for research on evaluation. In C. A. Christie (Ed.),
The practice-theory relationship in evaluation (pp. 69–80). New Directions for Evaluation, No. 97. San
Francisco, CA: Jossey-Bass.
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30, 377-410.
King, J. A. (2005). Participatory evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 291-294).
Thousand Oaks, CA: Sage.
Mark, M. M. (2007). Building a better evidence base for evaluation theory: Beyond general calls to a framework of types of research on evaluation. In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 111-134). New York, NY: Guilford.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York,
NY: Macmillan.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Miller, R. L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal
of Evaluation, 31, 390-399.
Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. Amer-
ican Journal of Evaluation, 27, 296-319.
Morra Imas, L. G., & Rist, R. C. (2009). The road to results: Designing and conducting effective development evaluations. Washington, DC: World Bank.
Morse, J. M. (2003). Principles of mixed methods and multimethod research design. In A. Tashakkori & E. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 189-208). Thousand Oaks, CA: Sage.
Park, P. (1992). The discovery of participatory research as a new scientific paradigm: Personal and intellectual accounts. The American Sociologist, 23, 29-42.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Rebien, C. C. (1996). Evaluating development assistance in theory and in practice. Brookfield, VT: Avebury.
Rugh, J. (1986). Self-evaluation: Ideas for participatory evaluation of rural community development projects. Oklahoma City, OK: World Neighbors.
Ryan, K., Greene, J., Lincoln, Y., Mathison, S., & Mertens, D. M. (1998). Advantages and challenges of using inclusive evaluation approaches in evaluation practice. American Journal of Evaluation, 19, 101-122.
Scrimshaw, N. S., & Gleason, G. R. (Eds.). (1992). Rapid assessment procedures: Qualitative methodologies for planning and evaluation of health related programmes. Boston, MA: International Nutrition Foundation for Developing Countries (INFDC).
Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice.
Thousand Oaks, CA: Sage.
Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research, and practice since 1986. American
Journal of Evaluation, 18, 195-208.
Smith, N. L. (1993). Improving evaluation theory through the empirical study of evaluation practice. Evalua-
tion Practice, 14, 237-242.
Stake, R., & Abma, T. (2005). Responsive evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 376-379). Thousand Oaks, CA: Sage.
Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, No. 89. San Francisco, CA:
Jossey-Bass.
Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. Somerset, NJ: John
Wiley & Sons.
Townsley, P. (1996). The history of RRA and PRA [Appendix 1]. In Rapid rural appraisal, participatory rural
appraisal and aquaculture (FAO Fisheries Technical Paper, No. 358). Rome, Italy: Fisheries and Aquacul-
ture Department.
Weaver, L., & Cousins, J. B. (2004). Unpacking the participatory process. Journal of MultiDisciplinary
Evaluation, 1, 19-40.
Weiss, C. H. (1986). The stakeholder approach to evaluation: Origins and promise. In E. R. House (Ed.), New
directions for educational evaluation (pp. 145-157). Abingdon, Oxon: RoutledgeFalmer.
World Bank. (1996). The World Bank participation sourcebook. Washington, DC: Author.