Abstract

This chapter posits two principal streams of participatory evaluation, practical participatory evaluation and transformative participatory evaluation, and compares them on a set of dimensions relating to control, level, and range of participation. The authors then situate them among other forms of collaborative evaluations.
CHAPTER 1
Framing Participatory Evaluation¹
J. Bradley Cousins, University of Ottawa
and
Elizabeth Whitmore, Carleton University
Forms and applications of collaborative research and inquiry are emerging at an astounding pace. For example, a bibliography of published works on participatory research in the health promotion sector listed close to five hundred titles (Green et al., 1995), with some items dating back as early as the late 1940s. The vast majority,
¹ A prior version of this paper was presented at the annual meeting of the Canadian Evaluation Society, Ottawa, May 1997.
however, have surfaced within the last twenty years. In the evaluation field, one label that is being used with increasing frequency as a descriptor of collaborative work is "participatory evaluation." The term, however, is used quite differently by different people. For some it implies a practical approach to broadening decision making and problem solving through systematic inquiry; for others, reallocating power in the production of knowledge and promoting social change are the root issues.
The purpose of this paper is to explore the meanings of participatory evaluation through the identification and explication of key conceptual dimensions. We are persuaded of the existence of two principal streams of participatory evaluation, streams that loosely correspond to pragmatic and emancipatory functions. After describing these, we present a framework for differentiating among forms of collaborative inquiry and apply it as a way to (1) compare the two streams of participatory evaluation, and (2) situate them amongst other forms of collaborative evaluation and collaborative inquiry. We conclude with a set of questions confronted by those with an interest in participatory evaluation.
TWO STREAMS OF PARTICIPATORY EVALUATION
Participatory evaluation implies that researchers, facilitators or professional evaluators collaborate in some capacity when doing an evaluation with individuals, groups or communities who have a decided stake in the program, development project, or other entity being evaluated. In the North American literature stakeholders are
typically defined as those with some sort of vested interest in the focus for evaluation (Mark & Shotland, 1985), although some authors prefer a finer distinction (Alkin, 1991). Various groups to which stakeholders might belong include program sponsors, managers, developers, and implementors. Members of special interest groups and program beneficiaries also have an identifiable stake in the program. In the evaluation literature arising from international and community development contexts, the term "stakeholder" is not explicitly used, nor is evaluation typically bounded by the parameters of a specific program. Nevertheless, consideration is given to the perspectives of various groups or communities within these development contexts, particularly as related to their involvement and participation. It can be assumed that many within these contexts have minimal experience of and training in evaluation or with formal methods of applied systematic inquiry. While the general principle of collaboration between evaluators and non-evaluators applies to virtually all forms of participatory evaluation, distinguishing features associated with goals and purposes and with historical and ideological roots help to delineate two identifiable approaches.
Garaway (1995) acknowledges that most applications of participatory evaluation combine rationales and attempt to integrate multiple purposes under a single evaluation project. Nonetheless, she differentiates between two specific rationales. Pursley (1996) makes similar arguments. Both authors subscribe to the view that one form of participatory evaluation is practical in orientation, and that it functions to support program or organizational decision making and
problem solving. We term this approach practical participatory evaluation (P-PE). A second rationale has, as its foundation, principles of emancipation and social justice, and functions to empower members of community groups who are less powerful or are otherwise oppressed by dominating groups. Our term for this approach is transformative participatory evaluation (T-PE).
Practical participatory evaluation (P-PE)
Practical participatory evaluation has arisen primarily in the U.S. and Canada. It has, as its central function, the fostering of evaluation use, with the implicit assumption that evaluation is geared toward program, policy or organizational decision making. The core premise of P-PE is that stakeholder participation in evaluation will enhance evaluation relevance, ownership, and thus utilization. The utilization construct has traditionally been conceptualized in terms of three types of effects or uses of evaluation findings: (1) instrumental, meaning the provision of support for discrete decisions; (2) conceptual, as in an educative/learning function; and (3) symbolic, meaning the persuasive/political use of evaluation to reaffirm decisions already made or to further a particular agenda (Leviton & Hughes, 1981; King, 1988; Weiss, 1972, 1979). Typically, impact is conceptualized in terms of effects on an undifferentiated group of "users" or "decision makers." Shulha and Cousins (1996) describe several developments in the evaluation utilization field that have emerged over the past decade. First, many researchers have observed that utilization is often associated as much or more with the process of doing the evaluation than with the findings per se (e.g., Cousins, Donohue &
Bloom, 1996; Greene, 1988; Patton, 1997a; Preskill, 1994; Whitmore, 1991). Second, several researchers advocate an expanded role for utilization-oriented evaluators that incorporates elements of planned change agentry (Mathison, 1994; Preskill, 1994; Owen & Lambert, 1995; Whitmore, 1988). Third, conceptions of utilization and evaluation impact are being extended beyond the particular program or target for evaluation to include organizational learning and change (Cousins & Earl, 1995; Jenlink, 1994; Owen & Lambert, 1995; Torres, Preskill & Piontek, 1996). Each of these developments represents part of an integrated rationale for P-PE.
Building on principles of "sustained interactivity" between evaluators and program practitioners (Huberman & Cox, 1990) and on the observation that increased stakeholder involvement in evaluation renders the process more responsive to user needs, several researchers have implemented and studied various forms of P-PE. Greene (1988) reported a study of an evaluation process that fairly closely resembled the conventional stakeholder-based approach (Bryk, 1983). Here evaluators assume responsibility for carrying out technical evaluation tasks, and stakeholders are involved predominantly in evaluation problem definition, scope-setting activities and, later, in interpreting data emerging from the study. Ayers (1987) described a similar model -- the "stakeholder-collaborative approach" -- where stakeholders participate as partners, share joint responsibility for the study, and are primarily accountable for its results. A similar form of collaboration was detailed by King (1995). Cousins and Earl (1992, 1995) outlined an approach they
labelled "participatory evaluation," which built on the conventional stakeholder model by advocating joint ownership and control of technical evaluation decision making, a more penetrating role for stakeholders, and restriction of participation to stakeholders most closely connected with the program in question.
Despite the identification of countervailing influences, such as micropolitical processes or the lack of organizational or administrative support for the evaluation (Cousins & Earl, 1995; King, 1995), each of the foregoing researchers provides empirical evidence for the potent influence of these forms of P-PE in enhancing the utilization of both evaluation findings and process. Moreover, it has been demonstrated that under appropriate conditions, participation by stakeholders can enhance utilization without compromising technical quality or credibility (Cousins, 1996; Greene, 1988). Process effects include influences on affective dimensions (e.g., feelings of self-worth and empowerment), the development of an appreciation and acceptance of evaluation, and the development of skills associated with the act of systematic inquiry (Whitmore, 1988). Some of these process effects overlap with those emerging from T-PE processes described below.
Transformative participatory evaluation (T-PE)
Transformative participatory evaluation (T-PE) invokes participatory principles and actions in the service of democratizing social change and has quite different ideological and historical roots. Most of the literature relates primarily to participatory research (PR) and, later, to participatory action research (PAR), although
participatory evaluation (PE) is addressed directly at times. The background and principles are shared by PE. Based on a more radical ideology, T-PE emerged more than twenty years ago, primarily but not exclusively in the developing world -- notably Latin America (Fals-Borda, 1980), India (Fernandes & Tandon, 1981; Tandon, 1981) and Africa (Kassam & Mustafa, 1982) -- in part as a reaction to positivist models of inquiry that were seen as exploitive and detached from urgent social and economic problems. The work of these researchers was framed explicitly within contexts of power and transformation (Hall, 1992). An international participatory research network was set up in the 1970s, with headquarters in India, and the first of a series of major international seminars was held in Tanzania in 1979 (Kassam & Mustafa, 1982). These initiatives sparked a period of intense theoretical and practical activity in participatory research and evaluation. Although T-PE is now spreading to the university sector, it is deeply rooted in community and international development, adult education, and, more recently, the women's movement.
Dependency theorists saw conventional research methods as leading to cultural dependency and as denying the knowledge-creating abilities of ordinary people (Hall, 1977). The work of the Brazilian adult educator Paulo Freire (1970, 1982) has been pivotal in establishing the philosophical foundations of T-PE. Other influences include some of the early work of Marx and Engels; Gramsci's notions of the "organic intellectual," hegemony and civil society; and Habermas, Adorno, and the critical theorists (Hall, 1992; McGuire, 1987). While the early roots of T-PE took hold outside North America, important
work in this area has been done through the Highlander Research and Education Centre in Tennessee (Gaventa, 1980, 1981, 1988) and the Toronto-based Participatory Research Group (Hall, 1978).
Several key concepts underpin T-PE. Most fundamental is the issue of who creates and controls the production of knowledge. One important aim of T-PE is to empower people through participation in the process of constructing and respecting their own knowledge (based on Freire's notion of "conscientization") and through their understanding of the connections among knowledge, power and control (Fals-Borda & Anisure-Rahman, 1991; Tandon, 1981). No contradiction is seen between collective empowerment and deepening social knowledge (Hall, 1992); popular knowledge is assumed to be as valid and useful as scientific knowledge. A second key concept relates to process: that is, how is the evaluation conducted? The distance between researcher and researched is broken down; all participants are contributors working collectively. Initiating and sustaining a process of genuine dialogue among actors leads to a deeper level of understanding and mutual respect (Gaventa, 1993; Whitmore, 1991, 1994). A third concept, critical reflection, requires participants to question, to doubt, and to consider a broad range of social factors, including their own biases and assumptions (Comstock & Fox, 1993). Participatory research has been described as a three-pronged activity, involving investigation, education and action (Hall, 1981). Likewise, T-PE, by helping create conditions where participants can empower themselves, focuses not only on data collection, analysis and dissemination, but also on learning inherent in the process and
on any actions that may result. T-PE has as its primary function the empowerment of individuals or groups. Rappaport defined empowerment as "both a psychological sense of personal control or influence and concern with actual social influence, political power and legal rights" (1987, p. 121, cited in Perry & Backus, 1995). In this approach, evaluation processes and products are used to transform power relations and to promote social action and change. Evaluation is conceived as a developmental process where, through the involvement of less powerful stakeholders in investigation, reflection, negotiation, decision making and knowledge creation, individual participants and power dynamics in the socio-cultural milieu are changed (Pursley, 1996).
Brunner and Guzman (1989) characterize T-PE as an emergent form of evaluation that takes the interests, preoccupations, aspirations and priorities of the so-called target populations and their facilitators into account. "The social groups, together with their facilitators, decide when an evaluation should take place, what should be evaluated, how the evaluation should be carried out, and what should be done with the result" (1989, pp. 10-11). In this sense, participatory evaluation is an "educational process through which social groups produce action-oriented knowledge about their reality, clarify and articulate their norms and values, and reach consensus about further action" (p. 11). Initially, the evaluation team (comprised of all participants in the project) may be fairly dependent on professional evaluators and facilitators for training, but they soon become more sophisticated. Ultimately, they are responsible for
organizing and implementing the evaluation, disseminating its results, systematizing group interpretations, coordinating group decision making about project change, and ensuring that action is taken.
Much participatory evaluation literature has emerged from the international and community development fields (Campos, 1990; Coupal, 1995; Feuerstein, 1988; Forss, 1989; Freedman, 1994; Jackson & Kassam, forthcoming; Lackey, Peterson & Pine, 1981; Rugh, 1994). As a result, a number of PE handbooks and assorted practical materials for grassroots groups have been published (African Development Foundation, n.d.; Ellis, Reid & Barnsley, 1990; Feuerstein, 1986; UNDP, 1997).
A comparison of participatory evaluation approaches
While these two streams of participatory evaluation are distinguishable from one another on the basis of their central goals, functions, and historical and ideological roots, there is clearly an overlap between the two. For example, it is difficult to imagine that participation in a P-PE project that led to a deeper understanding of program functions and processes and to the development of skills in systematic inquiry would not, concomitantly, empower the program practitioners (or group). Equally, it is probable that a T-PE project that led individuals to take control of their own development project functions or circumstances would also prove to be of considerable practical value in project development and implementation.

Apart from the overlap among central and secondary goals for PE, both streams overlap with yet a third rationale for collaborative
inquiry. Identified by Levin (1993) as epistemological and/or philosophic in nature, this argument posits that research knowledge and evaluation data are valid only when informed by practitioner perspectives. While Guba and Lincoln (1989) argue this point vehemently, their approach to evaluation is not necessarily participatory, given the dominant role played by the evaluator in immersing her- or himself in the local context and constructing meaning from that perspective. Yet one can easily imagine that the development of valid local knowledge, based on shared understanding and the joint construction of meaning, would be integral to both forms of PE.

Thus, we conclude that P-PE and T-PE differ in their primary functions -- practical problem solving versus empowerment -- and in their ideological and historical roots, but overlap in their secondary functions and in other areas. Despite differences that are evident at first blush, T-PE and P-PE have substantial similarities.
DIFFERENTIATING PROCESS DIMENSIONS OF COLLABORATIVE
INQUIRY
We propose three distinguishing characteristics of participatory evaluation. They are: (1) control of the evaluation process, ranging from control of decisions resting completely in the hands of the researcher to control being exerted entirely by practitioners. Control here relates particularly to technical decisions, that is, decisions about evaluation processes and conduct, as opposed to decisions about whether and when to initiate evaluation. (2)
stakeholder selection for participation, ranging from restriction to primary users to inclusion of all legitimate groups; and (3) depth of participation, from consultation (with no decision-making control or responsibility) to deep participation (involvement in all aspects of an evaluation -- design, data collection, analysis and reporting -- as well as in decisions about dissemination of results and use). A participatory evaluation process can be "located" somewhere on these continua, depending on who controls the process, who participates, and how much. Shulha and Cousins (1995) observed that these distinguishing features correspond to basic dimensions or continua along which any given collaborative research project might be located. Cousins et al. (1996) made a similar case for differentiating among various forms of collaborative evaluation and between collaborative and non-collaborative evaluation.
If we accept that these three dimensions are useful for differentiating collaborative approaches to systematic inquiry, we might also consider the possibility that they may be independent of one another. That is to say, decisions about who participates, to what extent they participate, and who controls evaluation technical decision making can, in theory, be made independently of one another. Empirically, such independence seems unlikely, but heuristically this distinction is a useful one. Figure 1 represents the characteristics in three-dimensional space. This device may be used to consider the collaborative processes associated with a variety of genres of collaborative and even non-collaborative inquiry. Any given example may be considered in terms of its location on each of the dimensions
or continua, thereby yielding its geometric coordinate location in the Figure.
Insert Figure 1 about here
We can now integrate this framework with our prior discussion of goals and functions in order to answer the following questions:
- How do P-PE and T-PE differ from one another?
- How do forms of PE differ from other forms of collaborative evaluation?
- How do forms of PE differ from other forms of collaborative inquiry?
As an aid to making such determinations, a wide variety of approaches to collaborative inquiry and evaluation are described in Table 1 and considered in the text to follow. It should be noted that only representative forms of the various categories in Table 1 are portrayed.
Insert Table 1 about here
How do P-PE and T-PE differ from one another?
Differences in goals, functions, and historical roots between these two streams of PE are explicated above and require no further elaboration here. After examining the dimensions of process, it may be concluded that these approaches are quite similar, with the exception of deciding who participates in the evaluation exercise. In the Cousins and Earl model (1992, 1995), the emphasis is on fostering program decision making and organizational problem solving, and evaluators tend to work in partnership with potential users who have
the clout to do something with the evaluation findings or emergent recommendations. While this approach accommodates participation by others, the likelihood of potential users' ownership of, and inclination to do something about, evaluation data will be limited without the involvement of key personnel. Indeed, such unsatisfactory outcomes have been demonstrated empirically (Cousins, 1995; King, 1995; Lafleur, 1995). Part of the rationale for limiting participation to stakeholders closely associated with program support and management is that the evaluation stands a better chance of meeting the program's and organizational decision makers' timelines and need for information. Although the evaluator acts on behalf of the primary users by safeguarding against the intrusion of self-serving interests (mostly by keeping program practitioner participants true to the data and findings), the model is not as useful in cases where there is a disagreement or lack of consensus among stakeholder groups about program goals or intentions. In such cases, conflict among competing interest groups needs to be resolved, and if stakeholder participation is limited to primary decision makers, the evaluation is more likely to be seen as illegitimate and biased. For evaluators finding themselves in situations of this sort, questions arise as to the prudence of their acting in a conflict-resolution mode and/or their ability to resist being co-opted by powerful stakeholders.
On the other hand, T-PE is more generally inclusive concerning stakeholder participation, especially with regard to program beneficiaries, members of the program, or the development project's target population. In this sense, power issues are more
directly addressed. Of course, program beneficiaries are the very population that T-PE is intended to serve through fostering empowerment and illuminating key social and program issues. While there may be direct roles for evaluators and facilitators in training practitioners, dependency on such professionals diminishes as time passes and local experience is acquired. This may also be the case for P-PE. Cousins and Earl (1995) note that dealing with organizational constraints and integrating evaluation and participation into the culture of organizations are formidable tasks, destined to unfold over several repetitions and protracted periods of time.
Both forms of PE share the intention of involving stakeholders and community members in all aspects of the evaluation project, including the highly technical ones. Based on the emergence of practical and logistical issues, Cousins and Earl (1995) question the value and viability of engaging practitioners in highly technical activities. In some contexts, however, community members may be better than evaluators at some technical tasks (Chambers, 1997; Gaventa, 1993). In either approach, the assumption that mastery of such technical tasks is a form of empowerment remains intact.
How do forms of PE differ from other forms of collaborative evaluation?
In Panel B of Table 1, five examples of alternative forms of collaborative evaluation are described. Perhaps the best known of these -- stakeholder-based evaluation -- bears the least resemblance to either form of PE. This may seem somewhat surprising, particularly in view of the fact that P-PE was conceived to be an extension of the
stakeholder-based model (Cousins & Earl, 1992). Stakeholder-based evaluation has goals similar to those of P-PE but is perhaps better suited to situations where widespread agreement among stakeholder groups about program goals is lacking. By involving all legitimate groups in the process, the evaluator is able to pit subscribers of different value positions against one another while maintaining a generally neutral stance. In working toward consensus building, the evaluative process becomes more useful to a wider audience than would be the case if only one, or some, stakeholder groups were included. By controlling the technical decision making, and by operating in the role of mediator/facilitator, the evaluator is better able to protect her- or himself from being co-opted. Whereas P-PE is best suited to formative evaluation problems (Ayers, 1987; Cousins & Earl, 1992, 1995), stakeholder evaluation is a viable approach to decision-oriented or decidedly summative evaluation questions. On the other hand, stakeholder-based evaluation differs from T-PE by virtue of its practical goals, evaluator control, and limited stakeholder participation in a wide range of evaluation activities. It should be noted that although most authors describe its functions in pragmatic terms, Mark and Shotland (1985) suggest that emancipatory and representativeness rationales underlie the implementation of stakeholder-based evaluation. However, in a recent survey of evaluators, Cousins et al. (1996) observed that much of the collaborative evaluation practice in North America is aligned with the stakeholder-based model. Most of the reports examined suggest that practical decision making and problem solving were the forces driving
the model's implementation.
Of the four alternatives to stakeholder-based evaluation listed in Table 1 (Panel B), two are more closely aligned with the more practical stream of PE, while the remaining two tend to resemble the more political stream. First, both democratic and empowerment evaluation are similar to T-PE. Democratic evaluation is intended to maximize the utility of evaluation in a pluralistic society (MacDonald, 1976). In this respect it is similar to stakeholder-based evaluation. Evaluators and participants work in partnership, sharing the work and the decisions. The evaluation is rendered democratic "by giving participants considerable control over the interpretation and release of information" (McTaggart, 1991a). Stakeholders include all legitimate groups, a key point. To that end, representativeness among legitimate stakeholder groups, and a cooperative working relationship between evaluators and stakeholders, are pivotal.

While similarly targeted on political goals, empowerment evaluation (Fetterman, 1994, 1995) holds as key objectives the empowerment of individuals and groups, the illumination of issues of concern to them, and the development of a basic sense of self-determination. Since these goals are manifestly emancipatory, empowerment evaluation is more closely linked to T-PE than to P-PE. But this approach, as described by Fetterman (1994; Fetterman et al., 1996), is in some respects enigmatic. In one instance, the evaluator acts exclusively in a facilitation mode, helping to support program or project personnel in their efforts to become self-sufficient. In another instance, the evaluator is "morally compelled" to assume an advocacy
role for groups with less power and voice. Variation between these two examples of empowerment evaluation, both in terms of the locus of control and the meaning of participation, is considerable. The approach also differs from T-PE inasmuch as evaluators tend to work with those closely associated with the project being evaluated, as opposed to a wider array of stakeholders. Finally, Patton (1997b) conducted a careful analysis of the collection of examples of empowerment evaluation compiled by Fetterman et al. (1996) and concluded that many of these cases are exemplars of "participatory, collaborative, stakeholder-involving, and even utilization-focussed evaluations, and really do not meet the criteria for empowerment" (p. 149). This analysis suggests quite strongly that empowerment evaluation, in practice, tends to be best conceptualized as a form of P-PE.
Similarities with P-PE are apparent in descriptions of school-based evaluation and developmental evaluation. Nevo (1993, 1994) advocates developing "evaluation-mindedness" in schools through training, support, and school-based mechanisms for evaluation. Such mechanisms provide a basis for dialogue between school staff and external requests for accountability, and while they are conducted internally and exclusively by school staff, they could feed into subsequent external, summative evaluations. School-based evaluation's focus on integrating evaluation into the organizational culture of schools, its focus on stakeholders closely linked to the program, and their involvement in all phases of the evaluation are features that match with P-PE. In developmental evaluation, on the
other hand, evaluators work very closely with program developers by helping them to integrate evaluation into the development phase of programming (Patton, 1994, 1997a). In this model the evaluator works in partnership with developers, but stakeholder participation in the evaluation is comparatively limited. In a sense, stakeholders represent the development function of the partnership, and while fully apprised of and able to shape the evaluation, their direct participation remains peripheral. This approach fairly closely resembles P-PE, with the exception of the depth-of-participation issue.
How do forms of PE differ from other forms of collaborative inquiry?
While evaluation is directly linked to judgments of the merit and
worth of a particular program, project or innovation, and thus provides
a systematic basis to support decision making, forms of systematic
inquiry designed for other purposes may also be carried out on a
collaborative basis and are therefore of interest for comparing and
contrasting with PE. Indeed, such comparisons have been made
before (Huberman, 1995; King, 1995). In Table 1, Panel C, three
alternative forms of participative inquiry are described. One of these,
a North American adaptation of participatory action research (PAR), is
found to bear some resemblance to P-PE; another, emancipatory
action research, is more closely related to T-PE.
As noted earlier, PAR first arose in the international and
community development context but was adapted in North America in
response to the limitations of other approaches to social science
research. A distinguishing feature of PAR in North America is that it
seeks to help organizations change rather than just accumulating facts
and examining implications (Whyte, 1991). Grounded in three
streams of intellectual reasoning -- social research methodology,
participation in decision making, and sociotechnical systems thinking
-- this version of PAR is distinct from those forms of participatory
research that entail collaborative social science research with no
action imperative (Tripp, 1988). A variant is participatory action
science (Argyris & Schon, 1991), which focuses on theories-in-use,
including strategies for uncovering organizational defensive routines.
In the general PAR approach, stakeholders who are members of the
target organization participate both as subjects of research and as co-
researchers. PAR "aims at creating an environment in which
participants give and get valid information, make free and informed
choices (including the choice to participate) and generate internal
commitment to the results of their inquiry" (Argyris & Schon, 1991,
p. 86). Inasmuch as the goal of PAR is to inform and improve practice
within organizations, the approach links well with P-PE. There is
generally a partnership arrangement between researchers and
organization members, and the latter take on an active role in a wide
range of research activities.
Emancipatory action research (also called participatory action
research by McTaggart, 1991b) is more closely associated with T-PE,
but differs in important ways. This approach stems from the work of
Jürgen Habermas, and is expressly liberational because practitioners
come together with critical intent (Carr & Kemmis, 1991). Power
resides in the whole group, not with the facilitator or with individuals
or stakeholders. The practitioner group accepts responsibility for its
own emancipation from irrationality, injustice, alienation, and failure.
A variant stream is called critical action research (Tripp, 1988), which
is fully in sync with the sentiment but stops short of action. McTaggart
(1991b) does not find the distinction useful. Although many of these
attributes are shared with T-PE, this form of action research precludes
the involvement of conventional researchers, who are viewed, at least
potentially, as members of the power elite.
Finally, cooperative inquiry (Heron, 1981; Reason & Heron,
1986), with its roots in humanistic psychology, is a form of research
that arose in response to perceived deficiencies in orthodox
approaches. In cooperative inquiry, "all those involved in the research
are both co-researchers, whose thinking and decision making
contribute to generating ideas, designing and managing the project,
and drawing conclusions from the experience and also co-subjects,
participating in the activity being researched" (emphasis in original,
Reason, 1994, p. 326). Propositional knowledge about persons is
derived both from their experiential knowledge and from their practical
knowledge. Typically, the inquiry group is formed in response to the
initiatives of certain parties or members, and as such must struggle
with internal issues of power, decision making, and practicality. To
the extent that these issues are resolved internally, the group's work
will be productive. In cooperative inquiry, trained researchers
normally are not participants in the inquiry group. Participants
engage in all phases of the inquiry process, including focusing,
observing, reflecting, and deciding.
ISSUES AND QUESTIONS
We end by posing a set of questions for consideration. These
are not necessarily new, nor are they unique to PE as an approach to
collaborative inquiry. Many of these issues will be addressed in more
depth elsewhere in this volume.
Power and its ramifications: Who really controls the evaluation?
How does one account for and deal with variation in power and
influence among participants, and between participants and the
evaluator? How does one bring in the silent voices that have not yet
been heard? How much should (or can) outside evaluators meddle in
the affairs of others, especially when the latter will need to live with
the consequences long after the evaluator has left the scene?
Ethics: Closely related to issues of power and authority are issues of
ethical conduct and ownership of data. Who owns the evaluation
findings? Who has the power to dictate what data will be used and to
what end? In what ways can participants with less power influence
these decisions? In some instances, professional evaluators may
witness the deliberate manipulation of data or other mischievous
behavior by participants. At what point does the evaluator draw the
line and terminate their participation? Can or should the evaluator
just walk away from such situations?
Participant selection: Who participates on the inquiry team, and
how are participants identified and selected? The answers to these
questions bear on the nature of power relationships within the context
in question. What are the implications for participant selection when
PE projects arise out of external mandates from funding agencies
and/or organizations responsible for initiating such activities? How
many participants should there be? How will they participate, and at
what juncture? A related concern has to do with practical issues: to
what extent, particularly in projects involving participants from a wide
range of interest groups, is the feasibility of the project compromised?
Technical quality: How is technical quality defined, and by whom?
Are there tensions between data quality and the relevance of the
evaluation process to the local setting? What criteria ought to be used
in deciding what to do in such cases? These questions are also related
to issues of ownership and control.
Cross-cultural issues: How can cultural, language, or racial barriers
be addressed? To what extent do the technical knowledge and
background of the professional evaluator fit with the culture in
question? Can they be adapted or made to fit, and if so, how?
Evaluation training issues: How is the training of participants in
evaluation and research methods to be accomplished? Will training
occur prior to the evaluation, during it, or some combination of the
two? To what extent do cultural and linguistic differences intrude on
training effectiveness? Can evaluators and other professionals
assume a role of trainer/facilitator with relative ease? What sorts of
training should evaluators receive as they develop professionally and
take on participatory projects? What knowledge and skills will be
needed? Can such knowledge and skills be taught and, indeed,
learned in formal in-service and pre-service training environments?
Conditions enabling PE: Finally we ask, what conditions need to be
in place for meaningful PE to flourish? What will be the nature of
participants' backgrounds and interests? What constraints will they
bring to the task (e.g., workload considerations, educational
limitations, motivation)? Who initiates the evaluation, and why? What
sorts of time constraints will intrude? How will these be addressed?
The foregoing provides some questions that we see as
challenges for participatory evaluators and people interested in
engaging in such activities. Credible answers will come only from
sustained PE practice, and particularly from practice that includes
deliberate mechanisms for ongoing observation and reflection. It is
our hope that both participatory evaluators and the participants with
whom they work will report on their experiences, thus informing
professional understanding of these important issues.
REFERENCES
Alkin, M. C. (1991). Evaluation theory development: II. In M. W.
McLaughlin & D. C. Phillips (Eds.), Evaluation and education: At
quarter century (pp. 91-112). Chicago, IL: The University of Chicago
Press.

Argyris, C., & Schon, D. A. (1991). Participatory action research and
action science: A commentary. In W. F. Whyte (Ed.), Participatory
action research (pp. 85-96). Newbury Park, CA: Sage.

Ayers, T. D. (1987). Stakeholders as partners in evaluation: A
stakeholder-collaborative approach. Evaluation and Program
Planning, 10, 263-271.

Brunner, I., & Guzman, A. (1989). Participatory evaluation: A tool to
assess projects and empower people. In R. F. Connor & M. H.
Hendricks (Eds.), New directions for program evaluation:
International innovations in evaluation methodology, No. 42 (pp.
9-17). San Francisco: Jossey-Bass.

Campos, J. D. (1990). Towards participatory evaluation: An inquiry
into post-training experiences of Guatemalan community
development workers. Doctoral dissertation, University of
Massachusetts, Amherst.

Carr, W., & Kemmis, S. (1992). Becoming critical: Education,
knowledge and action research. London: Falmer.

Comstock, D. E., & Fox, R. (1993). Participatory research as critical
theory: The North Bonneville, USA experience. In P. Park et al.
(Eds.), Voices of change: Participatory research in the United States
and Canada. Toronto: OISE Press.

Coupal, F. (1995, November). Participatory project design: Its
implications for evaluation. A case study from El Salvador. Paper
presented at the joint meeting of the Canadian Evaluation Society
and the American Evaluation Association, Vancouver.

Cousins, J. B. (1995). Assessing program needs using participatory
evaluation: A comparison of high and marginal success cases. In J.
B. Cousins & L. M. Earl (Eds.), Participatory evaluation in education:
Studies in evaluation use and organizational learning (pp. 55-71).
London: Falmer.

Cousins, J. B. (1996). Consequences of researcher involvement in
participatory evaluation. Studies in Educational Evaluation, 22(1),
3-27.

Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative
evaluation in North America: Evaluators' self-reported opinions,
practices and consequences. Evaluation Practice, 17(3), 207-226.

Cousins, J. B., & Earl, L. M. (1992). The case for participatory
evaluation. Educational Evaluation and Policy Analysis, 14(4),
397-418.

Cousins, J. B., & Earl, L. M. (Eds.). (1995). Participatory evaluation in
education: Studies in evaluation use and organizational learning.
London: Falmer Press.

Ellis, D., Reid, G., & Barnsley, J. (1990). Keeping on track: An
evaluation guide for community groups. Vancouver: Women's
Research Centre.

Fals-Borda, O. (1980). Science and the common people. Yugoslavia:
International Forum on Participatory Research.

Fals-Borda, O., & Anisur-Rahman, M. (1991). Action and knowledge:
Breaking the monopoly with participatory action research. New
York: Apex Press.

Fernandes, W., & Tandon, R. (1981). Participatory research and
evaluation: Experiments in research as a process of liberation. New
Delhi: Indian Social Institute.

Fetterman, D. W. (1994). Empowerment evaluation. Evaluation
Practice, 15(1), 1-15.

Fetterman, D. W. (1995). In response. Evaluation Practice, 16(2),
179-199.

Fetterman, D. W., Kaftarian, A. J., & Wandersman, A. (Eds.). (1996).
Empowerment evaluation: Knowledge and tools for self-assessment
and accountability. Thousand Oaks, CA: Sage.

Feuerstein, M. T. (1986). Partners in evaluation: Evaluating
development and community programmes with participants.
London: MacMillan.

Feuerstein, M. T. Finding methods to fit the people: Training for
participatory evaluation. Co