Empowerment Evaluation
Yesterday, Today, and Tomorrow
David Fetterman
Stanford University
Abraham Wandersman
University of South Carolina, Columbia
Abstract: Empowerment evaluation continues to crystallize central issues for evaluators and the
field of evaluation. A highly attended American Evaluation Association conference panel, titled
“Empowerment Evaluation and Traditional Evaluation: 10 Years Later,” provided an opportunity to
reflect on the evolution of empowerment evaluation. Several of the presentations were expanded
and published in the American Journal of Evaluation. In the spirit of dialogue, the authors respond
to these and related comments. The authors structure their discussion in terms of empowerment
evaluation’s past, present, and future as follows: (a) Yesterday (critiques aimed at empowerment
evaluation issues that arise from its early stages of development), (b) Today (current issues associ-
ated with empowerment evaluation theory and practice), and (c) Tomorrow (the future of empow-
erment evaluation in terms of recent critiques). This response is designed to enhance conceptual
clarity, provide greater methodological specificity, and highlight empowerment evaluation’s com-
mitment to accountability and producing outcomes.
Keywords: empowerment evaluation; capacity building; Getting To Outcomes; outcomes;
empowerment
A 2005 American Evaluation Association (AEA) conference panel session titled “Empow-
erment Evaluation and Traditional Evaluation: 10 Years Later,” provided an opportunity to
engage in an ongoing dialogue in the field and reflect on the development and evolution of
empowerment evaluation. Speakers included Drs. Robin Miller, Christina Christie, Nick Smith,
Michael Scriven, Abraham Wandersman, and David Fetterman. In this Evaluation Forum article,
we engage in further dialogue and respond to comments made by panel members at AEA and
in recent publications in the American Journal of Evaluation (R. L. Miller & Campbell, 2006;
N. L. Smith, 2007). Many, including Cousins, have asked us to respond to Cousins’ (2005) criticisms
(see Patton, 2005). In the present article, we divide our responses to the current criticisms into
three categories: (a) Yesterday (where empowerment evaluation was): critiques aimed at empow-
erment evaluation at its early stages of development (many of which we have already responded
to in the literature); (b) Today (where empowerment evaluation is today): comments and/or cri-
tiques pointed at current empowerment theory and practice; and (c) Tomorrow (where we see
empowerment evaluation going in relation to the critiques): comments that are related to the
future of empowerment evaluation. A brief discussion about the evolution of empowerment evaluation provides the context required to meaningfully evaluate the critiques and corresponding responses in the literature.

Authors’ Note: David Fetterman, Division of Evaluation, School of Medicine, 251 Campus Drive, MSOB X399, Stanford University, Stanford, CA 94025; phone: (650) 269-5689; e-mail: profdavidf@yahoo.com or davidf@stanford.edu. Abraham Wandersman, Psychology Department, 1512 Pendleton Street, University of South Carolina, Columbia, SC 29208; phone: (803) 777-7671; e-mail: wandersman@sc.edu.
Background
The first empowerment evaluation book, Empowerment Evaluation: Knowledge and Tools
for Self-assessment and Accountability (Fetterman, Kaftarian, & Wandersman, 1996), provided
an introduction to the theory and practice of this approach. It also highlighted the scope of
empowerment evaluation ranging from its use in a national educational reform movement to
its endorsement by the W. K. Kellogg Foundation’s Director of Evaluation. The book also pre-
sented examples of empowerment evaluation in various contexts, including federal, state, and
local government; HIV prevention and related health initiatives; African American communi-
ties; and battered women’s shelters. This first volume also provided various theoretical and
philosophical frameworks as well as workshop and technical assistance tools. It set the stage
for future developments.
Foundations of Empowerment Evaluation (Fetterman, 2001), the second empowerment eval-
uation book, built on the previous collection of knowledge and shared experience. The approach
was less controversial at that time. Empowerment evaluation had already become a part of the
intellectual landscape of evaluation. The book was pragmatic, providing clear steps and case
examples of empowerment evaluation work. It also highlighted the role of the Internet to facil-
itate and disseminate the approach.
The most recent empowerment evaluation book is titled Empowerment Evaluation Principles
in Practice (Fetterman & Wandersman, 2005). It contributed to greater conceptual clarity of
empowerment evaluation by making explicit the underlying principles of the approach, ranging
from improvement and inclusion to capacity building and social justice. In addition, it highlighted
the approach’s commitment to accountability and outcomes by stating them as an explicit prin-
ciple and presenting substantive examples of outcomes. Case examples of empowerment eval-
uation were presented in educational reform, youth development programs, and child abuse
prevention programs.
All of these books have benefited immensely from lively engagement and critique by col-
leagues, including Alkin and Christie (2004), Altman (1997), Brown (1997), Cousins (2005),
Scriven (1997, 2005), Sechrest (1997), Stufflebeam (1994), Patton (1997, 2005), and Wild
(1997), among others. Building on our tradition of past responses to earlier critiques, this
response further clarifies the purpose and objectives of empowerment evaluation. It also discusses
misperceptions and differences of perspective (Fetterman, 1997a, 1997b, 2005; Wandersman &
Snell-Johns, 2005).
Yesterday
This category focuses on critiques based on old data (empowerment as it was in its earlier
stages of development) or old arguments that have reappeared. They include the following top-
ics or issues: (a) conceptual ambiguity, methodological specificity, and outcomes; (b) empow-
ering others; (c) advocacy; (d) consumers; (e) compatibility (internal and external); (f) practical
or transformative forms; (g) empowerment evaluation as a form of evaluation; (h) bias; (i) social
agenda; (j) ideology; and (k) differences between collaborative, participatory, and empower-
ment evaluation. Although many of these issues seem like déjà vu to some, they have been
raised again and appear new to enough colleagues to merit a consolidated response.
Conceptual Ambiguity, Methodological Specificity, and Outcomes
In many ways, we applaud the “taking stock” analysis by R. L. Miller and Campbell (2006)
in the American Journal of Evaluation. Based on the 2005 panel at AEA, they were the first to
publish in the symposium. Their work was designed to be a systematic review of empowerment
evaluation examples in the literature. On the positive side, R. L. Miller and Campbell per-
formed a very commendable job of highlighting types or modes of empowerment evaluation,
settings, reasons for selecting the approach, who selects the approach, and degree of involve-
ment of participants. The relationship between the type of empowerment evaluation mode and
related variables was insightful. They provided many insights, including the continuum from flex-
ibility to structure and standardization in empowerment evaluation, based on the size
of the project. R. L. Miller and Campbell also noted that the reasons for selecting empower-
ment evaluation were generally appropriate, including capacity building, self-determination,
accountability, making evaluation a part of the organizational routine, and cultivating staff buy-
in. These and other insights are very useful contributions to the literature.
Concerns about the Miller and Campbell article. A review of the references in their article
reveals a significant limitation to their findings. The majority of empowerment evaluation pro-
jects to which they refer were conducted more than a decade ago (Fetterman et al., 1996). We
agree with many of the critiques raised by R. L. Miller and Campbell (2006), as they refer to that
period, including conceptual ambiguity, methodological specificity, and outcomes. However,
the R. L. Miller and Campbell study of cases of empowerment evaluation does not reflect the
current literature. A few significant but neglected or omitted case examples in the sample include
Fetterman’s work on a $15 million Hewlett-Packard Digital Village project (Fetterman, 2005,
pp. 98-107), an empowerment evaluation of academically distressed Arkansas Delta school districts
(Fetterman, 2005, pp. 107-122), and a statewide tobacco prevention empowerment evaluation
(visit http://homepage.mac.com/profdavidf/Tobacco.htm). In addition, past published examples
in a children’s hospital, a reading improvement program, and an Upward Bound program should
have been included in the sample (Fetterman, 2001). International examples from Australia,
Finland, Spain, Mexico, New Zealand, and Japan are neglected (see http://homepage.mac
.com/profdavidf). Youth empowerment evaluations are not included. A second search using
Google Scholar (limited to 1999 to 2005, instead of the more complete
1994 to 2005 period) produces more than 16 relevant—but apparently
neglected—citations well within the time period under study by R. L. Miller and Campbell,
including journals such as the Evaluation Review, the Harvard Family Research Project’s
Evaluation Exchange, and Evaluation and Program Planning (Andrews, 2004; Butterworth,
2004; Gilham, Lucas, & Sivewright, 1997; Horsch, Little, Smith, Goodyear, & Harris, 2002;
Lerner, Fisher, & Weinberg, 2000; Lewis et al., 1999; Martin, Ribisl, Jefferson, & Houston, 2001;
McQuiston, 2000; W. Miller & Lennie, 2005; Reininger et al., 2003; Richards-Schuster, 2003;
Sabo, 2003; Sanstad, Stall, Goldstein, Everett, & Brousseau, 1999; Secret et al., 1999; Wilson,
2004; Zimmerman & Erbstein, 1999). This list does not even include a review of the series of
papers presented at the AEA or other international associations.
Moreover, the issues raised about conceptual ambiguity and methodology have been
addressed at length. In fact, many of these concerns motivated the writing of the next two books
on empowerment evaluation, Foundations of Empowerment Evaluation (Fetterman, 2001) and
Empowerment Evaluation Principles in Practice (Fetterman & Wandersman, 2005). The issues
that they raise in their article are briefly addressed again below.
In addition, there are a number of substantive problems with the design and execution of
the R. L. Miller and Campbell (2006) study that have an impact on their conclusions and thus
their relevance to empowerment evaluation today. Specifically, many of the publications
included in their analysis had the following problems:
•They were empowerment evaluation in name only (12 of 46), as R. L. Miller and Campbell
themselves state. This is more than 25% of the studies used in their analysis. Evaluations
identified as empowerment in name only should not have been included in the sample. Including
them commingles the data and confounds the clarity and accuracy of the analysis. It
certainly distorts and minimizes empowerment evaluation’s outcome-oriented track record.
•They were not written with R. L. Miller and Campbell’s criteria in mind concerning what
constitutes empowerment evaluation or at least what needs to be included in the published arti-
cle to be classified as such.
•They were not written with the latest empowerment evaluation principles in mind (i.e., “most
of the cases analyzed in this review were published before Fetterman and Wandersman put
forth this ten principle view”; R. L. Miller & Campbell, 2006, p. x).
•They were limited to journal articles, chapters, and books rather than the most common con-
duit for evaluation in general—evaluation reports.
Empowering Others
N. L. Smith (2007) has brought up issues about the role of the empowerment evaluator in
empowering “those groups in society they seek to empower” (p. x). This issue rests on a faulty
assumption. No one empowers anyone—including empowerment evaluators—people empower
themselves. Empowerment evaluators help create an environment conducive to the development
of empowerment. This position was stated in 1996, in an attempt to anticipate this type of criti-
cism (see Fetterman, Kaftarian, & Wandersman, 1996, p. 5). This is not a simple semantic game.
It is an issue of accuracy and attribution. Empowerment evaluation helps to transform the poten-
tial energy of a community into kinetic energy. However, the community is the source of that energy.
On a baseball diamond, it is not the bat that drives the run home; it is the player. The bat,
like the empowerment evaluator, is only an instrument used to transform that energy.
Advocacy
N. L. Smith (2007) has raised an issue about the role of advocacy and empowerment eval-
uation, suggesting that the approach has a philosopher-king orientation and that the principles might be revised toward a
politically neutral position. First, as we stated in 1996 and 2001, we do not think that
any evaluation is truly neutral. We highlighted Greene’s (1997) explanation that
social program evaluators are inevitably on somebody’s side and not on somebody else’s side.
The sides chosen by evaluators are most importantly expressed in whose questions are addressed
and, therefore, what criteria are used to make judgments about program quality. (p. 25)
This does not mean that empowerment evaluators are necessarily advocates of a specific
program. In fact, empowerment evaluators do not typically advocate for a specific program, as
program staff members and participants advocate for their own programs—if the data merit it.
As stated in 2001, it would be disempowering for an empowerment evaluator to assume this
role if their clients are capable of doing so themselves (Fetterman, 2001, pp. 115-117).
Consumers
R. L. Miller and Campbell (2006, pp. 30-31), N. L. Smith (2007), and Scriven (2005) have
suggested that empowerment evaluation focuses more on staff than on participants. Scriven
raised this issue in 1997. In an article responding to his book review and again in the second book
(Fetterman, 2001), Fetterman explained specifically that consumers, or program participants, are
often a driving force in empowerment evaluations (Fetterman, 2001, pp. 3, 118-119). In addition,
we explained how we considered this to be a problem in all forms of evaluation, especially tra-
ditional evaluation. Furthermore, although program participants should be involved, they are not
the only group in an empowerment evaluation (Fetterman, 2001). Typically, program staff
members, evaluators, donors, and participants are involved. The value of this critique then and
now, however, is that it is a useful reminder to ensure that participants have a significant role in
the evaluation, in combination with program staff members and others. This concern is addressed
further under the empowerment evaluation principle of inclusion.
Compatibility: Internal and External Evaluation and Traditional
and Empowerment Evaluation
N. L. Smith (2007) and others pit empowerment evaluation against traditional evaluation,
including experimental design. However, we have stated, from the initial launching of the
approach in both Fetterman’s presidential address and the first book, that empowerment evalu-
ation and traditional evaluation are not mutually exclusive (Fetterman, 1993; Fetterman, 2001,
pp. 122-123; Fetterman et al., 1996, p. 6). In addition to these publications, Fetterman had a
controversial exchange with Scriven (1997) about the topic. Internal and external forms of eval-
uation can be mutually reinforcing. This was also discussed in terms of how empowerment eval-
uation adheres to the spirit of the evaluation standards (Fetterman, 2001, pp. 87-99). The issue
is a false argument. Even Scriven has stated that “it is as false an argument as deciding whether
you will use qualitative versus quantitative approaches” (p. 12).
Practical or Transformative
Cousins (2005) asked whether empowerment evaluation is practical or transformative. We
responded in the same book (Fetterman & Wandersman, 2005) that, similar to the distinction
Cousins and Whitmore (1998) made for participatory evaluation, empowerment evaluation can
be practical and/or transformative depending on the task at hand (Fetterman, 2005, p. 188).
Cousins (in a personal communication reported in Patton’s 2005 review of our latest book) raises
the same question. As Fetterman stated in 2001, practical empowerment evaluation focuses on
program decision making and problem solving, much like Practical Participatory Evaluation
(Cousins, 2005, p. 186). Similarly, transformative empowerment evaluation is primarily psycho-
logical and secondarily political in nature, similar to Transformative Participatory Evaluation
(Fetterman, 2001, p. 20). Within this context, the question he raises about whether empowerment
evaluation is “more about evaluation utilization than about self-determination” (Cousins, 2005,
p. 187) is a false dichotomy. It depends on the task it is being used to address.
Empowerment Evaluation as Evaluation
N. L. Smith (2007) has rhetorically resurfaced an old issue. Within the context of attempt-
ing to depict empowerment evaluation as an ideology, he raises the question of whether empow-
erment evaluation is a form of evaluation. Smith appears to cushion this critique by also calling
randomized control trials an ideology. However, this is a well-worn path in the literature.
Sechrest (1997), Stufflebeam (1994), and Scriven (1997) referred to empowerment evaluation
as a movement. The 1995 articles and book responded to that charge. Our position has been con-
sistent: Empowerment evaluation is a form of evaluation and not a movement. The 2001 and
2005 empowerment evaluation books also provide ample evidence that empowerment evalua-
tion is evaluation and can be used to fulfill two and possibly all three of Chelimsky’s (1997) pur-
poses of evaluation: development, accountability, and knowledge.
Bias (Self-Serving)
Empowerment evaluations should represent critical self-examinations of program operations.
Contrary to Cousins’ (2005) position that “collaborative evaluation approaches . . . [have] . . . an
inherent tendency toward self-serving bias” (p. 206), we have found many empowerment evalu-
ation participants to be highly critical of their own operations, in part because they are tired of seeing the
same problems and because they want their programs to work. Similarly, empowerment evalua-
tors may be highly critical of programs that they favor because they want them to be effective and
accomplish their intended goals. It may appear counterintuitive, but in practice we have found
appropriately designed empowerment evaluations to be more critical and penetrating than many
external evaluations. This issue, along with the related issue of whether self-evaluation is possi-
ble and is evaluation, is addressed in detail in Wandersman and Snell-Johns (2005).
Social Agenda
N. L. Smith (2007) raised the question of whether empowerment evaluation should pro-
mote a social agenda. Our position is that if reducing obesity, AIDS, adolescent pregnancy,
tobacco consumption, sexual violence, and other public health problems are considered a
social agenda, the answer is yes, we are on the side of reducing them. As we stated explicitly
in Fetterman and Wandersman (2005) and Wandersman and Snell-Johns (2005), empowerment
evaluation is bottom line in its orientation, and we strive to see empowerment evaluation help
reduce these and other societal problems. We are advocates for obtaining results, and we work
in programs that aim to achieve widely agreed on social agendas. This is discussed in detail in
the 2005 book focusing on the principle of social justice. We have also found that being in sup-
port of a social agenda, such as helping dropouts and students “at risk” of dropping out, makes
us more critical about program performance because we want the programs to work.
Ideology
N. L. Smith (2007) proposed that empowerment evaluation is an ideology. The work of
empowerment evaluation scholars and practitioners in the areas of definition refinement, guid-
ing principles development, theory development, methodological rigor, and the politics of prac-
tice transcends the narrow classification of empowerment evaluation as an ideology. The power
of ideological terminology can be clarifying because of its simplicity. However, it can also unin-
tentionally mislead and distort. The proper use of terms and metaphors (as N. L. Smith points
out in his 1981 work on metaphors) must take into consideration the context or environment.
Classifying an approach as a form of ideology is similar to early characterizations that empow-
erment evaluation is a movement. This label, within an academic environment, carries a lot of
baggage, trivializing and undermining conscientious efforts, reducing much hard work to
rhetoric and political posturing. This topic has been raised and addressed many times before
(Fetterman, 1997a, 1997b, 2001; Scriven, 1997; Sechrest, 1997; Stufflebeam, 1994).
The use of the term ideology to highlight a value difference underlying empowerment eval-
uation as compared with randomized design demonstrates the limited utility of the term. First,
these two approaches are not mutually exclusive. Second, empowerment evaluators have used
the experimental design as a tool. Third, they also value experimental design as a tool to test the
efficacy of empowerment evaluation. Chinman and colleagues (2005), in a Centers for Disease
Control and Prevention (CDC)-funded study, are using a quasi-experimental design to examine whether schools
using empowerment evaluation (in terms of the 10-step “Getting to Outcomes” model) obtain
greater outcomes than schools that implement programs without empowerment evaluation.
They have plans to use a randomized design in future projects. We appreciate the strengths and
weaknesses of randomized control designs.
Collaborative, Participatory, and Empowerment Evaluation
Cousins (2005) and R. L. Miller and Campbell (2006) brought up the issue of conceptual
clarity again but without referencing past literature on the topic. Cousins stated that there is
“considerable confusion concerning conceptual differentiation among collaborative, participa-
tory, and empowerment approaches in evaluation” (Cousins, 2005, p. 183). In brief, there are
similarities and differences between the approaches. The difference between these similar and
reinforcing approaches has been described in the first and second books (see Dugan, 1996,
p. 283; Fetterman, 2001, pp. 112-113). For example, in our own book (Fetterman & Wandersman,
2005), Cousins (2005) has done much work to distinguish between the approaches based in part
on their collaborative evaluation process dimensions. Dugan (in the first book) explained, “In
general, participatory work follows a continuum from limited participation to an ideal of full
control. Empowerment evaluation begins closer to the end point of participatory work” (Dugan,
1996, p. 283). Fetterman builds on this theme (in the second empowerment evaluation book) by
explaining how “empowerment is at the furthest end of the continuum in terms of extensive par-
ticipation and stakeholder controlled. Participatory evaluation . . . is second along the continuum
with the same degree of extensive participation, but in the next category of balanced control”
(Fetterman, 2001, p. 112). In addition, Patton (2005, p. 408) provided additional clarity (con-
cerning how empowerment evaluation is distinguished from other similar forms of evaluation)
by explaining how empowerment evaluation has a unique and explicit commitment to self-
determination as a goal of the approach (see also Fetterman, 2001, p. 3; Fetterman, 2004,
p. 306). Related to process use, Cousins (2005, p. 205) helped distinguish between the approaches
by noting that “the most powerful aspect of empowerment evaluation for me is its obvious com-
mitment to and power in developing among members of the program community the capacity
for self-evaluation. . . . This is a strength, I think, of all forms of collaborative inquiry, but one
that is particularly central to the empowerment evaluation process.” Although there are impor-
tant distinctions among the approaches (which have been discussed in Fetterman, 2001, and in
Fetterman & Wandersman, 2005), we continue to argue that these approaches have more in
common than the differences that distinguish them from each other (and that this is positive
and reinforcing).
In sum, we have addressed important issues raised by Cousins (2005), R. L. Miller and
Campbell (2006), and N. L. Smith (2007) in their recent critiques; however, many of the issues they
raise reflect empowerment evaluation’s past, rather than its present or future. Next we move
forward to issues confronting empowerment evaluation today.
Today
In this section on empowerment evaluation, we focus on issues that are contemporary, directed
toward current practice or scholarship, and that challenge our current thinking. The section includes the fol-
lowing topics or issues: (a) consistency in definition, (b) making the 10 principles explicit and
elaborating on relevant empowerment concepts, (c) methodological specificity, and (d) docu-
menting outcomes.
Definition
N. L. Smith (2007) has suggested that there has been a change in the definition of empow-
erment evaluation. This is inaccurate. We have not abandoned the original definition. We
have explicitly built on the existing definition in pursuit of greater conceptual clarity. As
Wandersman et al. (2005) explained,
Fetterman (2001) defined empowerment evaluation as “the use of evaluation concepts, tech-
niques, and findings to foster improvement and self-determination” (p. 3). Although this defini-
tion of empowerment evaluation has remained consistent since the origin of this approach, the
definition, theory, methods, and values have continued to evolve and become more refined over
time. (p. 140)
In 2005, nine empowerment evaluators reviewed what had been developed, built on it, and
provided the following in the book Empowerment Evaluation Principles in Practice:
Empowerment evaluation: An evaluation approach that aims to increase the probability of achieving
program success by (1) providing program stakeholders with tools for assessing the planning,
implementation, and self-evaluation of their program, and (2) mainstreaming evaluation as part of
the planning and management of the program/organization. (Wandersman et al., 2005, p. 28)
This definition has an emphasis on program success; it builds on, rather than replaces, the orig-
inal definition. It provides more of an elaboration than a substitution.
Clarifying Empowerment Evaluation Concepts and the Principles
The charge of conceptual ambiguity by R. L. Miller and Campbell (2006) was focused on the sta-
tus of empowerment evaluation more than a decade ago (as indicated in part by the citations
they referenced, as discussed earlier). Much progress has been made since that period of time,
which was not taken into consideration in those reviews, ranging from a refined definition to
specific guiding principles.
However, the question of conceptual ambiguity can be posed fruitfully at any time. Applying
the question to present developments in the field would reveal much of the work that has been
completed in empowerment evaluation and correct some current misperceptions as well. For
example, N. L. Smith (2007) suggests that self-determination as a concept appears to be missing from our
current work. We think, as Patton argued (2005), that self-determination is a defining niche of
empowerment evaluation. This is, in part, because self-determination was defined in our earli-
est work, and it is one of the first things we returned to and elaborated on in the 2005 book
(pp. 10-12). Self-determination is basic to empowerment evaluation and will continue to be a
central part of the definition of empowerment evaluation. We also enhanced the conceptual qual-
ity of empowerment evaluation by elaborating on other concepts as well, such as the terms
empowerment and community (pp. 10-12).
The most significant improvements in conceptual clarity are the empowerment evaluation
principles. Empowerment evaluation has been guided by principles since its inception.
However, many of them were implicit rather than explicit. This led to some inconsistency in
empowerment evaluation practice. This problem motivated us to make these principles explicit
in our 2005 book. The 10 principles are as follows:
1. Improvement,
2. Community ownership,
3. Inclusion,
4. Democratic participation,
5. Social justice,
6. Community knowledge,
7. Evidence-based strategies,
8. Capacity building,
9. Organizational learning, and
10. Accountability.
These principles are primarily designed to improve practice. “The principles guide every part
of empowerment evaluation, from conceptualization to implementation. The principles of
empowerment evaluation serve as a lens to focus an evaluation” (Fetterman, 2005, p. 2). The prin-
ciples should respond to Cousins’ (2005), N. L. Smith’s (2007), and other colleagues’ critiques
about conceptual clarity.
In essence, we agree with Patton (2005) that “[with] its (empowerment evaluation’s) longevity and
status established and documented[,] the question of precisely what it is becomes all the more
important” (p. 408). Therefore, we (in Fetterman & Wandersman, 2005) have worked to
(a) reiterate and refine the definition of empowerment evaluation (p. 28); (b) make the empow-
erment evaluation principles explicit (pp. 1-72); (c) provide case examples (pp. 92-122, 123-154,
155-182); (d) define high, medium, and low levels of commitment to empowerment evaluation
(pp. 55-72); and (e) suggest possible logical sequencing of the principles (pp. 210-211).
Methodological Specificity
The 1996 book provided an introductory level of methodological specificity. It highlighted
the role of taking stock, setting goals, developing strategies, and documenting progress. Today,
we have two primary methodological models with a significant degree of specificity associated
with each one of them. There is a 3-step approach and a 10-step approach. There are also a
variety of permutations to accommodate varying populations and settings. In response to
Cousins’ (2005, p. 201) criticism that there is variability in empowerment evaluation methods,
we agree. However, we think that variability is appropriate and desirable. Having only one
method and following Cousins’ dimensions (p. 189) in a uniform manner is not realistic or
desirable. Evaluation approaches need to be adapted (with quality)—not adopted by commu-
nities. The principles guiding an evaluation are more important than the specific methods used.
Nevertheless, most contemporary empowerment evaluation approaches are based on one of the
two methodological models described below. They are described in some detail in part to be
responsive to the critique aimed at empowerment evaluation’s methodological sophistication.
Three-step approach. The 3-step approach typically employs an empowerment evaluator
who facilitates empowerment evaluation exercises and helps the group to (a) establish their mis-
sion or purpose; (b) take stock or assess their current state of affairs, using a 1 (low) to 10 (high)
rating scale; and (c) plan for the future (specifying goals, strategies to achieve goals, and cred-
ible evidence). The “taking stock” step represents the group’s baseline. The “plans for the
future” step represents the group’s intervention. Traditional evaluation tools, such as surveys,
focus groups, interviews, and even treatment and control or comparison groups, are used to
determine whether the strategies (selected by the group) are working. The bottom line is: Are
the groups accomplishing their objectives and achieving desired outcomes? If the strategies are not work-
ing, then they are replaced (although the goals remain the same). Routine formative
evaluation feedback along the way allows for midcourse corrections. A second taking-stock ses-
sion is completed after the intervention has had enough time to have an impact. Then the first
taking-stock baseline ratings are compared with the second taking-stock ratings to document
change over time. This allows the group to monitor its own progress, use data to inform deci-
sion making, and foster organizational learning. Videos enhance the effort because they often
possess tremendous face validity (Anastasi, 1988; Trochim, 2006a); video summaries of some
of the projects can be found at http://homepage.mac.com/profdavidf (see Fetterman, 2001, for
details and additional case examples).
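To make the taking-stock bookkeeping concrete, the following is a minimal sketch written by us for illustration; the activities and ratings are hypothetical and not drawn from any case. It compares mean baseline ratings on the 1 (low) to 10 (high) scale with mean follow-up ratings to document change over time:

```python
# Minimal sketch of the three-step "taking stock" arithmetic (illustrative only).
# Participants rate each activity from 1 (low) to 10 (high) at baseline and
# again after the "plans for the future" intervention has had time to work.
from statistics import mean

# Hypothetical ratings: one list entry per participant, keyed by activity.
baseline = {"communication": [3, 4, 2, 5], "teaching": [6, 5, 7, 6]}
followup = {"communication": [6, 7, 5, 8], "teaching": [7, 6, 8, 7]}

for activity in baseline:
    before, after = mean(baseline[activity]), mean(followup[activity])
    # A positive change documents group progress on that activity.
    print(f"{activity}: baseline {before:.1f}, follow-up {after:.1f}, "
          f"change {after - before:+.1f}")
```

The dialogue surrounding the ratings is as important as the numbers themselves; the sketch captures only the documentation of change.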
Ten-Step GTO. A second methodological approach to empowerment evaluation is the 10-step,
results-based accountability method called Getting to Outcomes (GTO; Wandersman, Imm,
Chinman, & Kaftarian, 2000). The GTO approach asks 10 questions and helps users answer
them using relevant literature, methods, and tools. The 10 accountability questions and types of
literature to address them are as follows:
1. What are the needs and resources in your organization, school, community, or state? (needs assess-
ment; resource assessment)
2. What are the goals, target population, and desired outcomes (objectives) for your school/
community/state? (goal setting)
3. How does the intervention incorporate knowledge of science and best practices in this area?
(science and best practices)
4. How does the intervention fit with other programs already being offered? (collaboration; cultural
competence)
5. What capacities do you need to put this intervention into place with quality? (capacity building)
6. How will this intervention be carried out? (planning)
7. How will the quality of implementation be assessed? (process evaluation)
8. How well did the intervention work? (outcome and impact evaluation)
9. How will continuous quality improvement strategies be incorporated? (total quality manage-
ment; continuous quality improvement)
10. If the intervention is (or components are) successful, how will the intervention be sustained?
(sustainability and institutionalization)
This 10-step process enhances practitioners’ planning, implementation, and evaluation skills.
There is a manual with worksheets designed to address how to answer each of the 10 questions
(Chinman, Imm, & Wandersman, 2004). Although GTO has been used primarily in substance
abuse prevention, new customized GTOs have been developed for preventing underage drink-
ing (Imm, Chinman, & Wandersman, 2006) and promoting positive youth development (Fisher,
Imm, Chinman, & Wandersman, 2006), and others are in preparation. Several of these books are
free (downloadable from the Web) to encourage widespread usage.
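As one illustration of how the 10 questions can be operationalized, the sketch below, which is our own construction and not part of the published GTO manuals, treats the steps as a checklist that a program team might track:

```python
# Illustrative checklist of the 10 GTO accountability steps (our sketch, not
# an official GTO tool). A team marks the steps it has addressed so far.
GTO_STEPS = [
    "needs and resources assessment",
    "goal setting",
    "science and best practices",
    "fit with existing programs",
    "capacity building",
    "planning",
    "process evaluation",
    "outcome and impact evaluation",
    "continuous quality improvement",
    "sustainability and institutionalization",
]

def gto_progress(completed: set) -> str:
    """Summarize how many of the 10 GTO steps have been addressed."""
    pending = [f"{i}. {name}" for i, name in enumerate(GTO_STEPS, start=1)
               if i not in completed]
    return f"{10 - len(pending)}/10 steps addressed; pending: {pending}"

print(gto_progress({1, 2, 3, 6}))  # e.g., a program early in its planning cycle
```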
In addition, empowerment evaluations are using photo journaling, online surveys, virtual
conferencing formats, and creative youth self-assessments (Sabo, 2001). Methodological
knowledge and rigor have grown exponentially since empowerment evaluation was first intro-
duced to the field.
Documenting Outcomes
R. L. Miller and Campbell (2006), N. L. Smith (2007), and Cousins (2005) have stated that
there is a weak emphasis on the attainment of outcomes or results in empowerment evaluation.
This is incorrect and contradicted by the data. Outcomes are paramount in empowerment
evaluation and are even part of the empowerment evaluation language, as in Getting to
Outcomes (GTO; Fetterman, 2001, p. 118; Fetterman, 2005, p. 50; Wandersman et al., 2000).
Self-determination and capacity building require some degree of goal attainment to be per-
ceived as meaningful and credible. Outcome accountability is part of the 10th principle in
empowerment evaluation.
Some of the same authors who claim this weakness in empowerment evaluation also
state that empowerment evaluation generates outcome-oriented data. For example, accord-
ing to R. L. Miller (personal communication, March 12, 2004), empowerment evaluation
provides an innovative vehicle for helping programs to be accountable to administrators
and the public by generating process- and outcome-oriented data within an evaluation
framework that heightens an organization’s sensitivity to its responsibility to the public and
to itself. In response to the critique concerning outcomes, we present four examples of out-
comes to illustrate empowerment evaluation’s commitment to and ability to help generate
outcomes. The first one focuses on capacity outcomes, the second on standardized test score
outcomes, the third on explicit program outcomes, and the fourth on academic accredita-
tion outcomes. These four examples highlight the wide variety of outcomes associated with
empowerment evaluation.
Capacity Outcomes
Capacity outcomes are central to empowerment evaluation, as one of the main thrusts of the
approach is to build capacity. In 2002, Chinman and colleagues received funding from the
CDC (CCR921459-02, Chinman, PI) for the study “Participatory Research of an Empowerment
Evaluation System.” Chinman et al. (in press) employed a quasi-experimental design in two
community-based prevention coalitions (in Santa Barbara, CA and Columbia, SC) comparing
programs that used the GTO form of empowerment evaluation with programs that did not.
Programs were compared on their prevention capacity and program performance over time.
The GTO intervention involved distributing GTO manuals, delivering annual full-day training,
and providing on-site technical assistance to participating program staff and coalition
members. The study used several assessment strategies. Standardized ratings of program per-
formance show that the GTO process helped the program staff improve in the various preven-
tion activities known to be associated with outcomes (e.g., planning, conducting process and
outcome evaluation) more than the comparison programs. The percentage improvement on the
aggregated rating of program performance after 1 year of GTO implementation was 13% for
GTO, 7% for comparison; after 2 years it was 47% versus 8%. Individual staff members who
were involved with the coalitions prior to and following the GTO implementation were also
surveyed to assess impact on capacity at the individual level. These data showed that greater
GTO participation was associated with improvements in individual prevention capacity—or
knowledge (e.g., ease with which respondents could complete various prevention tasks), atti-
tudes (e.g., importance of evaluation), and skills (e.g., frequency of doing evaluation)—across
all the domains targeted by GTO. As a result of GTO, all of the programs either started new
ongoing program evaluations (where none had existed before) or significantly improved
their existing designs. Finally, qualitative data from coalition staff about the utility of GTO
showed that it helped them better plan, implement, and evaluate their own programs, teach-
ing them “a new language” about accountability. The data collected in the CDC grant,
although on a small number of programs, suggest that GTO builds the capacity of local prac-
titioners and helps to improve the quality of performance in planning, implementation, and
evaluation of prevention programs.
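For readers tracing the arithmetic, a percentage improvement on an aggregated rating is presumably computed along the following lines; this formula is our assumption, as the article does not spell out the computation:

$$\text{percentage improvement} = \frac{\bar{R}_{\text{follow-up}} - \bar{R}_{\text{baseline}}}{\bar{R}_{\text{baseline}}} \times 100$$

where $\bar{R}$ denotes the aggregated program performance rating at each measurement point.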
Standardized Test Score Outcomes
Empowerment evaluators worked for 3 years in rural Arkansas (the Delta) to help school
districts in academic distress. At the beginning of the intervention (fall 2001), 59% of Elaine
School District students scored below the 25th percentile on the Stanford 9 Achievement test.
By the end of the empowerment evaluation intervention (spring 2003), only 38.5% of students
scored below the 25th percentile, representing an improvement of more than 20 percentage
points. Similar gains were made in the Altheimer Unified School District. According to an
Arkansas Department of Education educational accountability official responsible for moni-
toring and assessing these districts, “Empowerment evaluation was instrumental in produc-
ing Elaine and Altheimer school district improvements, including raising student test scores.”
To further address the question of attribution and threats to internal validity, it is important to
describe the educational context. The test scores had languished or declined for more than
6 years before introducing empowerment evaluation to these school districts. During the
period in which empowerment evaluation was used, there were no competing approaches or
interventions. The fields of educational interventions were as stark as the Delta landscape,
with miles of cotton, soy, and rice fields and not much else. The history threat was largely
eliminated by a review of all past educational reform efforts in the area during that period,
based on the Arkansas State Department of Education records and individual interviews with
local school administrators. Test and instrument threats were also considered; however, the
same statewide tests were used year after year with no significant changes. To complement
our review of threats to internal validity, we documented improvements in other areas rang-
ing from discipline to parental involvement. These improvements helped create an environ-
ment conducive to learning as evidenced by increases in standardized test scores—the “coin
of the realm” in educational research and policy. (See Fetterman, 2005, pp. 116-129).
Explicit Program Outcomes: Bridging the Digital Divide
Hewlett-Packard funded a $15 million Digital Village project designed to help disenfran-
chised communities bridge the digital divide. The project included distributing laptops in the
schools and community businesses. In addition, community learning centers were established,
providing community members with access to the Internet, digital video equipment, and online
learning opportunities. One of the Digital Village communities comprised 18 American
Indian tribes in California. They called themselves the Tribal Digital Village. The community
used empowerment evaluation (the three-step approach) to collaboratively accomplish many of
its goals. One of the most notable achievements was the creation of the largest unlicensed wire-
less system in the country, according to the chair of the Federal Communications Commission.
This system helped them communicate across reservations and to the world outside the reser-
vation, including Stanford University. (See the Web videos of their efforts, which represent
another piece of data that was previously ignored or neglected in past reviews of empowerment
evaluation outcomes, at http://homepage.mac.com/profdavidf/hewlettpackard.html). This was
an explicit goal of the Tribal Digital Village. They also accomplished a number of other con-
crete outcomes using empowerment evaluation as an organizational tool (see Fetterman, 2005,
pp. 92-107 for more details).
Accreditation Outcomes
Stanford University’s School of Medicine used an empowerment evaluation approach to suc-
cessfully prepare for its accreditation site visit. Accreditation committees assess the degree of
participation and engagement in the school’s self-study. Empowerment evaluation was an effective tool
in fostering this kind of widespread and substantive participation in the process. This was
an outcome in itself. In addition, an empowerment evaluation (three-step) approach helped
improve courses, as evidenced by dramatic increases in student course ratings and faculty
assessments (see Figure 1). Similar self-reflective tools were used at the clerkship level,
enhancing faculty effectiveness in evaluating medical students during their clerkship rotations.
An additional outcome directly associated with the use of empowerment evaluation was
greater organizational clarity concerning governance. One of the programs in the School of
Medicine engaged in a cycle of reflection and action to improve educational performance. In the
middle of one exercise, the course directors had an epiphany or “ah hah” moment. The direc-
tors of the individual programs realized that they were the de facto governing body overseeing
that part of the academic program. It was the “elephant in the room” that no one spoke about
Figure 1
Increase in Student Course Ratings

(a) Before evaluation feedback: DBio 201 overall course rating (n = 142): 45% low, 27% medium, 23% high.
(b) After processing evaluation feedback: DBio 201 overall course rating (n = 85): 10% low, 25% medium, 62% high.

Note: The graphs of student evaluations of this course, from before and after evaluation findings were taken into consideration by course directors, demonstrate a dramatic increase in student ratings.
but that everyone knew was holding them back from making critical decisions. This was a crys-
tallizing moment for them, and it emerged from a dialogue in the taking-stock phase of the self-
evaluation. This is a significant outcome. It was a pivotal moment for the program because
administrative follow-through had been a serious stumbling block in terms
of long-term planning and sustainability. It was a “transformative” moment—a
common phenomenon in empowerment evaluation. The example illustrates why the language
of transformation is an important part of the empowerment evaluation process, and it responds
to R. L. Miller’s (2005, p. 317) and N. L. Smith’s (2007) concern that the language of trans-
formation is notably absent from the definition of empowerment evaluation. Empowerment
evaluation has been used in other accreditation efforts as well (Fetterman, 2001, pp. 75-85).
Tomorrow
Empowerment evaluators are learning how to more effectively combine qualitative and quan-
titative data. They are capturing the critical “ah hah” or transformative moments more system-
atically. Empowerment evaluators are also learning how to more effectively translate what they
do into policy language. In addition, empowerment evaluators are learning to build more refined
empowerment evaluation tools and systems. Several current projects illustrate where empower-
ment evaluation is heading, focusing on both community control and traditional “coin of the
realm” measures.
Tobacco Prevention Programs
The tobacco industry is spending more than $97 million a year to encourage minority youth
to use tobacco in the state of Arkansas. The Minority Initiative Sub-Recipient Grant Office
(MISRGO) at the University of Arkansas at Pine Bluff is responsible for coordinating a
statewide effort to respond to the tobacco industry’s efforts. MISRGO has awarded contracts to
community-based organizations throughout the state to help reduce tobacco consumption. An
empowerment evaluation approach has been adopted to guide this tobacco prevention effort and
to coordinate evaluation efforts throughout the state.
One of the areas of weakness identified by the group involved the absence of a systematic
data collection system (to record the number of people who quit smoking). This self-evaluation
finding was a result of the taking-stock exercise in the three-step empowerment evaluation
process. In response to this weakness, the group developed an “evaluation monitoring system”
that enables grantees to document their effectiveness by recording the number of people who
quit smoking or the number of lives saved. The grantees also translated these findings into dol-
lars saved—specifically, in terms of reducing excess medical costs for the state. They multiplied
the number of people who quit smoking by the average excess medical costs per person. The
total saved, combining the efforts of all the grantees to date, is in excess of $84 million (see
http://homepage.mac.com/profdavidf/Tobacco.htm).
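The dollars-saved translation described above is straightforward multiplication and aggregation. Here is a minimal sketch with purely illustrative figures; the actual per-person excess medical cost and grantee counts are not reported here, so these numbers are placeholders:

```python
# Illustrative sketch of the grantees' cost-savings translation: people who
# quit smoking multiplied by the average excess medical cost per smoker,
# summed across grantees. All numbers below are hypothetical placeholders.
AVG_EXCESS_MEDICAL_COST = 20_000  # assumed per-person figure, in dollars

quitters_by_grantee = {"grantee_a": 1_200, "grantee_b": 850, "grantee_c": 2_150}

total_saved = sum(n * AVG_EXCESS_MEDICAL_COST for n in quitters_by_grantee.values())
print(f"Estimated excess medical costs avoided: ${total_saved:,}")
```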
These self-assessment data have been instrumental in helping grantees monitor their effective-
ness. The collective nature of the effort has generated peer pressure to sustain it. The
data generated from this evaluation monitoring system have also been used to influence policy
decision making, including appearances before the Black legislative caucus in the state.
Grantees have also successfully shared these data with the news media to influence a concerned
citizenry. Emerging from this effort was the recognition that additional evaluation capacity
building was required throughout the state for other programs as well. This led to the intro-
duction of a bill to create the Arkansas Evaluation Center, which is designed to address this
need. Outcomes in this case example can be expressed in terms of dollars and cents as well as
increased capacity.
Multistate Prevention Efforts
The scale and scope of empowerment evaluations are continually growing. We believe that
to build capacity and reach outcomes in large-scale programs, it is increasingly necessary to
develop an empowerment evaluation system that includes tools, training, technical assistance
(TA), and quality improvement/quality assurance (QI/QA; Wandersman, 2007; see Figure 2).
These are all key ingredients of a full GTO intervention. Wandersman and colleagues are devel-
oping GTO systems that work at multiple levels. They are working with the CDC to achieve out-
comes by promoting science-based approaches.
The Promoting Science-Based Approaches to Teen Pregnancy Prevention (PSBA) project
is a 5-year, capacity-building cooperative agreement between 16 (national, regional, and state
level) grantees and the CDC. All grantees are charged with building the capacity of their own
organization to serve as a TA provider in science-based approaches to teen pregnancy pre-
vention and to build the capacity of others, particularly at the local level. Ultimately, the aim
of the project is to improve the likelihood that local prevention delivery partners will select,
implement, and evaluate a science-based approach to prevent teen pregnancy by building their
capacity to do so (Lesesne et al., 2007).
Wandersman and colleagues are also working with two state agencies and multiple counties
in New York State to promote results-based accountability. The projects also have an explicit
emphasis on outcomes. They will represent another set of test cases concerning how
large-scale empowerment evaluations might function. In the process, they will contribute to
furthering the three themes that have been emphasized in the first part of this discussion (the
Yesterday section): conceptual clarity, methodological specificity, and concrete outcomes.
Figure 2
An Empowerment Evaluation Theoretical Model

Tools + Training + TA + QI/QA + Current Level of Capacity + the Empowerment Evaluation Principles (1. Improvement; 2. Community Ownership; 3. Inclusion; 4. Democratic Participation; 5. Social Justice; 6. Community Knowledge; 7. Evidence-Based Strategies; 8. Capacity Building; 9. Organizational Learning; 10. Accountability), applied to achieve desired outcomes, = Actual Outcomes Achieved.
Next Steps
The future is always difficult to predict. However, there are some indicators that suggest
where empowerment evaluation is going, beyond being grounded in community control and
increasingly relying on traditional external measures. The two trends are associated with
research and technology.
Research
One of the benefits of continued growth and maturity in empowerment evaluation is that there is now more time to mine the data in greater depth. In Mexico, for example, faculty members at the Colegio de Postgraduados are conducting a secondary analysis of the taking-stock data, applying statistical analysis to identify patterns associated with roles, campus location, and topic interests. Instead of simply using the initial analysis to move forward, they are conducting a secondary analysis that lends itself to traditional research activities and contributes to knowledge.
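To give a concrete sense of what such a secondary analysis can look like, the following minimal sketch (written in Python) cross-tabulates taking-stock ratings by participant role and tests for an association. The file name, column names, and rating bands are our illustrative assumptions, not the Colegio de Postgraduados team's actual procedure.

import pandas as pd
from scipy.stats import chi2_contingency

# Each row is one participant's rating of one activity during taking stock.
# The file and column names below are hypothetical.
df = pd.read_csv("taking_stock.csv")  # columns: role, campus, activity, rating (1-10)

# Bin the 1-10 ratings into low/medium/high bands for a contingency analysis.
df["rating_band"] = pd.cut(df["rating"], bins=[0, 4, 7, 10],
                           labels=["low", "medium", "high"])

# Cross-tabulate rating bands by role (e.g., faculty, staff, student).
table = pd.crosstab(df["role"], df["rating_band"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

Swapping the grouping column (e.g., campus or topic interest in place of role) extends the same analysis to the other patterns mentioned above.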
Similarly, empowerment evaluations have become routinized enough to invite meta-
evaluations. Third-party evaluators are currently involved in evaluating ongoing empowerment
evaluations. For example, one of the RAND studies of empowerment evaluation tobacco-
prevention work suggested that the program has been effective in reducing tobacco consumption
in one of the most difficult regions in the state (Farley et al., 2004). Empowerment evaluation
has reached a stage in which a more distanced and reflective stance can be adopted.12
Technology
Empowerment evaluators have long realized the benefits of the Internet, ranging from Web pages and listservs to videoconferencing and online surveys (Fetterman, 2001, pp. 129-140). Technology is, in part, responsible for the exponential growth of the approach in a relatively short period of time, and this relationship appears to be only beginning to blossom.
The American Evaluation Association Collaborative, Participatory, and Empowerment
Evaluation topical interest group (TIG) recently created an interactive blog13 to enhance evalua-
tive dialogue in the field. In addition, a team of empowerment evaluators at Stanford University's School of Medicine is using Writely, Google's interactive, collaborative writing software, to draft institutional review board submissions, evaluation plans, reports, and articles together on the Web. The Arkansas team of empowerment evaluators is using an interactive, collaborative Google spreadsheet to manage incoming data on the number of people who quit smoking and how this translates into dollars saved in excess medical costs.
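The arithmetic behind that translation is simple enough to sketch. In the illustrative Python snippet below, the per-smoker excess-cost figure and the county counts are placeholder assumptions, not the Arkansas program's actual parameters.

# Translate cessation counts into estimated savings in excess medical costs.
# The cost figure and county data are hypothetical placeholders.
EXCESS_ANNUAL_COST_PER_SMOKER = 1_600  # assumed excess medical cost (USD/year)

quitters_by_county = {"County A": 212, "County B": 148, "County C": 97}

savings = {county: n * EXCESS_ANNUAL_COST_PER_SMOKER
           for county, n in quitters_by_county.items()}

for county, amount in savings.items():
    print(f"{county}: {amount:,} USD in avoided annual excess medical costs")
print(f"Total estimated annual savings: {sum(savings.values()):,} USD")

A shared spreadsheet performs the same multiplication; a scripted version simply makes the cost assumption explicit and easy to update as new cessation counts arrive.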
In addition, Zhang and Wandersman and colleagues have a technology transfer grant
(STTR) from the National Institute on Alcohol Abuse and Alcoholism to develop an interac-
tive, Web-based GTO system and to research its utilization. In this grant, the iGTO system is
being rolled out in two states in more than 30 coalitions with 50-plus programs. The system
will be used at multiple levels: program, coalition, and state. Data will be gathered and used at each level for program improvement and accountability purposes and will travel up to higher levels to promote appropriate technical assistance as well as quality assurance. iGTO is more than a hierarchical reporting system: it builds in guidance for how to do each of the 10 steps and then helps users answer each step with their own data, which helps them fulfill results-based accountability.
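To illustrate the kind of upward data flow this implies, here is a minimal Python sketch of a program-to-coalition-to-state rollup. The class names, the step-count field, and the flagging rule are our assumptions for illustration, not iGTO's actual design.

from dataclasses import dataclass, field

@dataclass
class Program:
    name: str
    gto_steps_completed: int  # how many of the 10 GTO steps are answered with data

@dataclass
class Coalition:
    name: str
    programs: list = field(default_factory=list)

    def needs_ta(self, threshold: int = 6) -> list:
        # Flag programs whose GTO progress suggests targeted technical assistance.
        return [p.name for p in self.programs if p.gto_steps_completed < threshold]

def state_summary(coalitions: list) -> dict:
    # Aggregate upward so the state level can direct TA and quality assurance.
    return {c.name: c.needs_ta() for c in coalitions}

coalition = Coalition("Coalition A", [Program("Program 1", 4), Program("Program 2", 9)])
print(state_summary([coalition]))  # {'Coalition A': ['Program 1']}

A production system would implement such a rollup in the Web application's database rather than in memory; the point here is only that data entered at the program level can inform TA and quality assurance decisions at higher levels.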
The immediate future promises to build on this type of dynamic cyber-tradition in empower-
ment evaluation. This Web-based engagement has produced innovations, created new opportunities
for collaboration, and taught numerous lessons about process use and knowledge use that extend far
beyond the walls of empowerment evaluation. The lessons learned from this exchange are applica-
ble to evaluation as a whole as the entire field grows and evolves in the digital age.
Conclusion
Empowerment evaluation has captured the imagination of many evaluators, program staff, and
program participants who are committed to achieving outcomes on important educational, health,
and human service concerns. We know of no other evaluation approach that is currently being
held to the same standard to prove its effectiveness with demonstrable outcomes. However, we
welcome the challenge. We have made advances in conceptual clarity, methodological specificity
and rigor, and documentation of outcomes. The seeds of empowerment evaluation have been
planted in both the community and the field of evaluation, and they have taken hold. Now it is
time to cultivate the field and help the approach grow. Admittedly, there is much work ahead.
However, as Thomas Edison said, “Opportunity is missed by most people because it is dressed
in overalls, and looks like work.” We, on the contrary, are eager to seize this opportunity to con-
tinue to work with communities, as we all expand our understanding and insight into empower-
ment evaluation. We also eagerly await a review of the next 10 years by the same community of
evaluators, such as Alkin, Altman, Brown, Campbell, Christie, Cousins, R. L. Miller, Patton,
Sechrest, Scriven, Smith, Stufflebeam, and Wild, as well as a host of new stars shining over the
intellectual landscape of evaluation.
Notes
1. Nearly a fourth of the AEA membership specified an affiliation with the Collaborative, Participatory, and Empowerment Evaluation TIG (S. Kistler, personal communication, October 4, 2006: "Approximately 1,133/4,999 = 23% in your TIG").
2. There are parallels with the introduction of ethnography into evaluation. Smaller-scale projects allowed for
greater flexibility. However, the larger the project was, the more structure was required to facilitate meaningful data
collection and analysis (A. G. Smith & Robbins, 1984).
3. Carol Weiss asked Fetterman to help her collect relevant evaluation reports to inform her revision of her Evaluation textbook while she was at the Center for Advanced Study in the Behavioral Sciences at Stanford University. Weiss argued, and Fetterman agreed, that this is one of the most credible forms of data for determining what evaluators do in the field and should be at least a minimum criterion or basic standard for inquiry in the field of evaluation.
4. There are exceptions in which an empowerment evaluator works collaboratively with clients to advocate for
continued support, such as when a group is not organized or developmentally prepared to do so, and their prospects
for continued funding are dim but the data overwhelmingly support continued operations.
5. Cousins (2005, p. 203) distinguished between case examples and case studies, suggesting that most examples provided are case examples rather than studies. However, the authors have been immersed in the study of these programs, conducting virtual ethnographies (Fetterman, 1998). A more detailed study is required when the empowerment evaluator is not an integral member of the program and its day-to-day operations. In addition, these authors have generated brief summaries of the case examples because these are more appropriate and digestible than full-length case studies or ethnographies given the task at hand.
6. See Chinman et al. (in press) concerning steps taken to rule out alternative explanations that might pose a threat
to internal validity, including the implemented research design. See Trochim (2006b) concerning single group threats:
http://www.socialresearchmethods.net/kb/intsing.php.
7. A second official corroborated this during the same conference call (Wilson, 2004).
8. Maturation, testing, instrumentation, and mortality threats were also considered.
9. This is an unlicensed system because the American Indians are a sovereign nation and not subject to the local
licensing requirements.
10. For example, course ratings improved dramatically after collaborative action was taken to assess and revise problematic courses. One preclinical course received 45% low and 23% high ratings (based on student assessments) before evaluation feedback. However, after the student data were presented and considered, faculty
and students created a plan of action to revise the curriculum. Activities ranged from faculty attending one another's lectures to reduce redundancy to students assisting in revising the syllabus. After the evaluation feedback was received and the plan of action implemented, the same course received 10% low and 62% high ratings. These are quantifiable outcomes of the evaluative process.
11. The analysis of the differential attrition suggested that it was not a threat to internal validity. In addition, the
same patterns were documented with a number of other courses during the same period using the same approach.
12. R. L. Miller and Campbell’s analysis also shows evidence of this.
13. The empowerment evaluation blog is at: http://eevaluation.blogspot.com.
References
Alkin, M., & Christie, C. (2004). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’
views and influences (pp. 381-392). Thousand Oaks, CA: Sage.
Altman, D. (1997). Review of the book Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Community Psychologist, 30(4), 16-17. Retrieved from http://www.stanford.edu/~davidf/altmanbkreview.html
Anastasi, A. (1988). Psychological testing. New York: Macmillan.
Andrews, A. (2004). Start at the end: Empowerment evaluation product planning. Evaluation and Program Planning,
27, 275-285.
Brown, J. (1997). Review of the book Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Health Education & Behavior, 24(3), 388-391. Retrieved from http://www.stanford.edu/~davidf/brown.html
Butterworth, I. (2004, October 2-3). Healthy cities evaluation: Tracking processes and outcomes across the social system. Paper presented at the International Conference of Health, Tainan, Taiwan.
Chelimsky, E. (1997). The coming transformation in evaluation. In E. Chelimsky & W. Shadish (Eds.), Evaluation
for the 21st century: A handbook. Thousand Oaks, CA: Sage.
Chinman, M. (2005). Building community capacity to conduct effective violence prevention (CE05-012; CDC
Principal Investigator; 9/1/05-8/31/08). Atlanta, GA: Centers for Disease Control and Prevention.
Chinman, M., Hunter, S. B., Ebener, P., Paddock, S., Stillman, L., Imm, P., & Wandersman, A. (in press). The Getting
To Outcomes demonstration and evaluation: An illustration of the prevention support system. American Journal
of Community Psychology.
Chinman, M., Imm, P., & Wandersman, A. (2004). Getting To Outcomes 2004: Promoting accountability through
methods and tools for planning, implementation, and evaluation (TR-TR101). Santa Monica, CA: RAND.
Retrieved from http://www.rand.org/publications/TR/TR101/ or (in Spanish) http://www.rand.org/pubs/
technical_reports/TR101.1
Chinman, M., Hannah, G., Wandersman, A., Ebener, P., Hunter, S., Imm, P., et al. (2005). Developing a community science research agenda for building community capacity for effective preventive interventions. American Journal of Community Psychology, 35(3-4), 143-157.
Cousins, J. B. (2005). Will the real empowerment evaluation please stand up? A critical friend perspective. In
D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice (pp. 183-208).
New York: Guilford.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5-23.
Dugan, M. (1996). Participatory and empowerment evaluation: Lessons learned in training and technical assistance.
In D. M. Fetterman, S. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for
self-assessment and accountability (pp. 277-303). Thousand Oaks, CA: Sage.
Farley, D., Chinman, M., D’Amico, E., Dausey, D., Engberg, J., Hunter, S., et al. (2004). Evaluation of the Arkansas
Tobacco Settlement Program: Progress from program inception to 2004. Santa Monica, CA: RAND. Retrieved
from http://www.rand.org
Fetterman, D. M. (1994). Empowerment evaluation. 1993 Presidential address. Evaluation Practice, 15(1), 1-15.
Fetterman, D. M. (1997a). Empowerment evaluation: A response to Patton and Scriven. Evaluation Practice, 18(3), 253-266. Retrieved from http://www.stanford.edu/~davidf/pattonscriven.html
Fetterman, D. M. (1997b). Response to L. Sechrest review of Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Environment and Behavior, 29(3), 427-436. Retrieved from http://www.stanford.edu/~davidf/fettermansechrest.html
Fetterman, D. M. (1998). Ethnography: Step by step. Thousand Oaks, CA: Sage.
Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Fetterman, D. M. (2004). Branching out or standing on a limb? Looking at our roots for insight. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 304-318). Thousand Oaks, CA: Sage.
Fetterman, D. M. (2005). Empowerment evaluation: From the digital divide to academic distress. In D. M. Fetterman
& A. Wandersman (Eds.), Empowerment evaluation principles in practice (pp. 92-122). New York: Guilford.
Fetterman, D. M., Kaftarian, S., & Wandersman, A. (Eds.). (1996). Empowerment evaluation: Knowledge and tools
for self-assessment and accountability. Thousand Oaks, CA: Sage.
Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New York:
Guilford.
Fisher, D., Imm, P., Chinman, M., & Wandersman, A. (2006). Getting To Outcomes with developmental assets: Ten steps to measuring success in youth programs and communities. Minneapolis, MN: Search.
Gilham, S., Lucas, W., & Sivewright, D. (1997). The impact of drug education and prevention programs. Evaluation Review, 21(5), 589-613.
Horsch, K., Little, P., Smith, J., Goodyear, L., & Harris, E. (2002, February). Youth involvement in evaluation and
research. Harvard Family Research Project, No. 1, pp. 1-5.
Imm, P., Chinman, M., Wandersman, A., Rosenbloom, D., Guckenburg, S., & Leis, R. (2006). Preventing underage drinking: Using Getting To Outcomes with the SAMHSA strategic prevention framework to achieve results. Santa Monica, CA: RAND Corporation.
Lerner, R., Fisher, C., & Weinberg, R. (2000). Toward a science for and of the people: Promoting civil society through the application of developmental science. Child Development, 71(1), 11-20.
Lesesne, C. A., Lewis, K. M., Wandersman, A., Duffy, J., Green, D., & White, C. (2007). Promoting science-based approaches to teen pregnancy prevention: Engaging the three systems of the Interactive Systems Framework. Manuscript in preparation.
Lewis, R., Paine-Andrews, A., Fisher, J., Custard, C., Fleming-Randle, M., & Fawcett, S. (1999, July). Reducing the risk for adolescent pregnancy: Evaluation of a school/community partnership in a midwestern military community. Family & Community Health, 22(2), 16-30.
Martin, J., Ribisl, K., Jefferson, D., & Houston, A. (2001, September/October). Teen empowerment movement to prevent tobacco use by North Carolina's youth. North Carolina Medical Journal, 62(5).
McQuiston, T. (2000). Empowerment evaluation of worker safety and health education programs. American Journal of Industrial Medicine, 38(5), 584-597.
Miller, R. L. (2005). Review: Empowerment Evaluation Principles in Practice, edited by David Fetterman and Abraham Wandersman. Evaluation and Program Planning, 28, 317-319.
Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American Journal of Evaluation, 27(3), 296-319.
Miller, W., & Lennie, J. (2005). Empowerment evaluation: A practical method for evaluating a national school breakfast program. Evaluation Journal of Australasia, 5(2), 18-26.
Patton, M. (1997). Toward distinguishing empowerment evaluation and placing it in a larger context. Evaluation Practice, 18(2), 147-163. Retrieved from http://www.stanford.edu/~davidf/patton.html
Patton, M. (2005). Toward distinguishing empowerment evaluation and placing it in a larger context: Take two. American Journal of Evaluation, 26, 408-414.
Reininger, B., Vincent, M., Griffin, S., Valois, R., Taylor, D., Parra-Medina, D., et al. (2003). Evaluation of statewide teen pregnancy prevention initiatives: Challenges, methods, and lessons learned. Health Promotion Practice, 4(3), 323-335.
Richards-Schuster, K. (2003). Youth participation in community evaluation research. American Journal of Evaluation,
24(1), 21-33.
Sabo, K. (Ed.). (2001). New directions in evaluation: Special edition on youth involvement in evaluation. San
Francisco: Jossey-Bass.
Sabo, K. (2003). Youth participatory evaluation: A field in the making. New Directions for Evaluation, 98, 33-45.
Sanstad, K., Stall, R., Goldstein, E., Everett, W., & Brousseau, R. (1999). Collaborative Community Research Consortium: A model for HIV prevention. Health Education & Behavior, 26(2), 171-184.
Scriven, M. (1997). Empowerment evaluation examined. Evaluation Practice, 18(2), 165-175. Retrieved from http://www.stanford.edu/~davidf/scriven.html
Scriven, M. (2005). Review of the book: Empowerment Evaluation Principles in Practice. American Journal of Evaluation, 26(3), 415-417.
Sechrest, L. (1997). Review of the book Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Environment and Behavior, 29(3), 422-426.
Secret, M., Jordan, A., & Ford, J. (1999). Empowerment evaluation as a social work strategy. Health & Social Work,
24(2), 120-128.
Smith, A. G., & Robbins, A. E. (1984). Multimethod policy research: A case study of structure and flexibility. In D. M. Fetterman (Ed.), Ethnography in educational evaluation (pp. 115-132). Beverly Hills, CA: Sage.
Smith, N. L. (Ed.). (1981). Metaphors for evaluation: Sources of new methods. Beverly Hills, CA: Sage.
Smith, N. L. (2007). Empowerment evaluation as evaluation ideology. American Journal of Evaluation, 28(2), 169-178.
Stufflebeam, D. (1994). Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go. Evaluation Practice, 15(3), 321-338.
Trochim, W. (2006a). Face validity. Retrieved from the Center for Social Research Methods Web Site: http://www
.socialresearchmethods.net/kb/measval.htm
Trochim, W. (2006b). Single group threats. Retrieved from the Center for Social Research Methods Web Site:
http://www.socialresearchmethods.net/kb/intsing.php
Wandersman, A. (2007, February). Science, evaluation, and accountability: Systems approaches to building capacity
and Getting To Outcomes in practice. Paper presented to Centers for Disease Control and Prevention, Atlanta, GA.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting To Outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23, 389-395.
Wandersman, A., & Snell-Johns, J. (2005). Empowerment evaluation: Clarity, dialogue and growth. American Journal of Evaluation, 26(3), 421-428.
Wandersman, A., Snell-Johns, J., Lentz, B., Fetterman, D., Keener, D. C., Livet, M., et al. (2005). The principles of
empowerment evaluation. In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in
practice (pp. 27-41). New York: Guilford.
Wild, T. (1997). Review of Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Canadian Journal of Program Evaluation, 11(2), 170-172. Retrieved from http://www.stanford.edu/~davidf/wild.html
Wilson, W. (2004). Introduction: Indigenous knowledge recovery is indigenous empowerment. The American Indian Quarterly, 28(3-4), 359-372.
Zimmerman, K., & Erbstein, N. (1999). Promising practices: Youth empowerment evaluation. Evaluation Exchange, 5(1).