Effectiveness of Training in Organizations:
A Meta-Analysis of Design and Evaluation Features
Winfred Arthur Jr.
Texas A&M University

Winston Bennett Jr.
Air Force Research Laboratory

Pamela S. Edens and Suzanne T. Bell
Texas A&M University
The authors used meta-analytic procedures to examine the relationship between specified training design
and evaluation features and the effectiveness of training in organizations. Results of the meta-analysis
revealed training effectiveness sample-weighted mean ds of 0.60 (k = 15, N = 936) for reaction criteria,
0.63 (k = 234, N = 15,014) for learning criteria, 0.62 (k = 122, N = 15,627) for behavioral criteria, and
0.62 (k = 26, N = 1,748) for results criteria. These results suggest a medium to large effect size for
organizational training. In addition, the training method used, the skill or task characteristic trained, and
the choice of evaluation criteria were related to the effectiveness of training programs. Limitations of the
study along with suggestions for future research are discussed.
The continued need for individual and organizational development can be traced to numerous demands, including maintaining superiority in the marketplace, enhancing employee skills and knowledge, and increasing productivity. Training is one of the most pervasive methods for enhancing the productivity of individuals and communicating organizational goals to new personnel. In 2000, U.S. organizations with 100 or more employees budgeted $54 billion for formal training (“Industry Report,” 2000). Given the potential impact of training on organizations and the costs associated with developing and implementing it, both researchers and practitioners need a better understanding of the relationship between design and evaluation features and the effectiveness of training and development efforts.
Meta-analysis quantitatively aggregates the results of primary studies to arrive at an overall conclusion or summary across these studies. In addition, meta-analysis makes it possible to assess relationships not investigated in the original primary studies. These, among others (see Arthur, Bennett, & Huffcutt, 2001), are some of the advantages of meta-analysis over narrative reviews.
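To make this aggregation concrete, the sketch below shows how a sample-size-weighted mean effect size of the kind reported in the abstract (e.g., d = 0.62 for behavioral criteria) can be computed from primary-study ds and sample sizes. The function name and the numerical values are hypothetical and serve only to illustrate the general weighting idea; they are not the exact procedures or data used in the present meta-analysis.

# Illustrative only: sample-size-weighted mean of primary-study effect sizes.
# The values below are made up and are not drawn from the studies analyzed here.

def sample_weighted_mean_d(ds, ns):
    """Return the sample-size-weighted mean of effect sizes ds,
    where ns holds the corresponding primary-study sample sizes."""
    if not ds or len(ds) != len(ns):
        raise ValueError("ds and ns must be non-empty and of equal length")
    total_n = sum(ns)
    # Each study's d is weighted by its sample size, so larger studies
    # contribute proportionally more to the overall estimate.
    return sum(d * n for d, n in zip(ds, ns)) / total_n

# Hypothetical example with k = 3 studies and N = 290 trainees in total.
print(round(sample_weighted_mean_d([0.45, 0.70, 0.62], [80, 120, 90]), 2))  # 0.61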
Although there have been a multitude of meta-analyses in other domains of industrial/organizational psychology (e.g., cognitive ability, employment interviews, assessment centers, and employment-related personality testing) that now allow researchers to make broad summary statements about observable effects and relationships in these domains, summaries of the training effectiveness literature appear to be limited to the periodic narrative Annual Reviews. A notable exception is Burke and Day (1986), who, however, limited their meta-analysis to the effectiveness of managerial training only.
Consequently, the goal of the present article is to address this gap in the training effectiveness literature by conducting a meta-analysis of the relationship between specified design and evaluation features and the effectiveness of training in organizations. We accomplish this goal by first identifying design and evaluation features related to the effectiveness of organizational training programs and interventions, focusing specifically on those features over which practitioners and researchers have a reasonable degree of control. We then discuss our use of meta-analytic procedures to quantify the effect of each feature and conclude with a discussion of the implications of our findings for both practitioners and researchers.
Overview of Design and Evaluation Features Related to the Effectiveness of Training
Over the past 30 years, there have been six cumulative reviews of the training and development literature (Campbell, 1971; Goldstein, 1980; Latham, 1988; Salas & Cannon-Bowers, 2001; Tannenbaum & Yukl, 1992; Wexley, 1984). On the basis of these and other pertinent literature, we identified several design and evaluation features that are related to the effectiveness of training and development programs. However, the scope of the present article is limited to those features over which trainers and researchers have a reasonable degree of control. Specifically, we focus on (a) the type of evaluation criteria, (b) the implementation of training needs assessment, (c) the skill or task characteristics trained, and
Winfred Arthur Jr., Pamela S. Edens, and Suzanne T. Bell, Department of Psychology, Texas A&M University; Winston Bennett Jr., Air Force Research Laboratory, Warfighter Training Research Division, Mesa, Arizona.

This research is based in part on Winston Bennett Jr.'s doctoral dissertation, completed in 1995 at Texas A&M University and directed by Winfred Arthur Jr.

Correspondence concerning this article should be addressed to Winfred Arthur Jr., Department of Psychology, Texas A&M University, College Station, Texas 77843-4235. E-mail: wea@psyc.tamu.edu
Journal of Applied Psychology, 2003, Vol. 88, No. 2, 234–245. Copyright 2003 by the American Psychological Association, Inc. DOI: 10.1037/0021-9010.88.2.234