The Rise and Fall of Job Analysis and the Future of Work Analysis
Juan I. Sanchez¹ and Edward L. Levine²
¹Department of Management and International Business, Florida International University, Miami, Florida 33199; email: sanchezj@fiu.edu
²Psychology Department, University of South Florida, Tampa, Florida 33620; email: elevine@mail.usf.edu
Annu. Rev. Psychol. 2012. 63:397–425
First published online as a Review in Advance on September 28, 2011
The Annual Review of Psychology is online at psych.annualreviews.org
This article's doi: 10.1146/annurev-psych-120710-100401
Copyright © 2012 by Annual Reviews. All rights reserved
0066-4308/12/0110-0397$20.00
Keywords
occupations, job profile, selection, validity, KSAO, competencies
Abstract
This review begins by contrasting the importance ascribed to the study
of occupational requirements observed in the early twentieth-century
beginnings of industrial-organizational psychology with the diminish-
ing numbers of job analysis articles appearing in top journals in recent
times. To highlight the many pending questions associated with the
job-analytic needs of today’s organizations that demand further inquiry,
research on the three primary types of job analysis data, namely work
activities, worker attributes, and work context, is reviewed. Research
on competencies is also reviewed along with the goals of a potential
research agenda for the emerging trend of competency modeling. The
cross-fertilization of job analysis research with research from other do-
mains such as the meaning of work, job design, job crafting, strategic
change, and interactional psychology is proposed as a means of respond-
ing to the demands of today’s organizations through new forms of work
analysis.
Job analysis: the process through which one gains an understanding of the activities, goals, and requirements demanded by a work assignment
HR: human resources
P-E fit: person-environment fit
Contents
INTRODUCTION
THE RISE AND FALL OF JOB ANALYSIS
THE OBJECT OF STUDY IN JOB ANALYSIS
RESEARCH ON WORK ACTIVITY INFORMATION
  Reliability Studies
  Carelessness Studies
  Validity Studies
  Competencies
RESEARCH ON WORKER ATTRIBUTE INFORMATION
  Reliability Studies
  Validity Studies
RESEARCH ON WORK CONTEXT INFORMATION
CONCLUSIONS AND FUTURE TRENDS
INTRODUCTION
Job analysis constitutes the preceding step of
every application of psychology to human re-
sources (HR) management including, but not
limited to, the development of selection, train-
ing, performance evaluation, job design, de-
ployment, and compensation systems (Brannick
et al. 2007, Gael et al. 1988, Harvey 1991,
Levine 1983). Because it serves as a founda-
tion of so many applications, one would as-
sume that job analysis research, much like re-
search on other areas of applied psychology
such as selection that has had a long history
of coverage in the Annual Reviews (e.g., from
Taylor & Nevis 1961 to Sackett & Lievens
2008), would have been the object of periodic
Annual Review of Psychology articles. Ours is,
however, the very first Annual Reviews chap-
ter ever dedicated to job-analytic research,
notwithstanding the brief coverage of selected
developments in job-analytic research included
in prior syntheses of the selection literature
(e.g., Borman et al. 1997, p. 301; Hough &
Oswald 2000, p. 632; Landy et al. 1994, p. 266;
Sackett & Lievens 2008, p. 429).
This relatively sparse coverage of job
analysis research is startling in light of the
principle of person-environment fit (P-E
fit), which underlies most HR management
applications of psychology since early pioneers
began to wonder how to best fit individuals to
occupations and vice versa (Münsterberg 1913,
Parsons 1909). One would argue that a success-
ful P-E match depends on the quality of the
study of both sides of this equation, the E side
and the interaction between P and E being core
elements in job analysis. However, the purpose
of our review is not to fill this void by providing
an exhaustive account of job analysis research
to date, because such monographs are already
available elsewhere (Brannick et al. 2007;
Harvey 1991; Morgeson & Dierdorff 2011;
Pearlman & Sanchez 2010; Sanchez & Levine
1999, 2001), as well as accounts of the history
of job analysis (Mitchell & Driskill 1996,
Primoff & Fine 1988, Wilson 2007). Instead,
we were inspired by calls to adapt job analysis
practice and research to the changing nature of
work (Sanchez 1994, 2000; Sanchez & Levine
1999; Schneider & Konz 1989; Siddique 2004;
Singh 2008), as well as by recent observations
that job analysis research is not keeping up
with the staffing practices demanded by today’s
dynamic and diverse workplaces (Morgeson
& Dierdorff 2011, Sackett & Laczo 2003).
The inability of traditional job analysis to
answer the demands of today’s organizations
is illustrated by the warm reception of the
proposal to rename the field “work analysis”
(Sanchez 1994; Sanchez & Levine 1999, 2001),
a label that best reflects the boundaryless
nature of the evolving roles that individuals
play within organizations (Ilgen & Hollenbeck
1991, Morgeson & Dierdorff 2011). As a
result, we aim to identify not only the trends
in the evolution of job analysis research that
account for current thinking in the domain, but
also those that represent promising avenues
by which the job analysis domain may catch
up with the needs of today’s organizations.
With this purpose in mind, we not only culled
the job analysis literature, but also borrowed
insights from research in a number of related
domains (e.g., the experience of work, work
stress) that, through cross-fertilization, may
stimulate the kind of innovative job analysis
research demanded by today’s world of work.
In fact, an overarching conclusion of our
review is that we must thoroughly revise the
core assumptions that have dominated the job
analysis domain in the face of the magnitude
of the transformations that have taken place in
the world of work over recent decades.
The review is organized as follows. First,
we contrast our view that job analysis research
has lost ground in recent times with the cen-
tral role that job analysis was accorded in the
beginnings of the field of industrial and organi-
zational psychology. Next, we review the debate
concerning what the appropriate object of study
should be in job analysis in the context of the
various types of job-analytic data, namely work
activities, worker attributes, and work context.
We then proceed to review research concerning
these major types of data, emphasizing the latest
research trends such as research on competen-
cies. Because job-analytic research has largely
focused on the quality of job-analytic data, we
also group research around the primary criteria
by which data have been evaluated. Specifically,
we distinguish among evaluations that have fo-
cused on the reliability, the validity, and the
consequences (i.e., the inferences drawn from
job-analytic data and the rules employed to
draw them). Finally, we offer a set of conclu-
sions and suggestions regarding the reposition-
ing of job analysis research.
An important caveat about the scope of our
literature review is in order. The wide variety
of job analysis applications has led to clearly
separated streams of literature such as research
on human factors and engineering psychology
(e.g., cognitive task analysis; Schraagen et al.
2000). This line of research has been covered in
prior Annual Reviews articles (e.g., Carroll 1997,
Proctor & Vu 2010). Related applications of job
analysis in the study of training needs analy-
sis and in the determination of job worth have
also been covered in former Annual Reviews
articles (e.g., Aguinis & Kraiger 2009 and Eng-
land & Dunn 1988, respectively). Thus, our re-
view does not delve into these domain-specific
applications, even though the research reviewed
here has obvious implications for them.
Moreover, instead of dedicating a separate
section to the Occupational Information Net-
work (O∗NET), which was developed by the
U.S. Department of Labor (Peterson et al.
1999), we interspersed O∗NET-related re-
search within those sections where we felt it fit
best throughout our review.
THE RISE AND FALL
OF JOB ANALYSIS
The reduced space dedicated to job analysis in
recent reviews of the selection literature men-
tioned earlier is justifiable in light of Morgeson
& Dierdorff’s (2011) compilation of job analy-
sis journal articles published since 1960. They
found that, even though the volume of job-
analytic research has not decreased in the past
two decades, the proportion of job analysis ar-
ticles published in the top journals in industrial
and organizational psychology and HR man-
agement has decreased considerably from an
all-time high in the 1960–1979 period, when
approximately 77% of the total of job analysis
articles published appeared in a list of seven top
outlets, to just 27% of the total of job analy-
sis articles published since 2000. This decline
is dramatically illustrated by the counts of arti-
cles published in the Journal of Applied Psychology
(JAP) and in Personnel Psychology (PP) provided
by Cascio & Aguinis (2008), from an all-time
peak of 22 articles dedicated to job analysis in
the 1978–1982 period to just four in the 2003–
2007 period. The declining rate of job analysis
publications contrasts sharply with the steady
flow of articles concerned with predictors of
performance published in JAP and PP (Cascio
& Aguinis 2008).
Accounts of early research in personnel se-
lection in the first part of the twentieth century,
however, suggest a better balance between the
spread of relative interest in the two sides of the
P-E equation than that observed in recent times
(Salgado et al. 2010). For instance, the German
psychologist Stern (1911) developed “psychog-
raphy” to compare an individual profile to the
profile of the attributes presumably demanded
by an occupation (Lamiell 2000, Stern 1934).
A partial English translation of the very first
structured job analysis questionnaire created
by his German colleague Otto Lipmann was
published in the Monthly Review of the U.S. Bureau
of Labor Statistics (1918, pp. 131–133).
A similarly balanced P-E emphasis seemed
to have dominated pre-WWII occupational
research in the United States, where Viteles
(1923) was an early adopter of Stern’s psycho-
graphic methods. Even the U.S. Department
of Labor’s Division of Standards and Research
was organized in two sections dedicated to
worker and job analysis, respectively (Otis
2009, Primoff & Fine 1988, Shartle 1959).
The reduced status of job analysis research
in recent times, however, is not due to a
lack of important, pending research develop-
ments that respond to the emerging HR trends
(e.g., personality-oriented work analysis, team
and cognitive task analysis, and strategic com-
petency modeling), which have been advo-
cated elsewhere (Morgeson & Dierdorff 2011;
Sackett & Laczo 2003; Sanchez 1994; Sanchez
& Levine 1999, 2009; Schneider & Konz 1989;
Siddique 2004; Singh 2008). In sections to fol-
low, we not only identify gaps but also uncover
insights from related domains to stimulate re-
search of the high caliber sought by top out-
lets, hopefully taking a step toward remediat-
ing the absence of job-analytic research that
answers the most pressing HR management
questions while advancing scientific knowledge
across domains in which job analysis plays a
role.
THE OBJECT OF STUDY
IN JOB ANALYSIS
Harvey (1991, p. 73) and Harvey & Wil-
son (2000) took the stance that job analysis
should be concerned solely with “objective” or
“verifiable” aspects of jobs, such as job be-
haviors and working conditions, and should
exclude inferences concerning job speci-
fications or human attributes required for
performance. By contrast, Sanchez & Levine
(2001) argued that deriving the worker char-
acteristics required for job performance is
an intrinsic component of job analysis (e.g.,
Primoff 1975), opining that the formulation of
worker attributes is what makes job analysis a
truly psychological endeavor. An examination
of selection texts suggests that the derivation
of worker attributes or job specifications tends
to be included under the rubric of job analysis
(Gatewood et al. 2008, Guion & Highhouse
2006, Heneman & Judge 2009, Ployhart et al.
2006). Therefore, we review research on not
only observables such as work behavior, but also
construals such as human attributes thought to
be required for successful performance.
The distinction between two broadly
defined kinds of job-analytic data, namely tasks
and the characteristics or attributes of people
performing such tasks, is widely accepted
(Sackett & Laczo 2003; Sanchez & Levine
1999, p. 56). We also review a third but equally
important type of job analysis data concerning
the environment or context in which work ac-
tivities are performed, including the situational
opportunities and constraints that influence
behavior (Meyer et al. 2010). These three major
objects of job-analytic study (i.e., work behav-
ior, worker attributes, and context) resemble
the building blocks of successful job analysis
proposed by Fine & Cronshaw (1999, p. 21).
In the next three sections, research on each one
of these building blocks is grouped according
to the criteria along which the job-analytic
data were evaluated, beginning with reliability
and validity. These psychometric properties
are important because they influence the
inferences that such data are meant to inform
(Dierdorff & Wilson 2003, McCormick 1976,
Morgeson & Campion 1997). However, we
also include a third class of studies concerned
with the type of consequence-oriented criteria,
such as the inferences derived from job-analytic
data and the rules governing the making of
such inferences, that Sanchez & Levine (2000)
advocated for the evaluation of job analysis.
We believe that the focus on inferences and
on the rules by which they are made is criti-
cal to propel job analysis research beyond its
current stalemate. Partly, this stalemate might
have been fueled by the support obtained for
the validity of general mental ability (GMA)
tests for most jobs in most settings (see Le et al.
2007 for a summary of these findings), which
has fed the conclusion that a detailed job analy-
sis may constitute an unnecessary expense when
the purpose is to ascertain the generalizability
of GMA tests (Pearlman et al. 1980, p. 376;
Schmidt et al. 1981). An unwarranted gener-
alization drawn from this stream of research is
that there is not much return on investment in a
detailed job analysis because job-analytic infor-
mation is not helpful to identify the conditions
under which a test may or may not work. This
conclusion is predicated on the false premises
that (a) validity generalization findings regard-
ing GMA tests can be extended to other predic-
tors such as personality and psychomotor tests
and employment interviews and, perhaps most
importantly, (b) current job-analytic practices
already provide the best information the field
has to offer in regard to potential occupational
moderators of validity. The evidence to be re-
viewed here suggests otherwise. For instance,
O∗NET-based determinations of specific abil-
ity requirements rely on single-item scales of
limited discriminant validity (Harvey & Wilson
2010, Sanchez & Autor 2010). Similarly, inas-
much as meta-analyses suggest that personal-
ity measures can predict job performance (e.g.,
Barrick & Mount 1991, Hough 1992, Salgado
1997, Tett et al. 1991), evidence concerning the
specific occupational conditions under which
such tests work best is only beginning to emerge
(Meyer et al. 2010, Raymark et al. 1997, Tett
& Burnett 2003).
We purposefully avoided using the term
“descriptor” when referring to any kind of
job-analytic information because we disagree
with the implicit assumption that the primary
purpose of job analysis is to describe jobs.
Instead, job analysis should aim to understand
the successful experience of work, and there-
fore many of the pieces of data produced in job
analysis research are unobservable construals
meant to explain rather than describe the
worker’s behavior.
RESEARCH ON WORK ACTIVITY
INFORMATION
Although terms such as job, duty, function,
responsibility, and task are often employed to
refer to work activities, most researchers agree
that these terms reflect work activities ranging
from the very specific or molecular level (i.e.,
task), to a medium level (i.e., functions, duties,
or responsibilities), to the general or molar
level (i.e., groupings of activities that comprise
a job) (Gael 1983, p. 7). A majority of research
has been conducted using task inventories
prepared in the tradition of Allen’s (1919)
“trade analysis.” These inventories depict
long lists of prestandardized tasks, which
are rated on scales such as frequency, time
spent, and difficulty (Christal & Weissmuller
1988). Research has also emerged on the 42
generalized work activities (GWAs) included
in O∗NET, which were derived through a
literature review of various taxonomies of work
activity data (Cunningham & Ballentine 1982,
McCormick et al. 1972) to form a common
metric for all occupations (Cunningham 1996).
Reliability Studies
Studies of the reliability of work activity in-
ventories have employed two basic approaches:
intrarater (i.e., test-retest or repeated items
within the same administration) and inter-
rater reliability (Gael 1983, p. 23). However,
disagreement among incumbents of the same
job title may reflect legitimate variation, such
as differences in positions classified under the
same job title (Harvey 1991, Lindell et al.
1998, Sanchez et al. 1998, Stutzman 1983,
Wilson 1997). Sanchez & Levine (2000)
warned that interrater disagreement may also
reflect idiosyncratic approaches to the manner
in which two or more incumbents interpret
and carry out the same job. Harvey & Wilson
(2000) noted their disagreement with Sanchez
& Levine’s stance, indicating that interrater
reliability is an appropriate means to gauge
reliability when the object of study is what in
their view constitutes verifiable job information
(e.g., work behaviors). Nevertheless, the re-
search on reliability of job analysis data yields a
complex and not altogether consistent picture.
Dierdorff & Wilson’s (2003) meta-analysis
revealed that task data produced higher esti-
mates of interrater reliability than statements
of broader GWAs (weighted r = 0.77 versus
0.60). Dierdorff & Morgeson (2009) reported
similar differences in the interrater reliability
estimates of GWAs and tasks (i.e., 0.65 versus
0.80). In contrast, in a different meta-analysis
Voskuijl & van Sliedregt (2002) reported the
exact opposite finding, namely that task state-
ments were less reliable than broader behaviors
(0.29 versus 0.62). Findings from research on
the merits of decomposed (or task-based) versus
holistic (job-based) ratings have been equally
mixed, with a majority of studies indicating
the superior interrater reliability of molecular
estimates (Butler & Harvey 1988, Gibson et al.
2004, Harvey et al. 1994, Sanchez & Levine
1994), whereas some suggested no differences
(Cornelius & Lyness 1980). One of the reasons
offered for these somewhat mixed findings
is that the presumably challenging demands
of holistic judgments, which require a great
deal of information integration (Cornelius &
Lyness 1980), are sometimes exceeded by the
demands involved in rating a very large number
of molecular (task) units. Still another expla-
nation, which is consistent with Dierdorff &
Wilson’s finding that molecular-molar reliabil-
ity differences are largely confined to interrater
reliability estimates, is that incumbents are
more likely to endorse idiosyncratic views of the
role expectations associated with their job than
of the specific activities involved in discharging
such roles. Whether these idiosyncratic opin-
ions regarding their role represent unreliability
is questionable because they may capture
real differences in how the job is interpreted
and even performed (Dierdorff & Morgeson
2007, Dierdorff et al. 2010, Sanchez & Levine
2000).
Jeanneret et al. (1999) reported GWA in-
traclass correlations obtained in a pilot study of
35 occupations with 4 to 88 incumbents. For
the level scale, they reported correlations of at
least 0.90 for 35 of the 42 GWAs. Slightly lower
reliabilities were reported for the importance
and frequency scales (the frequency scale was
later eliminated in the final version of O∗NET).
Their results did not significantly change when
GWAs rated as “not relevant” were eliminated,
in spite of the potentially inflating effects of
“does not apply” items on interrater reliabil-
ity (Friedman & Harvey 1986, Smith & Hakel
1979). Dierdorff & Morgeson (2009) reported
a lower mean interrater reliability of 0.65 for
O∗NET GWAs using a large sample of incum-
bents (N = 47,137) spanning over 300 different
occupations whose ratings had been collected
by the U.S. Department of Labor to populate
O∗NET.
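
For readers who wish to see how intraclass correlations of this kind are typically obtained, the sketch below computes a two-way random-effects intraclass correlation for the mean of k raters (often labeled ICC(2,k)) on a hypothetical matrix of GWA importance ratings; the simulated data and the choice of this particular ICC form are our illustrative assumptions rather than the O∗NET estimation procedure.

import numpy as np

def icc2k(x):
    # Two-way random-effects ICC for the mean of k raters (ICC(2,k)).
    # x: array of shape (n_items, k_raters), e.g., one row per GWA and
    # one column per incumbent providing importance ratings.
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)      # between-item mean square
    ms_cols = ss_cols / (k - 1)      # between-rater mean square
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Hypothetical importance ratings (1-5 scale) for 8 GWAs by 5 incumbents.
rng = np.random.default_rng(0)
profile = rng.integers(1, 6, size=8).astype(float)   # a notional "true" GWA profile
ratings = np.clip(profile[:, None] + rng.normal(0, 0.7, size=(8, 5)), 1, 5)
print(round(icc2k(ratings), 2))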
Dierdorff & Wilson (2003) observed that
the pattern of reliabilities differed between
their interrater and intrarater estimates. These
differences were most notable for ratings pro-
duced by technical experts (r = 0.81 versus 0.47
for intra- and interrater, respectively). Whereas
descriptive scales dealing with perceptions of
relative value (i.e., importance) showed higher
interrater reliabilities than those of scales
involving temporal judgments (i.e., frequency),
importance and frequency had similarly accept-
able intrarater reliabilities. Intrarater reliabilities
for difficulty, however, were lower than interrater ones.
Again, it could be that incumbents’ ratings
of constructs that are most closely associated
with the process of learning one’s job, such as
task difficulty, legitimately change over time,
even though perceptions of the relative value
of tasks (e.g., task importance) do not.
Taken together, these findings appear to
question the assumption that the reliability of
work activity ratings can be equivalently mea-
sured through either interrater or intrarater
designs. Specifically, interrater reliability es-
timates do not distinguish between variance
due to random factors and variance due to
legitimate differences in the manner in
which each incumbent approaches his/her job.
Similarly, although intrarater designs con-
cerned with the stability of ratings over time
are not affected by between-rater differences in
idiosyncratic views of the job, they are still likely
to reflect true variations in the longitudinal evo-
lution of the incumbent’s approach to the job.
Longitudinal studies that track incumbent rat-
ings of, for instance, time spent and difficulty
may illuminate the learning sequence through
which incumbents acquire job mastery.
Note that our recommendation to sepa-
rately examine the estimates provided by each
type of reliability design does not deny the
importance of the psychometric properties of
job-analytic data. In fact, our recommendation
is predicated on one of classical reliability’s
primary tenets, specifically, the distinction
between systematic and random variance. We
are simply arguing that some of the variance
that is sometimes deemed “random” in work
activity ratings may indeed reflect systematic
differences in the way some incumbents
interpret and, most importantly, perform their
job. Sanchez & Levine (2009) argued that the
“objectified” (see also Cronshaw 1998) under-
standing of a job as an object or entity that
displays minimal variation across each of the in-
cumbents holding the same job was a reasonable
assumption to make when work was organized
around the principles of Taylorism such as task
standardization and division of labor. However,
such an assumption holds less well in today’s
world of work, where electronic equipment has
taken over many standardized activities and
where the emphasis often is on empowering
employees to perform tasks according to
their own discretion, all of which is likely to
exacerbate the amount of legitimate, between-
position variance within the same job title.
Prior research has indeed suggested that in-
terrater differences may reflect not just percep-
tual differences of dubious theoretical or prac-
tical value, but also tangible correlates in the
manner in which incumbents perform their job.
For instance, Borman et al. (1992) found that
the amounts of time that high performers reported
spending on some tasks differed from those reported
by low performers. Dierdorff et al. (2010) and
Morrison (1994) provided further evidence
that employees’ views of certain work activities
were associated with the extent to which they
engaged in citizenship behavior. Further evi-
dence that variability in within-job title ratings
is not always random was provided by Sanchez
et al. (1998), who found that the job-analytic
rating profile of branch managers working for
a temporary personnel agency moderated sales
performance, with high performers endorsing
a more sales-oriented conception of the job
and low performers endorsing a more adminis-
trative view. In a separate study reported in the
same article, Sanchez et al. (1998) also revealed
that prior professional experience shaped the
tasks that assistant public defenders emphasized
in their ratings, such that those with prior trial
experience declared themselves more likely
to litigate rather than settle cases than those
without such experience. Prien et al. (2003)
reported that social workers with longer profes-
sional tenure tended to perform their job quite
differently from those with shorter professional
tenure. Befort & Hattrup (2003) found that the
importance that managers assigned to task and
contextual performance varied as a function
of their experience, with more experienced
managers placing a higher value on contextual
behaviors such as compliance and extra effort.
It appears that the premise that jobs are
stable objects with fixed properties, which has
prevailed in job analysis research until recently
(Cronshaw 1998, Sanchez & Levine 2009), has
resulted in a rather passive view of incumbents,
who are conceived as merely the recipients of
a job assignment rather than the actors who
shape it according to their own initiative. Other
streams of research, however, have endorsed
a more agentic view, thereby recognizing that
job incumbents are active agents who perform
their jobs according to their role identity, past
experience, motivation, and personal and pro-
fessional goals. Wrzesniewski & Dutton (2001)
termed this process job crafting, which they
defined as “the physical and cognitive changes
individuals make in the task or relational
boundaries of their work” (p. 179). Other theo-
ries, including role theory (Biddle 1986), share
this view of incumbents as the main architects
of their job rather than the mere executers of
a predetermined work assignment (Dierdorff
et al. 2009, Grant 2007, Roberts et al. 2005).
This notion applies perhaps even more to
self-directed work teams (Mathieu et al. 2008).
Within-job title variability, however, may
not exist uniformly across all individuals and
all jobs. For instance, job incumbents have been
shown to differ in their motivation to craft their
job in unique ways, and antecedents of this mo-
tivation, such as self-image, perceived control,
readiness to change (e.g., Lyons 2008), role ori-
entation (Parker 2007), and the desire to make
a prosocial difference (Grant 2007), have been
uncovered. However, certain jobs are more
likely to provide situational opportunity to en-
gage in job crafting than others (Wrzesniewski
& Dutton 2001). Research should attempt to
gain a better understanding of the sources of
interrater variation (Sanchez & Levine 2000),
which should largely coincide with the factors
promoting or inhibiting the situational oppor-
tunity to shape one’s role as explained by job
crafting and role theories.
A number of studies have begun to pursue
this research goal. First, Sanchez et al. (1998)
hypothesized that job complexity would make
idiosyncratic interpretations of the job more
likely. Using a sample of incumbents and job
analysts for 19 jobs, they found that agree-
ment between incumbents and nonincumbents
was indeed moderated by job data-oriented
occupational complexity, such that agreement
was highest for the less complex jobs. Using
individual-level O∗NET ratings from 20,000
incumbents across 98 occupations collected
by the U.S. Department of Labor, Dierdorff
& Morgeson (2007) found support for a se-
ries of role theory-based predictions arguing
that the context wherein employees work pro-
motes or restricts within-title variance. These
authors found that some elements of the oc-
cupational context (i.e., interdependence and
routinization) increased the level of agree-
ment in O∗NET ratings, presumably because
they suppressed individuation in role enact-
ment. They also found that autonomy reduced
rating consensus, presumably because it pro-
motes exploring new tasks. Lievens et al. (2010)
found that certain kinds of work activities, such
as the extent to which occupations involved
equipment-related and direct contact activi-
ties, increased consensus on competency rat-
ings, whereas managerial activities decreased it.
As a whole, these findings enhance our
understanding of the conditions that foster
job individuation, thereby strengthening job
crafting theory, which has recognized the
existence of situational antecedents of job
crafting but focused instead on its individual
difference antecedents (e.g., Grant 2007). In
addition, because the presence of interrater dis-
agreement among incumbents of the same job
title understandably hurts the face validity of
the job analysis data (Jones et al. 2001, Sanchez
& Levine 2000), a better understanding of the
nonrandom sources of disagreement should
increase practitioners’ ability to explain to end
users the pros of further exploring the sources
of within-job title variation (e.g., uncovering
different approaches to carrying out work activ-
ities in the same job that may impact outcomes
such as employee performance; Borman et al.
1992, Sanchez et al. 1998). This is a particularly
pressing concern given the calls for greater
discretion for workers in loosely organized
units such as self-directed work teams.
Carelessness Studies
A stream of research has developed around
ways to detect rater indifference or purposeful
obstruction in work activity ratings. One
approach taken has relied upon repeating some
of the same items to see if respondents answer
them consistently. However, the presence of
repeated items in the same inventory may
puzzle respondents (Wilson et al. 1990), who
may choose to answer them inconsistently for
a variety of reasons.
A different route to assess the trustwor-
thiness of the data gathered involves the
computation of veracity and carelessness
indices, which typically include work activ-
ities that are known to be performed by all
incumbents or bogus items that are known not
to be part of the job at all, respectively (Pine
1995). Green & Veres (1990) found that indices
relying on the frequency with which bogus
items were endorsed correlated significantly
with the elevation of respondents’ task ratings.
Green & Stutzman (1986), however, reported
that different indices of carelessness led to
discarding different incumbents’ data. These
procedures warrant continued research because
it is not inconceivable that the inclusion of
bogus items may, as occurs with repeated items,
induce the respondents to answer these items
in unsuspected ways. In addition, Dierdorff &
Rubin (2007) found that these items might not
always capture carelessness or biases, but rather
legitimate variations in incumbents’ interpre-
tation (and possible enactment) of their job,
such as differences in incumbents’ perceived
role ambiguity. Further research is needed to
uncover the constructs and response sets that
arise when these types of items are employed.
From a practical standpoint, discarding those
respondents’ data whose answers suggest care-
lessness according to these indices may, as it did
in the case of those responding inconsistently
to repeated items (Wilson et al. 1990), result in
significant reductions in reliability and sample
size. A potentially more fruitful research
avenue involves reducing the time and the
cognitive demands imposed on subject matter
experts (SMEs) through cognitive-oriented
redesign of lengthy inventories (Willis 2005).
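
To make the logic of these indices concrete, the sketch below illustrates one way a carelessness index (endorsement of bogus items) and a veracity index (endorsement of items performed by virtually all incumbents) might be computed and used to screen respondents; the item sets, simulated responses, and flagging rule are purely hypothetical and are not drawn from the studies cited above.

import numpy as np

# Hypothetical task-inventory responses: rows are respondents, columns are
# items, and a value of 1 means the respondent claims to perform the task.
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(6, 20))

bogus_items = [3, 11, 17]      # items designated as not part of the job
universal_items = [0, 5, 9]    # items designated as performed by all incumbents

carelessness = responses[:, bogus_items].mean(axis=1)   # share of bogus items endorsed
veracity = responses[:, universal_items].mean(axis=1)   # share of universal items endorsed

# One possible (illustrative) screening rule: flag anyone who endorses a bogus
# item or fails to endorse more than one of the universal items.
flagged = (carelessness > 0) | (veracity < 2 / 3)
print(np.where(flagged)[0])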
Validity Studies
Assessments of the validity of work activity
information span a number of different ap-
proaches that vary in the manner and in the
degree to which they assess validity. One of
the most straightforward approaches involves
asking SMEs how well the inventory covers
the scope of activities that comprise the job,
usually in the form of a percentage judgment.
Wilson (1997) conducted a field experiment
revealing that both incumbents and supervisors
provided unrealistically high judgments of
inventory completeness, even when presented
with inventories where two-thirds of the tasks
had been removed. Wilson recommended a
serious re-examination of this approach to
estimate the quality of work inventories, which
may be vulnerable to experimenter demands
and other forms of biases. A potentially fruitful
research avenue involves conducting interviews
and other forms of qualitative research on the
types of work behaviors that are missing in
these inventories. For instance, well-rounded
incumbents may be most likely to detect
the absence of other-oriented and extrarole
activities, which are work requirements of
critical importance in today’s organizations
(Borman & Motowidlo 1993).
Early research by McCormick and his
associates revealed that asking SMEs to make
precise estimates of time spent (e.g., allocating
a percentage of time to each work activity)
was problematic (McCormick 1960), as SMEs
lacked the ability to judge time spent with such
precision. As a result, later research adopted
primarily “relative scales,” which were sup-
posed to represent a less demanding judgment
because they simply asked SMEs to compare
tasks to each other (e.g., “compared to all other
tasks on the job, how much time do you spend
on this one?”). Harvey (1991) challenged the
use of relative scales, arguing that such scales
require ipsative judgments that preclude cross-
job comparisons and do not meet the
statistical assumptions needed for many types
of data analysis. These limitations, however,
appear to be more conceptual than empirical,
as Manson et al. (2000) found that relative
and absolute scales of the same and different
constructs had generally satisfactory patterns
of convergent and discriminant validity and
provided virtually equivalent rank-orderings of
tasks within the same job. Absolute judgments,
however, are sometimes necessary to quantify,
for instance, the frequency and time spent
on physically challenging tasks such as lifting
objects of different weights. This type of
research on job analysis for physically arduous
jobs is sorely needed, given an aging popula-
tion, the postponement of retirement age, and
the larger number of workers seeking partial
or total disability certification (Fleishman
et al. 1986). In the United States, the Social
Security Administration (SSA) decided that
O∗NET was not a suitable replacement for
the Dictionary of Occupational Titles (DOT) for
purposes of disability determination, so the
SSA is now embarked on a project to develop
an occupational information system capable of
evaluating the physical and mental demands of
work (Occup. Inform. Advis. Panel 2009).
Research on the construct validity of work
activity scales such as criticality or importance
has been bolstered by the job-relatedness
provisions of the Uniform Guidelines (Equal
Employ. Opport. Comm. 1978), which call
for selection procedures that are demonstrably
linked to job behaviors identified to be critical
or important. Prior research suggested that
work activity scales load on one of two major
factors: a time-oriented factor represented by
time spent, frequency, and duration scales,
and an importance/complexity factor involv-
ing scales of criticality, overall importance,
difficulty, and difficulty of learning (Friedman
1990, 1991; Manson et al. 2000; Sanchez &
Fraser 1992; Sanchez & Levine 1989). It is thus
not surprising that O∗NET scales of impor-
tance and level, which are employed for GWAs
and other types of items, are largely redundant
(their intercorrelation is r = 0.95 according to
analyses performed using pilot O∗NET data by
Hubbard et al. 2000). Subsequent analyses us-
ing the aggregated ratings included in the 14.0
O∗NET database by Sanchez & Autor (2010)
revealed similarly high importance-by-level
correlations for GWAs (r = 0.92), with type of
scale (i.e., importance versus level) accounting
for only 0.50% of the variance in GWA ratings.
Still another indirect but fairly widespread
approach to assessing the validity of work
activity data involves examining the presence
and the magnitude of presumptively extraneous
sources of variance in work activity ratings.
The logic underlying these studies is that
third variables such as job experience, sex, and
other demographic variables are job unrelated;
therefore, their detection would signal the
presence of some kind of bias in job-analytic
data, thus casting doubt on their validity. Work
experience seems to be the most widely studied
extraneous influence. Evidence of experience
effects, however, has been elusive. Whereas
some studies have failed to detect experience
effects (Schmitt & Cohen 1989, Silverman et al.
1984), a majority of them have uncovered some
form of experience effect (Borman et al. 1992,
Ford et al. 1991, Landy & Vasey 1991, Tross
& Maurer 2000). As we argued in the section
dedicated to reliability, we believe that just
searching for effects of experience and of other
demographic variables is not likely to advance
the theory and practice of job analysis beyond
what we currently know. First, as illustrated
earlier, many substantive variables are con-
founded with demographic variables such as
work experience (e.g., Lindell et al. 1998, Prien
et al. 2003, Sanchez et al. 1998); therefore,
interrater differences associated with work
experience may reflect true differences in how
incumbents not only interpret but also perform
the job. For instance, the differences between
less- and more-experienced branch managers
and stockbrokers uncovered by Borman et al.
(1992) and Sanchez et al. (1998), respectively,
in regard to sales-oriented tasks had tangible
correlates such as higher sales among those
who emphasize sales-oriented tasks.
The occasionally null correlations between
work activity ratings and performance (Aamodt
et al. 1982, Conley & Sackett 1987, Wexley &
Silverman 1978) are not surprising in light of
studies suggesting that job crafting is unlikely
to surface when the job context does not
provide a great deal of discretion to incum-
bents (Dierdorff & Morgeson 2007, Lievens
et al. 2010, Lindell et al. 1998). Also, different
approaches to carrying out work activities may
impact performance criteria that have not been
measured in a given setting. However, even if
differences in work activity ratings associated
with experience were merely perceptual and
did not affect the manner in which incumbents
performed the job, such differences could not
easily be attributed to erroneous or biasing
factors. Consider, for example, the case of
an arguably “objective” property of the job
such as task importance. When judging task
importance, it appears that experienced job
incumbents focus on time spent, as suggested
by the relationship between these two variables,
whereas inexperienced ones focus on difficulty
of learning the task (Ford et al. 1991, Sanchez
1990). That is, the conceptual definition of
their job differs across incumbents because
new employees rely on their still fresh memory
of how hard it is to learn certain tasks when
they evaluate their importance. This memory
has probably faded among experienced in-
cumbents, whose judgments of time spent on
each task may provide a more logical standard
of what tasks are truly important. Affirming
that some incumbents are correct whereas
others are mistaken overlooks the fact that incumbents
employ different frames of reference when
judging the importance of their job demands.
We argue that future research should follow
the path already initiated by others (Prien et al.
2003, Sanchez et al. 1998) and focus on un-
derstanding the substantive roots of why work
experience and other demographic characteris-
tics influence work activity ratings rather than
on whether such effects are present. Morgeson
& Dierdorff (2011) suggested that Tesluk &
Jacobs’ (1998) model of work experience, which
distinguishes among indices of work experience
(i.e., amount, time, density, timing, and type)
and levels of analysis (i.e., task, job, work group,
organization, and career/occupation), provides
a useful framework along which theoretical
and empirical inquiry may proceed. Indeed, we
agree that continued “fishing” for differences
observed among incumbents as a function of
demographic breakdowns of dubious theoret-
ical value (e.g., incumbents’ race or sex) will
simply replicate what we already know, namely
that statistically significant differences among
such groupings of incumbents are erratic,
their effect sizes small, and their practical
significance questionable (Arvey et al. 1977,
1982; Hazel et al. 1964; Landy & Vasey 1991;
Meyer 1959; Schmitt & Cohen 1989). More
substantive variables, such as the manner
in which incumbents define their profes-
sional and social identity, may better explain
differences in how they view their jobs, includ-
ing which tasks they deem most important.
This information might be useful in, for
example, framing training programs according
to the level of career maturity of prospec-
tive trainees. Again, our recommendation is
that instead of trying to hide or eliminate
disagreement, job analysis research should
embrace it by looking more deeply into its
causes. Legitimate disagreement represents
unique ways in which incumbents experience
their job, and a better understanding of their
idiographic representations might increase
our grasp on the various forms in which jobs
can be crafted along with their requirements
and consequences. It is this broader purpose
of understanding the experience of work that
in our opinion holds the key to the future of
work analysis (Rosso et al. 2010) because of
its potential to better explain worker outcomes
such as performance.
Competencies
Many organizations have incorporated com-
petency modeling (CM) as opposed to job
analyses in their HR applications (Lucia &
Lepsinger 1999, Schippmann 1999). The
difference between job analysis and CM,
however, still seems blurry, as the two are often
lumped together. A group of experts surveyed
regarding the main differences between job
analysis and CM opined that, unlike job
analysis, CM is linked to strategic goals, but
also that it is less rigorous than job analysis
in regard to data collection, level of detail,
assessment of reliability, and documentation of
the research process (Schippmann et al. 2000).
A more definitive answer to the difference
between CM and job analysis probably awaits
clarity in the definition of “competency,” which
has been vaguely defined as “any individual
characteristic that can be measured or counted
reliably and that can be shown to differentiate
significantly between superior and average
performers” (Spencer et al. 1994, p. 4). Some
have recently suggested that competencies
refer to knowledge, skill, ability, and other
characteristics (KSAOs) that are needed for
effective performance in the jobs in question
(Campion et al. 2011). However, some degree
of consensus is beginning to emerge around the
view of competencies as broadly defined ele-
ments of the job performance space (Tett et al.
2000), which led us to include them in this sec-
tion dedicated to work activities. In the words
of Bartram (2005), competencies are “sets of
behaviors that are instrumental in the delivery
of desired results or outcomes” (p. 1187).
Sanchez & Levine (2009) also noted that most
lists of competencies resemble loosely coupled
patterns of behavior or “behavioral themes”
that are considered to be critical success factors
or strategic performance drivers (see also
Becker et al. 2001). Lievens et al. (2010) also
took the position that competencies are best
classified as part of the performance space.
The definition of competencies as sets of
behaviors or behavioral themes that are instru-
mental in the delivery of strategic results is
seemingly consistent with the primary purpose
of CM. In this respect, Sanchez & Levine
(2009) suggested that whereas the purpose of
job analysis is to better understand and measure
work assignments, the primary purpose of CM
is to influence the manner in which such
assignments are performed so that presumably
strategic, behavioral themes are emphasized
when performing every job. They drew a
parallel with the notions of “trait relevance”
and “situation strength,” which correspond to
the notions of “channel” and “volume” in signal
detection theory (Tett & Burnett 2003). In
other words, whereas job analysis is concerned
with determining attribute or trait relevance
or the appropriate channels that are called for
by the nature of the work assignment, CM
attempts to raise the volume of those channels
that signal the importance of certain behav-
ioral themes aligned with the organization’s
strategy—i.e., situation strength. These “loud”
signals are intended to create a shared climate
or collective understanding of the behavioral
themes that are expected and rewarded (Bowen
& Ostroff 2004, Chatman & Cha 2003,
O'Reilly & Chatman 1996, Werbel & DeMarie
2005). Thus, according to Sanchez & Levine
(2009), job analysis and CM belong in different
domains: Job analysis is best positioned in the
domain of applied measurement, whereas CM
is closest to a mechanism of informal control.
The relatively scarce CM research to date
has largely mirrored the research questions
that are often pursued in job analysis research,
thus focusing on the accuracy, interrater agree-
ment, and discriminant validity of competency
ratings. Not surprisingly, the results of such
exercises are frequently disappointing because
ratings of broadly defined competencies often
have trouble meeting the levels of interrater
agreement found for job analysis data, such
as job tasks (Lievens et al. 2004, Lievens
& Sanchez 2007, Morgeson et al. 2004).
Instead, Sanchez & Levine (2009) suggested
that CM research should focus on the main
dependent variable of CM, that is, the extent to
which CM influences employees’ day-to-day
behavior along strategic lines, including the
development of competency language that is
accessible to end users, the development of
behavioral examples that are demonstrative
of each competency for different jobs, and
the cross-fertilization of job analysis and CM
to develop measurement models for each
competency, so that the underlying traits of
the relatively complex behavioral syndromes
dubbed competencies are better understood.
RESEARCH ON WORKER
ATTRIBUTE INFORMATION
Reliability Studies
Generalizability analysis has been employed
to evaluate the proportion of variance in job
analysis ratings that is attributable to idiosyn-
cratic sources as compared to the facets of
the job that are purportedly being evaluated
(Dierdorff & Morgeson 2007, Lievens et al.
2010, Sanchez et al. 1998). This approach is
based on the premise that variance due to
raters prevents the reliable aggregation of rat-
ings across raters. Van Iddekinge et al. (2005)
used this approach to analyze KSAO ratings
produced by 381 raters across five organiza-
tions. They partitioned the variance due to
raters, rater-by-KSAO, and error, and found
considerable idiosyncratic variance as repre-
sented by the rater-by-KSAO component.
Subsequent analyses found that variance com-
ponents due to rater-by-KSAO and to error
were not explained by the organization, posi-
tion level, and demographic characteristics of
the raters, hence casting doubt on the sources
of variance underlying these ratings.
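
As an illustration of the variance partitioning involved, the sketch below estimates variance components for a simulated, fully crossed rater-by-KSAO design with a single rating per cell, so that the rater-by-KSAO interaction is confounded with residual error; the design, effect sizes, and simulated data are our assumptions and do not reproduce Van Iddekinge et al.'s (2005) analysis.

import numpy as np

rng = np.random.default_rng(2)
n_ksao, n_raters = 12, 8
ksao_effect = rng.normal(0, 1.0, size=(n_ksao, 1))      # true differences among KSAOs
rater_effect = rng.normal(0, 0.5, size=(1, n_raters))   # rater leniency/severity
ratings = 3 + ksao_effect + rater_effect + rng.normal(0, 0.6, size=(n_ksao, n_raters))

grand = ratings.mean()
ss_ksao = n_raters * ((ratings.mean(axis=1) - grand) ** 2).sum()
ss_rater = n_ksao * ((ratings.mean(axis=0) - grand) ** 2).sum()
ss_resid = ((ratings - grand) ** 2).sum() - ss_ksao - ss_rater
ms_ksao = ss_ksao / (n_ksao - 1)
ms_rater = ss_rater / (n_raters - 1)
ms_resid = ss_resid / ((n_ksao - 1) * (n_raters - 1))

# Expected-mean-square estimates of the variance components.
var_resid = ms_resid                                  # rater-by-KSAO interaction + error
var_rater = max((ms_rater - ms_resid) / n_ksao, 0.0)
var_ksao = max((ms_ksao - ms_resid) / n_raters, 0.0)
total = var_ksao + var_rater + var_resid
for name, value in (("KSAO", var_ksao), ("rater", var_rater), ("interaction + error", var_resid)):
    print(name, round(value / total, 2))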
In regard to personality attributes, which
are termed work styles in the O∗NET model,
Borman et al. (1999) reported a median intra-
class reliability of 0.66 for the level scale using
a pilot O∗NET study of 35 occupations with
4 to 88 incumbents. The attribute Depend-
ability had the lowest reliability at 0.15, and
personality attributes had a similar range of
reliabilities when evaluated on the importance
scale. Using ratings collected by the U.S.
Department of Labor to populate O∗NET
from 47,137 incumbents spanning more than
300 occupations, Dierdorff & Morgeson
(2009) found that variance due to raters was
more pervasive among ratings of personality
traits (up to 35%) than among responsibility
ratings (16%). Similarly, sample-size weighted
estimates of reliability were 0.45 and 0.80 for
personality traits and tasks, respectively.
Turning now to the domain of abilities,
research by Fleishman and his colleagues
resulted in the development of a set of single-
item scales to gauge job requirements along
52 abilities (Fleishman & Quaintance 1984,
Fleishman & Reilly 1992). This set of scales
has been incorporated into O∗NET in a
functionally equivalent form as compared to
the original developed by Fleishman, even
though the critical incidents (Flanagan 1954)
or behavioral anchors included in the scales,
which represent various levels of the abilities,
were apparently rescaled for O∗NET (Peterson
et al. 1999, p. 185). Fleishman et al. (1999)
employed the same O∗NET pilot study of 35
occupations with 4 to 88 incumbents men-
tioned earlier to assess the reliability of the 52
ability scales. They reported that most of the
intraclass correlation reliabilities were above
0.80. The O∗NET project developed similar,
behaviorally anchored scales for other types
of worker attribute data such as personality
requirements (termed work styles) and skills.
Unlike those in other O∗NET domains,
questionnaires relating to the ability and skill
domains are completed by occupational ana-
lysts, not incumbents. Apparently, the decision
to have analysts rate abilities and skills was based
on theoretical and practical considerations, in-
cluding the assumption that trained analysts
are more likely to understand the ability and
skill constructs than incumbents are. Whether
O∗NET work styles, which capture similarly
psychological constructs but in the personality
arena, should continue to be rated by incum-
bents warrants further research. Nevertheless,
a study conducted by the O∗NET Center found
that incumbents provided higher ratings than
analysts and that analysts’ ratings were more
reliable than incumbents’ ratings were, even
though these differences were deemed minimal
(Tsacoumis & Van Iddekinge 2006). A series
of reliability studies conducted on the analyst
ratings associated with each wave of O∗NET
data collection reported median intraclass cor-
relation reliabilities of 0.95 (Tsacoumis 2009a).
These reliability studies used a maximum of 31
unique analysts, who apparently are responsible
for all of the ability and skill ratings produced in
the various cycles of O∗NET data collection to
date, with some occupations having been rated
by a minimum of eight analysts (Tsacoumis
& Van Iddekinge 2006). Whether these intra-
class correlations overestimate the reliability of
O∗NET ratings, however, has been the object
of debate (Harvey 2009, Tsacoumis 2009b).
Other research has shown that analysts
may produce more reliable activity-attribute
linkages, i.e., the presumptive extent to which
an attribute is called for in carrying out an
activity, than incumbents would (Baranowski &
Anderson 2005). However, a potentially more
important aspect than whether analysts or
incumbents are employed to make ratings is
the information or stimulus on which such
ratings are based. For example, in O∗NET,
since analysts neither interview nor observe
incumbents, the rating materials are the sole
information on which analyst ratings are based.
The O∗NET rating materials provided to
analysts are prepared to rid the rating stimulus
materials of items (i.e., knowledge, skills, edu-
cation and training, and work styles) thought to
be unimportant for ability ratings (Donsbach
et al. 2003). The materials are further sim-
plified by selecting GWAs and work context
items that were judged to be relevant to the
focal ability, regardless of the occupation, by
a panel of eight industrial and organizational
psychologists. These GWA and work context
items were further screened by selecting
those that had achieved a certain cut-off
among incumbent ratings. Although it can be
argued that these streamlined materials (they
occupy about one page of information for each
ability rating, according to appendix E of the
Donsbach et al. 2003 report) eliminate un-
necessary information and therefore result in
more reliable ratings, future research should
investigate whether or not such reliability
gains are made at the expense of eliminating
potentially relevant job information, including
information that could be gained firsthand by
interviewing or observing incumbents rather
than by studying a paper description of the
job. In this respect, Voskuijl & Sliedregt’s
(2002) meta-analysis suggests that occupational
analysts produce more reliable ratings when
such ratings are based on actual contact with
job incumbents rather than a job description
(r = 0.87 versus 0.71). Prior research also
suggests that increasing (rather than reducing)
the amount of job information can indeed have
a positive effect on job-analytic ratings of both
work activity and worker attributes (Harvey &
Lozada-Larsen 1988, Lievens et al. 2004).
Hubbard et al. (2000) reported that the
behavioral anchors used in the O∗NET ability
rating scales were potentially confusing. They
speculated that these anchors may be confusing
because they were drawn from occupations
with which most job incumbents are unfamiliar
and, therefore, the level of difficulty of the
requirements is confounded with the degree of
familiarity with the occupation. For instance,
the anchor “reading a scientific article de-
scribing surgical procedures,” which appears
at the high end of the reading comprehension
scale, may in fact gauge a relatively low level of
reading comprehension for trained surgeons.
Further research on how to anchor attribute
scales for validity, user acceptability, and ease of
use is warranted, but as has also occurred in
the performance appraisal domain (Tziner et al.
2000), the employment of critical incidents as
behavioral anchors may not be the answer.
A still more substantive argument advanced to explain the typically lower reliabilities obtained for worker attributes is that they
represent unobservable construals that re-
quire a larger “inferential leap” than ratings
of more observable aspects of the job such
as work activities (Dierdorff & Morgeson
2009). Whether ratings of ostensibly complex, unobservable construals such as the “flexibility of closure” ability can be reliably formulated using the type of single-item scales
employed in O∗NET has also been questioned
(Harvey 2009, Harvey & Wilson 2010). Further
research comparing single- to multiple-item
scales of these constructs is warranted.
Validity Studies
A stream of research that has indirectly ex-
amined the validity of worker attributes is
concerned with the extent to which ratings
are influenced or biased by cognitive pro-
cesses. This research stems from the recog-
nition that job analysis places a great burden
on the information-processing capabilities of
SMEs (Arvey et al. 1982), and it draws from
the literature on the shortcomings of human
judgment (Hogarth 1981).
The use of rater training has been explored
to eliminate or reduce the potential biases
thought to influence raters. Sanchez & Levine
(1994) found that a rater training program
intended to reduce the presumptively bias-
ing effect of Tversky & Kahneman’s (1974)
representativeness and availability heuristics
increased interrater agreement as long as the
number of ratings was low to moderate. Using
the frame of reference (FOR) rater training
paradigm, which attempts to standardize the
FOR employed by raters, Lievens & Sanchez
(2007) found that rater training increased
interrater agreement and discriminant validity
of competency ratings. Aguinis et al. (2009)
used FOR training to reduce the correlation
between SMEs’ self-reported personality and
job-analytic ratings of personality require-
ments (rater training also lowered job-analytic
ratings). Although rater training interventions
might indeed suppress idiosyncratic variance,
whether this suppression of idiosyncratic views comes at the expense of losing significant information about the manner in which incumbents experience the demands of their jobs merits further research.
Morgeson & Campion (1997) identified 16
distinct potential social and cognitive sources of
biases in job analysis ratings. Social sources are
thought to represent normative pressures from
the social environment in which individuals
are embedded (e.g., conformity, group polar-
ization, impression management), whereas the
cognitive sources capture limitations in raters’
information-processing capabilities (e.g.,
information overload, heuristics). Morgeson
et al. (2004) began to test some of these biasing
factors. Specifically they hypothesized that
self-presentation biases would result in higher
ratings and more frequent endorsements of
ability statements than of task statements.
Their findings supported their prediction,
because ability statements that were identical
to task statements but were preceded by the
phrase “ability to” drew higher ratings than
their corresponding tasks.
Overall, a potential concern with studies
examining social or cognitive biases lies in the
absence of a true score that would allow an
objective estimation of bias. For instance, the
elevated ratings assigned to certain items by
certain individuals may simply reflect these
individuals’ unique but legitimate approach to
performing the job. In addition, differences
in rating elevation between scales of differ-
ent constructs do not necessarily signal the
presence of biases. For instance, abilities may
be legitimately scaled quite differently from
tasks, as a higher level of ability may indeed be
required by an only moderately important or
infrequent task. As others have noted (Hogarth
1981, Kruglanski 1989), many of the so-called
biases or inaccuracies observed in laboratory
tasks reflect simplifying judgment strategies
that indeed have functional value when judging
complex environments such as one’s job. In our
opinion, experimental and quasi-experimental
studies that attempt to detect or reduce biases
or “inaccuracy” in job-analytic judgments
should make sure that the differences that are
thought to demonstrate such biases do not re-
flect substantive variance that may increase our
understanding of how people truly approach
and experience their jobs. A less-than-desirable
course of action for job analysis research would
be to repeat the same mistakes made in the
performance appraisal literature, whose find-
ings regarding performance rating biases and
inaccuracy have been qualified on account
of their limited utility (Bretz et al. 1992).
Turning now to the discriminant validity
of worker attributes, the factorial structure of
O∗NET ability ratings has been explored (e.g.,
Fleishman et al. 1999). It appears that the single
ratings employed to gauge each one of the 52
abilities included in the model can understand-
ably be reduced to a smaller set of higher-order
factors (e.g., a broad psychomotor/perceptual
factor grouping abilities such as depth percep-
tion and dynamic strength), which are capable
of explaining the majority of the variance in
these ratings. The seemingly high collinearity
among the single ratings representing each
ability is not altogether surprising because abil-
ity estimates based on limited job information
may understandably produce items showing
less discriminant validity than those resulting
from assessment scores of individuals on those
same abilities. This redundancy is likely to
increase when average ratings across SMEs are
factor analyzed, as illustrated by Sanchez &
Autor’s (2010) finding that a single factor
accounts for 43% of the variance in the aggre-
gated ability ratings included in the 14.0 version
of the O∗NET database—aggregated ratings
are the only O∗NET ratings publicly available
to O∗NET users or to researchers outside
of the O∗NET development team. Harvey
& Wilson (2010) also provided evidence
suggesting that ratings of O∗NET abilities
can be more parsimoniously explained by a
reduced set of higher-order factors. Whether
information on ability requirements and on
other worker attributes is too redundant should
be determined in future investigations. The
criteria for such determination should include
practical significance and cost-effectiveness of
data collection (e.g., gathering data on fewer
abilities may not impact many of the typically
coarse human resource usages of O∗NET data
uncovered in a recent survey of O∗NET users;
Natl. Res. Counc. 2010, pp. 140–148).
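The kind of evidence at issue can be illustrated with a small sketch (Python; the occupations-by-abilities matrix below is simulated, not drawn from O∗NET), which estimates the share of variance in aggregated ability ratings captured by the first principal component.

```python
import numpy as np

# Simulated stand-in for aggregated ratings: rows = occupations,
# columns = ability descriptors, entries = mean ratings across SMEs.
rng = np.random.default_rng(0)
shared = rng.normal(size=(300, 1))                 # broad "complexity" component
unique = rng.normal(scale=0.8, size=(300, 52))     # ability-specific variance
X = 3.0 + shared + unique                          # 300 occupations x 52 abilities

# Eigen-decomposition of the correlation matrix of the 52 ability columns.
corr = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

first_share = eigenvalues[0] / eigenvalues.sum()
print(f"Variance explained by the first component: {first_share:.0%}")
```

In real data, the larger this share, the weaker the case that the 52 single-item ability ratings carry distinct information beyond a few higher-order factors.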
In addition to potential redundancy among
worker attributes, there seems to be redundancy
in the two scales employed to rate attributes in
O∗NET, namely importance and level. Sanchez
& Autor (2010) reported level by importance
Pearson correlations among the aggregated
ratings of 832 occupations included in the
O∗NET 14.0 database of 0.97, 0.95, and 0.97
for abilities, skills, and knowledge, respectively.
Similarly, the type of scale (i.e., importance or
level) accounted for just 3%, 1.54%, and 1.31%
of the variance in ability, skill, and knowledge
ratings, respectively. These findings suggest that the information provided by these two scales in the O∗NET database is largely redundant. Overall, more research is needed on the discriminant and convergent validity of worker attribute scales; such research is certainly scarcer than research on the scales employed to characterize
work activities such as time spent and criticality
(Friedman 1990, 1991; Sanchez & Fraser 1992;
Sanchez & Levine 1989).
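The importance-level redundancy reported above is straightforward to check whenever both scales are available; the sketch below (Python, using simulated importance and level vectors rather than the O∗NET 14.0 data) computes their Pearson correlation and the share of rating variance attributable to scale type, the two quantities reported by Sanchez & Autor (2010).

```python
import numpy as np

# Simulated aggregated ratings of the same attribute-occupation pairs,
# once on the importance scale and once on the level scale.
rng = np.random.default_rng(1)
requirement = rng.uniform(1, 5, size=1000)                     # latent requirement
importance = requirement + rng.normal(scale=0.2, size=1000)
level = requirement + 0.3 + rng.normal(scale=0.2, size=1000)   # slightly elevated

r = np.corrcoef(importance, level)[0, 1]
print(f"Importance-level Pearson r: {r:.2f}")

# Eta-squared for scale type in a ratings-by-scale layout:
# between-scale sum of squares divided by total sum of squares.
all_ratings = np.concatenate([importance, level])
scale_means = np.array([importance.mean(), level.mean()])
ss_scale = len(importance) * np.sum((scale_means - all_ratings.mean()) ** 2)
ss_total = np.sum((all_ratings - all_ratings.mean()) ** 2)
print(f"Variance accounted for by scale type: {ss_scale / ss_total:.1%}")
```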
One of the worker attribute scales in need
of additional research attention is trainability
or the extent to which worker attributes are
appropriately learned after significant exposure
on the job or in a training program versus pos-
sessed by applicants at the point of hire or easily
acquired soon afterward. This determination
is mandated by the Uniform Guidelines on
Employee Selection Procedures (Equal Em-
ploy. Opportun. Commiss. 1978), which
advises against the use of easy-to-learn or al-
ready mastered KSAOs in selection procedures.
A study by Van Iddekinge et al. (2011) cor-
related ratings of the extent to which KSAOs
were needed at entry with an external criterion
of perceived KSAO trainability formulated by a
panel of 31 organizational psychologists. Their
findings indicated less validity evidence for rat-
ings of the more abstract “AO” attributes than
for those of more concrete “KS” attributes.
Whereas job experts rated certain attributes as
needed-at-entry, psychologists identified them
as ones that could be developed on the job.
More uncommon are studies that have
attempted to validate attribute ratings against
consequence-oriented criteria of the type pro-
posed by Sanchez & Levine (2000) and Levine
& Sanchez (2007), such as the inferences
made using job-analytic ratings. Jones et al.
(2001) found that job analysts made better
predictions of worker attribute trainability than
incumbents and students when trainability
ratings were compared with actual changes in
student learning. Although the results of Jones
et al. (2001) suggest that the validity of worker
attribute ratings may vary depending on the
source of the ratings, we recommend that, in
keeping with themes we have developed earlier,
the psychological factors that account for these
differences should be the focus. In this respect,
the work of Jones et al. highlights the idea
that ratings of presumably more malleable KSs
require different expertise from those of more fixed abilities and other characteristics (AOs), a
point that has also been raised by others (e.g.,
Harvey 1991, Morgeson & Campion 1997).
Still another example of consequence-
oriented evaluation was provided by Levine
et al. (1980), who showed that different depic-
tions of jobs analyzed by different methods led
HR professionals to develop very similar ex-
amination plans in the selection context. Yet
there were small rated differences in the quality
of assessment and screening approaches, sug-
gesting for instance that the critical incidents
method resulted in higher-quality examination
plans than those derived from other methods.
Manson (2004), on the other hand, found that
the amount and specificity of information had
an effect on the cognitive challenge and the
quality of the selection plans prepared on the
basis of job-analytic information, thereby sup-
porting the collection of at least moderately
specific information such as the ten most im-
portant tasks and ten most important KSAOs.
However, the question of whether detailed job
analysis has consequences that are equivalent to
those of cursory job analysis is moot unless one
considers the goals that the job analysis serves.
For instance, even though different job-analytic
methodologies varying in the degree of detail
have been found to produce similar job classifi-
cations (Sackett et al. 1981), whether detailed
job analyses make a difference in potentially
more complex decisions, such as developing a
testing plan, warrants further research.
Another approach that is ripe for an ex-
amination of its consequential validity is the
mechanical estimation of worker attributes
through job component validation (Arvey
et al. 1992, Cunningham 1964, Goffin &
Woycheshin 2006, McCormick et al. 1972,
Sanchez & Fraser 1994). Job component
validation, which may be classified as a case
of synthetic validity, involves statistically
capturing the form in which worker attributes
are predictable from scores on more specific
job components. For example, LaPolice et al.
(2008) used a job component validation ap-
proach that relied on O∗NET data to identify
adult literacy requirements across occupations.
They found multiple correlation coefficients
ranging from 0.79 to 0.81 (corrected for
shrinkage) when predicting literacy scores
from O∗NET items. Jeanneret & Strong
(2003) followed a similar procedure to predict
general aptitude test scores using GWA data
from O∗NET and found lower multiple
correlations ranging from 0.35 to 0.89. An
issue with job component validation research
is how good the statistical predictions or
multiple Rs need to be in order to consider the
mechanically estimated scores to be equivalent
to actual ratings of SMEs (Harvey 2011,
Walmsley et al. 2011). However, whether
scores determined through job component
validation are statistically different from
those directly produced by SMEs may not
be as important as determining if, when, and
through what rules they lead to practically
different inferences and decisions regarding,
for instance, an assessment strategy.
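As a minimal sketch of the statistical core of job component validation described above, the following Python fragment (with simulated component scores and a simulated external criterion standing in for, say, a literacy requirement) fits the underlying regression and applies a conventional shrinkage adjustment to the multiple R, the quantity debated in the studies just cited.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: rows = occupations, columns = job component scores
# (e.g., generalized work activity ratings); y = external attribute criterion.
n_occ, n_comp = 200, 10
X = rng.normal(size=(n_occ, n_comp))
y = X @ rng.uniform(0.1, 0.6, size=n_comp) + rng.normal(size=n_occ)

# Ordinary least squares with an intercept term.
X1 = np.column_stack([np.ones(n_occ), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ beta

# Multiple R and a Wherry-type shrinkage-adjusted value.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
r2_adj = 1 - (1 - r2) * (n_occ - 1) / (n_occ - n_comp - 1)
print(f"Multiple R:           {np.sqrt(r2):.2f}")
print(f"Shrinkage-adjusted R: {np.sqrt(max(r2_adj, 0.0)):.2f}")

# An attribute score for a new occupation is then mechanically estimated
# from its component profile: np.concatenate([[1.0], new_profile]) @ beta
```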
Indeed, future evaluations of the conse-
quences of job-analytic data should consider the
rules governing the manner in which data are
employed to support inferences. For instance,
the exact same data on work activities and
worker attributes may produce rather different
selection plans when the elaborate procedures
for establishing linkages between work activi-
ties and underlying worker attributes outlined
by several authors (Baranowski & Anderson
2005, Goldstein et al. 1993, Landy 1988) are
applied than when the selection plan is deter-
mined solely on the grounds of loosely defined
professional judgment. Similarly, the very
specific rules provided by Fine & Cronshaw
(1999, pp. 133–136) regarding the use of a task
bank to develop behavioral questions in an
employment interview may result in more valid
interviewing than simply letting interviewers
formulate their own questions after studying
the job analysis. More research on the impact
of the rules through which job-analytic data are
transformed into inferences, including infer-
ences regarding appropriate assessment tools,
is needed, because the failure to demonstrate
that detailed information matters may feed
continued skepticism about the need to invest
in detailed job analyses. Such research should
serve to inform evidence-based standards of
job analysis practice for HR programs.
The conclusion that a molar job analysis
suffices in most applications has been formu-
lated in the context of discussing the validity
of GMA tests, which has proven robust in spite
of relatively large task differences among jobs
(Le et al. 2007). Unfortunately, this argument
against detailed job analysis probably found
fertile grounds in many business settings, where
job analysis is accused of being a legalistic obsta-
cle to flexibility and innovation (Drucker 1987,
Olian & Rynes 1991). As Sanchez & Levine
(1999) lamented, the job-relatedness provisions
embodied in the Uniform Guidelines on Em-
ployee Selection Procedures (Equal Employ.
Opportun. Commiss. 1978) and in the Amer-
icans with Disabilities Act (U.S. Dept. Justice
1991) were not meant to boost the role of job
analysis as a risk-management device to be used
in litigation. Instead, these provisions were
meant to promote the development of selection
procedures that were tied to business results
and, as a result, would be more effective at iden-
tifying top performers. Nevertheless, one of
the unintended consequences of this legislation
has been promoting the perception of job anal-
ysis as a necessary evil whose sole purpose is to
mitigate the risk associated with potential legal
challenges to selection procedures (Olian &
Rynes 1991). Research demonstrating that job
analysis, and specifically, detailed job analysis,
can be consequential in terms of facilitating bet-
ter inferences is needed to overcome prejudice
against job analysis (Sanchez & Levine 2000).
An impediment to the acceptability of worker attributes as the language of choice when discussing work is the difficulty of developing suitable job-analytic terminology. In-
dustrial and organizational psychologists have
long aspired to a “common metric” in the lan-
guage of work through which work require-
ments could be compared across jobs. This as-
piration led to the development of the DOT
(U.S. Dept. Labor 1965a,b). In fact, one of the
motivations behind the DOT’s replacement,
namely O∗NET, was the DOT’s reliance on
occupation-specific tasks that interfered with
cross-occupational comparisons. A review of
current usages of O∗NET (Natl. Res. Counc.
2010, pp. 139–155), however, suggested that
many of the psychologically worded items em-
ployed in O∗NET, especially those intended
to capture abilities like “flexibility of closure”
in the abilities domain, are understandably es-
chewed in favor of more user-friendly labels in
applications like career planning.
The popularity of competency models that
translate these types of worker attribute terms
into more accessible ones for end users suggests
that traditional taxonomies of worker attributes
that employ rather arcane terminology are un-
likely to become the language of choice when
discussing the content of work, at least among
end users (Sanchez & Levine 2009). Under-
standability is a key determinant of the extent to
which such terminology is likely to be adopted
in HRM systems (Bowen & Ostroff 2004), and
therefore traditional job analytic terminology
may have to be revised to rid it of unnecessary
jargon. Our recommendation is not to water
down job analysis research by replacing tra-
ditional terms with pop-psychology ones, but
simply to recognize that the acceptability of job
analysis by its end users is key in any job anal-
ysis application and that such acceptability is
better served by user-friendly terminology ac-
cessible to those in charge of performing the
jobs (Sanchez & Levine 2009).
RESEARCH ON WORK
CONTEXT INFORMATION
Interactional psychology has recognized that
the situation or context moderates the relation-
ship between dispositions or traits and behav-
ior (Frederiksen 1972, Hattrup & Jackson 1996,
Johns 2006, Mischel 1977). Situational strength
refers to the characteristics of situations that
do or do not restrict the expression of individ-
ual differences, particularly those in nonabil-
ity domains such as personality traits (Meyer
et al. 2010, Mullins & Cummings 1999, Weiss
& Adler 1984). Although situational strength
has been operationalized in ways that recog-
nize the importance of situational constraints
(LaFrance et al. 2003), it has not been opera-
tionalized in job-analytic terms until recently.
Meyer et al. (2009) constructed an O∗NET-
based measure of situational strength using 14
items from the GWAs and work context do-
mains. They distinguished between two aspects
of situational strength: constraints and conse-
quences. Although their meta-analysis of valid-
ity coefficients of personality measures showed
stronger validity coefficients for occupations
that were deemed weak from a situational
strength viewpoint, the differences were small.
Meyer et al. (2010) provided a more in-depth
analysis of the various occupational elements
that may contribute to situational strength
and which should be incorporated in future
studies.
One of the obstacles to the infusion of an
interactional view of context in job analysis
is that the traditional view of contextual
factors such as physical working conditions,
environmental hazards, and the machines,
tools, and equipment employed on the job has
typically considered them to be a “main effect”
type of job demand. That is, context, just like
work activities, has been considered a source
that calls for certain worker attributes, such as
harsh working conditions calling for physical
resilience. Drawing an analogy with signal
detection theory (Tett & Burnett 2003), job
analysis has traditionally viewed context as job
demands that, like work activities, determine
the “channels” or worker attributes required for
job performance. This view of context is not an
interactive one at all because it ignores that con-
text is an interactional variable that alters the
functional relationships between job demands
and behavior (Johns 2006). In the terminology
of signal detection theory, an interactional
approach suggests that context raises or lowers
the “volume” of certain channels; for example,
performing certain work activities may call
for increased levels of social sensitivity if
performed in a certain social context.
Tett & Burnett (2003) propose that con-
text provides trait-relevant cues through three
sources (organizational, social, and task) that
moderate the relationship between traits and
work behavior. They further speculate that job
demands, which are presumably derived from
the job responsibilities or work activities to be
carried out on the job, activate certain traits,
but that such activation interacts with context
or situation features that distract, constrain, re-
lease, or facilitate the expression of those traits
or worker attributes. For instance, agreeable-
ness may be activated by job demands involv-
ing helping customers, but it may be distracted
by groupthink conditions in one’s work unit and
constrained by a mechanistic atmosphere in the
organization (Tett & Burnett 2003). This ap-
proach goes beyond the more simplistic worker
activity × worker attribute matrix that has been
proposed elsewhere (Baranowski & Anderson
2005) because it suggests that such activity-
attribute relationships are altered by contextual
variables.
Further understanding of these contingen-
cies requires a departure from the manner in
which SMEs are usually approached in job anal-
ysis research. Indeed, SMEs are typically em-
ployed as “observers” of an allegedly external
reality dubbed the job, while their subjective
experience of such a reality has been largely ig-
nored in a manner that is consistent with the
rejection of subjectivity as a valid object of psy-
chological study that has prevailed in industrial
and organizational psychology (Weiss & Rupp
2011). A person-centric approach to the analy-
sis of work is needed to better understand how
the demands of work as job incumbents expe-
rience and interpret them are affected by con-
textual aspects that may augment or constrain
them. In other words, job analysis should delve
more deeply into the study of the psychology of
the workers’ experience and, more specifically,
into the contextual aspects that are perceived
to modify the extent to which job demands call
for certain responses. Qualitative job analysis
methods such as the critical incidents technique
(Flanagan 1954) may be used to identify these
types of contingencies by exploring the rela-
tionships between the three basic elements of a
critical incident: the situation, the behavior to
which it is perceived to have led, and the con-
sequences of such behavior.
Note that when we advocate delving deeper
into the manner in which incumbents experi-
ence their work, we are not promoting a purely
phenomenological approach to job analysis that
denies or ignores the objective reality in which
job incumbents are embedded; neither do we
advocate solipsism or the belief that reality
(work experiences in our case) is the creation
of one’s mind (Connell & Nord 1996). Instead,
we are simply arguing for the study of how in-
cumbents perceive and interpret the objective
reality of their work because such study does,
in our opinion, hold the key to a better under-
standing of work requirements.
Research from other domains may also pro-
vide useful conceptual models to frame these
contextual influences. For instance, work stress
research has noted that reactions to aspects of
the work environment are moderated by sec-
ondary appraisals, or the extent to which employees perceive themselves to have an adequate repertoire of coping responses (Lazarus & Folkman 1984).
In this respect, certain job demands accompa-
nied by contextual factors that are perceived
to make them insurmountable would exacer-
bate the need for certain worker attributes. This
approach probably requires new types of job-
analytic inquiries from SMEs, such as the ex-
tent to which they feel capable of coping with
certain job demands under varying sets of con-
textual conditions. Other examples of interac-
tional models that could be fruitfully borrowed
by job analysis researchers exist in the assess-
ment center literature, where trait activation
theories have been employed to explain behav-
ioral inconsistencies as a function of situational
cues (e.g., Lievens et al. 2006).
One more area that relates to context
concerns the research topic of person–work
environment fit. That stream of research
attempts to assess work environments and their
components such as teams, jobs, supervisors,
vocations, or organization culture on the one
hand and parallel personal attributes on the
other (Edwards et al. 2006, Kristof-Brown
et al. 2005). The extent of match or mismatch
is then related to presumptive outcomes
such as job satisfaction or job performance.
Although a modicum of success at predicting
these outcomes has been demonstrated using
measures of match (Kristof-Brown et al. 2005),
the methods employed to assess environments
and their components and the degree of fit fall
outside the realm of conventional job analysis
approaches. Indeed, often the most successful
predictions are found when respondents report
their perceptions of the degree to which they fit
with their environment, a method termed mo-
lar fit by Edwards et al. (2006). Although this
research stream on P-E fit is not considered part
of job analysis, it highlights the need to broaden
the notion of work context, which should also
incorporate multilevel variables such as shared
team cognition, shared climate, and other
team and organization-level variables. A better
understanding of these cross-level interactions
should illuminate mechanisms by which con-
textual cues modify the demands on workers
to employ types and levels of worker attributes
(Ployhart & Moliterno 2011).
Conventionally, job analysis assumes that
there will be a linear relationship between the
attributes and job outcomes—a more-is-better
notion. However, an emerging stream of research suggests that contextual factors interact with worker attributes such that the relationship between certain personality attributes, such as openness to experience, and work outcomes is nonlinear and contingent on contextual conditions such as support for creativity (Baer & Oldham 2006, Burke & Witt 2002, George & Zhou 2001, Shalley et al.
2004). Further research is needed on whether
these nonlinear relationships may apply to cog-
nitive attributes. For example, the widely used
Wonderlic Personnel Test provides min-max
normative test scores for a host of jobs such that
people scoring above and below the ideal range
are predicted to be less successful once hired
(e.g., Levine 1997). The potential determina-
tion of these types of nonlinear relationships de-
pends to a large extent on future improvements
in the measurement of work context at multiple
levels of analysis so that the contextual condi-
tions that act as moderators of worker attributes
can be reliably pinpointed. Clearly, more re-
search is needed to refine extant taxonomies of
work context influences. For instance, in spite
of the generally acceptable reliabilities reported
in pilot O∗NET studies, whether there is con-
ceptual and empirical overlap between the task,
physical, and social context variables adopted in
O∗NET deserves further examination (Strong
et al. 1999).
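One analytic route to the curvilinear, context-contingent relationships discussed above, sketched below in Python with entirely simulated variables, is a moderated polynomial regression: a reliable quadratic term or attribute-by-context interaction would contradict the more-is-better assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Simulated worker attribute (e.g., openness), contextual condition
# (e.g., support for creativity), and a criterion whose curvilinear
# dependence on the attribute varies with context.
openness = rng.normal(size=n)
support = rng.normal(size=n)
performance = (0.4 * openness - 0.2 * openness**2
               + 0.3 * support + 0.25 * openness * support
               + rng.normal(size=n))

# Design matrix: intercept, linear, quadratic, context, and interaction terms.
X = np.column_stack([np.ones(n), openness, openness**2,
                     support, openness * support])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)

for name, b in zip(["intercept", "openness", "openness^2", "support",
                    "openness x support"], beta):
    print(f"{name:>20s}: {b:+.2f}")
```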
Future research should also acknowledge
that work context is a dynamic phenomenon,
and therefore there are wide variations in work
context within the same job title. Dierdorff &
Morgeson (2007) and Dierdorff et al. (2009)
have reported research suggesting that work
context induces variations in the manner in
which incumbents of the same job experience
job demands, especially incumbents of manage-
rial and other loosely defined jobs. This con-
ception is consistent with interactional mod-
els of behavior such as the cognitive-affective
personality system model proposed by Mischel
& Shoda (1995, 1998), in which within-person
variability is explained by situation-response
contingencies such as “if this situation, then
that response.” Mischel & Shoda (1995, 1998)
summarized empirical evidence suggesting that
individual variability in behavior across situ-
ations can be explained by within-job varia-
tions in context, which are likely to trigger dif-
ferent job demands throughout the course of
discharging one’s job responsibilities. Future
research may incorporate experience-sampling
methodology, which is increasingly being
employed to study dynamic organizational
phenomena such as momentary performance
(Fisher & Noble 2004) and organizational citi-
zenship behavior (Ilies et al. 2006).
CONCLUSIONS AND
FUTURE TRENDS
Recent reviews of the job-analytic literature
have largely been organized around decisions
related to the procedure through which job in-
formation should be gathered, thereby empha-
sizing the various choices among the sources,
methods, and level of detail of the data to be
gathered (Pearlman & Sanchez 2010; Sackett
& Laczo 2003; Sanchez & Levine 1999, 2001).
The view of job analysis as an information-
gathering process whose sole purpose is to
serve as the antecedent of other applications
has possibly fed the notion of job analysis as
essentially nothing more than a set of methods.
As Pearlman & Sanchez (2010) put it, job anal-
ysis is “...seldom an end in itself but is almost
always a tool in service of some application, a
means to an end.” The notion of job analysis
as an information-gathering tool might have
unintentionally created the impression that its
sole purpose is to do the dirty work needed for
subsequent, truly scientific endeavors such as
selection. Several authors have expressed their
discontent with this prevailing perception of
job analysis within the discipline of industrial
and organizational psychology (Cunningham
1989, Mitchell & Driskill 1996, Morgeson &
Dierdorff 2011), which Harvey (1991) synthe-
sized as the “image problem” of job analysis.
The view of job analysis as a support or
subservient activity might have deterred inter-
est in cutting-edge research on the job analysis
domain. This view might be unintentionally
fueled by the stance that jobs consist of solely
objective or verifiable behaviors and working
conditions and that their analysis is therefore
a somewhat cut-and-dried actuarial task. This
emphasis on observables implicitly assumes
that jobs are epistemologically self-sustaining
objects, and it resembles the approach taken in
the physical and biological sciences, where an
object is studied externally through primarily
unobtrusive observation and measurement
(Cronshaw 1998). Primoff & Fine (1988)
perceptively noted that this objectified ap-
proach to job analysis is shortsighted, because
job analysts should not forget that unlike (to
use the words of Primoff & Fine) flowers
and rocks, jobs do not exist separately from
the individuals who perform them. In fact,
Primoff & Fine observed that the sole process
of analyzing the job often changes it, as
incumbents are led to reflect on their approach to fulfilling their job duties, and this reflection
frequently alters the manner in which the job is
performed afterward. We maintain that it is the
insight into work demands as experienced by
incumbents that turns job analysis into a truly
psychological endeavor (Sanchez & Levine
1999, p. 72) whose primary goal is precisely
to gain an understanding of the psychological
requirements of jobs. Fortunately, our review
of job analysis research suggests that the job
analysis domain has already turned that corner,
and accordingly, the scope of job analysis
research is being expanded toward a better
understanding of work demands as experienced
by job incumbents, both individually and
collectively through shared perceptions. As
such, our hope is that this review will be the
first of many to cover job and work analysis
in the Annual Review of Psychology over time,
thereby documenting meaningful advances that
may enable optimization of the outcomes pro-
duced and enjoyed by people in one of the most
critical domains of human activity—their work.
DISCLOSURE STATEMENT
The authors are unaware of any affiliation, funding, or financial holdings that might be perceived
as affecting the objectivity of this review.
LITERATURE CITED
Aamodt MG, Kimbrough WW, Keller RJ, Crawford KJ. 1982. Relationship between sex, race, and job
performance level and the generation of critical incidents. Educ. Psychol. Res. 2:227–34
Aguinis H, Kraiger K. 2009. Benefits of training and development for individuals and teams, organizations,
and society. Annu. Rev. Psychol. 60:451–74
Aguinis H, Mazurkiewicz MD, Heggestad ED. 2009. Using web-based frame-of-reference training to decrease
biases in personality-based job analysis: an experimental field study. Pers. Psychol. 62:405–38
Allen CR. 1919. The Instructor, the Man and the Job. Philadelphia, PA: Lippincott
Arvey RD, Davis GA, McGowen SL, Dipboye RL. 1982. Potential sources of bias on job analytic processes.
Acad. Manag. J. 25:618–29
Arvey RD, Passino EM, Lounsbury JW. 1977. Job analysis results as influenced by sex of incumbent and sex
of analyst. J. Appl. Psychol. 62:411–16
Arvey RD, Salas E, Gialluca KA. 1992. Using task inventories to forecast skills and abilities. Hum. Perform.
5:171–90
Baer M, Oldham GR. 2006. The curvilinear relation between experienced creative time pressure and creativ-
ity: moderating effects of openness to experience and support for creativity. J. Appl. Psychol. 91:963–70
Baranowski LE, Anderson LE. 2005. Examining rating source variation in work behavior to KSA linkages.
Pers. Psychol. 58:1041–54
Barrick MR, Mount MK. 1991. The Big Five personality dimensions and job performance: a meta-analysis.
Pers. Psychol. 44:1–26
Bartram D. 2005. The great eight competencies: a criterion-centric approach to validation. J. Appl. Psychol.
90:1185–203
Becker BE, Huselid MA, Ulrich D. 2001. The HR Scorecard: Linking People, Strategy, and Performance. Boston,
MA: Harvard Bus. School Press
Befort N, Hattrup K. 2003. Valuing task and contextual performance: experience, job roles, and ratings of the
importance of job behaviors. Appl. Hum. Resour. Manag. Res. 8(1):17–32
Biddle BJ. 1986. Recent developments in role theory. Annu. Rev. Sociol. 12:67–92
Borman WC, Dorsey D, Ackerman L. 1992. Time-spent responses as time allocation strategies: relations with
sales performance in a stockbroker sample. Pers. Psychol. 45:763–77
Borman WC, Hanson MA, Hedge JW. 1997. Personnel selection. Annu. Rev. Psychol. 48:299–337
Borman WC, Kubisiak UC, Schneider RJ. 1999. Work styles. In An Occupational Information System for the
21st Century: The Development of O∗NET, ed. NG Peterson, MD Mumford, WC Borman, PR Jeanneret,
EA Fleishman, pp. 213–26. Washington, DC: Am. Psychol. Assoc.
Borman WC, Motowidlo SJ. 1993. Expanding the criterion domain to include elements of contextual per-
formance. In Personnel Selection in Organizations, ed. N Schmitt, WC Borman, pp. 71–98. San Francisco,
CA: Jossey-Bass
Bowen DE, Ostroff C. 2004. Understanding HRM-firm performance linkages: the role of the “strength” of
the HRM system. Acad. Manag. Rev. 29:203–21
Brannick MT, Levine EL, Morgeson FP. 2007. Job Analysis: Methods, Research, and Applications for Human
Resource Management. Thousand Oaks, CA: Sage. 2nd ed.
Bretz RD, Milkovich GT, Read W. 1992. Current state of performance appraisal research and practice:
concerns, directions, and implications. J. Manag. 18:321–52
Burke LA, Witt LA. 2002. Moderators of the openness to experience-performance relationship. J. Manag.
Psychol. 17:712–21
Butler SK, Harvey RJ. 1988. A comparison of holistic versus decomposed rating of Position Analysis Ques-
tionnaire work dimensions. Pers. Psychol. 41:761–71
Campion MA, Fink AA, Ruggeberg BJ, Carr L, Phillips GM, Odman RB. 2011. Doing competencies well:
best practices in competency modeling. Pers. Psychol. 64:225–62
Carroll JM. 1997. Human-computer interaction. Psychology as a science of design. Annu. Rev. Psychol. 48:61–
83
Cascio WF, Aguinis H. 2008. Research in industrial and organizational psychology from 1963 to 2007.
J. Appl. Psychol. 93:1062–81
Chatman JA, Cha SE. 2003. Leading by leveraging culture. Calif. Manag. Rev. 45:20–34
Christal RE, Weissmuller JJ. 1988. Job-task inventory analysis. In The Job Analysis Handbook for Business,
Industry, and Government, Vol. II, ed. S Gael, pp. 1036–50. New York: Wiley
Conley PR, Sackett PR. 1987. Effects of using high- versus low-performing job incumbents as sources of
job-analysis information. J. Appl. Psychol. 72:434–37
Connell AF, Nord WR. 1996. The bloodless coup: the infiltration of organization science by uncertainty and
values. J. Appl. Behav. Sci. 32:407–27
Cornelius ET, DeNisi AS, Blencoe AG. 1984a. Expert and naïve raters using the PAQ: Does it matter? Pers. Psychol. 37:453–64
Cornelius ET, Lyness KS. 1980. A comparison of holistic and decomposed judgment strategies in job analysis
by job incumbents. J. Appl. Psychol. 65:155–63
Cornelius ET, Schmidt FL, Carron TJ. 1984b. Job classification approaches and the implementation of validity
generalization results. Pers. Psychol. 37:247–60
Cronshaw SF. 1998. Job analysis: changing nature of work. Can. Psychol. 39(1):5–13
Cunningham JW. 1964. Worker-oriented job variables: their factor structure and use in determining job requirements.
Unpublished doctoral dissertation, Purdue Univ., West Lafayette, IN
Cunningham JW. 1989. Discussion. In Applied measurement issues in job analysis, ed. RJ Harvey (Chair).
Symposium presented at annu. meet. Am. Psychol. Assoc., New Orleans, LA
Cunningham JW. 1996. Generic job descriptors: a likely direction in occupational analysis. Mil. Psychol.
8(3):247–62
Cunningham JW, Ballentine RD. 1982. The General Work Inventory. Raleigh, NC: Authors
Dierdorff EC, Morgeson FP. 2007. Consensus in work role requirements: the influence of discrete occupational
context on role expectations. J. Appl. Psychol. 92:1228–41
Dierdorff EC, Morgeson FP. 2009. Effects of descriptor specificity and observability on incumbent work
analysis ratings. Pers. Psychol. 62:601–28
Dierdorff EC, Rubin RS. 2007. Carelessness and discriminability in work role requirement judgments: influ-
ences of role ambiguity and cognitive complexity. Pers. Psychol. 60:597–625
Dierdorff EC, Rubin RS, Bachrach DG. 2010. Role expectations as antecedents of citizenship and the mod-
erating effect of work context. J. Manag. doi: 10.1177/0149206309359199. In press
Dierdorff EC, Rubin RS, Morgeson FP. 2009. The milieu of managerial work: an integrative framework
linking work context to role requirements. J. Appl. Psychol. 94:972–88
Dierdorff EC, Wilson MA. 2003. A meta-analysis of job analysis reliability. J. Appl. Psychol. 88:635–46
Donsbach J, Tsacoumis S, Sager C, Updegraff J. 2003. O∗NET Analyst Occupational Abilities Ratings: Procedures.
Raleigh, NC: Natl. Cent. O∗NET Dev. http://www.onetcenter.org/reports/AnalystProc.html
Drucker PF. 1987. Workers’ hands bound by tradition. Wall Street J. Aug. 2, p. 18
Edwards J, Cable D, Williamson I, Lambert L, Shipp A. 2006. The phenomenology of fit: linking the person
and environment to the subjective experience of person-environment fit. J. Appl. Psychol. 91:802–27
England P, Dunn D. 1988. Evaluating work and comparable worth. Annu. Rev. Sociol. 14:227–48
Equal Employ. Opportun. Commiss., Civil Serv. Commiss., Dep. Labor, Dep. Justice. 1978. Uniform Guide-
lines on Employee Selection Procedures. Fed. Regist. 43(166):38295–309
Fine SA, Cronshaw SF. 1999. Functional Job Analysis. A Foundation for Human Resource Management. Mahwah,
NJ: Erlbaum
Fisher CD, Noble CS. 2004. A within-person examination of correlates of performance and emotions while
working. Hum. Perform. 17:145–68
Flanagan JC. 1954. The critical incident technique. Psychol. Bull. 51:327–58
Fleishman EA, Costanza DP, Marshall-Mies J. 1999. Abilities. In An Occupational Information System for the
21st Century: The Development of O∗NET, ed. NG Peterson, MD Mumford, WC Borman, PR Jeanneret,
EA Fleishman. Washington, DC: Am. Psychol. Assoc.
Fleishman EA, Gebhardt DL, Hogan JC. 1986. The perception of physical effort in job tasks. In The Perception
of Exertion in Physical Work, ed. G Borg, D Ottoson, pp. 225–42. Stockholm, Sweden: Macmillan
Fleishman EA, Quaintance MK. 1984. Taxonomies of Human Performance. Orlando, FL: Academic
Fleishman EA, Reilly ME. 1992. Handbook of Human Abilities. Definitions, Measurements, and Job Task Require-
ments. Palo Alto, CA: Consult. Psychol. Press
Ford JK, Smith EM, Sego DJ, Quinones MA. 1991. Impact of task experience and individual factors on
training-emphasis ratings. J. Appl. Psychol. 78:583–90
Frederiksen N. 1972. Toward a taxonomy of situations. Am. Psychol. 27:114–23
Friedman L. 1990. Degree of redundancy between time, importance, and frequency task ratings. J. Appl.
Psychol. 75:748–52
Friedman L. 1991. Correction to Friedman 1990. J. Appl. Psychol. 76:366
Friedman L, Harvey RJ. 1986. Can raters with reduced job descriptive information provide accurate position
analysis questionnaire (PAQ) ratings? Pers. Psychol. 39:779–89
Gael S. 1983. Job Analysis: A Guide to Assessing Work Activities. San Francisco, CA: Jossey-Bass
Gael S, Cornelius ET III, Levine EL, Salvendy G, eds. 1988. The Job Analysis Handbook for Business, Industry, and Government. New York: Wiley
Gatewood RD, Field HS, Barrick M. 2008. Human Resource Selection. Mason, OH: Thomson/South-Western.
6th ed.
George JM, Zhou J. 2001. When openness to experience and conscientiousness are related to creative behavior:
an interactional approach. J. Appl. Psychol. 86:513–24
Gibson SG, Harvey RJ, Quintela Y. 2004. Holistic versus decomposed ratings of general dimensions of work activity.
Presented at Annu. Conf. Soc. Ind. Organ. Psychol., Chicago, IL
Goffin RD, Woycheshin DE. 2006. An empirical method of determining employee competencies/KSAOs
from task-based job analysis. Mil. Psychol. 18:121–30
Goldstein IL, Zedeck S, Schneider B. 1993. An exploration of the job analysis-content validity process. In
Personnel Selection in Organizations, ed. N Schmitt, WC Borman, pp. 3–34. San Francisco, CA: Jossey-Bass
Grant AM. 2007. Relational job design and the motivation to make a prosocial difference. Acad. Manag. Rev.
32:393–417
Green SB, Stutzman T. 1986. An evaluation of methods to select respondents to structured job-analysis
questionnaires. Pers. Psychol. 39:543–64
Green SB, Veres JG. 1990. Evaluation of an index to detect inaccurate respondents to a task analysis inventory.
J. Bus. Psychol. 5:47–61
Guion RM, Highhouse S. 2006. Essentials of Personnel Assessment and Selection. Mahwah, NJ: Erlbaum
Harvey RJ. 1991. Job analysis. In Handbook of Industrial and Organizational Psychology, ed. MD Dunnette,
LM Hough, vol. 2, pp. 71–163. Palo Alto, CA: Consult. Psychol. Press. 2nd ed.
Harvey RJ. 2009. The O∗NET: Do too-abstract titles + unverifiable holistic ratings + questionable raters + low agreement + inadequate sampling + aggregation bias = (a) validity, (b) reliability, (c) utility, or (d) none of the above? Paper provided to Panel to Rev. Occupat. Inform. Netw. (O∗NET). http://www7.nationalacademies.org/cfe/O_NET_RJHarvey_Paper1.pdf
Harvey RJ. 2011. Deriving Synthetic Validity Models: Is R = 0.80 Large Enough? Presented at Annu. Conf. Soc.
Ind. Organ. Psychol., Chicago, IL
Harvey RJ, Lozada-Larsen SR. 1988. Influence of amount of job descriptive information on job analysis rating
accuracy. J. Appl. Psychol. 73:457–61
Harvey RJ, Wilson MA. 2000. Yes Virginia, there is an objective reality in job analysis. J. Organ. Behav.
21:829–54
Harvey RJ, Wilson MA. 2010. Discriminant validity concerns with the O∗NET holistic rating scales.Presentedat
Annu. Conv. Soc. Ind. Org. Psychol., Atlanta
Harvey RJ, Wilson MA, Blunt JH. 1994. A comparison of rational/holistic versus empirical/decomposed methods of
identifying and rating general work behaviors. Presented at Ann. Conf. Soc. Ind. Org. Psychol., Nashville
Hattrup K, Jackson SE. 1996. Learning about individual differences by taking situations seriously. In Individual
Differences and Behavior in Organizations, ed. KR Murphy, pp. 507–47. San Francisco, CA: Jossey-Bass
Hazel JT, Madden JM, Christal EE. 1964. Agreement between worker-supervisor descriptions of the worker’s
job. J. Ind. Psychol. 2:71–79
Heneman HG, Judge TA. 2009. Staffing Organizations. Boston, MA: Irwin/McGraw Hill. 6th ed.
Hogarth RM. 1981. Beyond discrete biases: functional and dysfunctional aspects of judgmental heuristics.
Psychol. Bull. 90:197–217
Hough LM, Oswald FL. 2000. Personnel selection: looking toward the future—remembering the past. Annu.
Rev. Psychol. 51:631–64
Hubbard M, McCloy R, Campbell J, Nottingham J, Lewis P, et al. 2000. Revision of O∗NET Data
Collection Procedures. Raleigh, NC: Natl. Cent. O∗NET Dev. http://www.onetcenter.org/reports/
Data_appnd.html
Hough LM. 1992. The “Big Five” personality variables—construct confusion: description versus prediction.
Hum. Perform. 5:139–55
Ilgen DR, Hollenbeck JR. 1991. The structure of work: job design and roles. In Handbook of Industrial and
Organizational Psychology, ed. MD Dunnette, LM Hough, vol. 2, pp. 165–207. Palo Alto, CA: Consult.
Psychol. Press. 2nd ed.
Ilies R, Scott BA, Judge TA. 2006. A multilevel analysis of the effects of positive personal traits, positive
experienced states and their interactions on intraindividual patterns of citizenship behavior at work. Acad.
Manag. J. 49:561–75
Jeanneret PR, Borman WC, Kubisiak UC, Hanson MA. 1999. Generalized work activities. In An Occupational
Information System for the 21st Century: The Development of O∗NET, ed. NG Peterson, MD Mumford, WC
Borman, PR Jeanneret, EA Fleishman, pp. 105–25. Washington, DC: Am. Psychol. Assoc.
Jeanneret PR, Strong MH. 2003. Linking O∗NET job analysis information to job requirement predictors: an
O∗NET application. Pers. Psychol. 56:465–92
Johns G. 2006. The essential impact of context on organizational behavior. Acad. Manag. Rev. 31:386–408
Jones RG, Sanchez JI, Parameswaran G, Phelps J, Shoptaugh C, et al. 2001. Selection or training? A two-fold
test of the validity of job-analytic ratings of trainability. J. Bus. Psychol. 15:363–89
Kristof-Brown A, Zimmerman R, Johnson E. 2005. Consequences of individuals’ fit at work: a meta-analysis
of person-job, person-organization, person-group, and person-supervisor fit. Pers. Psychol. 58:281–342
Kruglanski AW. 1989. The psychology of being “right”: the problem of accuracy in social perception and
cognition. Psychol. Bull. 106:395–409
LaFrance M, Hecht MA, Paluck EL. 2003. The contingent smile: a meta-analysis of sex differences in smiling.
Psychol. Bull. 129:305–34
Lamiell JT. 2000. A periodic table of personality elements? The “Big Five” and trait “psychology” in critical
perspective. J. Theor. Philos. Psychol. 20:1–24
Landy FJ. 1988. Selection procedure development and usage. In The Job Analysis Handbook for Business, Industry,
and Government, vol. I, ed. S Gael, pp. 271–87. New York: Wiley
Landy FJ, Shankster LJ, Kohler SS. 1994. Personnel selection and placement. Annu. Rev. Psychol. 45:261–92
Landy FJ, Vasey J. 1991. Job analysis: the composition of SME samples. Pers. Psychol. 44:27–50
LaPolice CC, Carter GW, Johnson JJ. 2008. Linking O∗NET descriptors to occupational literacy require-
ments using job component validation. Pers. Psychol. 61:405–41
Lazarus RS, Folkman S. 1984. Stress, Appraisal, and Coping. New York: Springer
Le H, Oh I, Shaffer J, Schmidt F. 2007. Implications of methodological advances for the practice of personnel
selection: how practitioners benefit from meta-analysis. Acad. Manag. Perspect. 21:6–15
Levine EL. 1983. Everything You Always Wanted to Know About Job Analysis. Tampa, FL: Mariner
Levine EL. 1997. Review of the Wonderlic Personnel Test (WPT). Secur. J. 8:179–81
Levine EL, Ash RA, Bennett N. 1980. Exploratory comparative study of four job analysis methods. J. Appl.
Psychol. 65:524–35
Levine EL, Sanchez JI. 2007. Evaluating work analysis in the 21st century. Ergometrika 4:1–11
Lievens F, Chasteen CS, Day EA, Christiansen ND. 2006. Large-scale investigation of the role of trait
activation theory for understanding assessment center convergent and discriminant validity. J. Appl.
Psychol. 91:247–58
Lievens F, Sanchez JI. 2007. Can training improve the quality of inferences made by raters in competency
modeling? A quasi-experiment. J. Appl. Psychol. 92:812–19
Lievens F, Sanchez JI, Bartram D, Brown A. 2010. Lack of consensus among competency ratings of the same
occupation: noise or substance? J. Appl. Psychol. 95:562–71
Lievens F, Sanchez JI, De Corte W. 2004. Easing the inferential leap in competency modeling: the effects of
task-related information and subject matter expertise. Pers. Psychol. 57:881–904
Lindell MK, Clause CS, Brandt CJ, Landis RS. 1998. Relationship between organizational context and job
analysis task ratings. J. Appl. Psychol. 83:769–76
Lucia A, Lepsinger R. 1999. The Art and Science of Competency Models: Pinpointing Critical Success Factors in
Organizations. San Francisco, CA: Jossey-Bass
Lyons P. 2008. The crafting of jobs and individual differences. J. Bus. Psychol. 23:25–36
Manson TM. 2004. Cursory Versus Comprehensive Job Analysis for Personnel Selection: A Consequential Validity
Analysis. Unpublished doctoral dissertation, Univ. S. Florida, Tampa
Manson TM, Levine EL, Brannick MT. 2000. The construct validity of task inventory ratings: a multitrait-
multimethod analysis. Hum. Perform. 13:1–22
Mathieu J, Maynard MT, Rapp T, Gilson L. 2008. Team effectiveness 1997–2007: a review of recent advance-
ments and a glimpse into the future. J. Manag. 34:410–76
McCormick EJ. 1960. Effect of amount of job information required on reliability of incumbents’ check-list
reports. USAF Wright Air Dev. Div. Tech. Note 60–142
McCormick EJ. 1976. Job and task analysis. In Handbook of Industrial and Organizational Psychology, ed.
MD Dunnette, pp. 651–96. Chicago, IL: Rand McNally
McCormick EJ, Jeanneret PR, Mecham RC. 1972. A study of job characteristics and job dimensions as based
on the position analysis questionnaire (PAQ). J. Appl. Psychol. 56:347–68
Meyer HH. 1959. A comparison of foreman and general foreman conceptions of the foreman’s job responsi-
bilities. Pers. Psychol. 12:445–52
Meyer RD, Dalal RS, Bonaccio S. 2009. A meta-analytic investigation into the moderating effects of situational
strength on the conscientiousness-performance relationship. J. Organ. Behav. 30:1077–102
Meyer RD, Dalal RS, Hermida R. 2010. A review and synthesis of situational strength in the organizational
sciences. J. Manag. 36:121–40
Mischel W. 1977. The interaction of person and situation. In Personality at the Crossroads: Current Issues in
Interactional Psychology, ed. D Magnusson, NS Endler, pp. 333–52. Hillsdale, NJ: Erlbaum
Mischel W, Shoda Y. 1995. A cognitive–affective system theory of personality: reconceptualizing situations,
dispositions, dynamics, and invariance in personality structure. Psychol. Rev. 102:246–68
Mischel W, Shoda Y. 1998. Reconciling processing dynamics and personality dispositions. Annu. Rev. Psychol.
49:229–58
Mitchell JL, Driskill WE. 1996. Military job analysis: a historical perspective. Mil. Psychol. 8:119–42
Morgeson FP, Campion MA. 1997. Social and cognitive sources of potential inaccuracy in job analysis.
J. Appl. Psychol. 82:627–55
Morgeson FP, Delaney-Klinger K, Mayfield MS, Ferrara P, Campion MA. 2004. Self-presentation processes
in job analysis: a field experiment investigating inflation in abilities, tasks, and competencies. J. Appl.
Psychol. 89:674–86
Morgeson FP, Dierdorff EC. 2011. Work analysis: from technique to theory. In APA Handbook of Industrial
and Organizational Psychology, ed. S Zedeck, vol. 2, pp. 3–41. Washington, DC: Am. Psychol. Assoc.
Morrison EW. 1994. Role definitions and organizational citizenship behavior: the importance of the em-
ployee’s perspective. Acad. Manag. J. 37:1543–67
Mullins JM, Cummings LL. 1999. Situational strength: a framework for understanding the role of individuals
in initiating proactive strategic change. J. Organ. Change Manag. 12:462–79
Münsterberg H. 1913. Psychology and Industrial Efficiency. Boston, MA: Houghton Mifflin
Natl. Res. Counc. 2010. A Database for a Changing Economy: Review of the Occupational Information Network (O∗NET). Panel to Review the Occupational Information Network (O∗NET), ed. NT Tippins, ML Hilton. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: Natl. Acad. Press
Occup. Inform. Advis. Panel. 2009. Content Model and Classification Recommendations for the So-
cial Security Administration Occupational Information System. http://www.ssa.gov/oidap/Documents/
FinalReportRecommendations.pdf
Olian JD, Rynes SL. 1991. Making total quality work: aligning organizational processes, performance mea-
sures, and stakeholders. Hum. Resour. Manag. 30:303–33
O’Reilly C, Chatman J. 1996. Cultures as social control: corporations, cults, and commitment. In Research in
Organizational Behavior, vol. 18, ed. L Cummings, B Staw, pp. 157–200. Greenwich, CT: JAI
Otis J. 2009. Bits and Pieces of My Life. Bowling Green, OH: Soc. Ind. Organ. Psychol. http://www.
siop.org/presidents/otis.aspx
Parker SK. 2007. “That is my job”: How employees’ role orientation affects their job performance. Hum.
Relat. 60:403–34
Parsons F. 1909. Choosing a Vocation. Boston, MA: Houghton Mifflin
Pearlman K, Sanchez JI. 2010. Work analysis. In Handbook of Employee Selection, ed. JL Farr, NT Tippins,
pp. 73–98. New York: Routledge
Pearlman K, Schmidt FL, Hunter JE. 1980. Validity generalization results for tests used to predict job profi-
ciency and training success in clerical occupations. J. Appl. Psychol. 65:373–406
Peterson NG, Mumford MD, Borman WC, Jeanneret PR, Fleishman EA. 1999. An Occupational Information
System for the 21st Century: The Development of O∗NET. Washington, DC: Am. Psychol. Assoc.
Pine DE. 1995. Assessing the validity of job ratings: an empirical study of false reporting in task inventories.
Public Pers. Manag. 24:451–59
Ployhart RE, Moliterno TP. 2011. Emergence of the human capital resource: a multilevel model. Acad. Manag.
Rev. 36:127–50
Ployhart RE, Schneider B, Schmitt N. 2006. Staffing Organizations: Contemporary Theory and Practice. Mahwah,
NJ: Erlbaum
Prien KO, Prien EP, Wooten W. 2003. Interrater reliability in job analysis: differences in strategy and per-
spective. Public Pers. Manag. 32:125–41
Primoff ES. 1975. How to prepare and conduct job-element examinations. U.S. Civil Serv. Commiss. Tech.
Study 75-1. Washington, DC: U.S. Gov. Printing Off.
Primoff ES, Fine SA. 1988. A history of job analysis. In The Job Analysis Handbook for Business, Industry, and
Government, vol. I, ed. S Gael, pp. 14–29. New York: Wiley
Proctor RW, Vu KL. 2010. Cumulative knowledge and progress in human factors. Annu. Rev. Psychol. 61:623–
51
Raymark PH, Schmit MJ, Guion RM. 1997. Identifying potentially useful personality constructs for employee
selection. Pers. Psychol. 50:723–36
Roberts LM, Dutton JE, Spreitzer GM, Heaphy ED, Quinn RE. 2005. Composing the reflected best-self
portrait: building pathways for becoming extraordinary in work organizations. Acad. Manag. Rev. 30:712–
36
Rosso BD, Dekas KH, Wrzesniewski A. 2010. On the meaning of work: a theoretical integration and review.
Res. Organ. Behav. 30:91–127
Sackett PR, Cornelius ET, Carron ET. 1981. A comparison of global judgment versus task-oriented approaches
to job classification. Pers. Psychol. 34:791–804
Sackett PR, Laczo RM. 2003. Job and work analysis. In Comprehensive Handbook of Psychology: Industrial and
Organizational Psychology, vol. 12, ed. WC Borman, DR Ilgen, RJ Klimoski, pp. 21–37. New York: Wiley
Sackett PR, Lievens F. 2008. Personnel selection. Annu. Rev. Psychol. 59:419–50
Salgado JF. 1997. The five-factor model of personality and job performance in the European Community.
J. Appl. Psychol. 82:30–43
Salgado JF, Anderson NR, Hülsheger UR. 2010. Employee selection in Europe: psychotechnics and the
forgotten history of modern scientific employee selection. In Handbook of Employee Selection, ed. JL Farr,
NT Tippins, pp. 921–42. New York: Routledge
Sanchez JI. 1990. The effects of job experience on judgments of task importance. Presented at Annu. Conf. Soc. Ind.
Organ. Psychol., Miami, FL
Sanchez JI. 1994. From documentation to innovation: reshaping job analysis to meet emerging business needs.
Hum. Resour. Manag. Rev. 4:51–74
Sanchez JI. 2000. Adapting work analysis to a fast-paced and electronic business world. Int. J. Sel. Assess.
8:204–12
Sanchez JI, Autor DH. 2010. Dissent. In A Database for a Changing Economy: Review of the Occupational
Information Network (O∗NET), ed. NT Tippins, ML Hilton, pp. 195–97. Washington, DC: Natl. Acad.
Press
Sanchez JI, Fraser SL. 1992. On the choice of scales for task analysis. J. Appl. Psychol. 77:545–53
Sanchez JI, Fraser SL. 1994. An empirical procedure to identify job duty-skill linkages in managerial jobs: a
case example. J. Bus. Psychol. 8:309–26
Sanchez JI, Levine EL. 1989. Determining important tasks within jobs: a policy-capturing approach. J. Appl.
Psychol. 74:336–42
Sanchez JI, Levine EL. 1994. The impact of raters’ cognition on judgment accuracy: an extension to the job
analysis domain. J. Bus. Psychol. 9:47–58
Sanchez JI, Levine EL. 1999. Is job analysis dead, misunderstood, or both? New forms of work analysis and
design. In Evolving Practices in Human Resource Management, ed. A Kraut, A Korman, pp. 43–68. San
Francisco, CA: Jossey-Bass
Sanchez JI, Levine EL. 2000. Accuracy or consequential validity: Which is the better standard for job analysis data? J. Organ. Behav. 21:809–18
Sanchez JI, Levine EL. 2001. The analysis of work in the 20th and 21st centuries. In Handbook of Industrial, Work and Organizational Psychology, ed. N Anderson, DS Ones, HK Sinangil, C Viswesvaran, vol. 1, pp. 71–89. Thousand Oaks, CA: Sage
Sanchez JI, Levine EL. 2009. What is (or should be) the difference between competency modeling and traditional job analysis? Hum. Resour. Manag. Rev. 19:53–63
Sanchez JI, Prager I, Wilson A, Viswesvaran C. 1998. Understanding within-job title variance in job-analytic ratings. J. Bus. Psychol. 12:407–20
Schippmann JS. 1999. Strategic Job Modeling: Working at the Core of Integrated Human Resources. Mahwah, NJ:
Erlbaum
Schippmann JS, Ash RA, Battista M, Carr L, Eyde LD, et al. 2000. The practice of competency modeling.
Pers. Psychol. 53:703–40
Schmidt FL, Hunter JE, Pearlman K. 1981. Task differences as moderators of aptitude test validity in selection:
a red herring. J. Appl. Psychol. 66:166–85
Schmitt N, Cohen SA. 1989. Internal analyses of task ratings by job incumbents. J. Appl. Psychol. 74:96–104
Schneider B, Konz AM. 1989. Strategic job analysis. Hum. Resour. Manag. 28:51–63
Schraagen JM, Chipman SF, Shalin VL, eds. 2000. Cognitive Task Analysis. Mahwah, NJ: Erlbaum
Shalley CE, Zhou J, Oldham GR. 2004. The effects of personal and contextual characteristics on creativity:
Where should we go from here? J. Manag. 30:933–58
Shartle C. 1959. Occupational Information: Its Development and Application. Englewood Cliffs, NJ: Prentice-Hall
Siddique CM. 2004. Job analysis: a strategic human resource management practice. Int. J. Hum. Resour. Manag.
15:219–44
Silverman SB, Wexley KN, Johnson JC. 1984. The effects of age and job experience on employee responses
to a structured job analysis questionnaire. Public Pers. Manag. 13:355–59
Singh P. 2008. Job analysis for a changing workplace. Hum. Resour. Manag. Rev. 18:87–99
Smith J, Hakel MD. 1979. Convergence among data sources, response bias, and reliability and validity of a
structured job analysis questionnaire. Pers. Psychol. 32:677–92
Spencer LM, McClelland DC, Spencer S. 1994. Competency Assessment Methods: History and State of the Art.
Boston, MA: Hay-McBer
Stern W. 1911. Die Differentielle Psychologie in ihren methodischen Grundlagen. Leipzig: Barth
Stern W. 1934. Otto Lipmann: 1880–1933. Am. J. Psychol. 46:152–54
Strong MH, Jeanneret PR, McPhail SM, Blakley BR, D’Egidio EL. 1999. Work context: taxonomy and
measurement of the work environment. In An Occupational Information System for the 21st Century: The
Development of O∗NET, ed. NG Peterson, MD Mumford, WC Borman, PR Jeanneret, EA Fleishman,
pp. 127–45. Washington, DC: Am. Psychol. Assoc.
Stutzman TM. 1983. Within classification job differences. Pers. Psychol. 36:503–16
Taylor EK, Nevis EC. 1961. Personnel selection. Annu. Rev. Psychol. 12:389–412
Tesluk PE, Jacobs RR. 1998. Toward an integrated model of work experience. Pers. Psychol. 51:321–55
Tett RP, Burnett DD. 2003. A personality trait-based interactionist model of job performance. J. Appl. Psychol.
88:500–17
Tett RP, Guterman HA, Bleier A, Murphy PJ. 2000. Development and content validation of a “hyperdimen-
sional” taxonomy of managerial competence. Hum. Perform. 13:205–51
Tett RP, Jackson DN, Rothstein M. 1991. Personality measures as predictors of job performance: a meta-
analytic review. Pers. Psychol. 44:703–42
Tross SA, Maurer TJ. 2000. The relationship between SME job experience and job analysis ratings: findings
with and without statistical control. J. Bus. Psychol. 15:97–110
Tsacoumis S. 2009a. O∗NET analyst ratings. Presented to NRC Panel to Rev. Occup. Inform. Netw.
http://www7.nationalacademies.org/cfe/O_NET_Suzanne_Tsacoumis_Presentation.pdf
Tsacoumis S. 2009b. Responses to Harvey’s criticisms of HumRRO’s analysis of the O∗NET analysts’ ratings.
Paper provided to Panel to Rev. Occup. Inform. Netw (O∗NET). http://www7.nationalacademies.org/
cfe/Response%20to%2oRJ%20Harvey%20Criticism.pdf
Tsacoumis S, Van Iddekinge C. 2006. A Comparison of Incumbent and Analyst Ratings of O∗NET Skills.
http://www.onetcenter.org/reports/SkillsComp.html
Tversky A, Kahneman D. 1974. Judgment under uncertainty: heuristics and biases. Science 185:1124–31
Tziner A, Joanis C, Murphy KR. 2000. A comparison of three methods of performance appraisal with regard
to goal properties, goal perception, and ratee satisfaction. Group Organ. Manag. 25:175–90
U.S. Bur. Census Stat. 1918. Monthly review of the U.S. Bureau Census Statistics. 6(4):131–33
U.S. Dep. Justice. 1991. The Americans with Disabilities Act. Questions and Answers. Washington, DC: U.S. Dep.
Justice, Civil Rights Div.
U.S. Dep. Labor. 1965a. Dictionary of Occupational Titles, Volume 1. Washington, DC: U.S. Gov. Print. Off.
3rd ed.
U.S. Dep. Labor. 1965b. Dictionary of Occupational Titles, Volume 2. Washington, DC: U.S. Gov. Print. Off.
3rd ed.
Van Iddekinge CH, Putka DJ, Raymark PH, Eidson CE. 2005. Modeling error variance in job specification
ratings: the influence of rater, job, and organization-level factors. J. Appl. Psychol. 90:323–34
Van Iddekinge CH, Raymark PH, Eidson CE. 2011. An examination of the validity and incremental value of
needed-at-entry ratings for a customer service job. Appl. Psychol. Int. Rev. 60:24–45
Viteles MS. 1923. Psychology in business—in England, France, and Germany. Ann. Am. Acad. Pol. Soc. Sci.
110:207–20
Voskuijl OF, van Sliedregt T. 2002. Determinants of interrater reliability of job analysis: a meta-analysis.
Eur. J. Psychol. Assess. 18:52–62
Walmsley P, Natali M, Campbell JP. 2011. Only incumbent ratings in O∗NET? Yes! Oh no! Presented at Annu.
Conf. Soc. Ind. Organ. Psychol., Chicago, IL
Weiss HM, Adler S. 1984. Personality and organizational behavior. Res. Organ. Behav. 6:1–50
Weiss HM, Rupp DE. 2011. Experiencing work: an essay on a person-centric work psychology. Ind. Organ.
Psychol. Perspect. Sci. Pract. 4:83–97
Werbel JD, DeMarie SM. 2005. Aligning strategic human resource management and person–environment
fit. Hum. Resour. Manag. Rev. 15:247–62
Wexley KN, Silverman SB. 1978. An examination of differences between managerial effectiveness and response
patterns on a structured job analysis questionnaire. J. Appl. Psychol. 63:646–49
Willis G. 2005. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Newbury Park, CA: Sage
Wilson MA. 1997. The validity of task coverage ratings by incumbents and supervisors. J. Bus. Psychol. 12:85–95
Wilson MA. 2007. A history of job analysis. In Historical Perspectives in Industrial and Organizational Psychology,
ed. L Koppes, pp. 219–41. Mahwah, NJ: Erlbaum
Wilson MA, Harvey RJ, Macy BA. 1990. Repeating items to estimate the test-retest reliability of task inventory
ratings. J. Appl. Psychol. 75:158–63
Wrzesniewski A, Dutton JE. 2001. Crafting a job: revisioning employees as active crafters of their work. Acad.
Manag. Rev. 26:179–201