FOCAL ARTICLE
Stubborn Reliance on Intuition and
Subjectivity in Employee Selection
SCOTT HIGHHOUSE
Bowling Green State University
Abstract
The focus of this article is on implicit beliefs that inhibit adoption of selection decision aids (e.g., paper-and-pencil
tests, structured interviews, mechanical combination of predictors). Understanding these beliefs is just as impor-
tant as understanding organizational constraints to the adoption of selection technologies and may be more useful
for informing the design of successful interventions. One of these is the implicit belief that it is theoretically
possible to achieve near-perfect precision in predicting performance on the job. That is, people have an inherent
resistance to analytical approaches to selection because they fail to view selection as probabilistic and subject to
error. Another is the implicit belief that prediction of human behavior is improved through experience. This myth
of expertise results in an overreliance on intuition and a reluctance to undermine one’s own credibility by using
a selection decision aid.
Perhaps the greatest technological achieve-
ment in industrial and organizational (I–O)
psychology over the past 100 years is the
development of decision aids (e.g., paper-
and-pencil tests, structured interviews,
mechanical combination of predictors) that
substantially reduce error in the predic-
tion of employee performance (Schmidt &
Hunter, 1998). Arguably, the greatest failure
of I–O psychology has been the inability to
convince employers to use them. A little over
10 years ago, Terpstra (1996) surveyed 201
human resources (HR) executives about the
perceived effectiveness of various selection
methods. As the left side of Figure 1 shows,
they considered the traditional unstructured
interview more effective than any of the
paper-and-pencil assessment procedures.
Inspection of actual effectiveness of these
procedures, however, shows that paper-
and-pencil tests commonly outperform
unstructured interviews. For example, the
right side of Figure 1 shows the results of
a meta-analysis conducted on the actual
effectiveness of these same procedures for
predicting performance in sales (Vinchur,
Schippmann, Switzer, & Roth, 1998). Use of
any one of the paper-and-pencil tests alone
outperforms the unstructured interview—a
procedure that is presumed to assess ability,
personality, and aptitude concurrently.
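To make concrete what "mechanical combination of predictors" means in practice, the sketch below standardizes several predictor scores, combines them with fixed, pre-specified weights, and ranks candidates on the composite. It is a minimal illustration only; the predictor names, scores, and weights are hypothetical, not taken from any study cited here.

```python
import statistics

# Hypothetical applicant scores on three predictors (illustrative values only).
applicants = {
    "A": {"gma": 28, "conscientiousness": 41, "work_sample": 73},
    "B": {"gma": 35, "conscientiousness": 30, "work_sample": 80},
    "C": {"gma": 31, "conscientiousness": 38, "work_sample": 65},
}

# Fixed, pre-specified weights -- the "mechanical" part. Unit weights are
# a common, robust default in this literature; the judge's impressions of
# any particular candidate play no role.
weights = {"gma": 1.0, "conscientiousness": 1.0, "work_sample": 1.0}

def z_scores(values):
    """Standardize raw scores so predictors on different scales are comparable."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

names = list(applicants)
standardized = {p: z_scores([applicants[n][p] for n in names]) for p in weights}

# Composite = weighted sum of standardized predictor scores.
composites = {
    n: sum(weights[p] * standardized[p][i] for p in weights)
    for i, n in enumerate(names)
}

# Rank candidates by the composite.
for n, score in sorted(composites.items(), key=lambda kv: -kv[1]):
    print(f"Applicant {n}: composite z = {score:+.2f}")
```

The point of the exercise is that the same inputs always produce the same ranking; nothing in the procedure shifts with a decision maker's impression of the candidate.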
Although one might argue that these data
merely reflect a lack of knowledge about
effective practice, there is considerable evi-
dence that employers simply do not believe
that the research is relevant to their own sit-
uation (Colbert, Rynes, & Brown, 2005;
Johns, 1993; Muchinsky, 2004; Terpstra &
Rozelle, 1997; Whyte & Latham, 1997).
For example, Rynes, Colbert, and Brown
(2002) found that HR professionals were
well aware of the limitations of the unstruc-
tured interview. Similarly, one of my stu-
dents conducted a yet-unpublished survey
of HR professionals (n = 206) about their
views of selection practice. His data indi-
cated that the HR professionals agreed, by
a factor of more than 3 to 1, that using tests
was an effective way to evaluate a candi-
date’s suitability and that tests that assess
specific traits are effective for hiring em-
ployees. At the same time, however, these
same professionals agreed, by more than 3
to 1, that you can learn more from an infor-
mal discussion with job candidates and that
you can "read between the lines" to detect
whether someone is suitable to hire. This
apparent conflict between knowledge and
belief seems loosely analogous to the com-
mon practice of preferring brand name cold
remedies to store brand remedies containing
the same ingredients. People know that the
store brands are identical, but they do not
trust them for their own colds.
Some might argue that the tide is turning.
Much has been written on the merits of evi-
dence-based management (Pfeffer & Sutton,
2006; Rousseau, 2006). This approach,
much like evidence-based medicine, relies
on the best available scientific evidence to
make decisions. At the core of this move-
ment is "analytics," or data-based decision
making (e.g., Ayres, 2007). Discussions of
number crunching in the arena of personnel
selection, however, are almost always lim-
ited to anecdotes from professional sports
(e.g., Davenport, 2006). Competing with
the analytical point of view are books like
Malcolm Gladwell’s (2005) Blink: The
Power of Thinking Without Thinking and
Gerd Gigerenzer’s (2007) Gut Feelings:
The Intelligence of the Unconscious, which
extol the virtues of intuitive decision mak-
ing. Although the assertions of these authors
have little relevance for the prediction of
human performance, the popularity of their
work likely reinforces the common belief
that good hiring is a matter of experience
and intuition.
Implicit Beliefs
My colleagues and I (Lievens, Highhouse, &
De Corte, 2005) conducted a policy-capturing
study of the decision processes of retail man-
agers making hypothetical hiring decisions.
We found that the managers placed more
emphasis on competencies assessed by
unstructured interviews than on competen-
cies measured by tests, regardless of what
those competencies were. They placed more
emphasis, for instance, on Extraversion than
on general mental ability when Extraversion
was assessed using an unstructured inter-
view (and general mental ability was as-
sessed using a paper-and-pencil test). The
opposite was found when Extraversion was
assessed using a paper-and-pencil test and
general mental ability was assessed using
an unstructured interview! Clearly, these managers believed that good old-fashioned "horse sense" was needed to accurately size up applicants (see Phelan & Smith, 1958).

[Figure 1. Perceived versus actual usefulness of various predictors. Left panel: perceived effectiveness of a GMA test, personality test, specific aptitude test, and unstructured interview, on a 1–5 scale (1 = not good; 3 = average; 5 = extremely good). Right panel: actual effectiveness of the same predictors for sales performance, as correlations corrected for unreliability in the criterion and range restriction. Because Vinchur, Schippmann, Switzer, and Roth (1998) did not include interviews, the interview estimate is from the Huffcutt and Arthur (1994) level 1 interview. GMA = general mental ability; personality = potency; specific aptitude = sales ability.]
The reluctance of employers to use ana-
lytical selection procedures is at least
partially a reflection of broader misconcep-
tions that the general public has about how
to go about assessing and selecting people
for jobs. Consider two high-profile policy
opinions on testing and selection in the
United States.
- In 1990, the National Commission on Testing and Public Policy (1990) issued eight recommendations for testing in schools and the workplace. Among those was the following statement: "Test scores are imperfect measures and should not be used alone to make important decisions about individuals" (National Commission on Testing and Public Policy, 1990, p. 30). The commission's chairman, Bernard Gifford of Apple Computer, commented, "We just believe that under no circumstances should individuals be denied a job or college admission exclusively based on test scores" ("Panel Criticizes Standard Testing," 1990).
- In the landmark Supreme Court decision on affirmative action at the University of Michigan, Justice Rehnquist concluded that consideration of race as a factor in student admission is acceptable, but that it must be done at the individual level, with each applicant considered holistically. In concurrence, Justice O'Connor commented, "But the current [student selection] system, as I understand it, is a nonindividualized, mechanical one. As a result, I join the Court's opinion . . ." (Gratz v. Bollinger, 2003, Concurrence 1).
Although these positions sound reasonable
on the surface, they represent fundamen-
tally flawed assumptions. No one disputes
that test scores are imperfect measures,
but the testing commission implies that
combining them with something else will
correct the imperfections (rather than exac-
erbate them). The court’s majority opinion
in Gratz suggests that individualized meth-
ods of selection are more fair and reliable
than impersonal "mechanical" ones. Both
of these examples illustrate two implicit
beliefs about employee selection: (1) people
believe that it is possible to achieve near-
perfect precision in the prediction of
employee success, and (2) people believe
that there is such a thing as intuitive expertise
in the prediction of human behavior. These
implicit beliefs exert their influence on pol-
icy and practice, even though they may not
be immediately accessible (Kahneman,
2003). I acknowledge that there are a num-
ber of contextual reasons for resistance to
selection technologies, including organiza-
tional politics, habit, and culture, along with
the existing legal climate (e.g., Johns, 1993;
Muchinsky, 2004). However, whereas con-
textual issues are often situation specific,
these are universal "truths" about people.
As such, understanding and studying them
provides hope for overcoming user resis-
tance to selection decision aids.
Irreducible Unpredictability
I recently came across an article in a popular
trade magazine for executives, purportedly
summarizing the state of the science on
executive assessment (Sindelar, 2002). I
was struck by a statement made by the
author: "For many top-level positions, technical competence accounts for only 20 percent of a successful alignment. Psychological factors account for the rest" (pp. 13–14). (The author identified his affiliation as the "Institute for Advanced Business Psychology.") Whether intentional or not, the author was clearly implying what is shown in the top panel of Figure 2: that 80% of the variance in executive success can be explained by psychological factors (presumably temperament or personality). Reality, however, is much more like the bottom panel of Figure 2, which shows that most of the variance in executive success is simply not
predictable prior to employment. The busi-
ness of assessment and selection involves
considerable irreducible unpredictability;
yet, many seem to believe that all failures
in prediction are because of mistakes in the
assessment process. Put another way, people
seem to believe that, as long as the applicant
is the right person for the job and the ap-
plicant is accurately assessed, success is
certain. The "validity ceiling" has been a con-
tinually vexing problem for I–O psychology
(see Campbell, 1990; Rundquist, 1969). Enor-
mous resources and effort are focused on the
quixotic quest for new and better predictors
that will explain more and more variance in
performance. This represents a refusal, by
knowledgeable people, to recognize that
many determinants of performance are not
knowable at the time of hire. The notion that
it is still possible to achieve large gains in the
prediction of employee success reflects a fail-
ure to accept that there is no such thing as
perfect prediction in this domain. Campbell (1990) noted that our poor professional self-esteem is based on an unrealistic notion of what can be achieved in the prediction of employee success: "No external source imposed this [validity ceiling] standard on the discipline or even argued that there should be a standard at all" (p. 689).
Recall the earlier comment by the national testing commission, cautioning that tests are "imperfect" and must be supplemented with other things. It is remarkably similar to Viteles' (1925) observation that "objective scores of vocational tests are at best uncertain diagnostic criteria" (p. 132). This early pioneer of I–O was arguing that standardized methods of assessment could only fill the proverbial glass halfway; intuitive judgment was needed to fill it the rest of the way. Viteles wrote: "It is the opinion of the writer that in the cause of greater scientific accuracy in vocational selection in industry the statistical point of view must be supplemented by a clinical point of view" (p. 134). Countering this position was Freyd (1926), who cautioned against allowing intuition to creep into hiring decisions. Freyd, who represented the analytical viewpoint of selection, argued that "allowing selection to be influenced by personal interpretations with their unavoidable prejudices instead of relying upon objective measures gives even less consideration to the well-being and interest of the individual worker" (p. 354). History proved Freyd prescient.
Table 1 shows the results of the earliest
study investigating the relative effectiveness
of standardized procedures alone versus
supplementing those procedures with intu-
itive judgment (Sarbin, 1943). As you can
see, academic achievement was better pre-
dicted by the standardized scores alone
than by the scores plus clinical judgment.
The notion that analysis outperforms intui-
tion in the prediction of human behavior is
among the most well-established findings
in the behavioral sciences (Grove & Meehl, 1996; Grove, Zald, Lebow, Snitz, & Nelson, 2000). (Although few studies in I–O have explicitly made this comparison, there are a number of examples where tests alone outpredicted tests plus intuition; e.g., Borneman, Cooper, Klieger, & Kuncel, 2007; Huse, 1962; Meyer, 1956.)

[Figure 2. Variance in success accounted for by technical competence and psychological factors. Top panel (the claim implied by Sindelar, 2002): technical competence 20%, psychological factors 80%. Bottom panel (reality): technical competence 20%, psychological factors 10%, unpredictability 70%.]

Why, therefore, does the intuitive perspective remain so appealing? Einhorn
(1986) observed that a crucial distinction
between the intuitive and the analytical
approaches to human prediction is the
worldview of the people making the judg-
ments. According to Einhorn, the intuitive
approach reflects a deterministic world-
view, one that rejects the idea that the future
is inherently probabilistic. This is con-
trasted with the analytical worldview,
which accepts uncertainty as inevitable.
Consider the San Diego Chargers professional football team, which, despite having a regular season record of 14-2 in 2006, fired its head coach following a play-off loss. The fired coach had a reputation for
leading teams to successful regular season
records, only to lose the big games. The
Chargers organization evidently failed to
consider that the contribution of uncer-
tainty to a play-off outcome is much greater
than to a 16-game season record. Abelson
(1985) found that knowledgeable baseball
fans overestimated by a factor of 75 the con-
tribution of skill (vs. chance) to the likeli-
hood of a major league baseball player
getting a hit in a given turn at bat.
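Abelson's paradox is easy to verify numerically: even a generous assumption about how much true hitting skill varies across players implies that skill explains a tiny share of the variance in any single at-bat. The league mean (.270) and skill spread (SD = .020) below are illustrative assumptions, not Abelson's data.

```python
import random

random.seed(1)

# Illustrative assumptions: players' true hit probabilities center on .270
# with a between-player standard deviation of .020.
N_PLAYERS, AT_BATS_EACH = 1_000, 300
LEAGUE_MEAN, SKILL_SD = 0.270, 0.020

true_ps, hits = [], []
for _ in range(N_PLAYERS):
    p = random.gauss(LEAGUE_MEAN, SKILL_SD)  # this player's true hit probability
    for _ in range(AT_BATS_EACH):
        true_ps.append(p)
        hits.append(1 if random.random() < p else 0)

# For a single at-bat: Var(outcome) = Var(skill) + E[p(1 - p)], i.e.,
# between-player (skill) variance plus within-player (chance) variance.
mean_p = sum(true_ps) / len(true_ps)
var_skill = sum((p - mean_p) ** 2 for p in true_ps) / len(true_ps)
var_outcome = sum((h - mean_p) ** 2 for h in hits) / len(hits)

print(f"Share of single-at-bat variance due to skill: {var_skill / var_outcome:.2%}")
# Under these assumptions the share is on the order of 0.2%: a fan who
# "feels" that skill dominates a single at-bat overestimates enormously.
```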
Intuitive approaches to employee selec-
tion make the errors in selection ambiguous.
Analytical approaches make them part of
the process—hence, visible. Considerable
research suggests that ambiguity about the
likelihood of an outcome (e.g., the operation
has an unknown chance of success) encour-
ages more optimism than a low known prob-
ability (e.g., the operation has a 20% chance
of success; see Kuhn, 1997). There is little
room for optimism when a composite of pre-
dictors is known to leave 75% of the variance
unexplained. This may explain why selection
procedures that are difficult to evaluate (e.g.,
feelings about "fit") are so attractive. Einhorn
(1986) noted, however, that one must be
willing to accept error to make less error.
Myth of Expertise
I have argued that one of the reasons that
people have an inherent resistance to analyt-
ical approaches to hiring is that they fail to
view selection in probabilistic terms. A
related but different reason for employer ret-
icence to use selection decision aids is that
most people believe in the myth of selection
expertise. By this I mean the belief that one
can become skilled in making intuitive judg-
ments about a candidate’s likelihood of suc-
cess. This is reflected in the survey responses
of the HR professionals who believed in
"reading between the lines" to size up job
candidates. It is also evidenced in the pheno-
menal growth of the professional recruiter
or "headhunter" profession (Finlay & Cover-
dill, 1999) and the perseverance of the
holistic approach to managerial assessment
(Highhouse, 2002).
Despite this widespread belief in intuitive
expertise, the data suggest that it is a myth.
For example, the considerable research on
predicting human behavior per se shows that
experience does not improve predictions
made by clinicians, social workers, parole
boards, judges, auditors, admission com-
mittees, marketers, and business planners
(Camerer & Johnson, 1991; Dawes, Faust,
& Meehl, 1989; Grove et al., 2000; Sherden,
1998). Although it is commonly accepted
that some (employment) interviewers are
better than others, research on variance in
interviewer validity suggests that differences
are due entirely to sampling error (Pulakos,
Schmitt, Whitney, & Smith, 1996).

Table 1. Sarbin's (1943) Investigation of Two Methods for Predicting Success of University of Minnesota Undergraduates Admitted in 1939

    Predictor composite                                                           Correlation with criterion (r)
    High school rank + college aptitude test                                      .45
    High school rank + college aptitude test + intuitive judgment of counselors   .35

Existing evidence suggests that the interrater reliability of the traditional (unstructured) interview is so low that, even with a perfectly reliable and valid criterion, interview-based judgments could never account for more than 10% of the variance in job performance (Conway, Jako, & Goodman, 1995). (Meta-analysis further suggests that the interview accounts for negligible incremental validity over simple paper-and-pencil tests of cognitive ability and conscientiousness; Cortina, Goldstein, Payne, Davison, & Gilliland, 2000.) This
empirical evidence is troubling for a proce-
dure that is supposed to simultaneously take
into account ability, motivation, and person–
organization fit. Keep in mind also that these
findings are based on interviews that had rat-
ings associated with the interviewers’ judg-
ments. Thus, the unstructured interviews
subjected to meta-analyses are almost cer-
tainly unusual and on the high end of rigor.
The data do not paint a sanguine picture of
intuitive judgment in the hiring process.
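The psychometric logic behind such a ceiling is the standard attenuation bound: a predictor's correlation with a criterion cannot exceed the geometric mean of the two reliabilities. Writing the interview's interrater reliability as $r_{XX}$ and the criterion's reliability as $r_{YY}$,

$$r_{XY} \leq \sqrt{r_{XX}\, r_{YY}}, \qquad \text{so with a perfect criterion } (r_{YY} = 1), \quad r_{XY}^{2} \leq r_{XX}.$$

Squaring the ceiling thus converts a reliability estimate directly into the maximum share of performance variance that interview judgments could ever explain.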
There are commonly two scholarly rebut-
tals to the arguments against prediction
expertise. I will consider these in turn. One
response to the limitations of intuitive
approaches to selection is to focus on the
ability of experts to spot idiosyncrasies in
a candidate’s profile (Jeanneret & Silzer,
1998). Meehl (1954) noted that one limita-
tion of analytical formulas was their inability
to incorporate "broken-leg" cues. The term
comes from an anecdotal example in which
one is trying to predict whether or not a per-
son will go to the movies on a particular day.
A mechanical formula might take into
account things like the nature of the movie
(e.g., less likely to go to a romantic comedy) or
the weather (e.g., more likely to go on a rainy
day). The mechanical procedure would not
take into account, however, an event that is
extremely rare (e.g., the person has a broken
leg), and thus, the mechanical prediction
will not be as accurate as a prediction based
on a simple intuitive observation. A mechan-
ical approach to selection would not, the
logic goes, consider idiosyncratic charac-
teristics of any particular job candidate—a
seasoned expert would.
Another common response to criticisms
of intuitive selection is to focus on the
expert’s ability to interpret configurations
of traits (Prien, Schippmann, & Prien, 2003).
The notion behind this argument is that
each candidate is unique, and one must con-
sider each piece of information about the
candidate in light of all the other pieces of
information. In other words, assessing pat-
terns of traits is more accurate than assessing
traits individually. For example, Prien et al.
noted that executive assessment requires a
"dynamic interpretation" of applicant data,
one that takes into account interactions
between test scores and other observations
(p. 125). This view is reinforced by leadership
theorists who assert that leader characteristics
exhibit complex configural relations with
leadership outcomes (e.g., Zaccaro, 2007).
Even if we do accept that decision makers
incorporate broken-leg cues and configura-
tions of traits, existing evidence suggests that
these things account for negligible variance
in the predicted outcome. For example,
Dawes (1971) modeled admission decisions
of a four-person graduate admissions com-
mittee using a bootstrapping procedure. This
is shown in Figure 3. Dawes found that the
model (i.e., paramorphic representation) of
the admission committee’s judgments out-
performed the committee itself. More rele-
vant to this discussion, however, was the fact
that, whereas a linear combination of the
expert cues correlated significantly (r = .25)
with the criterion, the residual—which in-
cluded configural judgments, broken-leg
cues, and error—was inconsequential (r =
.01). Camerer and Johnson (1991) noted
that, despite accounting for a large portion
of the error term, broken-leg cues and con-
figural judgments consistently provide little
incremental gain in prediction—even for so-
called experts. The problem with broken-leg
cues is that people rely too much on them
because they present compelling stories.
The tendency to be seduced by detailed
stories causes people to ignore relevant
information and to violate simple rules of
logic (see Highhouse, 1997, 2001). Also, as
one reviewer noted, broken legs are them-
selves constructs that can and should be
measured reliably. The problem with trait
configurations, on the other hand, is that
they require feats of information integration
that contradict current understanding of
human cognitive limitations (Ruscio,
2003). And true real-world examples of pre-
dictive interactions between job applicant
characteristics are difficult to find (e.g.,
Sackett, Gruys, & Ellingson, 1998). Hastie
and Dawes (2001) distilled from the vast lit-
erature on prediction "experts" the following
stylized facts:
- They rely on few pieces of information.
- They lack insight into how they arrive at predictions.
- They exhibit poor interjudge agreement.
- They become more confident in their accuracy when irrelevant information is presented.
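The bootstrapping result behind Figure 3 is straightforward to reproduce in simulation: regress the expert's own judgments on the cues, then compare the expert, the linear model of the expert, and the expert's residual as predictors of the criterion. The sketch below uses synthetic data; the cue structure, weights, and noise levels are assumptions for illustration, not Dawes' admissions data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 4  # applicants and cues (e.g., test scores, GPA, ratings of letters)

cues = rng.normal(size=(n, k))
cue_validity = np.array([0.5, 0.4, 0.2, 0.1])

# Criterion: partly a function of the cues, mostly irreducible noise.
criterion = cues @ cue_validity + rng.normal(scale=2.0, size=n)

# Expert judgments: roughly linear in the cues, plus case-to-case
# inconsistency (the expert does not apply the same policy every time).
expert = cues @ np.array([0.6, 0.3, 0.3, 0.2]) + rng.normal(scale=1.5, size=n)

# Paramorphic model: regress the expert's judgments on the cues.
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, expert, rcond=None)
model_of_expert = X @ beta
residual = expert - model_of_expert  # configural judgment + "broken legs" + error

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"expert vs. criterion:          r = {corr(expert, criterion):+.2f}")
print(f"model of expert vs. criterion: r = {corr(model_of_expert, criterion):+.2f}")
print(f"residual vs. criterion:        r = {corr(residual, criterion):+.2f}")
# The consistent linear model of the expert outpredicts the expert, and the
# residual adds essentially nothing, mirroring the pattern Dawes reported.
```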
The obvious remedy to the limitations of
expertise is to structure expert intuition and
mechanically combine it with other decision
aids, such as paper-and-pencil inventories.
However, there would likely be consider-
able resistance to structuring or mechaniz-
ing the judgment process (e.g., Lievens et al.,
2005; van der Zee, Bakker, & Bakker, 2002).
Most people believe that aspects of an appli-
cant’s character are far too complex to be
assessed by scores, ratings, and formulas.
An example of the irrationality of this bias
against decision aids is the contempt with
which most college football fans and com-
mentators hold the Bowl Championship
Series, which is a mechanical formula that
incorporates expert ratings (e.g., coaches
poll) and computer rankings (e.g., wins and
losses of opponents) into an overall ranking
of football teams. The nature of the com-
plaints ("unplug the computers") suggests
that people do not want mechanical formu-
las making their expert decisions about who
attends bowl games. A University of Oregon
coach infamously declared: "I liken the BCS
to a bad disease, like cancer" (Vondersmith,
2001). Another example of this bias against
decision aids is the considerable patient
resistance to diagnostic decision aids (Arkes,
Shaffer, & Medow, 2007). Arkes and his col-
leagues found that physicians who made
computer-based diagnoses of ankle injuries
were perceived as less competent, profes-
sional, and thorough than physicians who
made diagnoses without any aids. Indeed,
the idea that (with the appropriate data)
a physician might not even need to meet or
interact with a patient to understand his or
her personal health issues would be a hard
sell to most people. Physicians, aware of this
lay bias against "cookbook medicine,"
grossly underutilize these valuable technologies in practice (Kaplan, 2001). (This underutilization also results from overconfidence on the part of physicians in their own diagnostic expertise.)

[Figure 3. Results from Dawes' (1971) examination of graduate admissions decisions. The experts' predictions correlated r = .19 with the predicted outcome; a bootstrapped model of the experts (a linear combination of the cues) correlated r = .25; and the residuals, which contain the configural judgments, "broken-leg" cues, and error, correlated r = .01.]

Hastie and Dawes (2001) noted that relying on
expertise is more socially acceptable than
relying on test scores or formulas. Research
on medical decision making supports this
contention. It is no wonder, therefore, that
HR practitioners would be reluctant to
undermine their status by administering
a paper-and-pencil test, structuring an
employment interview, or plugging ratings
into a mechanical formula.
Concluding Remarks
We know quite a bit about applicant reac-
tions to hiring methods (Hausknecht, Day, &
Thomas, 2004), but very little attention has
been given to user resistance to selection
decision aids. Campbell (1990) noted: "We
still do not know much about how to best
communicate selection results to people
outside the [I-O] profession" (p. 704). Fifteen
years later, Anderson (2005) lamented: "In
fact, the whole area of practitioner beliefs
about selection methods and processes is
a gargantuan one which research has made
little or no inroads into" (p. 19). I have
inferred from the general psychological lit-
erature, and the specific selection literature,
two implicit beliefs that likely inhibit the
widespread acceptance of selection tech-
nologies. These include the belief that it is
possible to achieve near-perfect precision in
predicting performance on the job and the
belief that intuitive prediction can be
improved by experience. People trust that
the complex characteristics of applicants
can be best assessed by a sensitive, equally
complex human being. This does not stand
up to scientific scrutiny, and I–O psycholo-
gists need to begin focusing their efforts on
understanding how to navigate these waters.
We can begin by drawing from the judgment
and decision making and human factors lit-
eratures on how to better communicate
uncertainty and error. We also need to learn
how to better calibrate user expectations.
Consider Muchinsky’s (2004) experience in
communicating a .50 validity coefficient for
a mechanical comprehension test:
my pleasure regarding the findings was
highly apparent to the client organi-
zation. It was at this point a senior com-
pany official said to me, "I fail to see the
basis for your enthusiasm." (p. 194)
Research on probability neglect (Sunstein,
2002) suggests that people make little dis-
tinction between probabilities that they
consider small. In addition, research on
evaluability (Hsee, 1996) has shown that
most attributes cannot be evaluated without
appropriate context. Perhaps if Muchinsky
(2004) had compared his .50 to flipping
a coin (.00) or to an unstructured interview
(.20), management would have been more
impressed. Perhaps management would
have been more impressed by a common-
language effect size indicator or by an
expectancy chart. We simply do not have
the research to guide these communication
decisions.
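Both suggestions are simple to operationalize. Under a bivariate-normal assumption, a validity coefficient converts directly into an expectancy chart (the chance that an applicant in a given score band turns out to be an above-average performer) or into a common-language indicator. The sketch below is one such translation; the .50 and .20 values echo the comparison above, and the bivariate-normal model is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

def expectancy(r_xy, n=1_000_000):
    """Simulate (score, performance) pairs correlated at r_xy and return
    P(above-median performance) within each predictor-score quartile."""
    x = rng.standard_normal(n)
    y = r_xy * x + np.sqrt(1 - r_xy**2) * rng.standard_normal(n)
    quartile = np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)
    return [np.mean(y[quartile == q] > 0) for q in range(4)]

for r_xy, label in [(0.50, "test, r = .50"), (0.20, "interview, r = .20"), (0.00, "coin flip, r = .00")]:
    rates = ", ".join(f"{p:.0%}" for p in expectancy(r_xy))
    print(f"{label}: P(above-average performer) by score quartile = {rates}")
# A common-language indicator is just as direct: with r = .50, the
# higher-scoring of two random applicants is the better performer about
# 67% of the time (1/2 + arcsin(r)/pi), versus exactly 50% at r = .00.
```

Presented this way, a .50 validity looks like what it is: a large practical advantage over both the unstructured interview and chance.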
The traditional unstructured interview has
remained the most popular and widely used
selection procedure for over 100 years
(Buckley, Norris, & Wiese, 2000). This is
despite the fact that, during this same period,
there have been significant advancements in
the development of selection decision aids.
Guion (1965) argued that the waste of
human resources caused by poor selection
procedures should pain the professional
conscience of I–O psychologists. It is true
that people are not very predictable, but
selection decision aids help.
References
Abelson, R. P. (1985). A variance explanation paradox:
When a little is a lot. Psychological Bulletin,97,
128–132.
Anderson, N. (2005). Relationship between practice
and research in personnel selection: Does the left
hand know what the right is doing? In A. Evers, N.
Anderson, & O. Voskuijl (Eds.), The Blackwell hand-
book of personnel selection (pp. 1–24). Malden,
MA: Blackwell.
Arkes, H., Shaffer, V. A., & Medow, M. A. (2007).
Patients derogate physicians who use a computer-
assisted diagnostic aid. Medical Decision Making,
27, 189–202.
Ayres, I. (2007). Super crunchers: Why thinking-by-
numbers is the new way to be smart. New York:
Random House.
Borneman, M. J., Cooper, S. R., Klieger, D.M., & Kuncel,
N. R. (2007, April). The efficacy of the admissions
interview: A meta-analysis. In N. R. Kuncel (Chair),
Alternative predictors of academic performance:
The glass is half empty. Symposium conducted at
the Annual Meeting of the National Council on
Measurement in Education, Chicago, IL.
Buckley, M. R., Norris, A. C., & Wiese, D. S. (2000). A
brief history of the selection interview: May the next
100 years be more fruitful. Journal of Management
History,6, 113–126.
Camerer, C. F., & Johnson, E. J. (1991). The process-per-
formance paradox in expert judgment: How can
experts know so much and predict so badly? In K.
A. Ericsson & J. Smith (Eds.), Toward a general theory
of expertise: Prospects and limits (pp. 195–217).
Cambridge, UK: Cambridge University Press.
Campbell, J. P. (1990). Modeling the performance
prediction problem in industrial and organizational
psychology. In M. D. Dunnette & L. M. Hough (Eds.),
Handbook of industrial and organizational psychol-
ogy (Vol. 1, 2nd ed., pp. 687–732). Palo Alto, CA:
Consulting Psychologists Press.
Colbert, A. E., Rynes, S. L., & Brown, K. G. (2005). Who
believes us? Understanding managers’ agreement
with human resource findings. Journal of Applied
Behavioral Science,41, 304–325.
Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A
meta-analysis of interrater and internal consistency
reliability of selection interviews. Journal of Applied
Psychology,80, 565–579.
Cortina, J. M., Goldstein, N. B., Payne, S. C., Davison, H.
K., & Gilliland, S. W. (2000). The incremental val-
idity of interview scores over and above cognitive
ability and conscientiousness scores. Personnel
Psychology,53, 325–351.
Davenport, T. H. (2006, January). Competing on analy-
tics. Harvard Business Review,84, 99–107.
Dawes, R. M. (1971). A case study of graduate admis-
sions: Application of three principles of human
decision making. American Psychologist,26,
180–188.
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical
versus actuarial judgment. Science, 243, 1668–
1674.
Einhorn, H. J. (1986). Accepting error to make less error.
Journal of Personality Assessment,50, 387–395.
Finlay, W., & Coverdill, J. E. (1999). The search game:
Organizational conflicts and the use of headhunters.
Sociological Quarterly,40, 11–30.
Freyd, M. (1926). The statistical viewpoint in vocational
selection. Journal of Applied Psychology,10, 349–
356.
Gigerenzer, G. (2007). Gut feelings: The intelligence
of the unconscious. London: Penguin Books.
Gladwell, M. (2005). Blink: The power of thinking with-
out thinking. New York: Little, Brown and Company.
Gratz v. Bollinger, 539 U.S. 244 (2003).
Grove, W. M., & Meehl, P. E. (1996). Comparative effi-
ciency of informal (subjective, impressionistic) and
formal (mechanical, algorithmic) prediction proce-
dures: The clinical–statistical controversy. Psychol-
ogy, Public Policy, and Law,2, 293–323.
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., &
Nelson, C. (2000). Clinical versus mechanical pre-
diction. Psychological Assessment,12, 19–30.
Guion, R. M. (1965). Industrial psychology as an aca-
demic discipline. American Psychologist,20, 815–
821.
Hastie, R., & Dawes, R. M. (2001). Rational choice in an
uncertain world. Thousand Oaks, CA: Sage.
Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004).
Applicant reactions to selection procedures: An
updated model and meta-analysis. Personnel Psy-
chology,57, 639–683.
Highhouse, S. (1997). Understanding and improving
job-finalist choice: The relevance of behavioral
decision research. Human Resource Management
Review,7, 449–470.
Highhouse, S. (2001). Judgment and decision mak-
ing research: Relevance to industrial and orga-
nizational psychology. In N. Anderson, D. S. Ones,
H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook
of industrial, work and organizational psychology
(pp. 314–332). Thousand Oaks, CA: Sage.
Highhouse, S. (2002). Assessing the candidate as
a whole: A historical and critical analysis of individ-
ual psychological assessment for personnel decision
making. Personnel Psychology,55, 363–396.
Hsee, C. K. (1996). The evaluability hypothesis: An
explanation for preference reversals between joint
and separate evaluations of alternatives. Organiza-
tional Behavior and Human Decision Processes,67,
242–257.
Huffcutt, A. I., & Arthur, W., Jr. (1994). Hunter and Hunter
(1984) revisited: Interview validity for entry-level
jobs. Journal of Applied Psychology, 79, 184–190.
Huse, E. F. (1962). Assessments of higher level personnel
IV: The validity of assessment techniques based on
systematically varied information. Personnel Psy-
chology,15, 195–205.
Jeanneret, R., & Silzer, R. (1998). An overview of psy-
chological assessment. In R. Jeanneret & R. Silzer
(Eds.), Individual psychological assessment: Predict-
ing behavior in organizational settings (pp. 3–26).
San Francisco: Jossey-Bass.
Johns, G. (1993). Constraints on the adoption of psy-
chology-based personnel practices: Lessons from
organizational innovation. Personnel Psychology,
46, 569–592.
Kahneman, D. (2003). A perspective on judgment and
choice: Mapping bounded rationality. American
Psychologist,58, 697–720.
Kaplan, B. (2001). Evaluating informatics applications:
Clinical decision support systems literature review.
International Journal of Medical Informatics,64, 15–37.
Kuhn, K. (1997). Communicating uncertainty: Framing
effects on responses to vague probabilities. Organi-
zational Behavior and Human Decision Processes,
71, 55–83.
Lievens, F., Highhouse, S., & De Corte, W. (2005). The
importance of traits and abilities in supervisors’ hir-
ability decisions as a function of method of assess-
ment. Journal of Occupational and Organizational
Psychology,78, 453–470.
Meehl, P. E. (1954). Clinical versus statistical prediction:
A theoretical analysis and a review of the evidence.
Minneapolis, MN: University of Minnesota.
Meyer, H. H. (1956). An evaluation of a supervisory
selection program. Personnel Psychology,9, 499–
513.
Muchinsky, P. M. (2004). When the psychometrics of
test development meets organizational realities: A
conceptual framework for organizational change,
examples, and recommendations. Personnel Psy-
chology,57, 175–209.
National Commission on Testing and Public Policy.
(1990). From gatekeeper to gateway: Transforming
testing in America. Chestnut Hill, MA: National Com-
mission on Testing and Public Policy, Boston College.
Panel criticizes standard testing. (1990, May 24). The
New York Times. Retrieved December 10, 2007,
from query.nytimes.com
Pfeffer, J., & Sutton, R. I. (2006). Hard facts, danger-
ous half-truths and total nonsense: Profiting from
evidence-based management. Boston: Harvard
Business School Press.
Phelan, J., & Smith, G. W. (1958). To select execu-
tives—combine interviews, tests, horse sense. Per-
sonnel Journal,36, 417–421.
Prien, E. P., Schippmann, J. S., & Prien, K. O. (2003).
Individual assessment: As practiced in industry and
consulting. Mahwah, NJ: Lawrence Erlbaum.
Pulakos, E. D., Schmitt, N., Whitney, D., & Smith, M.
(1996). Individual differences in interviewer ratings:
The impact of standardization, consensus discus-
sion, and sampling error on the validity of a struc-
tured interview. Personnel Psychology,49, 85–102.
Rousseau, D. M. (2006). Is there such a thing as evi-
dence-based management? Academy of Manage-
ment Review,31, 256–269.
Rundquist, E. A. (1969). The prediction ceiling. Person-
nel Psychology,22, 109–116.
Ruscio, J. (2003). Holistic judgment in clinical practice:
Utility or futility? Scientific Review of Mental Health
Practice,2, 38–48.
Rynes, S. L., Colbert, A. E., & Brown, K. G. (2002). HR
professionals’ beliefs about effective human
resource practices: Correspondence between
research and practice. Human Resource Manage-
ment,41, 149–174.
Sackett, P. R., Gruys, M. L., & Ellingson, J. E. (1998).
Ability-personality interactions when predicting
job performance. Journal of Applied Psychology,
83, 545–556.
Sarbin, T. R. (1943). A contribution to the study of
actuarial and individual methods of prediction.
American Journal of Sociology,48, 598–602.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and
utility of selection methods in personnel psychol-
ogy: Practical and theoretical implications of 85
years of research findings. Psychological Bulletin,
124, 262–274.
Sherden, W. (1998). The fortune sellers: The big busi-
ness of buying and selling predictions. New York:
John Wiley.
Sindelar, S. (2002, October). Executive assessment.
Executive Excellence,19, 13–14.
Sunstein, C. R. (2002). Probability neglect: Emotions,
worst cases, and law. Yale Law Journal,112, 61–107.
Terpstra, D. E. (1996, May). The search for effective
methods. HR Focus,73, 16–17.
Terpstra, D. E., & Rozelle, J. (1997). Why some poten-
tially effective staffing practices are seldom used.
Public Personnel Management,26, 483–495.
van der Zee, K. I., Bakker, A. B., & Bakker, P. (2002). Why
are structured interviews so rarely used in personnel
selection? Journal of Applied Psychology,87, 176–184.
Vinchur, A. J., Schippmann, J. S., Switzer, F. S., & Roth, P.
L. (1998). A meta-analytic review of predictors of job
performance for salespeople. Journal of Applied
Psychology,83, 586–597.
Viteles, M. S. (1925). The clinical viewpoint in vocational
selection. Journal of Applied Psychology, 9, 131–138.
Vondersmith, J. (2001, December 11). Bellotti sounds
off, favors playoffs. The Portland Tribune. Retrieved
January 10, 2008, from www.portlandtribune.com
Whyte, G., & Latham, G. (1997). The futility of utility
analysis revisited: When even an expert fails. Per-
sonnel Psychology,50, 601–610.
Zaccaro, S. J. (2007). Trait-based perspectives of leader-
ship. American Psychologist,62, 6–16.
Article
Full-text available
This article summarizes the practical and theoretical implications of 85 years of research in personnel selection. On the basis of meta-analytic findings, this article presents the validity of 19 selection procedures for predicting job performance and training performance and the validity of paired combinations of general mental ability (GMA) and the 18 other selection procedures. Overall, the 3 combinations with the highest multivariate validity and utility for job performance were GMA plus a work sample test (mean validity of .63), GMA plus an integrity test (mean validity of .65), and GMA plus a structured interview (mean validity of .63). A further advantage of the latter 2 combinations is that they can be used for both entry level selection and selection of experienced employees. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed.
Article
Individual Assessment is a professional practice important to Human Resource Managers, Executives and anyone making decisions about employees. Finally, we now have a clear, practical guide with methodologically-grounded descriptions of how to successfully do it. The authors have put together a unique new book with the following key features: case studies and applied examples showing "how to" conduct individual assessment; the book provides the reader with a conceptual structure and the research and literature supporting the process; and it can be used as a text or supplemental text in courses on Personnel Selection, Assessment, Human Resources and Testing. This book will take Individual Assessment to an entirely new level of understanding and practice, and into a new era of professional research and activity. © 2003 by Lawrence Erlbaum Associates, Inc. All rights reserved.
Article
The purpose of this study was to identify the reasons why some organizations do not employ certain HRM practices that could increase levels of employee performance and organizational profitability. The focus was on the staffing area (recruitment and selection) of HRM.1 Specifically, this study looked at five staffing practices that the academic research literature has found can significantly increase employee performance levels. Descriptions of these practices, and references supporting their impact on employee performance, are provided in Exhibit 1.
Article
The general proposition that performance is a multiplicative function of ability and motivation has a long-standing history. Three recent studies have reported results that suggest that shifting from an additive model to a multiplicative model may improve efforts to predict performance. This article represents an extensive examination of this multiplicative proposition when motivation is conceptualized in terms of personality characteristics. The Project A database, the Management Continuity Study database, and 2 additional data sets were brought together to facilitate a systematic investigation concerning whether ability and personality interact when predicting performance. Contrary to expectations, the results indicate that ability-personality interactions are not detected at above chance levels.
Article
This meta-analysis evaluated predictors of both objective and subjective sales performance. Biodata measures and sales ability inventories were good predictors of the ratings criterion, with corrected rs of .52 and .45, respectively. Potency (a subdimension of the Big 5 personality dimension Extraversion) predicted supervisor ratings of performance (r =.28) and objective measures of sales (r =.26). Achievement (a component of the Conscientiousness dimension) predicted ratings (r =.25) and objective sales (r=.41). General cognitive ability showed a correlation of .40 with ratings but only .04 with objective sales. Similarly, age predicted ratings (r =.26) but not objective sales (r = -.06). On the basis of a small number of studies, interest appears to be a promising predictor of sales success.
Article
Malcolm Gladwell; род. 3 сентября 1963, Хэмпшир) — канадский журналист, поп-социолог. В 2005 году «Time» назвало Малкольма Гладуэлла одним из 100 самых влиятельных людей. Книги и статьи Малкольма часто касаются неожиданных последствий исследований в социальных науках и находят широкое применение в научной работе, в частности в областях социологии, психологии и социальной психологии. Некоторые из его книг занимали первые строки в списке бестселлеров «The New York Times». В 2007 году Малкольм получил первую премию Американской Социологической ассоциации за выдающиеся достижения по отчетам в социальных вопросах. В 2007 году он также получил почетную степень доктора филологии Университета Ватерлоо. Малькольм Гладуелл описывает эксперименты, которые показывают, что человеку с поврежденными эмоциональными центрами крайне трудно принимать решения. Он рассказывает про одного такого пациента, которому было предложено прийти на прием либо во вторник, либо в пятницу. И пациент два часа решал во вторник ему прийти или в пятницу — в столбик выписывал плюсы и минусы, их сравнивал, группировал по разному, всяко переставлял. И в жизни своих домашних он просто убивал вот этим. Если его спрашивали, ты что хочешь: омлет или салат? — это задача минут на сорок. Обычный человек очень просто поступает. Он видит омлет, что-то чувствует и говорит: Хочу! Все. Выбор сделан легко и быстро.