Industrial and Organizational Psychology, page 1 of 20, June 2016.
Copyright © 2016 Society for Industrial and Organizational Psychology. doi:10.1017/iop.2016.6
Focal Article
New Talent Signals: Shiny New Objects or a Brave New World?
Tomas Chamorro-Premuzic
Hogan Assessment Systems, University College London, and Columbia University
Dave Winsborough
Hogan Assessment Systems and Winsborough Ltd.
Ryne A. Sherman
Florida Atlantic University
Robert Hogan
Hogan Assessment Systems
Almost 20 years after McKinsey introduced the idea of a war for talent, technology is disrupting the talent identification industry. From smartphone profiling apps to workplace big data, the digital revolution has produced a wide range of new tools for making quick and cheap inferences about human potential and predicting future work performance. However, academic industrial–organizational (I-O) psychologists appear to be mostly spectators. Indeed, there is little scientific research on innovative assessment methods, leaving human resources (HR) practitioners with no credible evidence to evaluate the utility of such tools. To this end, this article provides an overview of new talent identification tools, using traditional workplace assessment methods as the organizing framework for classifying and evaluating new tools, which are largely technologically enhanced versions of traditional methods. We highlight some opportunities and challenges for I-O psychology practitioners interested in exploring and improving these innovations.

Keywords: talent identification, technology, big data, social media, gamification
Author note: Tomas Chamorro-Premuzic, Hogan Assessment Systems, Tulsa, Oklahoma; Department of Psychology, University College London; and Teachers College, Columbia University. Dave Winsborough, Hogan Assessment Systems, Tulsa, Oklahoma; and Winsborough Ltd., Wellington, New Zealand. Ryne A. Sherman, Department of Psychology, Florida Atlantic University. Robert Hogan, Hogan Assessment Systems, Tulsa, Oklahoma.

Correspondence concerning this article should be addressed to Tomas Chamorro-Premuzic, Hogan Assessment Systems, 11 South Greenwood, Tulsa, OK 74012. E-mail: t.chamorro@ucl.ac.uk

Friedrich Hegel thought conflict and war were the major engines of progress (Black, 1973). McKinsey & Company's notion of a war for talent (Chambers, Foulon, Handfield-Jones, Hankin, & Michael, 1998) has created considerable interest in the development, validation, and application of innovative tools for quantifying human potential (Chamorro-Premuzic, 2013). Like other forms of warfare, the talent war has spurred an explosion of digital tools for identifying new talent signals, that is, nontraditional indicators of work-related potential. As a result, talent identification practices are not only rapidly becoming more high tech but also evolving faster than industrial–organizational (I-O) psychology research (Roth, Bobko, Van Iddekinge, & Thatcher, 2016). This leaves academics playing catch-up and human resources (HR) practitioners with many unanswered questions: How valid are these methods? Are new technologies just a fad? Can new tools disrupt traditional assessment methods? What are the ethical constraints on adopting these new tools? This article attempts to address some of these questions by reviewing recent innovations in the assessment and talent identification space. We review these innovative tools by highlighting their links to equivalent old-school methods. For example, gamified assessments are the digital equivalent of situational judgment tests, digital interviews represent computerized versions of traditional selection interviews, and professional social networks, such as LinkedIn, are the modern equivalent of a resumé and recommendation letters. Thus, our article draws parallels between the old and the new worlds of talent identification and provides an organizing framework for making sense of the emerging tools we are seeing in this space.
If It Ain’t Broke, Don’t Fix It: The Old World of Talent Is Alive and Well
Although denitions of talent vary, four basic heuristics distinguish between
more and less talented employees. The rst is the 80/20 rule (Craft & Leake,
2002) based on Vilfredo Pareto’s (1848–1923) observation that a small num-
ber of people will generally create a disproportionate amount of the output
of any group. Specically, around 20% of employees will account for around
80% of productivity, while the remaining 80% of employees will account for
only 20% of productivity. Who, then, are the talented individuals? They are
the vital few who are responsible for most of the output. The second heuris-
tic concerns the principle of maximum performance (Barnes & Morgeson,
2007), which equates talent to the best an individual can do; that is, people
are as talented as their best possible performance. The third heuristic equates
talent to eortless performance, emphasizing its relation to innate ability
or potential. Because performance is usually conceptualized as a combina-
tion of ability (talent) and motivation (eort; Heider, 1958; Porter & Lawler,
1968), talent can be dened as performance minus eort. Thus, if two in-
dividuals are equally motivated, the more talented person will perform bet-
ter. That means if ordinary people want to perform as well as the talented,
then their best bet is to work harder. The nal heuristic equates talent to
personality in the right place. That is, when individuals skills, dispositions,
knowledge, and abilities are matched to a task or job, they should
3
perform to a higher level. This denition is the core of the so-called
person–environment t theory of I-O psychology (Edwards, 2008). Thus,
a major goal of any talent acquisition venture is to maximize t between
the employees qualities and the role and organization in which they are
placed.
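The effort heuristic lends itself to a schematic formalization. The display below is an illustrative rendering of the definitions above, not a formula taken from Heider (1958) or Porter and Lawler (1968):

\[
P = f(A, E), \qquad \frac{\partial P}{\partial A} > 0, \quad \frac{\partial P}{\partial E} > 0,
\]

where \(P\) is performance, \(A\) is ability (talent), and \(E\) is effort (motivation). Under a simple additive approximation, \(P \approx A + E\), so inferred talent is \(A \approx P - E\): of two candidates showing equal performance, the one who expended less effort is judged more talented, and of two equally motivated candidates, the more talented one performs better.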
With these heuristics in mind, it is possible to classify individuals as more or less talented. The vital few who have displayed higher levels of performance, or who have achieved without trying hard or having much training, and who seem to have found a niche that fits their dispositions and abilities will generally be considered more talented. Consider the case of Lionel Messi, the Barcelona soccer star. Although Messi's teammates are usually considered among the best soccer players in the world, he is consistently the best player on the team, and in some seasons he is individually responsible for over 60% of the critical goals and assists on his team. This makes Messi not just part of the vital few but perhaps the vital one on the team. Furthermore, as hundreds of YouTube compilations show, Messi's maximum performance is matched by none, and it is also effortless—he has been dribbling and scoring in the same way since his early teens and, unlike Cristiano Ronaldo, is not known for training particularly hard. However, while Messi's qualities are certainly in the right place at Barcelona—where he plays with his lifelong friends and shares the values of the club and supporters—he has struggled to show similar form when playing for the Argentine national team. Thus, even for an extraordinary talent like Messi, fit matters.
The next step in talent identification concerns two critical questions: what to assess and how (Ployhart, 2006). The "what" question involves defining the key components of talent. This question is important because if you don't know what to measure, there is no point in measuring it well. In other words, you can do a great job measuring the wrong thing, but that will not get you very far. The "how" question concerns the methods that can be used to quantify individual differences in talent—in effect, these are the tools used by consultants, recruiters, and coaches to help organizations win the war for talent. We see test designers and publishers as arms merchants in the war for talent. We ourselves provide scientifically defensible "weapons" that help organizations win the talent war by better understanding and predicting work-related behaviors, particularly in leaders. There are, of course, many other key players in the war for talent: CEOs, who represent the generals; HR managers, who are the lieutenants; and coaches and consultants, who are the soldiers, hit men, and mercenaries. We all share a common goal, which is to help organizations attract, engage, and retain more talented individuals, who are the commodity being fought over.
To provide a more granular answer to the "what" question of talent identification, we can examine the qualities that talented individuals tend to display at work. As argued earlier in this journal, the generic attributes of talent can be described with the acronym RAW (R. T. Hogan, Chamorro-Premuzic, & Kaiser, 2013). First, talented people are more rewarding to deal with (R)—they are likable and pleasant. Interpersonal and intrapersonal competencies such as emotional intelligence (EQ), emotional stability, political skill, and extraversion capture this core element of talent, which enables individuals to get along at work (Van Rooy & Viswesvaran, 2004). In a world where the employee's direct-line manager tends to determine career success, it is unsurprising that perceptions of talent are largely driven by being pleasant and rewarding to deal with. Second, talented people are more able (A), meaning they learn faster and solve problems better. This is a function of experience, general intellectual ability (Schmidt & Hunter, 1998), and the domain-specific expertise that the Nobel laureate Herbert Simon described as a person's "network of possible wanderings" (Amabile, 1998). The more able employees are, the easier it is for them to make sense of work-related problems, translate information into knowledge, and quickly identify patterns in critical work tasks. Third, talented people are more willing to work hard (W), thereby displaying more initiative and drive. This theme, which concerns how employees get ahead, is reflected in meta-analytic studies highlighting the consistent positive effects of ambition, conscientiousness, and achievement motivation on job performance and career success (Almlund, Duckworth, Heckman, & Kautz, 2011). Although the labels vary, these universals of talent compose fairly stable individual differences that have been studied and validated extensively in I-O psychology, as well as in social, educational, and differential psychology (Kuncel, Ones, & Sackett, 2010). A great deal of conceptual confusion about talent arises because organizations prefer their own labels and devote significant time to devising original competency models—as the saying goes, "A camel is a horse designed by a committee."
As for the "how" question, it is noteworthy that traditional methods for talent identification are alive and well. Indeed, 100 years of research in I-O psychology provide conclusive evidence for the validity of job interviews (Levashina, Hartwell, Morgeson, & Campion, 2014), assessment centers (Thornton & Gibbons, 2009), cognitive ability tests (Schmitt, 2013), personality inventories (J. Hogan & Holland, 2003), biodata (Breaugh, 2009), situational judgment tests (Christian, Edwards, & Bradley, 2010), 360-degree feedback ratings (Borman, 1997), resumés (Cole, Feild, & Stafford, 2005), letters of recommendation (Chamorro-Premuzic & Furnham, 2010), and supervisors' ratings of performance (Viswesvaran, Schmidt, & Ones, 2005). Unfortunately, HR practitioners are not always aware of this literature, or they remain attached to their amateurish competency labels and meta-models, which explains why they often prefer to rely on their intuition to identify talent (Dries, 2013) and also why the face and social validity of these methods are often unrelated to their psychometric validity (Chamorro-Premuzic, 2013). Similarly, shiny new talent identification objects often bamboozle recruiters and talent acquisition professionals with no regard for predictive validity.
For example, employers and recruiters have used social media to evaluate job candidates for several years. Intuitive examinations of social media profiles are a popular, albeit clandestine, method for discovering the applicant's "true self." Informal assessments of a candidate's online reputation, called cybervetting (Berkelaar, 2014), are often preferred to reviewing the more formal but overly polished resumé. Yet most people spend a great deal of time curating their online personae, which are burnished by the same degree of impression management and social desirability as their resumés (Back et al., 2010). Burnishing has even been taken as a right, seen in the ability of European Union citizens to limit access and hide links to images or posts that do not fit the reputation they want to portray online (Warman, 2014). When social media users decide what images, achievements, musical preferences, and conversations to display online, the same self-presentational dynamics are at play as in any traditional social setting (Chamorro-Premuzic, 2013). Consequently, people's online reputations are no more "real" than their analogue reputations; the same individual differences are manifested in virtual and physical environments, albeit in seemingly different ways. It is therefore naïve to expect online profiles to be more genuine than resumés, although they may offer a much wider set of behavioral samples. Indeed, recent studies suggest that when machine-learning algorithms are used to mine social media data, they tend to outperform human inferences of personality in accuracy because they can process a much bigger range of behavioral signals (Lambiotte & Kosinski, 2014; Youyou, Kosinski, & Stillwell, 2015). That said, social media is as deceptive as any other form of communication (B. Hogan, 2010); employers and recruiters are right to regard it as a rich source of information about candidates' talent—if they can get past the noise and make accurate inferences.
For their part, candidates seem to expect that their digital lives will be examined for hiring purposes (El Ouirdi, Segers, El Ouirdi, & Pais, 2015). Although studies suggest that candidates may find cybervetting unfair (Madera, 2012), most candidates seem habituated to the idea that their social media activity will influence potential staffing or promotion decisions. Indeed, one study found that nearly 70% of respondents agreed that employers have the right to check their social networking profile when evaluating them (Vicknair, Elkersh, Yancey, & Budden, 2010). Job applicants may therefore face a "posting paradox" (Berkelaar & Buzzanell, 2015), torn between sharing authentic personal information—and risking inappropriate self-disclosure—and creating a professional but deceptive online persona that appeals to potential employers. Yet humans always regulate their social behavior to conform to others' expectations and social rules, even when the environment tolerates narcissistic indulgences in self-presentation, such as on Facebook. This is the fundamental skill that enables people to live in harmony, and it reflects individual differences in social competence (Kaiser, Hogan, & Craig, 2008).
The New Kids on the Blog: Talent in the Digital World
Most innovations in talent identification are the product of the digital revolution, enabled by the application of innovative tools designed to evaluate massive data sets. When the human need for connectedness met digital and mobile technologies, it generated a wealth of data about individuals' preferences, values, and reputations. These traces of behavior, also known as the online footprint or digital breadcrumbs (Lambiotte & Kosinski, 2014), may be used to infer talent or job-related potential. For example, MIT researchers used phone metadata (e.g., call frequency, duration, location) to produce fairly accurate descriptions of users' personalities (de Montjoye, Quoidbach, Robic, & Pentland, 2013). Similarly, Chorley, Whitaker, and Allen (2015) successfully inferred some elements of the Big Five personality taxonomy by tracking user location behavior. Although data have turbocharged analytics in fields as diverse as medicine, credit and risk, media, and marketing, HR generally lags behind. Despite all the talk about a big data revolution in HR and the rebranding of the field as "people analytics," novel talent identification tools are still in their infancy, and user adoption is relatively low even in industrialized markets. One notable exception is the use of professional social networking sites, such as LinkedIn, for recruitment purposes. However, these sites are simply the modern equivalent of a resumé and phone directory, with the option of including personal endorsements (the modern version of a recommendation letter). Inferences based on these signals are mostly holistic and intuitive, and the focus is on hard skills rather than core talent qualities such as ambition, EQ, and intelligence (Zide, Elman, & Shahani-Denning, 2014). Nonetheless, demand for recruitment-related networking sites is growing at double- or triple-digit rates (Recruiting Daily, 2015), with hundreds of startups offering technologies to screen, interview, and profile candidates online (Davison, Maraist, Hamilton, & Bing, 2012).
These new ventures are predominantly based on four methodologies that have the potential to disrupt and perhaps even advance the talent identification industry: (a) digital interviewing and voice profiling, (b) social media analytics and web scraping, (c) internal big data and talent analytics, and (d) gamification. As shown in Table 1, each of these methodologies corresponds to a well-established talent identification approach. We discuss the new methodologies below.

Table 1. A Comparison Between Old and New Talent Identification Methods

Old methods | New tools | Dimension assessed
Interviews | Digital interviews; voice profiling | Expertise, social skills, motivation, and intelligence
Biodata; supervisory ratings | Big data (internal) | Past performance; current performance
IQ; situational judgment tests; self-reports | Gamification | Intelligence, job-related knowledge, and Big Five or minor personality traits
Self-reports | Social media analytics | Big Five personality traits and values (identity claims)
Resumés; references | Professional social networks (LinkedIn) | Experience, past performance, and technical skills and qualifications
360s | Crowdsourced reputation/peer ratings | Any personality trait, competencies, and reputation
Digital Interviewing and Voice Profiling
Although preemployment job interviews are generally less valid than other assessment tools, they are ubiquitous (Roth & Huffcutt, 2013). Furthermore, job interviews are often the only method used to evaluate candidates, and when used in conjunction with other methods they are generally the final hurdle applicants need to pass. Technology can make interviews more efficient, standardized, and cost effective by enhancing both structure and validity (Levashina et al., 2014). Some companies have developed structured interviews that ask candidates to respond via webcam to prerecorded questions using video chat software similar to Skype (thus "digital interviewing"). This increases standardization and allows hiring panels and managers to watch the recordings at their convenience. Moreover, through the addition of innovations such as text analytics (see below) and algorithmic reading of voice-generated emotions, a wider universe of talent signals can be sampled. In the case of voice mining, candidates' speech patterns are compared with an attractive exemplar derived from the voice patterns of high-performing employees. Undesirable candidate voices are eliminated from the contest, and those who fit move to the next round. More recent developments use similar video technology to administer scenario-based questions, image-based tests, and work-sample tests. Work samples are increasingly common, automated, and sophisticated. For example, Hirevue.com, a leading provider of digital interview technologies, employs coding challenges to screen software engineers for their software-writing ability. Likewise, Uber uses similar tools to test and evaluate potential drivers exclusively via their smartphones (see www.uber.com).
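The exemplar-matching logic described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's actual pipeline; the acoustic features and numbers are hypothetical, and commercial voice-profiling systems are proprietary and far more elaborate. The idea: summarize high performers' voices as a centroid, then rank candidates by their distance to it.

```python
import numpy as np

# Hypothetical acoustic features per speaker:
# [mean pitch (Hz), pitch variability, speech rate (syll/s), pause ratio]
high_performers = np.array([
    [180.0, 22.0, 4.1, 0.12],
    [175.0, 25.0, 3.9, 0.10],
    [185.0, 20.0, 4.3, 0.14],
])

# The "attractive exemplar" is the centroid of high performers' voice profiles;
# features are standardized so no single scale dominates the comparison.
mu = high_performers.mean(axis=0)
sigma = high_performers.std(axis=0)

def distance_to_exemplar(v: np.ndarray) -> float:
    """Euclidean distance to the exemplar in standardized feature space."""
    return float(np.linalg.norm((v - mu) / sigma))

candidates = {
    "cand_01": np.array([178.0, 23.0, 4.0, 0.11]),
    "cand_02": np.array([120.0, 60.0, 2.1, 0.40]),
}

# Candidates closest to the exemplar move to the next round.
for name, vec in sorted(candidates.items(), key=lambda kv: distance_to_exemplar(kv[1])):
    print(name, round(distance_to_exemplar(vec), 2))
```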
Based on Ekman's research on emotions (Ekman, 1993), the security sector has developed microexpression detection and analysis technology to enhance the accuracy of interrogation techniques for identifying deception (Ryan, Cohn, & Lucey, 2009). The recent creation of large databases of microexpressions (Yan, Wang, Liu, Wu, & Fu, 2014) is likely to facilitate the standardization and validation of these methods. Beyond using automated emotion reading, new research aims to correlate facial features and habitual expressions with personality (Kosinski, 2016). Although effect sizes tend to be small, this methodology can provide additional talent signals to produce more accurate and predictive profiles.
Social Media Analytics and Web Scraping
Humans are intrinsically social, and our need to connect is the driving force behind Facebook's dominance in social networking; it is estimated that nearly 25% of all the people in the world (and 50% of all Internet users) have active Facebook accounts. Unsurprisingly, Facebook has become a useful research tool—and ecosystem—for evaluating human behavior (Kosinski, Matz, & Gosling, 2015).

Research finds that aspects of Facebook activity, such as users' photos, messages, music lists, and "likes" (reported preferences for groups, people, brands, and other things), convey accurate information about individual differences in demographic, personality, attitudinal, and cognitive ability variables. Michal Kosinski and colleagues have shown that machine-learning algorithms can predict scores on well-established psychometric tests using Facebook "likes" as data input (Kosinski, Stillwell, & Graepel, 2013). This makes sense, because "likes" are the digital equivalent of identity claims: "Likes" tell others about our values, attitudes, interests, and preferences, all of which relate to personality and IQ. In some cases, associations between Facebook "likes" and psychometrically derived individual difference scores are intuitive. For example, people with higher IQ scores tend to "like" science, the Godfather movies, and Mozart. However, other associations are less intuitive and may not have been discovered without large-scale exploratory data mining. For example, one of the main markers—strongest signals—of high IQ scores was "liking" curly fries (a type of French fry, popular in the United States, characterized by a wrinkly, spring-like shape). Somewhat ironically, media coverage of this finding led to an increase in "liking" curly fries, presumably without causing a global rise in IQ scores. However, unlike the static scoring keys used in traditional psychometric assessments, machine-learning algorithms can autocorrect in real time. Thus, when too many unintelligent individuals "like" curly fries, the item ceases to signal higher intelligence. This point is important for thinking about validity in the digital world: Some talent signals may not generalize beyond particular contexts or may change over time (like curly fries).

Facebook is allegedly interested in using personality to understand user behavior, and it incorporates a wide range of personal signals, such as hometown, frequency of movement, friend count, and educational level, to segment its audience for media and marketing purposes (Chapsky, 2011). Perhaps the same information will soon be used for talent management purposes, especially in recruitment or prehiring decisions. Social media analytics has turned up several such counterintuitive associations, which big data enthusiasts and HR practitioners care little about because their main goal is to predict, rather than explain, behavior. I-O psychologists, on the other hand—and psychologists in general—may fret about the atheoretical, black-box, data-mining approach, which has created somewhat of a gap—and tension—between the science and the machine approach.
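The general recipe reported by Kosinski, Stillwell, and Graepel (2013)—a sparse user-by-like matrix, dimensionality reduction, then a linear model—can be illustrated schematically. The sketch below uses scikit-learn with synthetic stand-in data (the real studies used tens of thousands of users and actual psychometric scores):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1,000 users x 500 "likes" (1 = user liked the page).
n_users, n_likes = 1000, 500
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)

# Synthetic criterion: a trait score weakly driven by a handful of likes plus
# noise, mimicking the weak-but-many-signals structure of real like data.
signal_idx = rng.choice(n_likes, size=20, replace=False)
trait = likes[:, signal_idx].sum(axis=1) + rng.normal(0, 1.0, n_users)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

# Reduce the like matrix to latent components, then regress the trait on them.
model = make_pipeline(TruncatedSVD(n_components=50, random_state=0), LinearRegression())
model.fit(X_train, y_train)
print("Out-of-sample R^2:", round(model.score(X_test, y_test), 3))
```

Because such a model is refit on fresh data, a like that loses its diagnostic value (the curly-fries case) simply receives a smaller weight at the next training run, which is the "autocorrecting scoring key" property noted above.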
Some estimates suggest that 70% of adults are passive job seekers (i.e., not actively searching for new jobs but open to new opportunities), and companies like TalentBin and Entelo identify potential job candidates outside the pool of existing job applicants (Bersin, 2013). Entelo claims that it can search (scrape) 200 million candidate profiles from 50 Internet sources and identify individuals likely to change jobs within the next 3 months (Entelo Outbound Recruiting Datasheet). If these claims are accurate, they raise the possibility of placing workers in more relevant roles and lowering the proportion of disengaged employees, the economic value of which should not be underestimated.
Another unexpected talent signal concerns the language people use online. Psychologists from Freud and Rorschach onward have argued that people's language reveals core aspects of their personalities (Tausczik & Pennebaker, 2010). Linguistic analysis is a promising methodology for inferring talent from web activity, and it can be applied to free-form text (Schwartz et al., 2013). This methodology has been around for 25 years, but modern scraping tools and publicly available text have made it applicable to large-scale profiling. Indeed, work with the Linguistic Inquiry and Word Count application (LIWC; Pennebaker, 1993) has shown that some LIWC categories correspond to the Big Five personality traits (Pennebaker, 2011). For example, for both men and women, higher word count and fewer large words predicted extraversion (Mehl, Gosling, & Pennebaker, 2006), which itself correlates with leader emergence (Grant, Gino, & Hofmann, 2011). Other work (Schwartz et al., 2013) shows that gender, religious identity, age, and personality can be identified from linguistic information.
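At its core, a closed-vocabulary analysis of the LIWC type reduces to counting category words and normalizing by text length. The sketch below is a toy illustration; the category word lists are invented for the example and are far smaller than LIWC's real, proprietary dictionaries:

```python
import re

# Toy word categories (illustrative only; real LIWC dictionaries contain
# thousands of entries across dozens of categories).
CATEGORIES = {
    "negative_emotion": {"awful", "horrible", "depressing", "sad"},
    "social": {"party", "friends", "bars", "drinks"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of total words, LIWC-style."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

sample = "Met friends for drinks after an awful, depressing week."
print(category_rates(sample))
```

Rates like these are then correlated with trait measures across many writers, which is how associations such as neuroticism with negative-emotion words are established.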
Unlike other areas of assessment-related innovation, peer-reviewed studies provide evidence for the links between word usage and important individual differences. For example, the words that neurotics use in blogs include "awful," "horrible," and "depressing," whereas extraverts talk about "bars," "drinks," and "Miami" (Schwartz et al., 2013; Yarkoni, 2010). Less intelligent people mangle grammar and make more frequent spelling errors. There are free tools available to infer personality from open text (IBM's Watson does it for you here: http://bit.ly/1OjlkuR). These tools allow us to copy and paste anyone's writing into a web page and generate their personality profile. New applications analyze e-mail communications and provide users with tips on how to respond to senders, based on the senders' inferred personalities (http://bit.ly/1lkv5gB); others use speech-to-text tools and then parse the text through a personality engine (e.g., HireVue.com).
What is unknown is whether these types of talent signals are additive in terms of predictive power. For example, do biodata, Facebook "likes," and voice profiling jointly improve the prediction of work-related outcomes? This is an area ripe for large-scale research.
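The additivity question is, in effect, a question about incremental validity, which can be checked by comparing nested models. A minimal sketch with synthetic data follows (real studies would use cross-validated estimates on actual predictor and criterion measures):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500

# Synthetic predictors: a traditional signal (biodata) and two digital signals.
biodata = rng.normal(size=n)
likes_score = 0.5 * biodata + rng.normal(size=n)   # partly redundant with biodata
voice_score = rng.normal(size=n)                   # an independent signal
performance = biodata + 0.3 * likes_score + 0.4 * voice_score + rng.normal(size=n)

def cv_r2(X: np.ndarray) -> float:
    """Cross-validated R^2 of a linear model predicting performance."""
    return cross_val_score(LinearRegression(), X, performance, cv=5, scoring="r2").mean()

base = cv_r2(np.column_stack([biodata]))
full = cv_r2(np.column_stack([biodata, likes_score, voice_score]))
print(f"biodata alone: R2 = {base:.3f}")
print(f"+ digital signals: R2 = {full:.3f} (incremental validity = {full - base:.3f})")
```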
Big Data and Workplace Analytics
In-house data are another source of information about talent. Because so much work is now digital—recorded, logged, and transmitted via the Internet of Things—organizational performance data are both vast and fine-grained. Mining these data for critical signals of talent is consistent with the traditional I-O psychology view that past behavior is a good predictor of future behavior. For example, big data may be used to connect aggregate sales staff personality variables, LinkedIn use, engagement scores, and sales activity (including number of calls, call frequency, length of time spent with customers, and net promoter scores) to customer ordering data and future revenues. Once the data are recorded, models can be developed and tested backward in time to create predictions (as is the case when modeling share-market behavior).
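Testing models "backward in time" amounts to a time-ordered train/test split: fit on earlier periods and evaluate on later ones, never the reverse. A minimal sketch with hypothetical field names and synthetic numbers:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical monthly records for 50 salespeople over 24 months.
df = pd.DataFrame({
    "month": pd.date_range("2014-01-01", periods=24, freq="MS").repeat(50),
    "calls": rng.poisson(80, 1200),
    "engagement": rng.normal(3.5, 0.5, 1200),
})
df["revenue"] = 120 * df["calls"] + 5000 * df["engagement"] + rng.normal(0, 2000, 1200)

# Backtest: train on the first 18 months, test on the last 6 (no look-ahead).
cutoff = pd.Timestamp("2015-07-01")
train, test = df[df["month"] < cutoff], df[df["month"] >= cutoff]

model = LinearRegression().fit(train[["calls", "engagement"]], train["revenue"])
print("Out-of-time R^2:",
      round(model.score(test[["calls", "engagement"]], test["revenue"]), 3))
```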
Sandy Pentland and his MIT colleagues have used tracking badges to follow employees' behaviors at work and record the frequency of talking, turn taking, and so on. This showed where people go for advice (or gossip) and how ideas and information spread within an organization. These data predicted team effectiveness (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010) and identified the individuals who are central nodes in the network (presumably because they are more useful to the organization or because they have more and stronger connections with colleagues).
One critical ingredient in talent identification is the criterion space: the empirical evidence of talent. In the I-O field, the well-known criterion problem (Austin & Villanova, 1992) remains unresolved. Bartram noted that traditional validation research has been predictor centric (Bartram, 2005), and despite the development of competency frameworks (e.g., Lombardo & Eichinger, 2002), criterion data remain noisy, dependent on supervisor ratings, and unsatisfactory. Although more data may not help conceptually, a finer-grained understanding of performance is possible in principle, although this issue has not been addressed to date. Emergent tools and products suggest that this inevitably will happen.
For example, an important area in organizational big data is the case of peer evaluations or open-source ratings. Glassdoor, a sort of Yelp of workplaces, is a good example. The site enables employees to rate their jobs and work experience, and it has manager ratings for nearly 50,000 companies; anybody can retrieve the ratings. This enables employers to see how employees perceive the company culture and how individual managers have impacted workers and workplaces. With these data, organizations can effectively crowdsource their evaluations of leadership, looking at the link between employees' ratings and company performance.
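In practice, crowdsourcing the evaluation of leadership means aggregating many noisy individual ratings per company and relating the aggregate to a performance criterion. A toy sketch with synthetic numbers (Glassdoor's actual data are accessed through its own channels):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic employee reviews: many noisy ratings per company.
companies = [f"co_{i}" for i in range(200)]
reviews = pd.DataFrame({
    "company": rng.choice(companies, size=5000),
    "manager_rating": rng.integers(1, 6, size=5000),  # 1-5 stars
})

# Aggregate to a per-company leadership score.
leadership = reviews.groupby("company")["manager_rating"].mean()

# Synthetic company performance, weakly related to the aggregate rating.
performance = 0.3 * leadership + rng.normal(0, 0.5, len(leadership))

print("Rating-performance correlation:",
      round(np.corrcoef(leadership, performance)[0, 1], 2))
```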
So long as organizations have robust criteria, their ability to identify novel signals will increase, even if those signals are unusual or counterintuitive. As an example of an unlikely talent signal, Evolv, an HR data analytics company, found that applicants who use Mozilla Firefox or Google Chrome as their web browsers are likely to stay in their jobs longer and perform better than those who use Internet Explorer or Safari (Pinsker, 2015). Knowing which browser candidates used to submit their online applications may prove to be a weak but useful talent signal. Evolv hypothesizes that the correlations among browser usage, performance, and employment longevity reflect the initiative required to download a nonnative browser (Pinsker, 2015).
Gamification
More Americans play games than do not, half of all gamers are under the age of 35 (Campbell, 2015), and parents mostly think video games are a positive influence on their children (Big Fish Blog, 2015; Lofgren, 2016); it therefore seems obvious to look for talent signals via this medium. For instance, HR Avatar conducts workplace simulations in the form of interactive cartoons aimed at customer service or security roles. Consider also the personality assessments developed by Visual DNA, which present users with choices in the form of images and pictures, an intuitive and engaging experience with validity comparable to that of other questionnaire formats.
Gamification is now mobile. One company, Knack, claims to evaluate several different talents ("knacks") from playing puzzle-solving games on mobile phones. What is interesting is that Knack has completely taken on the gamified persona, awarding players badges that they can share with friends. Another company, Pymetrics, gamifies some of the assessment principles of neuroscience to infer the personality and intelligence of candidates. Whether and to what degree it is useful to share this information with others is yet to be seen. But this approach represents a shift in the relationship among test providers, test takers, and firms: from a business-to-business model to a business-to-consumer model, and from a reactive test taker to a proactive test taker. We predict that the testing market will increasingly transition from the current push model—where firms require people to complete a set of assessments in order to quantify their talent—to a pull model, where firms will search various talent badges to identify the people they seek to hire. In that sense, the talent industry may follow in the footsteps of the mobile dating industry. Consider the case of Tinder, a popular and addictive mobile dating app. First, users agree to have some elements of their social media footprint profiled when they sign up for the service. Next, their peers are able to judge these profiles and report whether they are interested by swiping left or right (a gamified version of "hot or not"). This is consistent with research showing that personality traits can be accurately inferred from photographs and that these inferences drive dating and relationship choices (Zhang, Kong, Zhong, & Kou, 2014). Finally, if the algorithm determines a match, both parties receive instant feedback on their preferences. This model could easily be applied to the talent identification and staffing process; in fact, it is easier to predict job performance and career success than relationship compatibility and success.
The Enablers of New Tools
The World Wide Web has made it possible for workers to leave digital footprints all over the Internet, perhaps most prominently on social networking sites. However, without devices to examine these footprints, these novel talent signals would be of no use. Technological advances in three key areas have made the new tools of HR professionals possible: data scraping, data storage, and data analytics.

Data scraping involves gathering data that are available on websites, smartphones, and computer networks and translating these data into behavioral insights. Gathering data on potential workers is a first step toward understanding what they are like, and some of the most powerful devices for gathering and manipulating data are open source and free to use (e.g., Python, Perl), making them quite flexible and readily available. Because many data scraping devices require working with and/or developing application program interfaces (i.e., programming skills), HR professionals are enlisting computer programmers to develop customized devices for their data scraping needs.
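A minimal scraping sketch using Python's requests and BeautifulSoup libraries is shown below. The URL and CSS selector are placeholders invented for the example, and any real use must respect a site's terms of service and robots.txt:

```python
import requests
from bs4 import BeautifulSoup

def scrape_profile_titles(url: str) -> list[str]:
    """Fetch a page and extract text from elements matching a placeholder selector."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # ".profile-title" is hypothetical; inspect the target page for the real selector.
    return [el.get_text(strip=True) for el in soup.select(".profile-title")]

if __name__ == "__main__":
    titles = scrape_profile_titles("https://example.com/people")  # placeholder URL
    print(titles)
```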
The availability of large amounts of useful data has increased demand for data storage. As a result, devices for data storage and centralization have emerged; they include cloud-based storage systems (e.g., iCloud, Dropbox) and advanced Hadoop clusters, which allow for massive data storage and enormous computer processing power to run virtually any application.

Finally, advances in data analytics have created interesting new HR tools. For example, software for text analysis and object recognition can rapidly transform purely qualitative information into quantitative data. Such data can then be submitted to a variety of new analytic techniques such as machine learning. In contrast with traditional data analytic techniques, machine-learning techniques rely on sophisticated algorithms to (a) detect hidden structures in the data (i.e., unsupervised learning) or (b) develop predictive models of known criteria (i.e., supervised learning). Once again, some of the most powerful tools for conducting these analyses are open source and free (e.g., R; R Core Team, 2015), making them available to anyone.
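The two learning modes can be contrasted in a few lines of scikit-learn. The data here are synthetic; in an HR setting the rows would be employees and the criterion a performance measure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))                     # e.g., behavioral features per employee
y = X[:, 0] - X[:, 2] + rng.normal(0, 0.5, 300)   # e.g., a performance criterion

# Unsupervised: find hidden structure with no criterion at all.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))

# Supervised: learn to predict a known criterion.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("Held-out R^2:", round(model.score(X_te, y_te), 2))
```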
The Future Is Here, but Be Careful
As William Gibson pointed out, the future is already here; it's just not yet evenly distributed. In a hyperconnected world where everyday behaviors are recorded, unprecedented volumes of data are available to evaluate human potential. I-O psychologists need to recognize the impact our digital lives will have on research methods, findings, and practices. We believe that these vast data pools and improved analytic capabilities will fundamentally disrupt the talent identification process. There are several key points to be derived from our review. First, many more talent signals will become available. Second, even if these emerging signals are weak or noisy, they may still work additively and be useful. Third, new analytic tools and computing power will continue to emerge and allow us to improve and refine the prediction of behavior in a wide range of contexts, probably based on the additive nature of these signals. Alternatively, if the signals do not prove to be additive, we anticipate that subsets of them will allow more specific prediction of performance. That is to say, computing power and the vast number of data points will allow for much greater alignment between the criterion and the predictor, which is a fundamental tenet of validity (J. Hogan & Holland, 2003).
The datification of talent is upon us, and the prospect of new technologies is exciting. The digital revolution is just beginning to appear in practice, and research into these technologies lags behind. We therefore suggest four caveats regarding this revolution.

First, the new tools have not yet demonstrated validity comparable to that of old-school methods, they tend to disregard theory, and they pay little attention to the constructs being assessed. This issue is important but possibly irrelevant, because big data enthusiasts, assessment purveyors, and HR practitioners are piling into this space in any event. Roth and colleagues (Roth et al., 2016) point out that construct validity is lacking when information from social media is used for employment purposes, which does not seem to worry big data enthusiasts, who are simply interested in finding relationships between variables. In our view, predicting behavior is clearly a key priority in talent identification, but understanding behavior is equally important. Indeed, scientifically defensible assessment tools do not just provide accurate data; they also tell a story about the candidate that explains why we may expect them to behave in certain ways. Until we have peer-reviewed evidence regarding the incremental validity of the new methods over and above the old, they will remain bright, shiny objects in the brave new world of HR. Though, as we have pointed out, shiny objects interest HR practitioners regardless of their demonstrated validity and reliability.
Three additional issues may constrain the implementation of new assessment tools in talent identification processes. First, privacy and anonymity concerns may limit access to individual data, a point that has been raised repeatedly in earlier scholarly articles (Brown & Vaughn, 2011; Davison et al., 2012; Roth et al., 2016). On the other hand, scholarly concern has not stopped recruiters, HR, or managers from using individuals' digital profiles, nor has it slowed the development of tools designed specifically to do this. Individuals may provide consent for their data to be used without understanding the implications of doing so, or they may simply be unaware. Governments and privacy advocates may step in to regulate access or control usage, but it would be better if consumers fully understood what can be known about them and how that information might be used. Note, however, that in other fields of application, such as programmatic marketing, predictive analytics appear to operate without many ethical concerns, even though they offer relatively less to consumers; for example, the promise of a relevant ad is arguably less enticing (and less likely) than the promise of a relevant job.
Second, in order to match or surpass the accuracy attained by established tools, the cost of building new tools may be prohibitive. For example, developing a valid and comprehensive gamified assessment of personality costs much more than developing a traditional self-report or situational judgment test. Thus, developers face a trade-off among price, accuracy, and user experience (e.g., when you improve the user experience, you increase price but decrease accuracy; when you increase accuracy, you increase price; and if you want to maintain the same level of accuracy while improving the user experience, you increase price substantially).
Third, new tools are extremely likely to identify an individual's ethnicity, gender, or sexual orientation along with talent signals. Certainly in the United States, and throughout much of the industrialized world, Equal Employment Opportunity Commission guidelines concerning adverse impact must be considered; even a fundamentally solid assessment tool should come under additional scrutiny if it is seen to contribute to adverse impact. This issue strengthens the case for more evidence-based reviews of any emerging tools, in particular those that scrape publicly available records of individuals (e.g., Facebook or other social media algorithms). Clearly, emerging tools enable employers to know more about potential candidates than they probably should, and ethical concerns—as well as the law—may represent the ultimate barrier to the application of new technologies.
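Adverse impact is commonly screened with the four-fifths rule of thumb from the EEOC's Uniform Guidelines: the selection rate for any group should be at least 80% of the rate for the group with the highest selection rate. A minimal check (with hypothetical numbers):

```python
def adverse_impact_ratio(selected: dict, applicants: dict) -> dict:
    """Selection-rate ratio of each group relative to the highest-rate group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical applicant and hire counts for illustration.
ratios = adverse_impact_ratio(
    selected={"group_a": 48, "group_b": 24},
    applicants={"group_a": 100, "group_b": 80},
)
for group, ratio in ratios.items():
    flag = "potential adverse impact" if ratio < 0.8 else "passes 4/5 rule"
    print(f"{group}: impact ratio = {ratio:.2f} ({flag})")
```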
In short, people are living their lives online. By doing so they make their behavior public, and that behavior leaves more or less perpetual traces—often inadvertently. The ability to penetrate the noise of all this information and identify robust talent signals is improving, but merging today's fragmented services with scientifically proven methods will be necessary to create the most accurate and in-depth profiles yet.
Last Thoughts
In the context of overall enthusiasm for these adventures in digital mining as applied to talent identification, we have two last thoughts. First, although it is clear that most of the innovations discussed in this article have yet to demonstrate compelling levels of validity, such as those that characterize academic I-O research, from a practical standpoint that may not be too relevant. As most I-O psychologists will know, there is a substantial gap between what science prescribes and what HR practitioners do, especially around assessment practices. In particular, the accuracy of talent identification tools is not the only factor real-world HR practitioners consider when they make decisions about talent identification methods. Even when it is, most real-world HR practitioners are not competent enough to evaluate accuracy. This enables vendors to make bogus claims, such as "the accuracy of our tool is 95%." In a world driven by accuracy, the Myers-Briggs would not be the most popular assessment tool. It seems to us that organizations and HR practitioners are more interested in price and user experience than in accuracy.

Second, the history of science is much more one of adventitious and serendipitous findings than many people realize. Raw empiricism has often produced marvelously useful outcomes. So we are not at all worried about the fact that this explosion of talent identification procedures is uninformed by any concerns with well-established personality theory and what we know about the nature of human nature. As discussed, there are two fundamental questions underlying the assessment process: (a) what to assess and (b) how to assess it? Virtually all of the innovative thinking in the digital revolution in talent identification concerns the second question. Having scraped and collated the various online cues, the next question concerns how to interpret the data. As Wittgenstein (an Austrian proto-psychologist) once observed, "In psychology there are empirical methods and conceptual confusions." The most thoughtful of the data scrapers have provided evidence that their data can be used to predict aspects of the five-factor model, an idea at least 65 years old. Going forward, it would be nice to see as much effort put into reconceptualizing personality as is being put into assessment methods. In the end, true advancements will come if we can balance data and theory, for only theory can translate information into knowledge. As Immanuel Kant famously noted, "Theory without data is groundless, but data without theory is just uninterpretable."
References

Almlund, M., Duckworth, A. L., Heckman, J., & Kautz, T. (2011). Personality psychology and economics. In Handbook of the economics of education (Vol. 4). Amsterdam, the Netherlands: Elsevier. doi:10.1016/B978-0-444-53444-6.00001-8
Amabile, T. M. (1998). How to kill creativity. Harvard Business Review, 76(5), 77–87.
Austin, J. T., & Villanova, P. (1992). The criterion problem: 1917–1992. Journal of Applied Psychology, 77(6), 836–874. doi:10.1037/0021-9010.77.6.836
Back, M. D., Stopfer, J. M., Vazire, S., Gaddis, S., Schmukle, S. C., Egloff, B., & Gosling, S. D. (2010). Facebook profiles reflect actual personality, not self-idealization. Psychological Science, 21(3), 372–374. doi:10.1177/0956797609360756
Barnes, C. M., & Morgeson, F. P. (2007). Typical performance, maximal performance, and performance variability: Expanding our understanding of how organizations value performance. Human Performance, 20(3), 259–274. doi:10.1080/08959280701333289
Bartram, D. (2005). The Great Eight competencies: A criterion-centric approach to validation. The Journal of Applied Psychology, 90(6), 1185–1203. doi:10.1037/0021-9010.90.6.1185
Berkelaar, B. L. (2014). Cybervetting, online information, and personnel selection: New transparency expectations and the emergence of a digital social contract. Management Communication Quarterly, 26, 377–403. doi:10.1177/0893318912439474
Berkelaar, B. L., & Buzzanell, P. M. (2015). Online employment screening and digital career capital: Exploring employers' use of online information for personnel selection. Management Communication Quarterly, 29(1), 84–113. doi:10.1177/0893318914554657
Bersin, J. (2013, February). Big data in human resources: Talent analytics comes of age. Forbes. Retrieved from http://www.forbes.com/sites/joshbersin/2013/02/17/bigdata-in-human-resources-talent-analytics-comes-of-age/
Big Fish Blog. (2015). Video game statistics & trends: Who's playing what & why? Retrieved from http://www.bigfishgames.com/blog/2015-global-video-game-stats-whos-playing-what-and-why/
Black, E. (1973). Hegel on war. The Monist, 57(4), 570–583.
Borman, W. C. (1997). 360° ratings: An analysis of assumptions and a research agenda for evaluating their validity. Human Resource Management Review, 7(3), 299–315. doi:10.1016/S1053-4822(97)90010-3
Breaugh, J. A. (2009). The use of biodata for employee selection: Past research and future directions. Human Resource Management Review, 19(3), 219–231. doi:10.1016/j.hrmr.2009.02.003
Brown, V. R., & Vaughn, E. D. (2011). The writing on the (Facebook) wall: The use of social networking sites in hiring decisions. Journal of Business and Psychology, 26(2), 219–225. doi:10.1007/s10869-011-9221-x
Campbell, C. (2015, April 14). Here's how many people are playing games in America. Retrieved from http://www.polygon.com/2015/4/14/8415611/gaming-stats-2015
Chambers, E., Foulon, M., Handfield-Jones, H., Hankin, S., & Michael, E., III. (1998). The war for talent. The McKinsey Quarterly, 3, 44–57. doi:10.4018/jskd.2010070103
Chamorro-Premuzic, T. (2013). The perfect hire. Scientific American Mind, 24, 42–47. doi:10.1038/scientificamericanmind0713-42
Chamorro-Premuzic, T., & Furnham, A. (2010). The psychology of personnel selection. Cambridge, UK: Cambridge University Press.
Chapsky, D. (2011). Leveraging online social networks and external data sources to predict personality. Proceedings—2011 International Conference on Advances in Social Networks Analysis and Mining, 428–433.
Chorley, M. J., Whitaker, R. M., & Allen, S. M. (2015). Personality and location-based social networks. Computers in Human Behavior, 46, 45–56. doi:10.1016/j.chb.2014.12.038
Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs assessed and a meta-analysis of their criterion-related validities. Personnel Psychology, 63(1), 83–117. doi:10.1111/j.1744-6570.2009.01163.x
Cole, M., Feild, H., & Stafford, J. (2005). Validity of resumé reviewers' inferences concerning applicant personality based on resumé evaluation. International Journal of Selection and Assessment, 13(4), 321–324. doi:10.1111/j.1468-2389.2005.00329.x
Craft, R. C., & Leake, C. (2002). The Pareto principle in organizational decision making. Management Decision, 40(8), 729–733. doi:10.1108/00251740210437699
Davison, H. K., Maraist, C. C., Hamilton, R. H., & Bing, M. N. (2012). To screen or not to screen? Using the Internet for selection decisions. Employee Responsibilities and Rights Journal, 24(1), 1–21. doi:10.1007/s10672-011-9178-y
de Montjoye, Y., Quoidbach, J., Robic, F., & Pentland, A. S. (2013). Predicting personality using novel mobile phone-based metrics. In A. M. Greenberg, W. G. Kennedy, & N. D. Bos (Eds.), Social computing, behavioral-cultural modeling and prediction (pp. 48–55). New York, NY: Springer.
Dries, N. (2013). The psychology of talent management: A review and research agenda. Human Resource Management Review, 23(4), 272–285. doi:10.1016/j.hrmr.2013.05.001
Edwards, J. R. (2008). Person–environment fit in organizations: An assessment of theoretical progress. The Academy of Management Annals, 2(1), 167–230. doi:10.1080/19416520802211503
Ekman, P. (1993). Facial expression and emotion. The American Psychologist, 48(4), 384–392. doi:10.1037/0003-066X.48.4.384
El Ouirdi, M., Segers, J., El Ouirdi, A., & Pais, I. (2015). Predictors of job seekers' self-disclosure on social media. Computers in Human Behavior, 53, 1–12. doi:10.1016/j.chb.2015.06.039
Grant, A. M., Gino, F., & Hofmann, D. A. (2011). Reversing the extraverted leadership advantage: The role of employee proactivity. Academy of Management Journal, 54(3), 528–550. doi:10.5465/AMJ.2011.61968043
Heider, F. (1958). The psychology of interpersonal relations. Hillsdale, NJ: Erlbaum.
Hogan, B. (2010). The presentation of self in the age of social media: Distinguishing performances and exhibitions online. Bulletin of Science, Technology & Society, 30(6), 377–386. doi:10.1177/0270467610385893
Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job-performance relations: A socioanalytic perspective. The Journal of Applied Psychology, 88(1), 100–112. doi:10.1037/0021-9010.88.1.100
Hogan, R. T., Chamorro-Premuzic, T., & Kaiser, R. B. (2013). Employability and career success: Bridging the gap between theory and reality. Industrial and Organizational Psychology: Perspectives on Science and Practice, 6, 3–16.
Kaiser, R. B., Hogan, R., & Craig, S. B. (2008). Leadership and the fate of organizations. The American Psychologist, 63(2), 96–110. doi:10.1037/0003-066X.63.2.96
Kosinski, M. (2016, January). Mining big data to understand the link between facial features and personality. Paper presented at the 17th Annual Convention of the Society for Personality and Social Psychology, San Diego, CA.
Kosinski, M., Matz, S. C., & Gosling, S. D. (2015). Facebook as a research tool for the social sciences. American Psychologist, 70(6), 543–556. doi:10.1037/a0039210
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 5802–5805.
Kuncel, N. R., Ones, D. S., & Sackett, P. R. (2010). Individual differences as predictors of work, educational, and broad life outcomes. Personality and Individual Differences, 49(4), 331–336. doi:10.1016/j.paid.2010.03.042
Lambiotte, R., & Kosinski, M. (2014). Tracking the digital footprints of personality. Proceedings of the IEEE, 102(12), 1934–1939. doi:10.1109/JPROC.2014.2359054
Levashina, J., Hartwell, C. J., Morgeson, F. P., & Campion, M. A. (2014). The structured employment interview: Narrative and quantitative review of the research literature. Personnel Psychology, 67(1), 241–293. doi:10.1111/peps.12052
Lofgren, K. (2016, February 8). 2016 video game statistics & trends: Who's playing what & why. Retrieved from http://www.bigfishgames.com/blog/2016-video-game-statistics-and-trends/
Lombardo, M. M., & Eichinger, R. W. (2002). The leadership machine: Architecture to develop leaders for any future (3rd ed.). Minneapolis, MN: Lominger International.
Madera, J. M. (2012). Using social networking websites as a selection tool: The role of selection process fairness and job pursuit intentions. International Journal of Hospitality Management, 31(4), 1276–1282. doi:10.1016/j.ijhm.2012.03.008
Mehl, M. R., Gosling, S. D., & Pennebaker, J. W. (2006). Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life. Journal of Personality and Social Psychology, 90(5), 862–877. doi:10.1037/0022-3514.90.5.862
Pennebaker, J. W. (1993). Putting stress into words: Health, linguistic, and therapeutic implications. Behaviour Research and Therapy, 31(6), 539–548. doi:10.1016/0005-7967(93)90105-4
Pennebaker, J. W. (2011). Your use of pronouns reveals your personality. Harvard Business Review, 89, 32–33.
Pinsker, J. (2015, March). People who use Firefox or Chrome are better employees.
The Atlantic. Retrieved from
http://www.theatlantic.com/business/archive/2015/03/
people-who-use-refox-or-chrome-are-better-employees/387781/
Ployhart, R. E. (2006). Stang in the 21st century: New challenges and strategic opportu-
nities. Journal of Management, 32(6), 868–897. doi:
10.1177/0149206306293625
Porter, L. W., & Lawler, E. E. (1968). What job attitudes tell about motivation. Harvard Busi-
ness Review, 46, 118–126.
R Core Team. (2015). R: A language and environment for statistical computing [Computer
software]. Vienna, Austria: R Foundation for Statistical Computing.
Recruiting Daily. (2015). Trac, transparency & talent technology: The future of
online recruiting. Retrieved from
http://recruitingdaily.com/trac-transparency-
talent-technology-the-future-of-online-recruiting/
Roth, P. L., Bobko, P., Van Iddekinge, C. H., & Thatcher, J. B. (2016). Social media in
employee-selection-related decisions: A research agenda for uncharted territory. Jour-
nal of Management, 42(1), 269–298. doi:
10.1177/0149206313503018
Roth, P. L., & Hucutt, A. I. (2013). A meta-analysis of interviews and cognitive
ability: Back to the future? Journal of Personnel Psychology, 12(4), 157–169.
doi:
10.1027/1866-5888/a000091
Ryan, A., Cohn, J., & Lucey, S. (2009). Automated facial expression recognition system.
43rd Annual 2009 International Carnahan Conference on Security Technologies, 172–
177. doi:
10.1109/CCST.2009.5335546
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. doi:10.1037/0033-2909.124.2.262
Schmitt, N. (2013). Personality and cognitive ability as predictors of effective performance at work. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 45–65. doi:10.1146/annurev-orgpsych-031413-091255
Schwartz, H. A., Eichstaedt, J. C., Kern, M. L., Dziurzynski, L., Ramones, S. M., Agrawal, M., . . . Ungar, L. H. (2013). Personality, gender, and age in the language of social media: The open-vocabulary approach. PLoS ONE, 8(9), e73791. doi:10.1371/journal.pone.0073791
Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. doi:10.1177/0261927X09351676
Thornton, G. C., & Gibbons, A. M. (2009). Validity of assessment centers for personnel selection. Human Resource Management Review, 19(3), 169–187. doi:10.1016/j.hrmr.2009.02.002
Van Rooy, D. L., & Viswesvaran, C. (2004). Emotional intelligence: A meta-analytic investigation of predictive validity and nomological net. Journal of Vocational Behavior, 65(1), 71–95. doi:10.1016/S0001-8791(03)00076-9
Vicknair, J., Elkersh, D., Yancey, K., & Budden, M. C. (2010). The use of social networking
websites as a recruiting tool for employers. American Journal of Business Education,
3(11), 7–12.
Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2005). Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences. Journal of Applied Psychology, 90(1), 108–131. doi:10.1037/0021-9010.90.1.108
Warman, M. (2014, May 13). Google must delete your data if you ask, EU rules. The Telegraph. Retrieved from http://www.telegraph.co.uk/technology/google/10827005/Google-must-delete-your-data-if-you-ask-EU-rules.html
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688. doi:10.1126/science.1193147
Yan, W. J., Wang, S. J., Liu, Y. J., Wu, Q., & Fu, X. (2014). For micro-expression recognition: Database and suggestions. Neurocomputing, 136, 82–87. doi:10.1016/j.neucom.2014.01.029
Yarkoni, T. (2010). Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality, 44(3), 363–373. doi:10.1016/j.jrp.2010.04.001
Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036–1040.
Zhang, Y., Kong, F., Zhong, Y., & Kou, H. (2014). Personality manipulations: Do they modulate facial attractiveness ratings? Personality and Individual Differences, 70, 80–84. doi:10.1016/j.paid.2014.06.033
Zide, J., Elman, B., & Shahani-Denning, C. (2014). LinkedIn and recruitment: How profiles differ across occupations. Employee Relations, 36(5), 583–604. doi:10.1108/ER-07-2013-0086