AN INVESTIGATION INTO THE VALIDITY
OF ASYNCHRONOUS WEB-BASED VIDEO
EMPLOYMENT-INTERVIEW RATINGS
C. Allen Gorman
East Tennessee State University and
GCG Solutions, LLC, Limestone,
Tennessee
Jim Robinson
Hire-Intelligence, LLC,
Gloucester, Virginia
Jason S. Gamble
East Tennessee State University
Drawing from Huffcutt, Conway, Roth, and Stone's (2001) taxonomy of employment-interview constructs, we hypothesized that asynchronous web-based video employment interviews would be associated with job performance and organizational tenure using a crowd-sourced sample of 75 employed professionals. We found that composite interview ratings and construct ratings of mental capability, knowledge and skills, applied social skills, and conscientiousness were significantly related to self-rated job performance. We also found that construct ratings of knowledge and skills and applied social skills were significantly associated with self-reported organizational tenure. Implications for web-based video employment-interview research and practice are discussed.
Keywords: employment interviews, video interviews, interview validity, interview
constructs
The use of information technology in making human-resource decisions has increased substantially over the last decade (Cascio & Aguinis, 2011; Ryan & Ployhart, 2014; Tippins, 2015). With the growing need to efficiently and cost-effectively recruit and hire from among an increasingly global workforce, organizations have turned to computer and web-based technology to assist with many personnel recruitment and selection activities (Reynolds & Weiner, 2009; Ryan & Ployhart, 2014; Tippins, 2015). Although the landscape of personnel selection has changed thanks to advancements in information technology, there is surprisingly little empirical research on technology-enhanced selection methods. The purpose of the present study was to conduct an initial validation of asynchronous web-based video employment interviews and constructs frequently rated in them.

C. Allen Gorman, Department of Management and Marketing, East Tennessee State University, and GCG Solutions, LLC, Limestone, Tennessee; Jim Robinson, Hire-Intelligence, LLC, Gloucester, Virginia; Jason S. Gamble, Department of Psychology, East Tennessee State University.

C. Allen Gorman provided consulting services for Hire-Intelligence. Jim Robinson is a cofounder and part owner of Hire-Intelligence. Funding for this project was provided by Hire-Intelligence. A version of this article was presented at the 2014 Convention of the American Psychological Association, Washington, DC.

We thank Andrea Alvarez, Stephanie Bradley, Brian Drivas, Kelsey Geary, and Christina Thibodeaux for their assistance with interview coding.

Correspondence concerning this article should be addressed to C. Allen Gorman, Department of Management and Marketing, College of Business and Technology, East Tennessee State University, 128 Sam Wilson Hall, P.O. Box 70625, Johnson City, TN 37614. E-mail: gormanc@etsu.edu

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Consulting Psychology Journal: Practice and Research, 2018, Vol. 70, No. 2, 129–146. © 2018 American Psychological Association. 1065-9293/18/$12.00 http://dx.doi.org/10.1037/cpb0000102
Web-Based Video Employment Interviews
Employment interviews are a staple of personnel selection in organizations (Macan, 2009). Thanks to rapid advances in technology in recent years, employment interviews have expanded beyond the traditional face-to-face setting into media such as telephone, instant-message software, and computer-mediated video chat (Behrend & Thompson, 2013; Blacksmith, Willford, & Behrend, 2016; Huffcutt & Culbertson, 2010). Web-based video interviewing, in particular, has increased in popularity in recent years and continues to grow with advances in technology (Blacksmith et al., 2016; Levashina, Hartwell, Morgeson, & Campion, 2014). With the proliferation of affordable personal computers, cellular phones, and relatively inexpensive webcams and video applications, employers have begun to capitalize on the advanced technology available to many job applicants. Several Fortune 500 companies such as Oracle, Wal-Mart, and Geico have turned to web-based video interview platforms to assist with interviewing job candidates from around the globe. In a 2010 survey conducted by the Aberdeen Group, 10% of all organizations surveyed reported currently using video solutions for talent acquisition (Lombardi, 2010).
There are several advantages that web-based video interview technology presents to organizations, including (a) the reduced cost in terms of travel expenses and the time involved for hiring managers to conduct face-to-face interviews; (b) the ease and flexibility of scheduling and conducting interviews with candidates from anywhere in the world; (c) the consistency and structure of the interviews, in which each candidate is provided the same set of questions; (d) that the interview questions can be tailored to the needs of the company; (e) that hiring managers can replay, review, and rate the interview responses online instead of relying on memory and/or notes; and (f) the ability to prescreen candidates, if desired, before inviting them for a face-to-face interview. To illustrate, in the 2010 Aberdeen Group survey, 73% of organizations using web-based video interview technology reported a decrease in travel-related costs, with an average reduction of 9% (Lombardi, 2010).
Boutique firms specializing in providing web-based video platforms to companies for interviewing purposes have also begun to thrive. One type of platform offered by many of these firms is the asynchronous web-based video interview (AWBVI). AWBVIs involve recording a video of the job applicant's responses to a standard list of interview questions, which a decision-maker watches and evaluates at a later time (Levashina et al., 2014). They are particularly useful as a replacement for phone interviews as an initial assessment in the traditional interview process (Guchait, Ruetzler, Taylor, & Toldi, 2014). Compared with phone interviews, in which only one organizational representative can participate, AWBVIs allow multiple representatives to watch and rate the videos at their convenience, thus minimizing the impact of individual biases (Dipboye, 1992). Some researchers have grouped AWBVIs with video resumes, although AWBVIs are more closely related to interviews than to resumes (Hiemstra & Derous, 2015; Hiemstra, Derous, Serlie, & Born, 2012). AWBVIs are also distinguished from video-based situational judgment tests (SJTs): in an AWBVI, video is the medium through which information is conveyed from the candidate to the hiring manager, whereas in video-based SJTs, video serves as the stimulus to which a job candidate responds (Cucina et al., 2011; Lievens & Sackett, 2006).
The Present Study
A study of the criterion-related validity of AWBVIs would benefit the literature in several ways. First, despite the rise in popularity of AWBVIs, there is little research on them in the applied psychology literature. Although studies have examined technology-mediated selection interviews, such as videoconferencing or telephone interviews (e.g., Chapman & Rowe, 2002; Kroeck & Magnusen, 1997; Straus, Miles, & Levesque, 2001), there are no studies of employment interviews, to our knowledge, that have featured AWBVI platforms as the focal technology.
Second, despite the increasing frequency of their adoption by organizations, there is very little evidence about the criterion-related validity of the constructs rated in AWBVIs. The extant research on video-based interview technology has focused instead on topics such as applicant and interviewer perceptions of and reactions to video-based interviews (e.g., Chapman & Rowe, 2002; Hiemstra et al., 2012), applicant impression management in video-based interviews (Bangerter, Roulin, & König, 2012; Roulin, 2016), and the psychometric characteristics of interviewer ratings of video interviews (Van Iddekinge, Raymark, Roth, & Payne, 2006). Although perceptions, impression management, and psychometric characteristics are important for understanding video-based interviews, research is also needed to estimate the validity and utility of video-based interviews for potential use in applied settings.
Third, research on the criterion-related validity of job interviews conducted using video-based technology has not employed a rigorous and systematic approach to validating the constructs displayed in video-based job interviews. In the present study we used Huffcutt, Conway, Roth, and Stone's (2001) taxonomy of interview constructs to guide the development and execution of our validation efforts. Huffcutt et al. (2001) found that personality, social skills, mental capability, and knowledge and skills are the most commonly rated constructs in job interviews. Waung, Hymes, and Beatty (2014) found that applied social skills and mental capability could be reliably assessed in video resumes; however, no research indicates whether Huffcutt et al.'s (2001) constructs can be rated in AWBVIs or whether ratings of these constructs in AWBVIs predict job and career outcomes. By validating these construct ratings, we answer the call from Huffcutt et al. (2001) to move beyond the validation of composite interview scores alone to a thorough validation of some of the most frequently rated constructs.
Development of Hypotheses
Using meta-analytic procedures, Huffcutt et al. (2001) developed and examined a taxonomy of constructs typically assessed in interviews. They found that the most commonly rated constructs in interviews tend to fall into seven categories: (1) mental capability, (2) knowledge and skills, (3) basic personality tendencies, (4) applied social skills, (5) interests and preferences, (6) organizational fit, and (7) physical attributes. No subsequent research, however, has examined the criterion-related validity of these constructs in video-based interviews. Although Huffcutt et al. (2001) identified seven interview construct types, the first four are by far the most frequently rated in interviews, so we focused on mental capability, knowledge and skills, personality, and applied social skills in the present study. In addition, rather than assessing all of the Big Five personality factors (and hence increasing cognitive load for the interview raters), we concentrated on conscientiousness, which is generally accepted as the best predictor of job performance among the five-factor model dimensions (Barrick & Mount, 1991).

On the basis of the extant research on employment interviews, we generally expected that these constructs could be reliably rated in AWBVIs and that they would relate to self-reported job performance. In the following text we summarize the relevant research in the development of our hypotheses.
Structured-Interview Performance and Job Performance
There are several reasons to believe that overall ratings of AWBVI performance would demonstrate acceptable levels of criterion-related validity in the present study. First, structured employment interviews have received a great deal of research attention, and the evidence suggests that performance in a structured interview is predictive of candidate job performance. Several meta-analyses, for example, have found impressive validity estimates for structured interviews (Conway, Jako, & Goodman, 1995; McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998; Wiesner & Cronshaw, 1988; Wright, Lichtenfels, & Pursell, 1989). Second, research on other video-based assessments, such as video-based SJTs, has found that validity estimates of video-based formats were comparable to those of their paper-and-pencil counterparts (Lievens & Sackett, 2006; Weekley & Jones, 1997). Third, interview ratings using video-based technology have been found to be less inflated than face-to-face interview ratings (Van Iddekinge et al., 2006), thus improving the psychometric properties and predictive utility of interview ratings. Finally, AWBVIs rely on a high degree of structure, and structured interviews tend to be more reliable and valid than unstructured interviews (Conway et al., 1995; McDaniel et al., 1994). Thus, given the accumulation of evidence in support of the reliability, validity, and utility of video-based assessment methods, we hypothesized that overall ratings of AWBVI performance would predict candidate self-rated job performance.
Hypothesis 1: Independent ratings of overall interview performance will be positively related
to the self-reported job performance of AWBVI candidates.
Mental Capability in Employment Interviews
Huffcutt et al. (2001) defined mental capability as the “ability to learn, organize, process, and
evaluate information” (p. 904). Studies have consistently shown that mental capability is routinely
assessed, either implicitly or explicitly, in structured interviews (Judge, Higgins, & Cable, 2001).
Researchers have hypothesized that structured interviews predict job performance in part because structured-interview ratings tend to share a moderate percentage of variance with assessments of general mental ability (Salgado & Moscoso, 2002). Indeed, a meta-analysis of 49 studies found a mean corrected correlation of .40 between cognitive ability and employment-interview ratings (Huffcutt, Roth, & McDaniel, 1996). Given these findings, in addition to the widely accepted conclusion that general mental ability is the best predictor of job performance (Schmidt & Hunter, 1998), we hypothesized that ratings of mental capability would predict the self-rated job performance of AWBVI candidates.
Hypothesis 2a: Independent ratings of mental capability will be positively related to self-
ratings of job performance made by AWBVI candidates.
We also expected that mental capability would be associated with longer organizational tenure because people are likely to choose work experiences that are concordant with their abilities (Wilk, Desmarais, & Sackett, 1995). When people find careers that fit their abilities, they should experience higher levels of job performance, which should in turn lead to increased career success, including longer organizational tenure (Judge, Higgins, Thoresen, & Barrick, 1999). Indeed, research has shown that general mental ability is predictive of many aspects of career success (Gottfredson & Crouse, 1986; Howard & Bray, 1988; Judge et al., 1999). Thus, we hypothesized that ratings of mental capability would predict the organizational tenure of AWBVI candidates.
Hypothesis 2b: Independent ratings of mental capability will be positively related to the
self-reported organizational tenure of AWBVI candidates.
Knowledge and Skills in Employment Interviews
Knowledge and skills refer to a job candidate's "accumulated knowledge, skills, and abilities" (Huffcutt et al., 2001, p. 904). Measures of job knowledge and skills have proven useful in the prediction of job performance (Hunter, 1986; Schmidt & Hunter, 1998; Schmidt, Hunter, & Outerbridge, 1986). At least part of the predictive success of job knowledge and skills appears to be due to their relationship with cognitive ability (Borman, White, Pulakos, & Oppler, 1991). Taylor and Small (2002) suggested that the impressive validity levels associated with behavioral interviews are due to their focus on acquired job knowledge and skills. Moreover, these authors pointed out that models of job performance (e.g., Campbell, 1990; McCloy, Campbell, & Cudeck, 1994) include declarative knowledge and procedural knowledge and skills as important determinants of job performance. Thus, we hypothesized that ratings of knowledge and skills would predict the self-reported job performance of AWBVI candidates.
Hypothesis 3a: Independent ratings of knowledge and skills will be positively related to the
self-reported job performance of AWBVI candidates.
We also expected that job knowledge and skills would be associated with longer organizational tenure because the longer an employee stays with a particular organization, the more job- and organizationally relevant knowledge and skills are accrued (Gilson, Lim, Luciano, & Choi, 2013; Ng & Feldman, 2013; Sturman, 2003). Also, as knowledge and skills increase, so too should job performance and, thus, organizational tenure (Schmidt et al., 1986; Schneider, 1987). The link between knowledge and skills and organizational tenure is also partially explained by organizational socialization (Rollag, 2004), such that employees learn knowledge and skills relevant to the job and organization by sharing experiences with other employees (Gilson et al., 2013; Nonaka, 1994). On the basis of the literature reviewed above, we hypothesized that ratings of job knowledge and skills would predict the organizational tenure of AWBVI candidates.
Hypothesis 3b: Independent ratings of job knowledge and skills will be positively related to
the self-reported organizational tenure of AWBVI candidates.
Applied Social Skills in Employment Interviews
Applied social skills refer to a job candidate's "ability to function effectively in social situations" (Huffcutt et al., 2001, p. 904). Whereas low-structure interviews are more likely to assess mental capability, highly structured interviews are more likely to assess interpersonal skills (Huffcutt et al., 2001; Van Iddekinge, Raymark, Eidson, & Attenweiler, 2004). In fact, applied social skills are the most frequently assessed constructs in highly structured employment interviews (Huffcutt et al., 2001; Morgeson, Reider, & Campion, 2005). Applied social skills are often assessed in employment interviews because they are considered important for work situations that require communication and coordination with others (e.g., customer service, work teams, leadership; Arvey & Campion, 1982; Campion, Medsker, & Higgs, 1993; Ferris, Witt, & Hochwarter, 2001; Mumford, Campion, & Morgeson, 2007). Research has indeed found that applied social skills are an important predictor of job performance (Ferris et al., 2001; Hochwarter, Witt, Treadway, & Ferris, 2006; Lievens & Sackett, 2006). Thus, we hypothesized that ratings of applied social skills would predict the self-rated job performance of AWBVI candidates.
Hypothesis 4a: Independent ratings of applied social skills will be positively related to the
self-reported job performance of AWBVI candidates.
Applied social skills are also relevant to organizational tenure. Employees with better social skills tend to develop better interpersonal relationships with coworkers, and they are better able to ingratiate themselves with their supervisors, which should lead to more job- and career-enhancing opportunities that increase the likelihood of remaining with the organization (Todd, Harris, Harris, & Wheeler, 2009; Zenger & Lawrence, 1989). Overall, research has generally found that employees with higher social skills experience more favorable work and career outcomes than those lacking in social skills (Baron & Markman, 2003; Ferris, Perrewe, Anthony, & Gilmore, 2000; Kilduff & Day, 1994). Thus, we hypothesized that ratings of applied social skills would predict the organizational tenure of AWBVI candidates.
Hypothesis 4b: Independent ratings of applied social skills will be positively related to the
self-reported organizational tenure of AWBVI candidates.
Conscientiousness in Employment Interviews
Conscientiousness is a dimension of personality characterized by dependability, responsibility, reliability, need for achievement, and motivation, and it is the most frequently assessed personality dimension in employment interviews (Huffcutt et al., 2001). Measures of conscientiousness have been found to be modestly correlated with structured-interview ratings (Barrick, Patton, & Haugland, 2000; Cortina, Goldstein, Payne, Davison, & Gilliland, 2000). In addition, previous studies have found that conscientiousness, as measured in an employment interview, demonstrates construct validity (Van Iddekinge et al., 2005) and criterion-related validity (Huffcutt et al., 2001). Given these findings, in addition to the accumulation of evidence that supports conscientiousness as a valid predictor of job performance (Barrick & Mount, 1991; Schmidt & Hunter, 1998), we hypothesized that conscientiousness ratings would predict the self-reported job performance of AWBVI candidates.
Hypothesis 5a: Independent ratings of conscientiousness will be positively related to the
self-reported job performance of AWBVI candidates.
Conscientiousness has also been empirically linked to organizational tenure (Ng & Feldman, 2010; Zimmerman, 2008). Conscientiousness is typically considered an antecedent of the motivational component of job performance (Barrick, Mount, & Strauss, 1994), which makes it an organizationally desired attribute (Ng & Feldman, 2010). Highly conscientious employees are more likely to believe they have a moral obligation to remain with an organization (Zimmerman, 2008). Conscientiousness has also been positively associated with organizational commitment (Erdheim, Wang, & Zickar, 2006), and research has generally supported the notion that organizational commitment predicts organizational tenure (Mathieu & Zajac, 1990; Meyer & Allen, 1984). Therefore, we expected that conscientiousness ratings would predict the organizational tenure of AWBVI candidates.
Hypothesis 5b: Independent ratings of conscientiousness will be positively related to the
self-reported organizational tenure of AWBVI candidates.
Method
Procedure
The present study was conducted using Interview4, a commercially available AWBVI platform (https://www.interview4.com/). Interview4 allows for both live online interviews, in which audio and video are streamed both ways between the interviewer and the candidate, and "on demand" online interviews, in which audio and video are streamed one way from the candidate to the interviewer. The on-demand feature, in which an interviewer is not present, was used in the present study to avoid potential confounds due to interviewer characteristics. Interview4 is fully customizable, and employers can record introductory or closing videos for candidates to watch that provide more information about the hiring organization.
Participants were recruited through Amazon's Mechanical Turk (www.MTurk.com), a crowd-sourcing website where freelance "workers" perform tasks that can be completed using a computer (surveys, experiments, writing, etc.) in exchange for small amounts of money. Crowd-sourcing services have been successfully used in previous studies of employment interviews (e.g., Roulin, 2016). In addition, previous research has indicated that participants recruited through Mechanical Turk are more demographically and occupationally diverse than typical Internet and college-student samples and that data collected through Mechanical Turk are at least as reliable as data collected using other methods (Behrend, Sharek, Meade, & Wiebe, 2011; Buhrmester, Kwang, & Gosling, 2011; Harms & DeSimone, 2015).
Given the limited budget provided by the sponsoring organization, we recruited a sample of 75 employed professionals (both full-time and part-time) through Mechanical Turk to participate in the present study in exchange for 10 dollars each. Although many would consider this a convenience sample, virtually all samples in applied psychology are convenience samples (Landers & Behrend, 2015). Moreover, as noted by Landers and Behrend (2015), reasonable sacrifices in external validity are justified in cases where the purpose of the study is to falsify theory (i.e., hypothesis testing). Furthermore, Aguinis and colleagues (Aguinis & Edwards, 2014; Aguinis & Lawal, 2012) have noted that crowd-sourcing services such as Mechanical Turk provide a useful blend of experimental control and a naturalistic setting. Thus, we felt confident in recruiting a sample of Mechanical Turk workers to provide an initial test of the validity of AWBVIs.
Participants were instructed that they would be taking part in a web-based video interview, after which they would be provided a link to an online survey. To ensure the integrity of the interview, participants were told that to receive the full amount of payment, they were to dress and behave as though they were interviewing for an actual job in their career field. To add further motivation, they were told that the top-rated interviews would be entered into a random drawing for an Amazon.com gift card. Participants who signed up were sent an e-mail with a link to the interview, which they could complete at their convenience. The interview was automated, such that questions appeared successively on participants' computer screens and their answers were recorded digitally using their video cameras. To keep the task manageable, we selected the following five structured-interview questions from a database of Interview4 questions:
1. Tell me about a time when you displayed your greatest strength. What was the outcome of this situation? How about a time when you displayed your greatest weakness? What was the outcome of that situation?
2. Tell me about a key leadership role you have served in. What were your responsibilities?
What were the results of your leadership?
3. Tell me about a time when you had to make a difficult decision. What was the outcome of
your decision?
4. Tell me about a time when you tried something you had never done before. What was the
outcome of this action?
5. Can you think of a situation when you did not tell the whole truth? Describe the situation
and the result of your action.
These questions are representative of those commonly asked of applicants using Interview4. For research
purposes, we tailored the questions to be situational, behavior-based queries that required participants to
describe the situation, their behavior, and the results of their behavior. Each interview was between 5 and
10 min in total duration. After recording their interview responses, participants responded to an online
survey. The survey contained items that measured work outcomes and demographics (described in the
following text).
Participants
The mean age of participants was 31.38 years (SD = 9.49). Fifty-three percent of the sample was male (47% female), and the majority of participants indicated their race as Caucasian (78%). Sixty-nine percent of the sample was employed full-time, representing industries such as education, health care, information technology, and sales. The average yearly salary of participants was US$37,421 (SD = US$28,277).
Criterion Measures
To minimize participant fatigue, we measured the following criteria using single-item measures (Drolet & Morrison, 2001; Nagy, 2002). Because each question was asked in a clear and unambiguous fashion, the use of single-item measures was appropriate for this study (Gardner, Cummings, Dunham, & Pierce, 1998).¹
Job performance. Given the inherent difficulty of reliably collecting supervisory job-performance ratings using Mechanical Turk, we measured job performance using Lance's (1988) single-item self-report measure: "On your last performance review, which phrase best describes how your supervisor rated your performance?" Response options used Lance's (1988) rating scale: 1 (unacceptable), 2 (marginal), 3 (needs improvement), 4 (above average: exceeds job requirements), and 5 (outstanding: excellent). Although self-ratings of job performance have the potential to be upwardly biased, research evidence suggests that self-ratings may be just as valid as other-ratings of performance (e.g., Atwater, Ostroff, Yammarino, & Fleenor, 1998; Mabe & West, 1982; Steel & Ovalle, 1984). Moreover, we asked participants to anonymously report their supervisor's rating rather than their own evaluation of their performance. Furthermore, the primary argument against self-ratings of job performance is the potential for method bias in validating self-rated predictors (Conway & Lance, 2010), but in our analyses we compared participant self-ratings of job performance with interview ratings provided by trained raters, thus minimizing concerns related to method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003).

¹ In addition to job performance and organizational tenure, we also assessed self-reported counterproductive work behavior and safety performance. The analysis of these data is not reported in the present article, but the results are available from the first author on request.
Organizational tenure. Consistent with previous research on organizational tenure (Gilson et
al., 2013), we defined organizational tenure as the length of time an individual has been working at
their current organization. Thus, to assess organizational tenure, we asked participants to respond to
the following item: “How many consecutive months have you been employed with your current
employer?”
Predictor Measures
To assess interview validity, we relied on a dual approach rather than relying on single interview
ratings. That is, we utilized construct ratings and an overall rating of interview performance across
interview questions (described in the following text). Given that we were interested in rater agreement across subject-matter-expert ratings of the interview constructs and overall interview performance (described in the following text), we report the average r_wg for each construct and for overall interview performance below to justify rating aggregation for the purpose of testing the study hypotheses (see Bliese, 2000; James, Demaree, & Wolf, 1993; James & LeBreton, 2001).
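For readers unfamiliar with the agreement index, the single-item r_wg of James, Demaree, and Wolf can be sketched as follows. This is the standard textbook formula, not code used in the study; the function name and example ratings are illustrative.

```python
# Within-group interrater agreement r_wg (James, Demaree, & Wolf):
# r_wg = 1 - (observed rating variance / variance expected under a uniform null).
# Illustrative sketch only, not the study's analysis code.

def r_wg(ratings, scale_points):
    """Agreement among judges rating one target on a 1..scale_points scale."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # sample variance
    sigma_eu2 = (scale_points ** 2 - 1) / 12              # uniform-null variance
    return 1 - s2 / sigma_eu2

# Two pairs of judges giving similar 5-point ratings show high agreement.
print(round(r_wg([4, 5, 5, 4], 5), 2))  # → 0.83
```

Values near 1 indicate near-identical ratings across judges; values near 0 indicate agreement no better than random responding over the scale.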
Interview constructs. First, we developed a construct rating system based on the framework
of interview constructs developed by Huffcutt et al. (2001). For the purposes of the present study,
we included the first four construct categories (mental capability, knowledge and skills, basic
personality tendencies, and applied social skills) and not the last three (interests and preferences,
organizational fit, and physical attributes). We made this decision on the basis of the following
rationale: (a) The relatively short duration of the videos and limited number of questions precluded
the assessment of interests and preferences; (b) organizational fit could not be assessed because the
interview questions did not pertain to a particular job or organization; (c) physical attributes were
not assessed to avoid potential confounds due to applicant demographics; and (d) constructs from
these three categories are assessed much less frequently in interviews than the first four (Huffcutt
et al., 2001).
In keeping with Huffcutt et al.’s (2001) taxonomy, observers made ratings for each construct
within each category, and a composite rating was calculated for each category. Ratings of each
construct were made using a scale from 1 (poor) to 5 (superior). For mental capability, observers made ratings on four constructs: general intelligence, verbal ability, applied mental skills, and creativity and innovation. Average r_wg was .93 across raters for the mental-capability constructs. Cronbach’s alpha for the aggregated mental-capability ratings was .94. For knowledge and skills, observers rated job knowledge and skills, education and training, and experience and general work history. Average r_wg was .85 across raters for the knowledge-and-skills constructs, and Cronbach’s alpha for the aggregated knowledge-and-skills ratings was .80. For applied social skills, observers rated communication skills, interpersonal skills, leadership, and persuading and negotiating. Average r_wg was .93 across raters for applied social skills, and Cronbach’s alpha for the aggregated applied-social-skills ratings was .90. Conscientiousness was rated as a single construct. Average r_wg was .92 across raters for conscientiousness.
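The internal-consistency values reported above are Cronbach's alpha. As a hedged illustration (the standard formula, not the study's own analysis code), alpha for a composite of k item ratings can be computed as:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
# Illustrative sketch only; item lists below are made-up data.

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's ratings across targets."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly parallel items yield the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```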
Overall interview performance. Consistent with previous interview research (Cable & Judge, 1997; Cuddy, Wilmuth, Yap, & Carney, 2015), we asked the observers to provide interview-performance ratings for each of the five interview questions using a scale from 1 (poor) to 7 (superior). The average r_wg across interview ratings was .94; thus, we aggregated the ratings and
calculated an overall interview rating based on the average of the ratings for the five questions.
Cronbach’s alpha for the aggregated interview performance was .87.
Video-Interview Rating Procedure
All video interviews were evaluated by a team of eight upper-level graduate students in I/O
psychology under the direction of the lead author. Consistent with previous interview research
(Cuddy et al., 2015), raters were kept blind to the study hypotheses. An hour-long training session
was provided to the students to familiarize them with the constructs to be rated and the rating scales
themselves (see the Appendix for the rating scales used in this study). The students were divided into teams of two, and, to avoid fatigue, each team rated 15 to 20 videos. To maintain standardization, raters were asked to watch each video no more than twice, and raters were not allowed to discuss their ratings. Each pair rated each assigned video independently (outside of the lab), at which point each rater independently submitted his or her ratings to the first author. The averages of each pair of ratings for each construct were used in subsequent analyses.
Results
Means, standard deviations, ranges, and zero-order correlations between study variables are provided in Table 1. Standardized regression coefficients were examined to test Hypotheses 1 through 5. Hypothesis 1 was supported, as overall interview performance was positively related to self-reported job performance (β = .28, p < .01). Hypothesis 2a was supported, as mental-capability ratings were positively related to self-reported job performance (β = .32, p < .01). Hypothesis 2b was not supported, as mental-capability ratings were not associated with organizational tenure (β = .10, ns).
Hypothesis 3a was supported, as knowledge-and-skill ratings predicted self-reported job performance (β = .48, p < .01), and knowledge-and-skill ratings also predicted organizational tenure (β = .28, p < .01), supporting Hypothesis 3b. Ratings of applied social skills were positively related to self-reported job performance (β = .26, p < .05), thus supporting Hypothesis 4a. Hypothesis 4b was also supported, as applied social skills were positively related to organizational tenure (β = .25, p < .05). Hypothesis 5a predicted that ratings of conscientiousness would predict self-rated job performance, and this hypothesis was supported (β = .36, p < .01). However, Hypothesis 5b was not supported, as conscientiousness was not related to organizational tenure (β = .12, ns).
Discussion
The present study was conducted as an initial attempt to examine the validity of AWBVIs for
predicting self-reported work outcomes. The results of the study reveal some positive findings in that
Table 1
Means, Standard Deviations, and Zero-Order Correlations Among Study Variables

Variable                     M      SD     1    2    3    4    5    6    7
1. Mental capability         3.23    .82   —
2. Knowledge and skills      3.45    .97   .70  —
3. Applied social skills     3.12    .77   .73  .69  —
4. Conscientiousness         3.53   1.05   .73  .58  .58  —
5. Interview performance     4.21   1.25   .84  .73  .81  .72  —
6. Job performance           4.13    .79   .32  .48  .26  .36  .33  —
7. Organizational tenure    39.80  44.12   .10  .28  .25  .12  .12  .21  —

Note. Correlations greater than .191 are statistically significant at p < .05, df = 73 (one-tailed).
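The .191 threshold in the table note can be checked with the usual t-transformation of a correlation coefficient. This sketch is not from the article; the critical t of roughly 1.666 for df = 73 (one-tailed, alpha = .05) is taken from standard t tables.

```python
# Smallest correlation significant at a given t criterion: r = t / sqrt(t^2 + df).
# Illustrative check of the table note, assuming t_crit ≈ 1.666 for df = 73.
import math

def critical_r(t_crit, df):
    """Minimum |r| that reaches the t criterion with the given df."""
    return t_crit / math.sqrt(t_crit ** 2 + df)

print(round(critical_r(1.666, 73), 3))  # → 0.191, matching the table note
```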
composite AWBVI ratings and interview constructs including mental capability, knowledge and
skills, applied social skills, and conscientiousness were all positively associated with self-rated job
performance. Indeed, the correlation between overall interview performance and self-reported job performance (r = .32) is consistent with previously reported criterion-related validity estimates of video-based SJTs (r = .34, Lievens & Sackett, 2006; r = .33, Weekley & Jones, 1997) and higher than criterion-related validity estimates of structured interviews (r = .24, McDaniel et al., 1994).
Given the rise in popularity of video-based SJTs among practitioners of evidence-based manage-
ment (Oostrom, De Soete, & Lievens, 2015;Tippins, 2015), we believe our validity findings can
likewise be an encouraging first step toward the widespread adoption of AWBVIs in organizations.
An additional contribution of the present study is that we found that interview constructs rated
in AWBVIs were also related to self-reported organizational tenure. Specifically, job knowledge and
skills and applied social skills predicted organizational tenure. Moreover, the findings of the present
study suggest that organizations interested in selecting candidates who are likely to remain with the
organization may wish to focus on evaluating job knowledge and skills and applied social skills in
the employment interview.
The results of the present study are also encouraging for organizations that are looking to utilize
video-based technology in their interviews but that may be concerned about the lack of validity
evidence for such technology. Our results suggest AWBVIs may be at least as valid as structured
interviews and video-based situational judgment tests. One reason that AWBVIs evidenced higher
levels of criterion-related validity than previous estimates of structured interviews may be the
increased structure of AWBVIs over and above that of a typical structured interview. Specifically,
rather than relying on memory or notes alone, raters of AWBVIs have the ability to evaluate videos
as many times as they would like and in any order they prefer. Combined with the many advantages
of AWBVIs, their relative validity may persuade many organizations to reconsider their use of the
standard employment interview.
The present study also suggests that AWBVIs are a ripe topic for future research. For example,
Furner and George (2009) suggested that advanced technology may be useful in the detection and
amelioration of deception in employment interviews. There could be differences, for instance, in the
levels of candidate deception depending on whether the interview is conducted face to face or
asynchronously via the web. Future studies could also examine the influence of impression
management on AWBVI outcomes. For example, Bangerter et al. (2012) suggested that impression-
management tactics are likely restricted or minimized in video-based interviews, thus limiting the
impact of impression management on interview outcomes. Thus, future research on AWBVIs could
extend beyond examinations of validity to look at the impact of video-interview technology on
contemporary topics such as candidate deception, impression management, and verbal/nonverbal
behaviors.
We further added to the employment-interview literature by providing additional evidence
of the validity of some of the interview constructs identified by Huffcutt et al. (2001), and our
study is one of the first to do so using web-based video interviews. We found that the interview
constructs of mental capability, job knowledge and skills, applied social skills, and conscien-
tiousness all can be and were reliably rated in AWBVIs. However, we should point out that
these interview constructs may not be completely independent; indeed, in the present study, we
found that a single factor explained 70% of the variance in the ratings of the four constructs.
This could also suggest that a general interview-performance factor underlies the interview
constructs themselves, which may suggest a limitation of Huffcutt et al.’s (2001) taxonomy as
well as a limitation of the current study. Nonetheless, we echo the call of Huffcutt et al. (2001)
to continue research on these constructs using their framework. We also see a potential avenue
for future research in studying whether alternative interview technology (e.g., video) influences
certain constructs to be rated more often than in face-to-face interviews. Future research should
also continue to explore how web-based video technology influences the structure of the
interview and whether/how increased structure influences web-based video-interview validity.
Finally, despite the positive findings of our study, there may be trade-offs for companies that decide to adopt AWBVI platforms rather than traditional face-to-face interviews. For
example, AWBVIs may not capture the full range of candidate nonverbal behaviors that can be
assessed in a face-to-face interview. Employers could potentially address this issue, though,
through follow-up face-to-face interviews, and although limited, AWBVIs would certainly
capture more nonverbal behaviors than phone interviews. Additionally, from the candidate’s
perspective, not having a synchronous conversation with a live interviewer may seem awkward
and impersonal, which may then cause the candidate to reduce effort or not take the interview
seriously. Also, some candidates may be uncomfortable using video technology and therefore
resist participating in an AWBVI because of a lack of experience. One potential solution to
mitigate these last two problems, though, might be to offer candidates the ability to practice
interviewing using the technology, and, in fact, several of the firms that offer AWBVI platforms
currently allow candidates to practice using their technology for free.
Limitations and Further Avenues for Research
Of course, every study has limitations. Our results are limited by the reliance on self-reports of
job performance rather than supervisor ratings. Self-reports of job performance have the
potential to be upwardly biased, and, given the elevated mean and small standard deviation on
the measure in our sample, this may have been the case. However, as described earlier, we asked
participants to provide supervisor ratings of their performance rather than their own ratings, and
there is some evidence that self-reports of performance may be just as valid as other-reports
(e.g., Mabe & West, 1982). Nonetheless, ratings provided by hiring managers in a field setting
should also be collected to complement the results of the present study.
One could also argue that participants in our study were not applicants in a real hiring
situation, thus limiting the generalizability of our results. This is a valid point, although we see
no reason why a sample of Mechanical Turk workers would be conspicuously different from a
sample of employees in a given organization (see Landers & Behrend, 2015). Moreover,
participants were compensated for their participation and they were instructed to treat the
interview as they would an actual job interview. Furthermore, although budget constraints limited our sample size to 75, previous research has found that Mechanical Turk participants provide data that are at least as reliable and valid as data collected in other settings (e.g., Buhrmester et al., 2011).
Regardless, future studies should replicate our findings using larger samples in actual hiring
situations.
The present study also cannot answer the question of causality. As noted by an anonymous
reviewer, it may also be that candidates who provide high self-ratings of job performance are also
more likely to present themselves in a favorable light in an interview. Future research should
continue to evaluate the validity of AWBVIs with this limitation in mind. Specifically, further
studies should include predictive-validity designs in which job candidates are tracked using a
longitudinal design and their job performance is evaluated using alternative methods such as
supervisor-rated performance.
Conclusion
Overall, the present study was the first validation study of AWBVIs, and we provided initially
positive and promising findings about the criterion-related validity of AWBVIs. Our results also
highlight the utility of Huffcutt et al.’s (2001) framework for validating constructs rated in
AWBVIs. We believe our results are an important first step toward providing evidence of the
validity of AWBVIs for use in employment settings, and we suggest that this evidence, coupled
with the many practical advantages of AWBVIs, may render them a suitable alternative to
traditional face-to-face interviews. We believe this is encouraging to both academics and
practitioners, and we hope our findings will spur further research on a frequently practiced yet
highly understudied topic.
References
Aguinis, H., & Edwards, J. R. (2014). Methodological wishes for the next decade and how to make wishes come
true. Journal of Management Studies, 51, 143–174. http://dx.doi.org/10.1111/joms.12058
Aguinis, H., & Lawal, S. O. (2012). Conducting field experiments using eLancing’s natural environment.
Journal of Business Venturing, 27, 493–505. http://dx.doi.org/10.1016/j.jbusvent.2012.01.002
Arvey, R. D., & Campion, J. E. (1982). The employment interview: A summary and review of recent research.
Personnel Psychology, 35, 281–322. http://dx.doi.org/10.1111/j.1744-6570.1982.tb02197.x
Atwater, L. E., Ostroff, C., Yammarino, F. J., & Fleenor, J. W. (1998). Self-other agreement: Does it really
matter? Personnel Psychology, 51, 577–598. http://dx.doi.org/10.1111/j.1744-6570.1998.tb00252.x
Bangerter, A., Roulin, N., & König, C. J. (2012). Personnel selection as a signaling game. Journal of Applied
Psychology, 97, 719 –738. http://dx.doi.org/10.1037/a0026078
Baron, R. A., & Markman, G. D. (2003). Beyond social capital: The role of entrepreneurs’ social competence
in their financial success. Journal of Business Venturing, 18, 41–60. http://dx.doi.org/10.1016/S0883-
9026(00)00069-0
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A
meta-analysis. Personnel Psychology, 44, 1–26.
Barrick, M. R., Mount, M. K., & Strauss, J. P. (1994). Antecedents of involuntary turnover due to a reduction
in force. Personnel Psychology, 47, 515–535. http://dx.doi.org/10.1111/j.1744-6570.1994.tb01735.x
Barrick, M. R., Patton, G. K., & Haugland, S. N. (2000). Accuracy of interviewer judgments of job applicant
personality traits. Personnel Psychology, 53, 925–951. http://dx.doi.org/10.1111/j.1744-
6570.2000.tb02424.x
Behrend, T. S., Sharek, D. J., Meade, A. W., & Wiebe, E. N. (2011). The viability of crowdsourcing for survey
research. Behavior Research Methods, 43, 800–813. http://dx.doi.org/10.3758/s13428-011-0081-0
Behrend, T. S., & Thompson, L. F. (2013). Combining I-O psychology and technology for an environmentally
sustainable world. In A. H. Huffman & S. R. Klein (Eds.), Green organizations: Driving change with I-O
psychology (pp. 300 –322). New York, NY: Routledge.
Blacksmith, N., Willford, J. C., & Behrend, T. S. (2016). Technology in the employment interview: A
meta-analysis and future research agenda. Personnel Assessment and Decisions, 2, 12–20. http://dx.doi.org/
10.25035/pad.2016.002
Bliese, P. D. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggre-
gation and analysis. In K. J. Klein & S. W. Kozlowski (Eds.), Multilevel theory, research, and methods in
organizations (pp. 349 –381). San Francisco, CA: Jossey-Bass.
Borman, W. C., White, L. A., Pulakos, E. D., & Oppler, S. H. (1991). Models of supervisor job performance
ratings. Journal of Applied Psychology, 76, 863–872. http://dx.doi.org/10.1037/0021-9010.76.6.863
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of
inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5. http://dx.doi.org/
10.1177/1745691610393980
Cable, D. M., & Judge, T. A. (1997). Interviewers’ perceptions of person–organization fit and organizational
selection decisions. Journal of Applied Psychology, 82, 546–561. http://dx.doi.org/10.1037/0021-
9010.82.4.546
Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychol-
ogy. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd
ed., Vol. 1, pp. 687–732). Palo Alto, CA: Consulting Psychologists Press.
Campion, M. A., Medsker, G. J., & Higgs, A. C. (1993). Relations between work group characteristics and
effectiveness: Implications for designing effective work groups. Personnel Psychology, 46, 823–850.
http://dx.doi.org/10.1111/j.1744-6570.1993.tb01571.x
Cascio, W., & Aguinis, H. (2011). Applied psychology in human resource management. Englewood Cliffs, NJ:
Prentice Hall.
Chapman, D. S., & Rowe, P. M. (2002). The impact of videoconference technology, interview structure, and
interviewer gender on interviewer evaluations in the employment interview: A field experiment. Journal of
Occupational and Organizational Psychology, 74, 279 –298. http://dx.doi.org/10.1348/096317901167361
Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of interrater and internal consistency
reliability of selection interviews. Journal of Applied Psychology, 80, 565–579. http://dx.doi.org/10.1037/
0021-9010.80.5.565
Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method
bias in organizational research. Journal of Business and Psychology, 25, 325–334. http://dx.doi.org/10.1007/
s10869-010-9181-6
Cortina, J. M., Goldstein, N. B., Payne, S. C., Davison, H. K., & Gilliland, S. W. (2000). The incremental validity
of interview scores over and above cognitive ability and conscientiousness scores. Personnel Psychology, 53,
325–351. http://dx.doi.org/10.1111/j.1744-6570.2000.tb00204.x
Cucina, J. M., Busciglio, H. H., Thomas, P. H., Callen, N. F., Walker, D. D., & Goldenberg Schoepfer, R. J.
(2011). Videobased testing at U.S. Customs and border protection. In N. T. Tippins, S. Adler, & A. I. Kraut
(Eds.), Technology-enhanced assessment of talent (pp. 338 –354). San Francisco, CA: Jossey-Bass. http://
dx.doi.org/10.1002/9781118256022.ch13
Cuddy, A. J., Wilmuth, C. A., Yap, A. J., & Carney, D. R. (2015). Preparatory power posing affects nonverbal
presence and job interview performance. Journal of Applied Psychology, 100, 1286 –1295. http://dx.doi.org/
10.1037/a0038543
Dipboye, R. L. (1992). Selection interviews: Process perspectives. Cincinnati, OH: South-Western Publishing.
Drolet, A. L., & Morrison, D. G. (2001). Do we really need multiple-item measures in service research? Journal
of Service Research, 3, 196 –204. http://dx.doi.org/10.1177/109467050133001
Erdheim, J., Wang, M., & Zickar, M. J. (2006). Linking the Big Five personality constructs to organizational
commitment. Personality and Individual Differences, 41, 959 –970. http://dx.doi.org/10.1016/
j.paid.2006.04.005
Ferris, G. R., Perrewé, P. L., Anthony, W. P., & Gilmore, D. C. (2000). Political skill at work. Organizational
Dynamics, 28, 25–37. http://dx.doi.org/10.1016/S0090-2616(00)00007-3
Ferris, G. R., Witt, L. A., & Hochwarter, W. A. (2001). Interaction of social skill and general mental ability on
job performance and salary. Journal of Applied Psychology, 86, 1075–1082. http://dx.doi.org/10.1037/0021-
9010.86.6.1075
Furner, C. P., & George, J. F. (2009). Making it hard to lie: Cultural determinants of media choice for deception.
Proceedings of the Hawaii International Conference on System Sciences. Retrieved from https://www
.semanticscholar.org/paper/Making-it-Hard-to-Lie-Cultural-Determinants-of-Med-Furner-George/24c9a726
1a7d53c4671abaa176cc9bf8f5fa6f1a
Gardner, D. G., Cummings, L. L., Dunham, R. B., & Pierce, J. L. (1998). Single item versus multiple item
measurement scales: An empirical comparison. Educational and Psychological Measurement, 58, 898 –915.
http://dx.doi.org/10.1177/0013164498058006003
Gilson, L. L., Lim, H. S., Luciano, M. M., & Choi, J. N. (2013). Unpacking the cross-level effects of tenure
diversity, explicit knowledge, and knowledge sharing on individual creativity. Journal of Occupational and
Organizational Psychology, 86, 203–222. http://dx.doi.org/10.1111/joop.12011
Gottfredson, L. S., & Crouse, J. (1986). Validity versus utility of mental tests: Example of the SAT. Journal of
Vocational Behavior, 29, 363–378. http://dx.doi.org/10.1016/0001-8791(86)90014-X
Guchait, P., Ruetzler, T., Taylor, J., & Toldi, N. (2014). Video interviewing: A potential selection tool for
hospitality managers—A study to understand applicant perspective. International Journal of Hospitality
Management, 36, 90 –100. http://dx.doi.org/10.1016/j.ijhm.2013.08.004
Harms, P. D., & DeSimone, J. A. (2015). Caution! MTurk workers ahead—Fines doubled. Industrial and
Organizational Psychology: Perspectives on Science and Practice, 8, 183–190. http://dx.doi.org/10.1017/
iop.2015.23
Hiemstra, A. M. F., & Derous, E. (2015). Video resumes portrayed: Findings and challenges. In I. Nikolaou &
J. Oostrom (Eds.), Employee recruitment, selection, and assessment: Contemporary issues for theory and
practice. Sussex, UK: Psychology Press.
Hiemstra, A. M. F., Derous, E., Serlie, A. W., & Born, M. P. (2012). Fairness perceptions of video resumes
among ethnically diverse applicants. International Journal of Selection and Assessment, 20, 423–433.
http://dx.doi.org/10.1111/ijsa.12005
Hochwarter, W. A., Witt, L. A., Treadway, D. C., & Ferris, G. R. (2006). The interaction of social skill and
organizational support on job performance. Journal of Applied Psychology, 91, 482–489. http://dx.doi.org/
10.1037/0021-9010.91.2.482
Howard, A., & Bray, D. W. (1988). Managerial lives in transition: Advancing age and changing times. New
York, NY: Guilford Press.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment
of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86,
897–913. http://dx.doi.org/10.1037/0021-9010.86.5.897
Huffcutt, A. I., & Culbertson, S. S. (2010). Selecting and developing members for the organization. In S. Zedeck
(Ed.), APA handbook of industrial and organizational psychology (Vol. 2, pp. 185–203). Washington, DC:
American Psychological Association.
Huffcutt, A. I., Roth, P. L., & McDaniel, M. A. (1996). A meta-analytic investigation of cognitive ability in
employment interview evaluations: Moderating characteristics and implications for incremental validity.
Journal of Applied Psychology, 81, 459 –473. http://dx.doi.org/10.1037/0021-9010.81.5.459
Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance. Journal of
Vocational Behavior, 29, 340 –362. http://dx.doi.org/10.1016/0001-8791(86)90013-8
James, L. R., Demaree, R. G., & Wolf, G. (1993). r_wg: An assessment of within-group interrater agreement. Journal of
Applied Psychology, 78, 306–309. http://dx.doi.org/10.1037/0021-9010.78.2.306
James, L. R., & LeBreton, J. M. (2001). [Disentangling issues of agreement, disagreement, and lack of agreement
using r_wg, r*_wg, and r_wg(j)]. Unpublished raw data.
Judge, T. A., Higgins, C. A., & Cable, D. M. (2001). The employment interview: A review of recent research
and recommendations for future research. Human Resource Management Review, 10, 383–406. http://
dx.doi.org/10.1016/S1053-4822(00)00033-4
Judge, T. A., Higgins, C. A., Thoresen, C. J., & Barrick, M. R. (1999). The Big Five personality traits, general
mental ability, and career success across the life span. Personnel Psychology, 52, 621–652. http://dx.doi.org/
10.1111/j.1744-6570.1999.tb00174.x
Kilduff, M., & Day, D. V. (1994). Do chameleons get ahead? The effects of self-monitoring on managerial
careers. Academy of Management Journal, 37, 1047–1060. http://dx.doi.org/10.2307/256612
Kroeck, K. G., & Magnusen, K. O. (1997). Employer and job candidate reactions to videoconference job
interviewing. International Journal of Selection and Assessment, 5, 137–142. http://dx.doi.org/10.1111/
1468-2389.00053
Lance, C. E. (1988). Job performance as a moderator of the satisfaction–turnover intention relation: An empirical
contrast of two perspectives. Journal of Organizational Behavior, 9, 271–280. http://dx.doi.org/10.1002/
job.4030090307
Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational,
Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology, 8, 142–164.
http://dx.doi.org/10.1017/iop.2015.13
Levashina, J., Hartwell, C. J., Morgeson, F. P., & Campion, M. A. (2014). The structured employment interview:
Narrative and quantitative review of the research literature. Personnel Psychology, 67, 241–293. http://
dx.doi.org/10.1111/peps.12052
Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in
terms of predictive validity. Journal of Applied Psychology, 91, 1181–1188. http://dx.doi.org/10.1037/0021-
9010.91.5.1181
Lombardi, M. (2010, December). Lights, camera action: Video enabled talent acquisition takes center stage.
Retrieved from http://www.dsdinc.com/dsd/pdf/AberdeenWP-HRVideoEnabledTalentAcquisition.pdf
Mabe, P. A., & West, S. G. (1982). Validity of self-evaluation of ability: A review and meta-analysis. Journal
of Applied Psychology, 67, 280 –296. http://dx.doi.org/10.1037/0021-9010.67.3.280
Macan, T. (2009). The employment interview: A review of current studies and directions for future research.
Human Resource Management Review, 19, 203–218. http://dx.doi.org/10.1016/j.hrmr.2009.03.006
Mathieu, J. E., & Zajac, D. M. (1990). A review and meta-analysis of the antecedents, correlates, and
consequences of organizational commitment. Psychological Bulletin, 108, 171–194. http://dx.doi.org/
10.1037/0033-2909.108.2.171
McCloy, R. A., Campbell, J. P., & Cudeck, R. (1994). A confirmatory test of a model of performance
determinants. Journal of Applied Psychology, 79, 493–505. http://dx.doi.org/10.1037/0021-9010.79.4.493
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. (1994). The validity of employment interviews:
A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599 –616. http://dx.doi.org/
10.1037/0021-9010.79.4.599
Meyer, J. P., & Allen, N. J. (1984). Testing the “side-bet theory” of organizational commitment: Some
methodological considerations. Journal of Applied Psychology, 69, 372–378. http://dx.doi.org/10.1037/
0021-9010.69.3.372
Morgeson, F. P., Reider, M. H., & Campion, M. A. (2005). Selecting individuals in team settings: The
importance of social skills, personality characteristics, and teamwork knowledge. Personnel Psychology, 58,
583–611. http://dx.doi.org/10.1111/j.1744-6570.2005.655.x
Mumford, T. V., Campion, M. A., & Morgeson, F. P. (2007). The leadership skills strataplex: Leadership skill
requirements across organizational levels. The Leadership Quarterly, 18, 154 –166. http://dx.doi.org/
10.1016/j.leaqua.2007.01.005
Nagy, M. S. (2002). Using a single-item approach to measure facet job satisfaction. Journal of Occupational and
Organizational Psychology, 75, 77–86. http://dx.doi.org/10.1348/096317902167658
Ng, T. W., & Feldman, D. C. (2010). Human capital and objective indicators of career success: The mediating
effects of cognitive ability and conscientiousness. Journal of Occupational and Organizational Psychology,
83, 207–235. http://dx.doi.org/10.1348/096317909X414584
Ng, T. W., & Feldman, D. C. (2013). Does longer job tenure help or hinder job performance? Journal of
Vocational Behavior, 83, 305–314. http://dx.doi.org/10.1016/j.jvb.2013.06.012
Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science, 5, 14 –37.
http://dx.doi.org/10.1287/orsc.5.1.14
Oostrom, J. K., De Soete, B., & Lievens, F. (2015). Situational judgment testing: A review and some new
developments. In I. Nikolaou & J. K. Oostrom (Eds.), Employee recruitment, selection, and assessment:
Contemporary issues for theory and practice (pp. 172–189). Sussex, UK: Psychology Press.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral
research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88,
879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
Reynolds, D. H., & Weiner, J. A. (2009). Online recruiting and selection: Innovations in talent management.
West Sussex, UK: John Wiley and Sons. http://dx.doi.org/10.1002/9781444315943
Rollag, K. (2004). The impact of relative tenure on newcomer socialization dynamics. Journal of Organizational
Behavior, 25, 853–872. http://dx.doi.org/10.1002/job.280
Roulin, N. (2016). Individual differences predicting impression management detection in job interviews.
Personnel Assessment and Decisions, 2, 1–11. http://dx.doi.org/10.25035/pad.2016.001
Ryan, A. M., & Ployhart, R. E. (2014). A century of selection. Annual Review of Psychology, 65, 693–717.
http://dx.doi.org/10.1146/annurev-psych-010213-115134
Sackett, P., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419–450. http://
dx.doi.org/10.1146/annurev.psych.59.103006.093716
Salgado, J. F., & Moscoso, S. (2002). Comprehensive meta-analysis of the construct validity of the employment
interview. European Journal of Work and Organizational Psychology, 11, 299–324. http://dx.doi.org/
10.1080/13594320244000184
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology:
Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
http://dx.doi.org/10.1037/0033-2909.124.2.262
Schmidt, F. L., Hunter, J. E., & Outerbridge, A. N. (1986). Impact of job experience and ability on job
knowledge, work sample performance, and supervisory ratings of job performance. Journal of Applied
Psychology, 71, 432–439. http://dx.doi.org/10.1037/0021-9010.71.3.432
Schneider, B. (1987). The people make the place. Personnel Psychology, 40, 437–453. http://dx.doi.org/10.1111/
j.1744-6570.1987.tb00609.x
Steel, R. P., & Ovalle, N. K. (1984). Self-appraisal based upon supervisory feedback. Personnel Psychology, 37,
667–685. http://dx.doi.org/10.1111/j.1744-6570.1984.tb00532.x
Straus, S., Miles, J., & Levesque, L. (2001). The effects of videoconference, telephone, and face-to-face media
on interviewer and applicant judgments in employment interviews. Journal of Management, 27, 363–382.
http://dx.doi.org/10.1177/014920630102700308
Sturman, M. C. (2003). Searching for the inverted U-shaped relationship between time and performance: Meta-
analyses of the experience/performance, tenure/performance, and age/performance relationships. Journal of
Management, 29, 609–640. http://dx.doi.org/10.1016/S0149-2063(03)00028-X
Taylor, P. J., & Small, B. (2002). Asking applicants what they would do versus what they did do: A meta-analytic
comparison of situational and past behaviour employment interview questions. Journal of Occupational and
Organizational Psychology, 75, 277–294. http://dx.doi.org/10.1348/096317902320369712
Tippins, N. T. (2015). Technology and assessment in selection. Annual Review of Organizational Psychology
and Organizational Behavior, 2, 551–582. http://dx.doi.org/10.1146/annurev-orgpsych-031413-091317
Todd, S. Y., Harris, K. J., Harris, R. B., & Wheeler, A. R. (2009). Career success implications of political skill.
The Journal of Social Psychology, 149, 279–304. http://dx.doi.org/10.3200/SOCP.149.3.279-304
Van Iddekinge, C. H., Raymark, P. H., Eidson, C. E., Jr., & Attenweiler, W. J. (2004). What do structured
selection interviews really measure? The construct validity of behavior description interviews. Human
Performance, 17, 71–93. http://dx.doi.org/10.1207/S15327043HUP1701_4
Van Iddekinge, C. H., Raymark, P. H., & Roth, P. L. (2005). Assessing personality with a structured employment
interview: Construct-related validity and susceptibility to response inflation. Journal of Applied Psychology,
90, 536–552. http://dx.doi.org/10.1037/0021-9010.90.3.536
Van Iddekinge, C. H., Raymark, P. H., Roth, P. L., & Payne, H. S. (2006). Comparing the psychometric
characteristics of ratings of face-to-face and videotaped structured interviews. International Journal of
Selection and Assessment, 14, 347–359. http://dx.doi.org/10.1111/j.1468-2389.2006.00356.x
Waung, M., Hymes, R. W., & Beatty, J. E. (2014). The effects of video and paper resumes on assessments of
personality, applied social skills, mental capability, and resume outcomes. Basic and Applied Social
Psychology, 36, 238–251. http://dx.doi.org/10.1080/01973533.2014.894477
Weekley, J. A., & Jones, C. (1997). Video-based situational testing. Personnel Psychology, 50, 25–49.
http://dx.doi.org/10.1111/j.1744-6570.1997.tb00899.x
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview format and
degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61,
275–290. http://dx.doi.org/10.1111/j.2044-8325.1988.tb00467.x
Wilk, S. L., Desmarais, L. B., & Sackett, P. R. (1995). Gravitation to jobs commensurate with ability:
Longitudinal and cross-sectional tests. Journal of Applied Psychology, 80, 79–85. http://dx.doi.org/10.1037/
0021-9010.80.1.79
Wright, P. M., Lichtenfels, P. A., & Pursell, E. D. (1989). The structured interview: Additional studies and a
meta-analysis. Journal of Occupational Psychology, 62, 191–199. http://dx.doi.org/10.1111/j.2044-
8325.1989.tb00491.x
Zenger, T. R., & Lawrence, B. S. (1989). Organizational demography: The differential effects of age and tenure
distributions on technical communication. Academy of Management Journal, 32, 353–376. http://dx.doi.org/
10.2307/256366
Zimmerman, R. D. (2008). Understanding the impact of personality traits on individuals’ turnover decisions: A
meta-analytic path model. Personnel Psychology, 61, 309–348. http://dx.doi.org/10.1111/j.1744-
6570.2008.00115.x
Appendix
Rating Scales Used to Code AWBVIs
Using the applicant’s responses in the video, please use the following rating scale to make an
inference regarding the applicant’s true level of each of the following work-related attributes.
1 2 3 4 5
Poor Marginal Satisfactory Good Superior
Mental capability: Ability to learn, organize, process, and evaluate information
Attribute Rating
1. General intelligence _______________
Common interview dimensions: intellectual capacity, mental ability, ability to
learn, analytical ability, mental alertness, ability to think quickly
Justification:
2. Verbal ability _______________
Common interview dimensions: support for arguments, use of vocabulary
Justification:
3. Applied mental skills _______________
Common interview dimensions: problem-solving, problem assessment,
judgment, decision-making, critical thinking, planning, organizing
Justification:
4. Creativity and innovation _______________
Common interview dimensions: creativity, creativeness, innovation
Justification:
Knowledge and skills: Accumulated knowledge, skills, and abilities
Attribute Rating
1. Job knowledge and skills _______________
Common interview dimensions: knowledge, technical knowledge, job knowledge,
product knowledge, use of tools, budgeting
Justification:
2. Education and training _______________
Common interview dimensions: education, academic achievement, grades in
school
Justification:
3. Experience and general work history _______________
Common interview dimensions: experience, work history, exposure
Justification:
Basic personality: Predisposition to act in certain ways
Attribute Rating
1. Conscientiousness _______________
Common interview dimensions: dependability, responsibility, reliability, timeliness,
sense of duty, need for achievement, motivation, willingness to work hard,
initiative, persistence, time management, moral character, integrity, ethics,
professionalism
Justification:
Applied social skills: Ability to function effectively in social situations
Attribute Rating
1. Communication skills _______________
Common interview dimensions: oral communication, communication skills, expression,
ability to present ideas, conversation ability, voice and speech, listening
Justification:
2. Interpersonal skills _______________
Common interview dimensions: interpersonal skills, interpersonal relations, social
skills, social sensitivity, working with others, ability to relate to people, rapport,
tact, ability to deal with people, adapting to people, teamwork, cooperation, team
focus, team building
Justification:
3. Leadership _______________
Common interview dimensions: leadership, coaching, developing people, delegation,
maintaining control, directing others, activating others, developing teamwork in
others, building morale, discipline
Justification:
4. Persuading and negotiating _______________
Common interview dimensions: persuasiveness, ability to negotiate
Justification:
Please give your evaluation of the quality of the response to each question using the rating scale
below.
1 2 3 4 5 6 7
Poor Average Superior
Interview Question Rating
1 _______________
2 _______________
3 _______________
4 _______________
5 _______________
Overall _______________
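For readers who want to implement this coding scheme, the aggregation the article's analyses imply can be sketched as follows. This is a minimal illustration, not the authors' scoring code: it assumes that each Huffcutt et al. (2001) construct score is the mean of its attribute ratings from the form above, and that the composite interview rating is the mean of the per-question ratings. All rating values shown are invented examples.

```python
from statistics import mean

# Attribute ratings (1-5 scale) grouped by construct, following the
# Appendix rating form. The values below are purely illustrative.
construct_ratings = {
    "mental_capability": {"general_intelligence": 4, "verbal_ability": 3,
                          "applied_mental_skills": 4, "creativity_innovation": 3},
    "knowledge_and_skills": {"job_knowledge_skills": 5, "education_training": 4,
                             "experience_work_history": 4},
    "conscientiousness": {"conscientiousness": 4},
    "applied_social_skills": {"communication_skills": 3, "interpersonal_skills": 4,
                              "leadership": 3, "persuading_negotiating": 2},
}

# Per-question quality ratings (1-7 scale) for the five interview questions.
question_ratings = [5, 6, 4, 5, 6]

# Assumed aggregation: construct score = mean of its attribute ratings.
construct_scores = {c: mean(attrs.values()) for c, attrs in construct_ratings.items()}

# Assumed aggregation: composite interview rating = mean of question ratings.
composite_rating = mean(question_ratings)

print(construct_scores)
print(composite_rating)  # 5.2
```

In a validity study like this one, each construct score and the composite would then be correlated with the criterion measures (here, self-rated job performance and organizational tenure).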
Received February 28, 2017
Latest revision received September 15, 2017
Accepted September 18, 2017
... Although AI-based AVI has been used as a replacement for conferencing interviews, phone interviews, and face-to-face interviews in initial employment screening (Gorman et al., 2018), whether job applicants trust this approach is unknown, and this issue is critical to recruitment effectiveness (e.g., applicant withdrawal). This project answers Glikson and Woolley's (2020) call for research on both cognitive and affective trust in various AI embodiments and interface features, especially in the context of employee selection (Acikgoz et al., 2020). ...
Article
As the demand for automatic video interviews powered by artificial intelligence (AI) increases among employers in the postpandemic era, so do concerns over job applicants' trust in the technology. There are various forms of AI-based video interviews with and without the features of tangibility, immediacy, and transparency used for preemployment screening, and these features may distinctively influence applicants' trust in the technology and whether they engage in or disengage from the hiring process accordingly. This field study involved designing a test of the effect of various forms of AI-based video interviews on interviewees' cognitive and affective trust based on the self-reporting of 152 real job applicants. The study found that AI used in asynchronous video interviews (AI-AVI) increased applicants' cognitive trust from that in the non-AI condition. Moreover, when the AI-AVI had features of tangibility and transparency, the applicants’ cognitive and affective trust increased. However, the feature of immediacy did not have a statistically significant impact. Contrary to concern over the potential negative effects caused by AI and its features, no statistically significant impacts were found in this study.
... There is preliminary evidence that applicants can receive higher performance ratings in AVIs than video-conference interviews (Langer et al., 2017). In addition, AVI ratings correlate with job performance (Gorman et al., 2018). However, such evidence is based on one study with a small sample of online panel respondents and self-reported job performance. ...
Article
Full-text available
The present study examined how variations in the design of asynchronous video interviews (AVIs) impact important interviewee attitudes, behaviors, and outcomes, including perceived fairness, anxiety, impression management, and interview performance. Using a 2x2 experimental design, we investigated the impact of two common and important design elements on these outcomes: (a) preparation time (unlimited versus limited) and (b) the ability to re-record responses. Using a sample of 175 participants completing a mock AVI, we found that whereas providing such options (i.e., unlimited preparation time and/or re-recording) did not impact outcomes directly, the extent to which participants actually used these options did affect outcomes. For instance, those who used more re-recording attempts performed better in the interview and engaged in less deceptive impression management. Moreover, those who used more preparation time performed better in the interview while engaging in slightly less honest impression management. These findings point to the importance of investigating the effects of AVI design on applicant experiences and outcomes. Specifically, AVI design elements produce opportunities for applicants not typically present in synchronous interviews, and can alter interview processes in crucial ways. Finally, not all applicants use these opportunities equally, and this has implications for understanding interview behavior and outcomes.
... Online application documents assess applicants' educational background and work experience (Brown & Campion, 1994), whereas online tests assess GMA (Ones et al., 2017). The constructs assessed in AVIs, however, rely on the questions being asked, but oral presentation skills probably always play a role in them (Gorman et al., 2018; Rasipuram & Jayagopi, 2016). Another difference is the presentation format and, therefore, the media richness (Daft & Lengel, 1986) of information. ...
Article
Full-text available
Asynchronous video interviews (AVIs) are increasingly used to preselect applicants. Previous research found that interviewees are more skeptical of these interviews compared to other forms of interviews. However, comparing AVIs to other interviews is not completely appropriate because of their lack of interactivity and their use during earlier stages of the selection process. Therefore, we compared perceptions of AVIs with perceptions of other preselection tools (online cognitive ability tests and online application documents). Compared to other preselection instruments, potential applicants do not have more skeptical fairness perceptions of AVIs. However, we found differences for perceived usefulness, perceived ease of use, privacy concerns, and perceptions of organizational attractiveness. Organizations can take this into account when choosing how to preselect their applicants. © 2022 The Authors. International Journal of Selection and Assessment published by John Wiley & Sons Ltd.
... It seems fundamentally plausible that each administration format permits very good prediction of job performance in its own right; that this is the case for structured face-to-face interviews is well established and has been repeatedly demonstrated meta-analytically (Huffcutt et al., 2014; McDaniel, Whetzel, Schmidt, & Maurer, 1994; Taylor & Small, 2002) ... (Brenner, 2019), which underscores the usefulness of this type of interview as a preselection instrument. And finally, a study with simulated AVIs found relationships with self-rated job performance (Gorman, Robinson, & Gamble, 2018). ...
... As for verbal technology, voice data are extracted just as in the case of vocal technology. Each applicant's linguistic habits, such as how often particular words are used, are analyzed to grasp his/her linguistic behaviors and tendencies [20]. Finally, vital (biological) data are collected to grasp the applicant's emotional state and dishonesty, based on the fact that the current biological state is related to his/her blood flow and pulse (see Figure 1). ...
Article
Full-text available
Recently, applicant-information services and interview-assistance services based on big data and AI technology have spread rapidly worldwide as a way to introduce an interview system that secures efficiency and fairness in the job-interview market. Accordingly, this study presents an AI-based interview system developed with deep-learning technology, in which more than 100,000 evaluation data sets were derived from 400,000 interview image data sets. The resulting AI interview system has been applied in enterprises with a reliability (Pearson correlation) of 0.88. In particular, this paper presents the application of the system to five major public enterprises in Korea, where satisfaction with fairness and efficiency was as high as 85% in such aspects as evaluation processes, job fitness, and organization fitness. As the applicable range of AI-based solutions expands to the general area of personnel management, with its time and cost efficiency as well as its reliability and fairness recognized, the deep-learning-based job-interview solution proposed in the present study can be applied widely to written examinations and personality and aptitude tests.
Chapter
Digitalization and, most recently, the COVID-19 pandemic have led companies to use technology-mediated interviews with increasing frequency, as a supplement to or in place of in-person job interviews. This chapter provides an overview of a series of findings that illuminate different facets of the question of how comparable the various modes of interview administration are. In particular, we address the following points: What forms of technology-mediated interviews exist, and how widespread are they? Are there differences in acceptance between the various interview media, which factors influence acceptance, and are there ways to improve it? Does the interview medium affect how applicants perform, and which applicant-side and interviewer-side factors are relevant here? And are there differences in the criterion-related validity of the various interview media? Based on the findings presented and on established interview standards, recommendations for conducting technology-mediated job interviews are derived.
Preprint
Full-text available
Background: All Canadian Residency Matching Service (CaRMS) R1 interviews were conducted virtually for the first time in 2021. We explored the facilitators, barriers, and implications of the virtual interview process for the CaRMS R1 match and provide recommendations for improvement. Methods: We conducted a cross-sectional survey study of CaRMS R1 residency applicants and interviewers across Canada in 2021. Surveys were distributed by email to the interviewers, and by email, social media, or newsletter to the applicants. Close-ended items were described, and open-ended items were thematically analyzed. Results: A total of 127 applicants and 400 interviewers, including 127 program directors, responded to the survey. 193/380 (50.8%) of interviewers and 90/118 (76.3%) of applicants preferred virtual over in-person interview formats. Facilitators of the virtual interview format included cost and time savings, ease of scheduling, reduced environmental impact, greater equity, less stress, greater reach and participation, and safety. Barriers to the virtual interview format included reduced informal conversations, limited ability for applicants to explore programs at different locations, limited ability for programs to assess applicants' interest, technological issues, concern for interview integrity, limited non-verbal communication, and reduced networking opportunities. The most helpful mediums for applicants to learn about residency programs were program websites, the CaRMS/AFMC websites, and recruitment videos. Additionally, panel interviews were preferred by applicants for their ability to showcase themselves and build connections with multiple interviewers. Conclusions: Perceptions of the 2021 CaRMS R1 virtual interviews were favourable among applicants and interviewers. Recommendations from this study can help improve future iterations of virtual CaRMS interviews.
Article
Full-text available
Recently, due to the COVID-19 pandemic, a growing number of companies have adopted contactless employment interviews. In particular, as artificial intelligence (AI) technology has been introduced into contactless interviews, asynchronous AI interviews (Asynchronous Video Interview–Artificial Intelligence; AVI-AI) for prescreening purposes have been added to many companies' hiring processes. AVI-AI has attracted great interest from companies because it can reduce the costs required in the hiring process and increase efficiency, and its use is expected to expand further in the future. Accordingly, research on the effectiveness of AVI-AI is increasing, but research on AVI-AI in industrial and organizational psychology is still scarce. This study reviews the concept and current use of AVI-AI, as well as research on its reliability, validity, and applicant reactions. Based on this review, directions for future research and suggestions for practitioners are provided.
Chapter
Video recruitmentVideo recruitment—the use of videosVideos at any point in the recruitment processRecruitment process—has surged among organizationsOrganizations as a strategy for hiring talent and operating in their respective fields amid the pandemic. In particular, video interviewsVideo interviews have become mainstream at the assessmentAssessment stage of the recruitment funnelRecruitment funnel to keep hiring workers in a remote context of work. Among job video interviewsVideo interviews, the asynchronousAsynchronous type especially raised interestInterest and concerns online. This chapter offers a novel and essential approach to the study of video interviewsVideo interviews through the theoretical exploration of the crossover between HRMHuman resource management (HRM), marketingMarketing, and information technologyInformation technology, with the goal of uncovering several ways that employer brandingEmployer branding can be ameliorated in a world wherein video recruitmentVideo recruitment prevails. This chapter investigates and compares online perspectivesOnline perspectives of video recruitmentVideo recruitment from both the employer and the worker standpoints to advance ways employer brandingEmployer branding can be improved across the recruitment funnelRecruitment funnel by employers that use video interviewsVideo interviews in remote hiringRemote hiring. This is especially beneficial considering talent’s increased bargaining power and mobility in the labor market and the relatively intense competition to attract and employ talent, as well as the fact that daily behaviors and practices are now developing the new paradigm of recruitment. In other words, this chapter can equip employers with the knowledge necessary to adapt their decision-making regarding video recruitmentVideo recruitment and employer brandingEmployer branding in the digital age, thereby positively affecting talent attractionTalent attraction. 
This can also benefit potential hires’ experience with video recruitmentVideo recruitment and enhance their opinion of video interviewsVideo interviews, which could lead to a better employer–worker match. In addition, this study underscores the key role and responsibility of video interview service providersVideo interview service providers in video recruitmentVideo recruitment. Service providers act as intermediaries between organizationsOrganizations and job applicants in the video recruitmentVideo recruitmentprocessRecruitment process; thus, they influence both recruiters’ and job applicants’ experience with video recruitmentVideo recruitment. This chapter first explores and conflates extant literature on video interviewsVideo interviews, recruitment, employer brandingEmployer branding, and digital marketingMarketing to lay the groundwork for empirical research. The methodological section then elaborates on the data collected from comments generated by individual users in a dedicated LinkedInLinkedIn news thread about asynchronousAsynchronousvideo interviewsVideo interviews and a service provider’s website content. Next, the empirical research findings are presented to stress the viewpoints of recruiters and the online pool of potential hires. Finally, the focus is shifted to the implications of these viewpoints as follows: (1) implications for organizationsOrganizations in their adoption, deployment, and use of video interviewsVideo interviews in remote recruitment, (2) implications for employer brandingEmployer branding, and (3) implications for the pool of potential and current workers. The chapter closes by discussing future research avenues on video recruitmentVideo recruitment and employer brandingEmployer branding in the digital age.
Article
Full-text available
There has been a growing interest in understanding what constructs are assessed in the employment interview and the properties of those assessments. To address these issues, the authors developed a comprehensive taxonomy of 7 types of constructs that the interview could assess. Analysis of 338 ratings from 47 actual interview studies indicated that basic personality and applied social skills were the most frequently rated constructs in this taxonomy, followed by mental capability and job knowledge and skills. Further analysis suggested that high- and low-structure interviews tend to focus on different constructs. Taking both frequency and validity results into consideration, the findings suggest that at least part of the reason why structured interviews tend to have higher validity is because they focus more on constructs that have a stronger relationship with job performance. Limitations and directions for future research are discussed.
Article
Full-text available
The use of technology such as telephone and video has become common when conducting employment interviews. However, little is known about how technology affects applicant reactions and interviewer ratings. We conducted meta-analyses of 12 studies that resulted in K = 13 unique samples and N = 1,557. Mean effect sizes for interview medium on ratings (d = -.41) and reactions (d = -.36) were moderate and negative, suggesting that interviewer ratings and applicant reactions are lower in technology-mediated interviews. Generalizing research findings from face-to-face interviews to technology mediated interviews is inappropriate. Organizations should be especially wary of varying interview mode across applicants, as inconsistency in administration could lead to fairness issues. At the same time, given the limited research that exists, we call for renewed attention and further studies on potential moderators of this effect.
Article
Full-text available
Landers and Behrend (2015) are the most recent in a long line of researchers who have suggested that online samples generated from sources such as Amazon's Mechanical Turk (MTurk) are as good as or potentially even better than the typical samples found in psychology studies. It is important that the authors caution that researchers and reviewers need to carefully reflect on the goals of research when evaluating the appropriateness of samples. However, although they argue that certain types of samples should not be dismissed out of hand, they note that there is only scant evidence demonstrating that online sources can provide usable data for organizational research and that there is a need for further research evaluating the validity of these new sources of data. Because the target article does not directly address the potential problems with such samples, we will review what is known about collecting online data (with a particular focus on MTurk) and illustrate some potential problems using data derived from such sources.
Book
Full-text available
The intent of this book is to review the research on selection interviews from an integrative perspective. The book is organized around a conception of the interview as a multistage process. The process begins as the interviewer forms initial impressions of the applicant from previewing paper credentials and from initial encounters with the applicant. The actual face-to-face interview follows, consisting of verbal, nonverbal, and paralinguistic exchanges between interviewer and applicant. The process concludes with the interviewer forming final impressions and judgments of the applicant's qualifications and rendering a decision (e.g., hire, reject, gather more information). The book follows from this general sequence of events, with each chapter focusing on a stage of the interview. In exploring the phases of the interview, the text draws freely from basic research on social cognition, decision making, information processing, and social interaction. Chapter 1: An overview of selection interview research and practice. Chapter 2: Cognitive processes of the interviewer. Chapter 3: First encounters: Impression formation in the preinterview phase. Chapter 4: Social interaction in the interview. Chapter 5: Final impressions: Judgments and decisions in the postinterview phase. Chapter 6: Alternative models of the interview process. Chapter 7: Evaluating the selection interview. Chapter 8: Legal issues in selection interviews. Chapter 9: Strategies for improving selection interviews. Chapter 10: Other functions of the interview. Chapter 11: Concluding comments.
Article
Applicant impression management (IM), and especially its deceptive side (i.e., faking), has been described as a potential threat to the validity of employment interviews. This threat was confirmed by evidence of interviewers’ inability to detect (deceptive) IM tactics. Previous studies suggested that some interviewers could be better IM detectors than others, but did not examine the reasons explaining higher abilities. Building on interpersonal deception theory, this study explores individual differences in cognitions (i.e., cognitive ability) and social sensitivity (associated with generalized trust and honesty) as predictors of IM detection abilities. Results of a study with 250 individuals suggest that these individual differences did not independently predict IM detection. Although high trust was associated with higher IM detection when combined with high cognitive ability, a high-trust/low-ability combination appears to be the most harmful for detection. Organizations may consider fighting applicant deception by relying on interviewers who are high cognitive ability trusters. Available at: http://scholarworks.bgsu.edu/pad/vol2/iss1/1
Article
This chapter discusses the practical application of video technology for assessing applicants for law enforcement officer positions at U.S. Customs and Border Protection (CBP). CBP has developed a video-based test (VBT) for assessing the judgment and interactional skills of applicants. The VBT uses video technology to enhance the realism of the situations presented to applicants as well as applicants' responses to the situations. The chapter describes the VBT approach used by CBP, and outlines the process for developing VBTs. It describes the scoring process, presents psychometric and related evidence in support of the VBT, and provides practical suggestions to industrial/organizational (I/O) psychologists who are interested in developing and implementing VBTs. The chapter provides a description of the VBT approach and highlights its effectiveness in enhancing the realism of applicant assessment while also reducing the burden on the organization.