“The interviewer is a machine!” Investigating the effects of conventional and technology‐mediated interview methods on interviewee reactions and behavior
Abstract
Despite the growing number of organizations interested in the use of asynchronous video interviews (AVIs), little is known about their impact on interviewee reactions and behavior. We randomly assigned participants (N = 299) from two different countries (Switzerland and India) to a face‐to‐face interview, an avatar‐based video interview (with an avatar as a virtual recruiter), or a text‐based video interview (with written questions) and collected data on a set of self‐rated and observer‐rated criteria. Overall, we found that whereas participants reported more negative reactions towards the two asynchronous interviews, observer ratings revealed similar performance across the three interviews and lower stress levels in the two AVIs. These findings suggest that although technology‐mediated interview methods are still not well accepted, interviewees are not at a disadvantage when these methods are used, in terms of both how well they perform and how stressed they appear to external observers. Implications are discussed.
... Traditionally segmented into sourcing, screening, interviewing, and candidate selection phases (Bogen and Rieke 2018), the screening stage has evolved with the introduction of Asynchronous Video Interviews (AVIs). A derivative of technology-mediated interviews (TMIs), AVIs offer a scalable solution to assess candidates beyond their resumes through pre-recorded video responses (Brenner, Ortner, and Fay 2016;Kleinlogel et al. 2023). This method promises standardization and fairness by providing all candidates with identical questions, eliminating the variability inherent in live interactions (Moore and Kearsley 1996;Rasipuram, Rao, and Jayagopi 2016). ...
The persistent issue of human bias in recruitment processes poses a formidable challenge to achieving equitable hiring practices, particularly when influenced by demographic characteristics such as the gender and race of both interviewers and candidates. Asynchronous Video Interviews (AVIs), powered by Artificial Intelligence (AI), have emerged as innovative tools aimed at streamlining the application screening process while potentially mitigating the impact of such biases. These AI-driven platforms present an opportunity to customize the demographic features of virtual interviewers to align with diverse applicant preferences, promising a more objective and fair evaluation. Despite their growing adoption, the implications of virtual interviewer identities on candidate experiences within AVIs remain underexplored. We aim to address this research and empirical gap in this paper. To this end, we carried out a comprehensive between-subjects study involving 218 participants across six distinct experimental conditions, manipulating the gender and skin color of an AI virtual interviewer agent. Our empirical analysis revealed that while the demographic attributes of the agents did not significantly influence the overall experience of interviewees, variations in the interviewees' demographics significantly altered their perception of the AVI process. Further, we uncovered that the mediating roles of Social Presence and Perception of the virtual interviewer critically affect interviewees' perceptions of fairness (+), privacy (-), and impression management (+).
Character comprises the psychological, moral, or ethical traits that distinguish one person from another. Leadership is the ability to influence people to willingly follow guidance or accept one's decisions. This study sought to determine fathers' understanding of their role in leadership character education for early childhood. The researchers used a qualitative approach because the object under study occurs in a natural setting, and the aim was to know, understand, and appreciate, carefully and in depth, the role of fathers in improving early childhood leadership character education at RA Nurul Abror Cibinong. The study shows that the father serves as a role model and sets the direction and rules of the family, enabling children to grow and develop better character. Future research is expected to identify further factors and solutions to guide fathers as role models in the family.
Asynchronous video interviews can use many configurations of design features to create the interviewee experience, but not all designs are equal. Design features may influence interviewees' deceptive and honest impression management, their reactions to the procedure, and interview performance evaluations. Three experiments using mock interviews tested the effects of preparation time and self‐views (N = 206, from Prolific), reviewing and re‐recording (N = 230, from Prolific), and giving faking warnings with human versus automated evaluation (N = 297 university students) on interview outcomes. The design had limited effects on interviewee behavior, but some features may increase interviewees' willingness to fake when used in combination. Opportunities for longer preparation time and re‐recording increased interview performance ratings. Warnings and evaluator type did not affect behavior, reactions, or performance. The implications of these effects are discussed.
Emotion AI is increasingly used to automatically evaluate asynchronous hiring interviews. Although touted for increasing hiring fit and reducing bias, it is unclear how job-seekers perceive emotion AI-enabled asynchronous interviews. This gap is striking, given job-seekers' marginalized position in hiring and how job-seekers with marginalized identities may be particularly vulnerable to this technology's potential harms. Addressing this gap, we conducted exploratory interviews with 14 U.S.-based participants with direct, recent experience with emotion AI-enabled asynchronous interviews. While participants acknowledged the asynchronous, virtual modality's potential benefits to employers and job-seekers, they perceived harms to job-seekers associated with automatic emotion inferences that our analysis maps to distributive, procedural, and interactional injustices. We find that social identity can inform job-seekers' perceptions of emotion AI, extending prior work's understandings of the factors contributing to job-seekers' perceptions of AI (broadly) in hiring. Moreover, our results suggest that emotion AI use may reconfigure demands for emotional labor in hiring and that deploying this technology in its current state may unjustly risk harmful outcomes for job-seekers - or, at the very least, perceptions thereof, which shape behaviors and attitudes. Accordingly, we recommend against the present adoption of emotion AI in hiring, identifying opportunities for the design of future asynchronous hiring interview platforms to be meaningfully transparent, contestable, and privacy-preserving. We emphasize that only a subset of perceived harms we surface may be alleviated by these efforts; some injustices may only be resolved by removing emotion AI-enabled features.
This chapter presents the importance of diagnostic interviews for assessing individuals' suitability for study programs, training, and occupations. It discusses not only the advantages but also the challenges of selection interviews in capturing relevant aptitude-diagnostic information. The role of diagnostic conversations in the overall process of aptitude assessment is examined, covering the specification of requirements, different interview formats, the importance of interview guides, and the evaluation of aptitude-diagnostic interviews. To be able to assess the quality of interviews, it is essential to adhere to psychometric standards, various quality guidelines, and legal foundations, which are also addressed in this chapter.
Eighty-four managers who make hiring decisions in 1 of 6 occupations representative of J. L. Holland's (1973) 6 job typologies (medical technologist, insurance sales agent, carpenter, licensed practical nurse, reporter, and secretary) rated 39 hypothetical job applicants on 2 dependent variables, hirability and counterproductivity. Applicants were described on the Big Five personality factors (Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and on general mental ability. Results showed that general mental ability and conscientiousness were the most important attributes related to applicants' hirability and that Emotional Stability, Conscientiousness, and Agreeableness were the most important attributes related to counterproductivity. In most respects, these results mirror meta-analytic reviews of validity studies, thereby confirming hypotheses.
The present study examined how variations in the design of asynchronous video interviews (AVIs) impact important interviewee attitudes, behaviors, and outcomes, including perceived fairness, anxiety, impression management, and interview performance. Using a 2x2 experimental design, we investigated the impact of two common and important design elements on these outcomes: (a) preparation time (unlimited versus limited) and (b) the ability to re-record responses. Using a sample of 175 participants completing a mock AVI, we found that whereas providing such options (i.e., unlimited preparation time and/or re-recording) did not impact outcomes directly, the extent to which participants actually used these options did affect outcomes. For instance, those who used more re-recording attempts performed better in the interview and engaged in less deceptive impression management. Moreover, those who used more preparation time performed better in the interview while engaging in slightly less honest impression management. These findings point to the importance of investigating the effects of AVI design on applicant experiences and outcomes. Specifically, AVI design elements produce opportunities for applicants not typically present in synchronous interviews, and can alter interview processes in crucial ways. Finally, not all applicants use these opportunities equally, and this has implications for understanding interview behavior and outcomes.
Asynchronous video interviews (AVIs) are increasingly used to preselect applicants. Previous research found that interviewees in AVIs receive better interview ratings compared to other forms of interviews. It has been suggested that this difference could be due to the preparation time given for each AVI question. A pilot study confirmed that preparation time in AVIs is indeed beneficial for interview performance. Furthermore, our main study replicated the significant effect of preparation time on interview performance and revealed that it was mediated by active response preparation, whereas no mediation effects were found for strain and for the use of impression management. Finally, preparation time had no direct effect on fairness perceptions but a positive indirect effect via honest impression management.
Practitioner points
• It was previously suggested that applicants receive better interview ratings in asynchronous video interviews (AVIs) than in synchronous interviews because of the preparation time that is provided for each question in an AVI.
• Our results confirmed that preparation time in AVIs indeed leads to better interview performance ratings.
• The positive effects of preparation time were due to active response preparation (i.e., interviewees made notes and structured their answers).
• Longer preparation time did not affect dishonest impression management or fairness perceptions but might affect the validity of AVIs.
The study of nonverbal behavior (NVB), and in particular kinesics (i.e., face and body motions), is typically seen as cost-intensive. However, the development of new technologies (e.g., ubiquitous sensing, computer vision, and algorithms) and approaches to study social behavior [i.e., social signal processing (SSP)] makes it possible to train algorithms to automatically code NVB, from action/motion units to inferences. Nonverbal social sensing refers to the use of these technologies and approaches for the study of kinesics based on video recordings. Nonverbal social sensing appears as an inspiring and encouraging approach to study NVB at reduced costs, making it a more attractive research field. However, does this promise hold? After presenting what nonverbal social sensing is and can do, we discuss the key challenges that researchers face when using nonverbal social sensing on video data. Although nonverbal social sensing is a promising tool, researchers need to be aware that algorithms might be as biased as humans when extracting NVB, and that automated NVB coding might remain context-dependent. We provide study examples to discuss these challenges and point to potential solutions.
Advances in employment assessment technology have increasingly enabled employers to recruit from around the world by allowing interviewees to respond to live or pre-recorded video or text prompts either live or via asynchronous video recordings. Despite their greater scheduling convenience, asynchronous virtual interviews decrease applicants’ ability to engage in impression management and relationship building and therefore may negatively impact applicant reactions compared to their synchronous counterparts. Further, national culture has the potential to influence reactions to virtual interview synchronicity. Previous research has yielded mixed results, with some studies suggesting that culture can moderate how applicants react to selection tests and some finding little or no effect. Drawing from applicant reactions and media richness theories, we integrate Hofstede’s cultural dimensions to investigate the role of national culture in applicant reactions to virtual interview synchronicity in a sample of 644,905 virtual interviewees from 46 countries. Overall, our findings demonstrate that, though they rated both highly, interviewees around the world were generally more satisfied with synchronous virtual interviews and found them to be more effective than asynchronous virtual interviews. Three dimensions of national culture—uncertainty avoidance, long-term orientation, and indulgence—had small to medium moderating effects on these relationships.
Organizations are increasingly adopting automated video interviews (AVIs) to screen job applicants despite a paucity of research on their reliability, validity, and generalizability. In this study, we address this gap by developing AVIs that use verbal, paraverbal, and nonverbal behaviors extracted from video interviews to assess Big Five personality traits. We developed and validated machine learning models within (using nested cross-validation) and across three separate samples of mock video interviews (total N = 1,073). Also, we examined their test–retest reliability in a fourth sample (N = 99). In general, we found that the AVI personality assessments exhibited stronger evidence of validity when they were trained on interviewer-reports rather than self-reports. When cross-validated in the other samples, AVI personality assessments trained on interviewer-reports had mixed evidence of reliability, exhibited consistent convergent and discriminant relations, used predictors that appear to be conceptually relevant to the focal traits, and predicted academic outcomes. On the other hand, there was little evidence of reliability or validity for the AVIs trained on self-reports. We discuss the implications for future work on AVIs and personality theory, and provide practical recommendations for the vendors marketing such approaches and organizations considering adopting them.
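Because the study above hinges on nested cross-validation, a minimal sketch of that evaluation structure may help: an inner loop tunes hyperparameters while an outer loop estimates validity on untouched folds. Everything here (the simulated features, the ridge model, and the parameter grid) is an illustrative assumption, not the authors' pipeline:

```python
# Hedged sketch: nested cross-validation for a model predicting an
# interviewer-rated trait score from interview-derived features.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))             # verbal/paraverbal/nonverbal features (simulated)
y = X[:, 0] * 0.5 + rng.normal(size=300)   # interviewer-rated trait score (simulated)

inner = KFold(n_splits=5, shuffle=True, random_state=1)    # tunes alpha
outer = KFold(n_splits=10, shuffle=True, random_state=2)   # estimates validity

model = GridSearchCV(
    make_pipeline(StandardScaler(), Ridge()),
    param_grid={"ridge__alpha": [0.1, 1.0, 10.0]},
    cv=inner,
)
# Outer-loop folds never influence the inner-loop tuning,
# so the reported R^2 is not optimistically biased.
scores = cross_val_score(model, X, y, cv=outer, scoring="r2")
print(f"nested CV R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```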
Organizations increasingly use technology-mediated interviews. However, only limited research is available concerning the comparability of different interview media and most of the available studies stem from a time when technology-mediated interviews were less common than in the present time. In an experiment using simulated selection interviews, we compared traditional face-to-face (FTF) interviews with telephone and videoconference interviews to determine whether ratings of interviewees’ performance, their perceptions of the interview, or their strain and anxiety are affected by the type of interview. Before participating in the actual interview, participants had a more positive view of FTF interviews compared to technology-mediated interviews. However, fairness perceptions did not differ anymore after the interview. Furthermore, there were no differences between the three interview media concerning psychological and physiological indicators of strain or interview anxiety. Nevertheless, ratings of interviewees’ performance were lower in the technology-mediated interviews than in FTF interviews. Thus, differences between different interview media can still be found nowadays even though most applicants are much more familiar with technology-mediated communication than in the past. The results show that organizations should take this into account and therefore avoid using different interview media when they interview different applicants for the same job opening.
Asynchronous video interviews (AVIs) are increasingly used by organizations in their hiring process. In this mode of interviewing, applicants are asked to record their responses to predefined interview questions using a webcam via an online platform. AVI usage has grown due to employers' perceived benefits in terms of cost and scale. However, little research has been conducted regarding applicants' reactions to these new interview methods. In this work, we investigate applicants' reactions to an AVI platform using self-reported measures previously validated in the psychology literature. We also investigate the connections of these measures with nonverbal behavior displayed during the interviews. We find that participants who found the platform creepy and had concerns about privacy reported lower interview performance compared to participants who did not have such concerns. We also observe weak correlations between displayed nonverbal cues and these self-reported measures. Finally, inference experiments achieve overall low performance with respect to explaining applicants' reactions. Overall, our results reveal that participants who are not at ease with AVIs (i.e., those with a high creepy-ambiguity score) might be unfairly penalized. This has implications for improved hiring practices using AVIs.
Asynchronous video interviews (AVIs) are a form of one-way, technology-mediated selection interviewing that continues to grow in popularity. An AVI is a broad method that varies substantially in design and execution. Despite being adopted by many organizations, human resources professionals, and hiring managers, research on AVIs is lagging far behind practice. Empirical evidence is scarce and conceptual work to guide research efforts and best practice recommendations is lacking. We propose a framework for examining the role and impact of specific design features of AVIs, building on theories of justice-based applicant reactions, social presence, interview anxiety, and impression management. More precisely, our framework highlights how pre-interview design decisions by organizations and completion decisions by applicants can influence reactions and behaviors during the interview, as well as post-interview outcomes. As such, we offer an agenda of the central topics that need to be addressed, and a set of testable propositions to guide future research.
Due to technological progress, videoconference interviews have become more and more common in personnel selection. Nevertheless, even in recent studies, interviewees received lower performance ratings in videoconference interviews than in face-to-face (FTF) interviews and interviewees held more negative perceptions of these interviews. However, the reasons for these differences are unclear. Therefore, we conducted an experiment with 114 participants to compare FTF and videoconference interviews regarding interview performance and fairness perceptions and we investigated the role of social presence, eye contact, and impression management for these differences. As in other studies, ratings of interviewees' performance were lower in the videoconference interview. Differences in perceived social presence, perceived eye contact, and impression management contributed to these effects. Furthermore, live ratings of interviewees' performance were higher than ratings based on recordings. Additionally, videoconference interviews induced more privacy concerns but were perceived as more flexible. Organizations should take the present results into account and should not use both types of interviews in the same selection stage.
Videoconference interviews and asynchronous interviews are increasingly used to select applicants. However, recent research has found that technology-mediated interviews are less accepted by applicants compared to face-to-face (FTF) interviews. The reasons for these differences have not yet been clarified. Therefore, the present study takes a closer look at potential reasons that have been suggested in previous research.
The present study surveyed 154 working individuals who answered questions concerning their perceptions of FTF, videoconference, and asynchronous interviews in terms of perceived fairness, social presence, and the potential use of impression management tactics. Furthermore, potential attitudinal and personality correlates were also measured.
Technology-mediated interviews were perceived as less fair than FTF interviews and this difference was stronger for asynchronous interviews than for videoconference interviews. The perceived social presence and the possible use of impression management followed the same pattern. Furthermore, differences in fairness perceptions were mediated by perceived social presence and the possible use of impression management tactics. Additionally, affinity for technology and core self-evaluations correlated positively with perceptions of videoconference interviews but not with those of FTF and asynchronous interviews. This is the first study to compare fairness perceptions of FTF, videoconference, and asynchronous interviews and to confirm previous assumptions that potential applicants perceive technology-mediated interviews as less favorable because of impairments in social presence and the potential use of impression management.
Asynchronous video interviews are used more and more for the preselection of potential job candidates. However, recent research has shown that they are less accepted by applicants than face-to-face interviews. Our study aimed to identify ways to improve perceptions of video interviews by using explanations that emphasize standardization and flexibility. Our results showed that an explanation stressing the higher level of standardization improved fairness perceptions, whereas an explanation stressing the flexibility concerning interview scheduling improved perceptions of usability. Additionally, the improvement of fairness perceptions eventually influenced perceived organizational attractiveness. Furthermore, older participants accepted video interviews less. Practical implications and recommendations for future research are discussed.
Technological advancements in Artificial Intelligence allow the automation of every part of job interviews (information acquisition, information analysis, action selection, action implementation) resulting in highly automated interviews. Efficiency advantages exist, but it is unclear how people react to such interviews (and whether reactions depend on the stakes involved). Participants (N = 123) in a 2 (highly automated, videoconference) × 2 (high‐stakes, low‐stakes situation) experiment watched and assessed videos depicting a highly automated interview for high‐stakes (selection) and low‐stakes (training) situations or an equivalent videoconference interview. Automated high‐stakes interviews led to ambiguity and less perceived controllability. Additionally, highly automated interviews diminished overall acceptance through lower social presence and fairness. To conclude, people seem to react negatively to highly automated interviews and acceptance seems to vary based on the stakes.
OPEN PRACTICES
This study was pre‐registered on the Open Science Framework (osf.io/hgd5r) and on AsPredicted (https://AsPredicted.org/i52c6.pdf).
Over the last two decades, technological advancements internationally have meant that the Internet has become an important medium for recruitment and selection. Consequently, there is an increased need for research that examines the effectiveness of newer technology-mediated selection methods. This exploratory research study qualitatively explored applicant perceptions of fairness of asynchronous video interviews used in medical selection. Ten undergraduate medical students participated in a pilot asynchronous multiple-mini interview and were invited to share their experiences and perceptions in a follow-up interview. The data was transcribed verbatim and analysed using template analysis, with Gilliland’s (1993) organisational justice theory guiding the original template. Many of the original themes from Gilliland’s model were uncovered during analysis. Additionally, some significant themes were identified that did not form part of the original template and were therefore added to the final coding template - these were specifically relating to technology, including acceptability in a medical context; technical issues and adverse impact. Overall, results suggested that participants perceived asynchronous video interviews to be a fair method of selection. However, participants thought asynchronous interviews should only be used as part of an extensive selection process and furthermore, should not replace face-to-face interviews. Findings are discussed in line with existing research of fairness perceptions and justice theory in selection ( Gilliland, 1993) and implications for research and practice are presented.
When people interact with novel technologies (e.g., robots, novel technological tools), the word “creepy” regularly pops up. We define creepy situations as eliciting uneasy feelings and involving ambiguity (e.g., about how to behave or how to judge the situation). A common metric for creepiness would help in evaluating the creepiness of situations and in developing adequate interventions against creepiness. Following psychometric guidelines, we developed the Creepiness of Situation Scale (CRoSS) across four studies with a total of N = 882 American and German participants. In Studies 1–3, participants watched a video of a creepy situation involving technology. Study 1 used exploratory factor analysis in an American sample and showed that creepiness consists of emotional creepiness and creepy ambiguity. In a German sample, Study 2 confirmed these subdimensions. Study 3 supported the validity of the CRoSS, as creepiness correlated positively with privacy concerns and computer anxiety, but negatively with controllability and transparency. Study 4 used the scale in a 2 (male vs. female experimenter) × 2 (male vs. female participant) × 2 (day vs. night) field study to demonstrate its usefulness for non-technological settings and its sensitivity to theory-based predictions. Results indicate that participants contacted by an experimenter at night-time reported higher feelings of creepiness. Overall, these studies suggest that the CRoSS is a psychometrically sound measure for research and practice.
We conducted a meta-analysis to estimate the effect of self-reported interview anxiety on job candidates’ interview performance. Correspondingly, we examined the extent to which this relation was moderated by anxiety measurement approaches, type of interview (mock vs. real), timing of the anxiety measurement (before vs. after the interview), age, and gender. The overall meta-analytic correlation of −.19 was moderated by measurement approach and type of interview. Additionally, we evaluated the contributing studies with respect to power/sample size and provide sample size guidance for future research. The overall negative relation of −.19 (a medium effect size in this research area) indicates that anxiety may have a meaningful impact on hiring decisions in competitive situations through a decrease in interview performance.
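For readers unfamiliar with how a pooled correlation like the −.19 above is typically computed, here is a minimal fixed-effect Fisher-z sketch. The study-level correlations and sample sizes are invented for illustration; the meta-analysis itself used its own study set and moderator analyses:

```python
# Hedged sketch: pooling study-level anxiety-performance correlations
# with an inverse-variance-weighted Fisher-z meta-analysis.
import numpy as np

r = np.array([-0.25, -0.12, -0.30, -0.15])   # per-study correlations (assumed)
n = np.array([120, 85, 200, 150])            # per-study sample sizes (assumed)

z = np.arctanh(r)            # Fisher z-transform stabilizes the variance
w = n - 3                    # inverse-variance weights: Var(z) = 1/(n - 3)
z_bar = np.sum(w * z) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])  # back-transform to r

print(f"pooled r = {np.tanh(z_bar):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```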
Understanding and modeling people’s behavior in social interactions is an important problem in Social Computing. In this work, we automatically predict the communication skill of a person in two kinds of interview-based social interactions namely interface-based (without an interviewer) and traditional face-to-face interviews. We investigate the differences in behavior perception and automatic prediction of communication skill when the same participant gives both interviews. Automated video interview platforms are gaining increasing attention that allows conducting interviews anywhere and anytime. Until recently, interviews were conducted face-to-face either for screening or for automatic assessment purposes. Our dataset consists of 100 dual interviews where the same participant participates in both settings. External observers rate the interviews by answering several behavioral based assessment questions (manually annotated attributes). Multimodal features related to lexical, acoustic and visual behavior are extracted automatically and trained using supervised learning algorithms like Support Vector Machines (SVM) and Logistic Regression. We make an extensive study of the verbal behavior of the participant using the spoken response obtained from manual transcriptions and an Automatic Speech Recognition (ASR) tool. We also explore early and late fusion of modalities for better prediction. Our best results indicate that automatic assessment can be done with interface-based interviews.
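As a rough illustration of the early versus late fusion the abstract mentions, the sketch below trains one logistic-regression classifier on concatenated features and, alternatively, one classifier per modality with averaged predicted probabilities. The simulated lexical, acoustic, and visual features are stand-ins, not the paper's extracted cues:

```python
# Hedged sketch: early vs. late fusion of modalities for a binary
# communication-skill label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100
lexical = rng.normal(size=(n, 20))
acoustic = rng.normal(size=(n, 10))
visual = rng.normal(size=(n, 15))
y = (lexical[:, 0] + acoustic[:, 0] + visual[:, 0] > 0).astype(int)

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=1)

# Early fusion: concatenate all features, train one classifier.
X_all = np.hstack([lexical, acoustic, visual])
early = LogisticRegression(max_iter=1000).fit(X_all[idx_tr], y[idx_tr])

# Late fusion: one classifier per modality, average the probabilities.
probs = [
    LogisticRegression(max_iter=1000)
    .fit(M[idx_tr], y[idx_tr])
    .predict_proba(M[idx_te])[:, 1]
    for M in (lexical, acoustic, visual)
]
late_pred = (np.mean(probs, axis=0) > 0.5).astype(int)

print("early fusion acc:", early.score(X_all[idx_te], y[idx_te]))
print("late fusion acc:", (late_pred == y[idx_te]).mean())
```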
Digital interviews (or Asynchronous Video Interviews) are a potentially efficient new form of selection interviews, in which interviewees digitally record their answers. Using Potosky's framework of media attributes, we compared them to videoconference interviews. Participants (N = 113) were randomly assigned to a videoconference or a digital interview and subsequently answered applicant reaction questionnaires. Raters evaluated participants' interview performance. Participants considered digital interviews to be creepier and less personal, and reported that they induced more privacy concerns. No difference was found regarding organizational attractiveness. Compared to videoconference interviews, participants in digital interviews received better interview ratings. These results warn organizations that using digital interviews might cause applicants to self-select out. Furthermore, organizations should stick to either videoconference or digital interviews within a selection stage.
Technologically advanced selection procedures are entering the market at exponential rates. The current study tested two previously held assumptions: (a) providing applicants with procedural information (i.e., making the procedure more transparent and justifying the use of this procedure) on novel technologies for personnel selection would positively impact applicant reactions, and (b) technologically advanced procedures might differentially affect applicants with different levels of computer experience. In a 2 (computer science students, other students) × 2 (low information, high information) design, 120 participants watched a video showing a technologically advanced selection procedure (i.e., an interview with a virtual character responding and adapting to applicants’ nonverbal behavior). Results showed that computer experience did not affect applicant reactions. Information had a positive indirect effect on overall organizational attractiveness via open treatment and information known. This positive indirect effect was counterbalanced by a direct negative effect of information on overall organizational attractiveness. This study suggests that computer experience does not affect applicant reactions to novel technologies for personnel selection, and that organizations should be cautious about providing applicants with information when using technologically advanced procedures as information can be a double-edged sword.
Update: While not specifically mentioned in the paper, it has implications for explainability and XAI research: providing people with more transparency can have simultaneous positive and negative effects on acceptance.
Effective communication is an important social skill that helps us interpret and connect with the people around us, and it is of utmost importance in employment interviews. This paper presents a methodical study and automatic measurement of the communication skill of candidates in different modes of behavioural interviews. It demonstrates a comparative analysis of non-conventional methods of employment interviews, namely 1) interface-based asynchronous video interviews and 2) written interviews (including a short essay). To achieve this, we collected a dataset of 100 structured interviews from participants. These interviews were evaluated independently by two human expert annotators on rubrics specific to each setting. We then propose a predictive model using automatically extracted multimodal features (audio, visual, and lexical), applying classical machine learning algorithms. Our best model achieves an accuracy of 75% on a binary classification task in all three contexts. We also study the differences between expert perception and automatic prediction across the settings.
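To make the lexical channel concrete, a minimal sketch of transcript-based classification follows: TF-IDF n-grams feeding a linear SVM. The toy transcripts and high/low labels are assumptions; the study used expert rubric annotations and richer multimodal features:

```python
# Hedged sketch: lexical features only, TF-IDF -> linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

transcripts = [
    "I led the project and coordinated the team through the deadline",
    "um I guess I did some stuff with the uh thing",
    "my main strength is clear structured communication with stakeholders",
    "not sure really what to say about that question",
]
labels = [1, 0, 1, 0]  # 1 = high, 0 = low communication skill (assumed)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(transcripts, labels)
print(clf.predict(["I structured my answer around the team's goals"]))
```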
The use of technology such as telephone and video has become common when conducting employment interviews. However, little is known about how technology affects applicant reactions and interviewer ratings. We conducted meta-analyses of 12 studies that resulted in K = 13 unique samples and N = 1,557. Mean effect sizes for interview medium on ratings (d = -.41) and reactions (d = -.36) were moderate and negative, suggesting that interviewer ratings and applicant reactions are lower in technology-mediated interviews. Generalizing research findings from face-to-face interviews to technology mediated interviews is inappropriate. Organizations should be especially wary of varying interview mode across applicants, as inconsistency in administration could lead to fairness issues. At the same time, given the limited research that exists, we call for renewed attention and further studies on potential moderators of this effect.
Expanding research on employment interview training, this study introduces virtual employment interview (VI) training with a focus on nonverbal behavior. In VI training, participants took part in a simulated interview with a virtual character. Simultaneously, the computer analyzed participants’ nonverbal behavior and provided real-time feedback on it. The control group received parallel interview training. Following training, participants took part in mock interviews, where interviewers rated participants’ nonverbal behavior and interview performance. Analyses revealed (a) that participants of VI training showed better interview performance, (b) that this effect was mediated by nonverbal behavior, and (c) that VI training has a positive influence on interview anxiety. These results have important practical implications for applicants, career counseling centers, and organizations.
The present study aimed to integrate findings from technology acceptance research with research on applicant reactions to new technology for the emerging selection procedure of asynchronous video interviewing. One hundred six volunteers experienced asynchronous video interviewing and filled out several questionnaires, including one on the applicants’ personalities. In line with previous technology acceptance research, the data revealed that perceived usefulness and perceived ease of use predicted attitudes toward asynchronous video interviewing. Furthermore, openness was found to moderate the relation between perceived usefulness and attitudes toward this particular selection technology. No significant effects emerged for computer self-efficacy, job interview self-efficacy, extraversion, neuroticism, and conscientiousness. Theoretical and practical implications are discussed.
In everyday life, judgments people make about others are based on brief excerpts of interactions, known as thin slices. Inferences stemming from such minimal information can be quite accurate, and nonverbal behavior plays an important role in the impression formation. Because protagonists are strangers, employment interviews are a case where both nonverbal behavior and thin slices can be predictive of outcomes. In this work, we analyze the predictive validity of thin slices of real job interviews, where slices are defined by the sequence of questions in a structured interview format. We approach this problem from an audio-visual, dyadic, and nonverbal perspective, where sensing, cue extraction, and inference are automated. Our study shows that although nonverbal behavioral cues extracted from thin slices were not as predictive as when extracted from the full interaction, they were still predictive of hirability impressions with values up to 0.34, which was comparable to the predictive validity of human observers on thin slices. Applicant audio cues were found to yield the most accurate results.
Purpose
Increased use of past behavior questions makes it important to understand applicants’ responses. Past behavior questions are designed to elicit stories from applicants. Four research questions were addressed: How do applicants respond to past behavior questions, in particular, how frequent are stories? When applicants produce stories, what narrative elements do they contain? Is story production related to applicants’ characteristics? Do responses affect interview outcomes?
Design/Methodology/Approach
Using a database of 62 real job interviews, we analyzed the prevalence of five types of applicant responses to past behavior questions: story, pseudo-story, exemplification, value/opinion, and self-description. We also coded the narrative content of stories, distinguishing between situations, tasks/actions, and results. We analyzed relations between applicant characteristics (gender, age, personality, self-reported communication and persuasion skills, general mental ability) and response type. We used hierarchical multiple regression to predict hiring recommendations from response type.
Findings
Stories were produced only 23% of the time. Stories featured more narrative elements related to situations than to tasks, actions, or results. General mental ability and conscientiousness affected response types, and men produced more stories than women. The storytelling rate also differed according to the type of competency. Stories and pseudo-stories increased hiring recommendations, and self-descriptions decreased them.
Originality/Value
Behavioral interviews may not be conducive to storytelling. Recruiters respond positively to narrative responses. More research is needed on storytelling in the selection interview, and recruiters and applicants might need training on how to encourage and tell accurate and representative stories.
Despite the widespread use of exploratory factor analysis in psychological research, researchers often make questionable decisions when conducting these analyses. This article reviews the major design and analytical decisions that must be made when conducting a factor analysis and notes that each of these decisions has important consequences for the obtained results. Recommendations that have been made in the methodological literature are discussed. Analyses of 3 existing empirical data sets are used to illustrate how questionable decisions in conducting factor analyses can yield problematic results. The article presents a survey of 2 prominent journals that suggests that researchers routinely conduct analyses using such questionable methods. The implications of these practices for psychological research are discussed, and the reasons for current practices are reviewed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
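One of the decisions the article flags as frequently mishandled is choosing the number of factors. A minimal sketch of Horn's parallel analysis, a commonly recommended alternative to the eigenvalue-greater-than-one rule, is shown below on simulated three-factor data; the simulation parameters and the 95th-percentile convention are illustrative assumptions:

```python
# Hedged sketch: parallel analysis for the number-of-factors decision.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 12
latent = rng.normal(size=(n, 3))
loadings = rng.normal(size=(3, p))
X = latent @ loadings + rng.normal(size=(n, p))  # 3-factor structure + noise

# Observed eigenvalues of the correlation matrix, descending.
obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

# Eigenvalues from many random datasets of the same shape.
sim = np.array([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
    for _ in range(200)
])
threshold = np.percentile(sim, 95, axis=0)

# Number of observed eigenvalues exceeding the random benchmark
# (typically recovers 3 for this simulation).
print("retain", int(np.sum(obs_eig > threshold)), "factors")
```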
Head nods occur in virtually every face-to-face discussion. As part of the backchannel domain, they are not only used to express a 'yes', but also to display interest or enhance communicative attention. Detecting head nods in natural interactions is a challenging task as head nods can be subtle, both in amplitude and duration. In this study, we make use of findings in psychology establishing that the dynamics of head gestures are conditioned on the person's speaking status. We develop a multimodal method using audio-based self-context to detect head nods in natural settings. We demonstrate that our multimodal approach using the speaking status of the person under analysis significantly improved the detection rate over a visual-only approach.
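The core idea, conditioning nod detection on speaking status, can be caricatured in a few lines: use a lower motion-energy threshold when the person is listening, since backchannel nods are subtler. The signal, window size, and thresholds below are illustrative assumptions, nothing like the authors' full multimodal classifier:

```python
# Hedged sketch: speaking-status-conditioned head-nod detection.
import numpy as np

def detect_nods(head_pitch, speaking, fps=30, win=0.5,
                thr_speak=2.0, thr_listen=1.0):
    """Flag windows whose vertical head-motion energy exceeds a threshold
    that depends on speaking status (listeners' nods are subtler, so the
    listening threshold is lower)."""
    w = int(win * fps)
    velocity = np.abs(np.diff(head_pitch, prepend=head_pitch[0]))
    nods = []
    for start in range(0, len(head_pitch) - w, w):
        energy = velocity[start:start + w].sum()
        thr = thr_speak if speaking[start:start + w].mean() > 0.5 else thr_listen
        nods.append(energy > thr)
    return np.array(nods)

# Toy usage: 10 s of pitch data with a subtle nod burst while listening.
t = np.linspace(0, 10, 300)
pitch = 0.3 * np.sin(2 * np.pi * 2 * t) * (t > 7)   # nod burst at the end
speaking = (t < 5).astype(float)                    # speaks for the first 5 s
print(detect_nods(pitch, speaking).nonzero()[0])    # windows flagged as nods
```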
Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
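A reader could reproduce the spirit of the feature-group comparison with a sketch like the one below, which scores ridge regression on separate applicant-audio and interviewer-visual feature blocks via cross-validated R². All data are simulated; the 36.2% figure comes from the paper's real dataset, not from this toy:

```python
# Hedged sketch: variance in hirability explained per feature group.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 62
applicant_audio = rng.normal(size=(n, 8))
interviewer_visual = rng.normal(size=(n, 6))
hirability = (applicant_audio[:, 0] + 0.5 * interviewer_visual[:, 0]
              + rng.normal(size=n))

for name, X in [("applicant audio", applicant_audio),
                ("interviewer visual", interviewer_visual)]:
    r2 = cross_val_score(RidgeCV(), X, hirability, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```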
Nonverbal behavior coding is typically conducted by “hand”. To remedy this time and resource intensive undertaking, we illustrate how nonverbal social sensing, defined as the automated recording and extracting of nonverbal behavior via ubiquitous social sensing platforms, can be achieved. More precisely, we show how and what kind of nonverbal cues can be extracted and to what extent automated extracted nonverbal cues can be validly obtained with an illustrative research example. In a job interview, the applicant’s vocal and visual nonverbal immediacy behavior was automatically sensed and extracted. Results show that the applicant’s nonverbal behavior can be validly extracted. Moreover, both visual and vocal applicant nonverbal behavior predict recruiter hiring decision, which is in line with previous findings on manually coded applicant nonverbal behavior. Finally, applicant average turn duration, tempo variation, and gazing best predict recruiter hiring decision. Results and implications of such a nonverbal social sensing for future research are discussed.
In this article, we provide guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that employs a series of nested models and sequential chi-square difference tests. We discuss the comparative advantages of this approach over a one-step approach. Considerations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed.
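The sequential chi-square difference test at the center of the two-step approach reduces to a few lines once an SEM package has produced fit statistics for two nested models. The numbers below are hypothetical stand-ins for such output; only the test logic is shown:

```python
# Hedged sketch: chi-square difference test between nested SEM models.
from scipy.stats import chi2

# Hypothetical fit of a constrained (nested) model vs. a less constrained one.
chisq_nested, df_nested = 112.4, 50
chisq_full, df_full = 98.7, 46

delta_chisq = chisq_nested - chisq_full   # constraints can only worsen fit
delta_df = df_nested - df_full
p = chi2.sf(delta_chisq, delta_df)

print(f"Delta-chi2({delta_df}) = {delta_chisq:.1f}, p = {p:.3f}")
# p < .05 -> the constraints significantly degrade fit; prefer the full model.
```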
There has been a rise in the use of electronic selection (e-selection) systems in organizations. Given the widespread use of these systems, this article reviews the factors that affect their effectiveness and acceptance by job applicants (applicant acceptance), and offers directions for future research on the topic. In particular, we examine the effectiveness and acceptance of these systems at each stage of the selection process including (a) job analysis, (b) job application, (c) pre-employment testing, (d) interviewing, (e) selection decision-making, and (f) evaluation and validation. We also consider their potential for adverse impact and invasion of privacy. Finally, we present some implications for e-selection system design and implementation.
More than 40 years ago, Masahiro Mori, a robotics professor at the Tokyo Institute of Technology, wrote an essay [1] on how he envisioned people's reactions to robots that looked and acted almost like a human. In particular, he hypothesized that a person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley. The essay appeared in an obscure Japanese journal called Energy in 1970, and in subsequent years, it received almost no attention. However, more recently, the concept of the uncanny valley has rapidly attracted interest in robotics and other scientific circles as well as in popular culture. Some researchers have explored its implications for human-robot interaction and computer-graphics animation, whereas others have investigated its biological and social roots. Now interest in the uncanny valley should only intensify, as technology evolves and researchers build robots that look human. Although copies of Mori's essay have circulated among researchers, a complete version hasn't been widely available. The following is the first publication of an English translation that has been authorized and reviewed by Mori. (See “Turning Point” in this issue for an interview with Mori.)
The authors examined the influence of personal information privacy concerns and computer experience on applicants’ reactions to online screening procedures. Study 1 used a student sample simulating application for a fictitious management intern job with a state personnel agency (N = 117) and employed a longitudinal, laboratory-based design. Study 2 employed a field sample of actual applicants (N = 396) applying for jobs online. As predicted, procedural justice mediated the relationship between personal information privacy concerns and test-taking motivation, organizational attraction, and organizational intentions in the laboratory and field. Experience with computers moderated the relationship between procedural justice with test-taking motivation and organizational intentions in the field but not in the laboratory sample. Implications are discussed in terms of the importance of considering applicants’ personal information privacy concerns and testing experience when designing online recruitment and selection systems.
Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICCs are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
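As a worked illustration of the one-way random-effects (Case 1) procedures the article reviews, the sketch below computes ICC(1) and its F-based confidence interval from a small toy ratings matrix; the data are illustrative, and the formulas follow the standard McGraw and Wong presentation:

```python
# Hedged sketch: ICC(1) from a one-way random-effects ANOVA, with the
# F-based confidence interval.
import numpy as np
from scipy.stats import f

ratings = np.array([   # rows = targets, columns = raters (one-way design)
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
])
n, k = ratings.shape
grand = ratings.mean()
msb = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)   # between-target
msw = np.sum((ratings - ratings.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))

icc1 = (msb - msw) / (msb + (k - 1) * msw)

alpha = 0.05
f_obs = msb / msw
fl = f_obs / f.ppf(1 - alpha / 2, n - 1, n * (k - 1))
fu = f_obs * f.ppf(1 - alpha / 2, n * (k - 1), n - 1)
lower = (fl - 1) / (fl + k - 1)
upper = (fu - 1) / (fu + k - 1)
print(f"ICC(1) = {icc1:.3f}, 95% CI [{lower:.3f}, {upper:.3f}]")
```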
Background
The COVID-19 pandemic caused graduate medical education (GME) programs to pivot to virtual interviews (VIs) for recruitment and selection. This systematic review synthesizes the rapidly expanding evidence base on VIs, providing insights into preferred formats, strengths, and weaknesses.
Methods
PubMed/MEDLINE, Scopus, ERIC, PsycINFO, MedEdPublish, and Google Scholar were searched from 1 January 2012 to 21 February 2022. Two authors independently screened titles, abstracts, full texts, performed data extraction, and assessed risk of bias using the Medical Education Research Quality Instrument. Findings were reported according to Best Evidence in Medical Education guidance.
Results
One hundred ten studies were included. The majority (97%) were from North America. Fourteen were conducted before COVID-19 and 96 during the pandemic. Studies involved both medical students applying to residencies (61%) and residents applying to fellowships (39%). Surgical specialties were more represented than other specialties. Applicants preferred VI days that lasted 4–6 h, with three to five individual interviews (15–20 min each), with virtual tours and opportunities to connect with current faculty and trainees. Satisfaction with VIs was high, though both applicants and programs found VIs inferior to in-person interviews for assessing ‘fit.’ Confidence in ranking applicants and programs was decreased. Stakeholders universally noted significant cost and time savings with VIs, as well as equity gains and reduced carbon footprint due to eliminating travel.
Conclusions
The use of VIs for GME recruitment and selection has accelerated rapidly. The findings of this review offer early insights that can guide future practice, policy, and research.
Interview anxiety is common among interviewees and has the potential to undermine an applicant's interview performance. Nevertheless, there is much that we do not understand about the role of anxiety in job interviews. In this paper, we advance a conceptual model that highlights the multidimensional nature of interview anxiety by incorporating its cognitive, behavioral, and physiological components, termed the Tripartite Interview Anxiety Framework (TIAF). This model highlights the role of person, interviewer, and contextual characteristics in shaping interview anxiety, elucidates the underlying relations between interview anxiety and performance, and delineates critical moderators of these important relations. In doing so, the TIAF simultaneously advances the theory of interview anxiety, promotes further work in this area, and highlights implications for practice.
In behavioural interviews, past-behaviour questions invite applicants to tell a story about a past job-related situation. Nevertheless, applicants often do not produce stories on demand, resorting to less appropriate responses. In a sample of real selection interviews (Study 1), only 50% of applicants’ responses to past-behaviour questions were indeed stories. We explored two factors that may increase applicants’ storytelling tendencies: probing and information about past-behaviour questions. In two experiments simulating selection interviews, we manipulated recruiter probing during the interview (Study 2) and the level of participants’ information about the expected answer format of past-behaviour questions (Studies 2 and 3). Probing induced participants to tell more stories and to include more narrative diversity in their stories, whereas giving participants information had no effect on story production. More information did, however, help participants tell fewer pseudo-stories (generic descriptions of situations). Analyses of participants’ thoughts and emotions experienced during question-answering suggest that finding an appropriate example to narrate is a major problem. The storytelling rate also varied by competency. Findings are relevant for theories of behaviour elicitation in selection situations.
This review critically examines the literature from 1985 to 1999 on applicant perceptions of selection procedures. We organize our review around several key questions: What perceptions have been studied? What are determinants of perceptions? What are the consequences or outcomes associated with perceptions applicants hold? What theoretical frameworks are most useful in examining these perceptions? For each of these questions, we provide suggestions for key research directions. We conclude with a discussion of the practical implications of this line of research for those who design and administer selection processes.
Most prior research on perceived procedural justice vis-à-vis human resource management selection procedures focuses on comparisons between nations and between types of employees. So far, findings indicate slight, if any, differences between nations. Predicated on a random sample of 950 respondents – native Israelis and Israelis from the former Soviet Union – we find significant differences between the two groups concerning five selection methods, which we ascribe to inherent cultural dissimilarities. We attribute these differences to Hofstede's uncertainty avoidance dimension. These results may elicit increased focus on inherent cultural differences among potential employees with the view of considering these differences in opting for selection methods in order to accommodate for existing cultural differences. This consideration appears particularly pertinent in culturally diverse workforces, given the increased proportion of immigrants.
In the 20 years since frameworks of employment interview structure have been developed, a considerable body of empirical research has accumulated. We summarize and critically examine this literature by focusing on the 8 main topics that have been the focus of attention: (a) the definition of structure; (b) reducing bias through structure; (c) impression management in structured interviews; (d) measuring personality via structured interviews; (e) comparing situational versus past-behavior questions; (f) developing rating scales; (g) probing, follow-up, prompting, and elaboration on questions; and (h) reactions to structure. For each topic, we review and critique research and identify promising directions for future research. When possible, we augment the traditional narrative review with meta-analytic review and content analysis. We concluded that much is known about structured interviews, but there are still many unanswered questions. We provide 12 propositions and 19 research questions to stimulate further research on this important topic.
Video interviews are increasing in popularity and are considered an efficient and effective selection tool. As companies start implementing this selection tool, it is important to understand interviewee perceptions of video interviewing and its impact on the overall effectiveness of the selection process. This study explores the pros and cons of video interviewing from the perspectives of 151 hospitality management students from the Southern United States who are currently seeking career placement upon graduation. Participants were asked to go through an online video interview, after which they completed a survey questionnaire. Qualitative findings indicated that some factors that led to favorableness of video interviewing were comfort, convenience, and savings in resources (money and time), while some factors that resulted in unfavorableness were its impersonal nature, lack of feedback, and technological glitches. Quantitative findings indicated that the overall favorability of video interviewing was low but perceived fairness was high. Based on the findings, recommendations were provided to improve the effectiveness and acceptance of this selection tool.
Executive Overview: Despite years of research designed to match jobs and people, selection decisions are not always based on an exact fit between the person and the job. Microsoft values intelligence over all else, for all jobs. Southwest Airlines values character. When are these general characteristics adequate to the task of selecting job candidates? Should firms value intelligence and conscientiousness above specific skills? Ask any ten human resource managers how they select employees and you will find that most of them work from the same set of unchallenged, generally unspoken ideas. Their way of thinking and the employee selection procedures that stem from it involve precise matching of knowledge, ability, and skill profiles. They see employee selection as fitting a key (a job candidate) into a lock (the job). The perfect candidate's credentials match the job requirements in all respects. Only an exact fit guarantees top employee performance. Cook, McClelland, and Spencer capture the precise matching idea in the AMA's Handbook for Employee Recruitment and Retention: "The final selection decision must match the 'whole person' with the 'whole job.' This requires a thorough analysis of both the person and the job; only then can an intelligent decision be made as to how well the two will fit together... stress should be placed on matching an applicant to a specific position."
This study investigates reactions to personnel selection techniques from the perspectives of working adults in the United States and Singapore, and provides a comparison of the two samples. Differences in the cultural values of the two countries are used to generate hypotheses. Working adults in Singapore (N = 158) and the United States (N = 108) rated the process favourability of eleven selection procedures and then indicated the bases for their reactions on seven procedural dimensions. Implications for selection in Singapore, the United States and in international contexts are discussed.
Reports 3 errors in the original article by K. O. McGraw and S. P. Wong (Psychological Methods, 1996, 1[1], 30–46). On page 39, the intraclass correlation coefficient (ICC) and r values given in Table 6 should be changed to r = .714 for each data set, ICC(C,1) = .714 for each data set, and ICC(A,1) = .720, .620, and .485 for the data in Columns 1, 2, and 3 of the table, respectively. In Table 7 (p. 41), which is used to determine confidence intervals on population values of the ICC, the procedures for obtaining the confidence intervals on ICC(A,k) need to be amended slightly. Corrected formulas are given. On pages 44–46, references to Equations A3, A4, and so forth in the Appendix should be to Sections A3, A4, and so forth. (The following abstract of this article originally appeared in record 1996-03170-003.) Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICC are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. (PsycINFO Database Record (c) 2012 APA, all rights reserved)