Chapter

Social Robots and Digital Humans as Job Interviewers: A Study of Human Reactions Towards a More Naturalistic Interaction


Abstract

More and more companies are using nonhuman agents for employment interviews, aiming to make the selection process easier, faster, and unbiased. To assess the effectiveness of this practice, we systematically analyzed and compared human interaction with a social robot, a digital human, and another human under the same scenario, simulating the first phase of a job interview. Our purpose is to understand human reactions and thereby reveal what humans need from human-nonhuman interaction. We also explored how the appearance and physical presence of an agent affect human perception, expectations, and emotions. To support our research, we used time-related and acoustic features of audio data, as well as psychometric data. Statistically significant differences were found for almost all extracted features, especially intensity, speech rate, frequency, and response time. We also developed a machine learning model that recognizes the nature of the interlocutor a human is interacting with. Although the human interviewer was generally preferred, interest was higher and shyness lower during human-robot interaction. We therefore believe that, with some improvements, social robots, compared to digital humans, have the potential to act effectively as job interviewers.
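The abstract does not disclose which model or exact features were used beyond the list above, so the following is only a hypothetical sketch of such a pipeline: per-response acoustic and timing features (intensity, speech rate, fundamental frequency, response time) fed to a classifier that predicts the interlocutor type. The extract_features helper and the random-forest choice are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(response):
    # Hypothetical per-response feature vector mirroring the features
    # the abstract highlights; real values would be measured from audio.
    return np.array([
        response["mean_intensity_db"],      # loudness
        response["speech_rate_syl_per_s"],  # speaking tempo
        response["mean_f0_hz"],             # fundamental frequency
        response["response_time_s"],        # delay before answering
    ])

def train_interlocutor_classifier(responses, labels):
    """labels: 0 = human, 1 = social robot, 2 = digital human."""
    X = np.vstack([extract_features(r) for r in responses])
    y = np.asarray(labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)
```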

References
Article
It is well established in the literature that biases (e.g., related to body size, ethnicity, race etc.) can occur during the employment interview and that applicants’ fairness perceptions related to selection procedures can influence attitudes, intentions, and behaviors toward the recruiting organization. This study explores how social robotics may affect this situation. Using an online video vignette-based experimental survey (n=235), the study examines applicant fairness perceptions of two types of job interviews: a face-to-face and a robot-mediated interview. To reduce the risk of socially desirable responses, desensitize the topic, and detect any inconsistencies in the respondents’ reactions to vignette scenarios, the study employs a first-person and a third-person perspective. In the robot-mediated interview, two teleoperated robots are used as fair proxies for the applicant and the interviewer, thus providing symmetrical visual anonymity unlike prior research that relied on asymmetrical anonymity, in which only one party was anonymized. This design is intended to eliminate visual cues that typically cause implicit biases and discrimination of applicants, but also to prevent biasing the interviewer’s assessment through impression management tactics typically used by applicants. We hypothesize that fairness perception (i.e., procedural fairness and interactional fairness) and behavioral intentions (i.e., intentions of job acceptance, reapplication intentions, and recommendation intentions) will be higher in a robot-mediated job interview than in a face-to-face job interview, and that this effect will be stronger for introvert applicants. The study shows, contrary to our expectations, that the face-to-face interview is perceived as fairer, and that the applicant’s personality (introvert vs. extravert) does not affect this perception. We discuss this finding and its implications, and address avenues for future research.
Article
We present a targeted review of recent developments and advances in digital selection procedures (DSPs) with particular attention to advances in internet-based techniques. By reviewing the emergence of DSPs in selection research and practice, we highlight five main categories of methods (online applications, online psychometric testing, digital interviews, gamified assessment and social media). We discuss the evidence base for each of these DSP groups, focusing on construct and criterion validity, and applicant reactions to their use in organizations. Based on the findings of our review, we present a critique of the evidence base for DSPs in industrial, work and organizational psychology and set out an agenda for advancing research. We identify pressing gaps in our understanding of DSPs, and ten key questions to be answered. Given that DSPs are likely to depart further from traditional non-digital selection procedures in the future, a theme in this agenda is the need to establish a distinct and specific literature on DSPs, and to do so at a pace that reflects the speed of the underlying technological advancement. In concluding, we, therefore, issue a call to action for selection researchers in work and organizational psychology to commence a new and rigorous multidisciplinary programme of scientific study of DSPs.
Chapter
This preliminary study concerns the effects that human-humanoid interaction can have on human emotional states and behaviors during physical interaction. We used three cases in which people face three different types of physical interaction: with a neutral person, with the Nadine social robot, and with the person on whom Nadine was modelled, Professor Nadia Thalmann. To support our research, we used EEG recordings to capture the physiological signals derived from the brain during each interaction, audio recordings to compare speech features, and a questionnaire to provide complementary psychometric data. Our results mainly showed frontal theta oscillations while participants interacted with the humanoid, probably reflecting their higher cognitive effort, as well as differences in the occipital area of the brain and thus in visual attention mechanisms. Participants' concentration and motivation while interacting with the robot were higher, also indicating a higher level of interest. The outcome of this experiment can broaden the field of human-robot interaction, leading to more efficient, meaningful and natural human-robot interaction.
Article
Gamification has recently attracted increased attention among organizations and human resource professionals as a novel and promising concept for attracting and selecting prospective employees. In the current study, we explore the construct validity of a new gamified assessment method for employee selection that we developed following the situational judgement test (SJT) methodology. Our findings support the applicability of game elements to a traditional form of assessment built to assess candidates' soft skills. Specifically, our study contributes to research on gamification and employee selection by exploring the construct validity of a gamified assessment method, indicating that the psychometric properties of SJTs and their transformation into a gamified assessment are a suitable avenue for future research and practice in this field.
Conference Paper
In this paper, we present our early findings on the use of a social robot during the formal interview process. We implemented a mechanism that enabled the robot to ask context-aware questions based on the applicant's resume or LinkedIn profile. We then conducted an exploratory between-subjects evaluation with 8 adult participants to compare the duration of applicants' responses given to the NAO robot and to a human interviewer. Our results showed no significant difference between participants' responses to the human and robotic interviewers.
Article
This article introduces a new communicational format called Fair Proxy Communication: a specific communicational setting in which a teleoperated robot is used to remove perceptual cues of implicit biases in order to increase the perceived fairness of decision-related communications. The envisaged practical applications of Fair Proxy Communication range from assessment communication (e.g. job interviews at Affirmative Action Employers) to conflict mediation, negotiation and other communication scenarios that require direct dialogue but where decision-making may be negatively affected by implicit social biases. The theoretical significance of Fair Proxy Communication pertains primarily to the investigation of 'mechanisms' of implicit social cognition in neuropsychology, but this new communicational format also raises many research questions for the fields of organisational psychology, negotiation and conflict research, and business ethics. Fair Proxy Communication is currently being investigated by an interdisciplinary research team at Aarhus University, Denmark.
Article
Both robotic and virtual agents could one day be equipped with the social abilities necessary for effective and natural interaction with human beings. Although virtual agents are relatively inexpensive and flexible, they lack the physical embodiment present in robotic agents. Surprisingly, the role of embodiment and physical presence in enriching human-robot interaction is still unclear. This paper explores how these unique features of robotic agents influence three major elements of human-robot face-to-face communication, namely the perception of visual speech, facial expression, and eye gaze. We used a quantitative approach to disentangle the role of embodiment from the physical presence of a social robot, called Ryan, with three different agents (robot, telepresent robot, and virtual agent), as well as with an actual human. We used a robot with a retro-projected face for this study, since the same animation from a virtual agent could be projected onto this robotic face, thus allowing comparison of the virtual agent's animation behaviors with both the telepresent and the physically present robotic agents. The results of our studies indicate that eye gaze and certain facial expressions are perceived more accurately when the embodied agent is physically present than when it is displayed on a 2D screen, either as a telepresent or a virtual agent. Conversely, we find no evidence that either the embodiment or the presence of the robot improves the perception of visual speech, regardless of syntactic or semantic cues. Comparison of our findings with previous studies also indicates that the role of embodiment and presence should not be generalized without considering the limitations of the embodied agents.
Article
Technologically advanced selection procedures are entering the market at exponential rates. The current study tested two previously held assumptions: (a) providing applicants with procedural information (i.e., making the procedure more transparent and justifying the use of this procedure) on novel technologies for personnel selection would positively impact applicant reactions, and (b) technologically advanced procedures might differentially affect applicants with different levels of computer experience. In a 2 (computer science students, other students) × 2 (low information, high information) design, 120 participants watched a video showing a technologically advanced selection procedure (i.e., an interview with a virtual character responding and adapting to applicants’ nonverbal behavior). Results showed that computer experience did not affect applicant reactions. Information had a positive indirect effect on overall organizational attractiveness via open treatment and information known. This positive indirect effect was counterbalanced by a direct negative effect of information on overall organizational attractiveness. This study suggests that computer experience does not affect applicant reactions to novel technologies for personnel selection, and that organizations should be cautious about providing applicants with information when using technologically advanced procedures, as information can be a double-edged sword. Update: While not specifically mentioned in the paper, it has implications for explainability and XAI research: providing people with more transparency can have simultaneously positive and negative effects on acceptance.
Article
The ability to modulate vocal sounds and generate speech is one of the features which set humans apart from other living beings. The human voice can be characterized by several attributes such as pitch, timbre, loudness, and vocal tone. It has often been observed that humans express their emotions by varying different vocal attributes during speech generation. Hence, deduction of human emotions through voice and speech analysis has a practical plausibility and could potentially be beneficial for improving human conversational and persuasion skills. This paper presents an algorithmic approach for detection and analysis of human emotions with the help of voice and speech processing. The proposed approach has been developed with the objective of incorporation with futuristic artificial intelligence systems for improving human-computer interactions.
Article
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
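For readers unfamiliar with the library, a minimal feature-extraction call looks like the following (module names match recent releases; older versions exposed the same routine as audioFeatureExtraction.stFeatureExtraction, so treat the exact names as version-dependent):

```python
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

# Read a mono WAV file and extract short-term features over 50 ms
# windows with a 25 ms step (window and step are given in samples).
fs, signal = audioBasicIO.read_audio_file("response.wav")
features, feature_names = ShortTermFeatures.feature_extraction(
    signal, fs, int(0.050 * fs), int(0.025 * fs))

# features is an (n_features, n_frames) matrix; feature_names lists
# entries such as zero-crossing rate, energy, spectral centroid, MFCCs.
print(features.shape, feature_names[:5])
```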
Article
The use of speech for robots to communicate with their human users has been facilitated by improvements in speech synthesis technology. Now that the intelligibility of synthetic speech has advanced to the point that speech synthesizers are a widely accepted and used technology, what are other aspects of speech synthesis that can be used to improve the quality of human-robot interaction? The communication of emotions through changes in vocal prosody is one way to make synthesized speech sound more natural. This article reviews the use of vocal prosody to convey emotions between humans, the use of vocal prosody by agents and avatars to convey emotions to their human users, and previous work within the human–robot interaction (HRI) community addressing the use of vocal prosody in robot speech. The goals of this article are (1) to highlight the ability and importance of using vocal prosody to convey emotions within robot speech and (2) to identify experimental design issues when using emotional robot speech in user studies.
Article
A new procedure for automatic diagnosis of pathologies of the larynx is presented. The new procedure has the advantage over other traditional techniques of being non-invasive, inexpensive and objective. Algorithms are presented for determining the jitter parameters Jitta, Jitt, RAP and ppq5, and the shimmer parameters Shim, ShdB, apq3 and apq5. The algorithm developed and implemented for determining the HNR (Harmonics-to-Noise Ratio) is also presented. The developed tools allow a diagnosis that indicates whether or not the voice is pathological.
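The abstract does not reproduce the formulas, but the local jitter and shimmer parameters it names have standard definitions (as in MDVP/Praat) as simple statistics over consecutive pitch periods and their peak amplitudes. A minimal sketch, assuming the period durations and amplitudes have already been extracted from the voice signal:

```python
import numpy as np

def jitter_shimmer(periods_s, amplitudes):
    """Local jitter/shimmer over consecutive pitch periods.

    periods_s:  pitch period durations in seconds, in temporal order
    amplitudes: corresponding peak amplitudes, one per period
    """
    T = np.asarray(periods_s, dtype=float)
    A = np.asarray(amplitudes, dtype=float)

    # Jitta: mean absolute difference between consecutive periods (seconds);
    # Jitt: the same, normalized by the mean period, in percent.
    jitta = np.mean(np.abs(np.diff(T)))
    jitt = 100.0 * jitta / np.mean(T)

    # Shim: mean absolute consecutive amplitude difference, relative (%);
    # ShdB: mean absolute consecutive amplitude ratio, in decibels.
    shim = 100.0 * np.mean(np.abs(np.diff(A))) / np.mean(A)
    shdb = np.mean(np.abs(20.0 * np.log10(A[1:] / A[:-1])))

    # RAP, ppq5, apq3 and apq5 are analogous, but compare each value to
    # a 3- or 5-point local average instead of its immediate neighbour.
    return {"Jitta_s": jitta, "Jitt_pct": jitt, "Shim_pct": shim, "ShdB": shdb}
```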
Conference Paper
The TARDIS project aims to build a scenario-based serious-game simulation platform for NEETs and job-inclusion associations that supports social training and coaching in the context of job interviews. This paper presents the general architecture of the TARDIS job interview simulator and the serious-game paradigm that we are developing.
Conference Paper
This paper presents an approach that makes use of a virtual character and social signal processing techniques to create an immersive job interview simulation environment. In this environment, the virtual character plays the role of a recruiter which reacts and adapts to the user's behavior thanks to a component for the automatic recognition of social cues (conscious or unconscious behavioral patterns). The social cues pertinent to job interviews have been identified using a knowledge elicitation study with real job seekers. Finally, we present two user studies to investigate the feasibility of the proposed approach as well as the impact of such a system on users.
Article
This study compared shy and nonshy Internet users in online and offline contexts on the Revised Cheek and Buss Shyness Scale (RCBSS; Cheek, 1983) and other measures intended to gauge 4 underlying aspects of shyness: rejection sensitivity, initiating relationships, self-disclosure, and providing emotional support and advice. University students (N = 134; 76% female) participated in a Web-based survey that investigated the impact of computer-mediated communications (CMC) on shyness level. Results show that individuals classified as shy or nonshy on the basis of their scores on the RCBSS in the offline context were also significantly different on offline measures of rejection sensitivity, initiating relationships, and self-disclosure. However, they were not significantly different on these same 3 domains in the online context. The results are interpreted as support for a self-presentation theory account that the absence of visual and auditory cues online reduces shy individuals' experience of detecting negative or inhibitory feedback cues from others. We discuss positive and negative aspects of use of CMC by shy individuals.
Article
The use of Praat (open-source acoustic analysis software) to provide feedback for learning vowels and diphthongs was described by Brett (2004 - ReCALL 16:103-113). However, his conclusion, and that of Setter and Jenkins (2005 - Language Teaching 38:1-17), was that formant plot interpretation using Praat's interface is too complex for learners. In this paper, classroom data elucidates the use of Praat for measurements such as the duration, pitch, and intensity of sounds. It is shown that a combination of Praat and the Choice activity in Moodle (an open-source Learning Management System) provides a method of pinpointing the weaknesses of each student, thus helping the teacher to make efficient use of class time.
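As an aside (not from the paper), the same duration, pitch, and intensity measurements can be scripted instead of read off the Praat GUI, for example via parselmouth, a Python interface that exposes Praat's own commands; the file name below is hypothetical:

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("student_vowel.wav")   # any mono recording

duration = call(snd, "Get total duration")     # seconds
pitch = call(snd, "To Pitch", 0.0, 75, 600)    # time step, F0 floor/ceiling (Hz)
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
intensity = snd.to_intensity()
mean_db = call(intensity, "Get mean", 0, 0, "energy")

print(f"duration {duration:.2f} s, F0 {mean_f0:.1f} Hz, intensity {mean_db:.1f} dB")
```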
Article
Three studies examined the notion that computer mediated communication (CMC) can be characterised by high levels of self-disclosure. In Study one, significantly higher levels of spontaneous self-disclosure were found in computer-mediated compared to face-to-face discussions. Study two examined the role of visual anonymity in encouraging self-disclosure during CMC. Visually anonymous participants disclosed significantly more information about themselves than non-visually anonymous participants. In Study three, private and public self-awareness were independently manipulated, using videoconferencing cameras and accountability cues, to create a 2x2 design (public self-awareness: high vs. low; private self-awareness: high vs. low). It was found that heightened private self-awareness, when combined with reduced public self-awareness, was associated with significantly higher levels of spontaneous self-disclosure during computer-mediated communication.
Article
Studies have established that normative data are necessary for acoustic analysis. The aim of the present study is to standardize fundamental frequency (fo), jitter, shimmer and harmonics-to-noise ratio (HNR) measures for young adults with normal voice. Participants were 20 males and 20 females, between 20 and 45 years of age, without signs or symptoms of vocal problems; the vowels /a/ and /é/ were recorded and analyzed with the Kay Elemetrics CSL-4300. Average measures were:

          fo (Hz)            jitter (%)     shimmer (dB)   HNR (dB)
          /a/       /é/      /a/    /é/     /a/    /é/     /a/     /é/
Females   205.82    206.56   0.62   0.59    0.22   0.19    10.90   11.04
Males     119.84    118.92   0.49   0.50    0.22   0.21    9.56    9.63

Both fo and HNR were significantly higher for females than for males. Our results differ from the literature; it is therefore important to standardize the program in use.
Article
Although there are numerous potential benefits to diversity in work groups, converging dimensions of diversity often prevent groups from exploiting this potential. In a study of heterogeneous decision-making groups, the authors examined whether the disruptive effects of diversity faultlines can be overcome by convincing groups of the value of diversity. Groups were persuaded either of the value of diversity or the value of similarity for group performance, and they were provided with either homogeneous or heterogeneous information. As expected, informationally diverse groups performed better when they held pro-diversity rather than pro-similarity beliefs, whereas the performance of informationally homogeneous groups was unaffected by diversity beliefs. This effect was mediated by group-level information elaboration. Implications for diversity management in organizations are discussed.
Chapter
We demonstrate a job interview dialogue with the autonomous android ERICA, which plays the role of the interviewer. Conventional job interview dialogue systems ask only pre-defined questions, whereas ERICA's job interview system generates follow-up questions based on the interviewee's response on the fly. The follow-up questions take two approaches: selection-based and keyword-based. The first type is selected from a pre-defined question set and can be used in many cases. The second type is built around a keyword extracted from the interviewee's response and digs into that response dynamically. These follow-up questions contribute to realizing natural and trained dialogue.
Chapter
Humans are poor at detecting deception even under the best conditions. The need for a decision support system that can serve as a baseline for data-driven decision making is obvious: such a system is not biased the way humans are, whose often subconscious biases can impair judgment. One system for helping people at border security (CBP) is the AVATAR, an Embodied Conversational Agent (ECA) implemented as a self-service kiosk. Our research uses the AVATAR as the baseline, and we plan to augment the automated credibility assessment task that the AVATAR performs using a humanoid robot, taking advantage of humanoid robots' capability for realistic dialogue and nonverbal gesturing. We are also capturing data from various sensors, such as microphones, cameras and an eye tracker, that will help in model building and testing for the task of deception detection. We plan to carry out an experiment comparing the results of an interview with the AVATAR and an interview with a humanoid robot. Such a comparative analysis has never been done before, hence we are very eager to conduct this social experiment.
Article
Job interviews are significant barriers for individuals with autism spectrum disorder because these individuals lack good nonverbal communication skills. We developed a job interview training program using an android robot. The program consists of the following three stages: (1) tele-operating an android robot and conversing with others through the android robot, (2) a face-to-face mock job interview with the android robot, and (3) feedback based on the mock job interview and nonverbal communication exercises using the android robot. The participants were randomly assigned to two groups: one group received a combined intervention of interview guidance by teachers and the job interview training program using the android robot (n = 13), and the other group received interview guidance by teachers alone (n = 16). Before and after the intervention, the participants in both groups underwent a mock job interview with a human interviewer, who provided outcome measurements of nonverbal communication, self-confidence, and salivary cortisol. After the training sessions, the participants who received the combined intervention displayed improved nonverbal communication skills and self-confidence and had significantly lower levels of salivary cortisol than the participants who only received interview guidance by teachers. The job interview training program using an android robot improved various measures of job interview skills in individuals with autism spectrum disorder.
Preprint
The study of emotional expression in the voice has typically relied on acted portrayals of emotions, with the majority of studies focussing on the perception of emotion in such portrayals. The acoustic characteristics of natural, often involuntary encoding of emotion in the voice, and the mechanisms responsible for such vocal modulation, have received little attention from researchers. The small number of studies on natural or induced emotional speech have failed to identify acoustic patterns specific to different emotions. Instead, most acoustic changes measured have been explainable as resulting from the level of physiological arousal characteristic of different emotions. Thus measurements of the acoustic properties of angry, happy and fearful speech have been similar, corresponding to their similarly elevated arousal levels. An opposing view, the most elaborate description of which was given by Scherer (1986), is that emotions affect the acoustic characteristics of speech along a number of dimensions, not only arousal. The lack of empirical data supporting such a theory has been blamed on the lack of sophistication of acoustic analyses in the little research that has been done. By inducing real emotional states in the laboratory, using a variety of computer-administered induction methods, this thesis aimed to test the two opposing accounts of how emotion affects the voice. The induction methods were designed to manipulate some of the principal dimensions along which, according to multidimensional theories, emotional speech is expected to vary. A set of acoustic parameters selected to capture temporal, fundamental frequency (F0), intensity and spectral characteristics of the voice was extracted from speech recordings. In addition, electroglottal and physiological measurements were made in parallel with the speech recordings, in an effort to determine the mechanisms underlying the measured acoustic changes. The results indicate that a single arousal dimension cannot adequately describe a range of emotional vocal changes, and lend weight to a theory of multidimensional emotional response patterning as suggested by Scherer and others. The correlations between physiological and acoustic measures, although small, indicate that variations in sympathetic autonomic arousal do correspond to changes in F0 level and vocal fold dynamics as indicated by electroglottography. Changes to spectral properties, speech fluency, and F0 dynamics, however, cannot be fully explained in terms of sympathetic arousal, and are probably related as well to cognitive processes involved in speech planning.
Article
In this paper, we focus on experience-based role play with virtual agents to provide young adults at the risk of exclusion with social skill training. We present a scenario-based serious game simulation platform. It comes with a social signal interpretation component, a scripted and autonomous agent dialog and social interaction behavior model, and an engine for 3D rendering of life-like virtual social agents in a virtual environment. We show how two training systems developed on the basis of this simulation platform can be used to educate people in showing appropriate socio-emotive reactions in job interviews. Furthermore, we give an overview of four conducted studies investigating the effect of the agents' portrayed personality and the appearance of the environment on the players' perception of the characters and the learning experience.
Conference Paper
This paper presents a study investigating perceptions of a human versus NAO robot physician conducting a simulated medical interview with undergraduate students. Results show that both human and NAO doctor were perceived to be credible and produced positive patient affect. However, the human doctor received significantly higher ratings when compared to the NAO. This pattern of results was fully explained by the higher social presence attributed to the human physician.
Article
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.
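To make the fusion distinction the review draws concrete, here is a toy sketch (not from the paper) of the two simplest strategies: feature-level (early) fusion concatenates modality features before a single classifier, while decision-level (late) fusion averages the predictions of independently trained per-modality classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def early_fusion_fit(X_audio, X_visual, X_text, y):
    # Feature-level fusion: concatenate per-modality feature matrices
    # and train one classifier on the joint representation.
    X = np.hstack([X_audio, X_visual, X_text])
    return LogisticRegression(max_iter=1000).fit(X, y)

def late_fusion_predict_proba(models, X_by_modality):
    # Decision-level fusion: average the class probabilities produced
    # by unimodal classifiers trained separately on each modality.
    probs = [m.predict_proba(X) for m, X in zip(models, X_by_modality)]
    return np.mean(probs, axis=0)
```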
Chapter
Emotion is a multimodal entity. It can be recognized by analyzing brain and speech signals generated by emotions. This chapter reports on methods of acquiring brain and speech signals using noninvasive techniques, and describes in detail the RMS EEG 32-channel electroencephalography (EEG) machine which is commonly used in medical and research applications. The chapter presents key aspects of EEG imaging technology. Speech signals can be acquired and analyzed by many commercially available types of equipment, including Computerized Speech Laboratory (CSL) which is described in the chapter. The chapter serves as a resource for the fundamentals of EEG and speech processing equipment.
Conference Paper
Children that have a disability are up to four times more likely to be victims of abuse than typically developing children. However, the number of cases that result in prosecution is relatively low, and one of the factors influencing this low prosecution rate is communication difficulties. Our previous research has shown that typically developing children respond to a robotic interviewer very similarly to a human interviewer. In this paper we conduct a follow-up study investigating the possibility of Robot-Mediated Interviews with children that have various special needs. In a case study we investigated how 5 children with special needs aged 9 to 11 responded to the humanoid robot KASPAR compared to a human in an interview scenario. The measures used in this study include duration analysis of responses, detailed analysis of transcribed data, questionnaire responses and data from engagement coding. The main questions in the interviews varied in difficulty and focused on the theme of animals and pets. The results from quantitative data analysis reveal that the children interacted with KASPAR in a very similar manner to how they interacted with the human interviewer, providing both interviewers with similar information and amounts of information regardless of question difficulty. However, qualitative analysis suggests that some children may have been more engaged with the robotic interviewer.
Article
Degree of hoarseness can be evaluated by judging the extent to which noise replaces the harmonic structure in the spectrogram of a sustained vowel. However, this visual method is subjective. The present study was undertaken to develop the harmonics‐to‐noise (H/N) ratio as an objective and quantitative evaluation of the degree of hoarseness. The computation is conceptually straightforward; 50 consecutive pitch periods of a sustained vowel /a/ are averaged; H is the energy of the averaged waveform, while N is the mean energy of the differences between the individual periods and the averaged waveform. Recordings of 42 normal voices and 41 samples with varying degrees of hoarseness were analyzed. Two experts rated the spectrogram of each voice sample, based on the amount of noise relative to that of the harmonic component. The results showed a highly significant agreement (the rank correlation coefficient = 0.849) between H/N calculations and the subjective evaluations of the spectrograms. The H/N ratio also proved useful in quantitatively assessing the results of treatment for hoarseness.
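The computation described (average 50 consecutive pitch periods; H is the energy of the averaged waveform, N the mean energy of each period's deviation from it) translates directly into code. A minimal sketch, assuming the sustained vowel has already been segmented into equal-length, time-aligned pitch periods:

```python
import numpy as np

def hn_ratio_db(periods):
    """Yumoto-style harmonics-to-noise (H/N) ratio.

    periods: array of shape (n_periods, samples_per_period) holding
             consecutive, time-aligned pitch periods of a sustained vowel.
    """
    periods = np.asarray(periods, dtype=float)
    avg = periods.mean(axis=0)                          # averaged waveform
    H = np.sum(avg ** 2)                                # harmonic energy
    N = np.mean(np.sum((periods - avg) ** 2, axis=1))   # mean noise energy
    return 10.0 * np.log10(H / N)
```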
Article
The effects of physical embodiment and physical presence were explored through a survey of 33 experimental works comparing how people interacted with physical robots and virtual agents. A qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally-displayed on a screen either as a video feed of the same robot or as a virtual character analogue; robots also led to better user performance when they were collocated as opposed to shown via video on a screen. However, participants did not respond differently to physical robots and virtual agents when both were displayed digitally on a screen – suggesting that physical presence, rather than physical embodiment, characterizes people's responses to social robots. Implications for understanding psychological response to physical and virtual agents and for methodological design are discussed.
Article
Despite the growing use of communication technologies, such as videoconferencing, in recruiting and selection, there is little research examining whether these technologies influence interviewers' perceptions of candidates. The present field experiment analysed evaluations of 92 real job applicants who were randomly assigned either to be interviewed face-to-face (FTF) (N = 48) or using a desktop videoconference system (N = 44). The results show a bias in favour of the videoconference applicants relative to FTF applicants, F(1,91) = 7.35, p = .01. A significant interaction of interview structure and interviewer gender was also found, F(1,91) = 3.70, p < .05, with female interviewers using an unstructured interview rating applicants significantly higher than males or females using a structured interview. Interview structure did not significantly moderate the influence of interview medium on interviewers' evaluations of applicants. These findings highlight the need to be aware of potential biases resulting from the use of communication technologies in the hiring process.
Towards a socially adaptive virtual agent
  • B. Youssef, M. C. Atef, H. Jones, N. Sabouret, C. Pelachaud, M. Ochs
Meet Tengai, the job interview robot who won’t judge you
  • M. Savage