Stepwise robust regression (M-estimators) using object-subject classification as the criterion, student sample

Source publication
Article
We aim to investigate the nature of doubt regarding voice-based agents by referring to Piaget’s ontological object–subject classification “thing” and “person,” its associated equilibration processes, and influential factors of the situation, the user, and the agent. In two online surveys, we asked 853 and 435 participants, ranging from 17 to 65 yea...

Contexts in source publication

Context 1
... identify relevant influences on the classification (RQ2) we conducted a stepwise robust regression (SRR). The SRR indicated that more participants classified Alexa further away from the pole thing than Google Assistant when the previously experienced situations were held constant (Table 2, Model 2). However, the explained variance was low, and including the VBAs' agency eliminated this effect (Table 2, Model 4). ...
Context 2
... SRR indicated that more participants classified Alexa further away from the pole thing than Google Assistant when the previously experienced situations were held constant (Table 2, Model 2). However, the explained variance was low, and including the VBAs' agency eliminated this effect (Table 2, Model 4). Attributes of the Situation. ...
Context 3
... of the Situation. As predicted in H2, previous interactions affected the classification (Model 2, Table 2). If people owned the VBA, they classified it as more distant from the pole thing than people who presumably equilibrated their classification for the first time. ...
Context 4
... to our assumption (H3), the absence of other people increased the distancing from the mere thing scheme. However, the explained variance was still less than 1%, and the effects disappeared after including the VBAs' agency (Table 2, Model 4). ...
Context 5
... of the User. As age increased, the classification tended toward the thing scheme (Table 2, Model 3). In contrast, people with more affinity for technology tended to distance it from the mere thing scheme (Table 2, Model 3). ...
Context 6
... age increased, the classification tended toward the thing scheme (Table 2, Model 3). In contrast, people with more affinity for technology tended to distance it from the mere thing scheme (Table 2, Model 3). In line with K. M. Lee & Nass (2005), the gender of the user was not significant for the classification. ...
Context 7
... the explanatory power of the model remained small and the inclusion of the VBAs' attributes eliminated most effects. Whereas agency negated the effects of age, affinity for technology, emotional stability, and agreeableness (Table 2, Model 4), assumed mind negated the effect of extraversion (Table 2, Model 5). ...
Context 8
... of the Agent. Perceived agency contributed significantly to a classification distanced from the mere thing scheme and increased the explained variance to 22% (Table 2, Model 4). The VBAs' attentiveness (M = 3.3, SD = 1.29) and interdependence (M = 3.9, SD = 1.23) were rated moderately high, whereas their behavior was not perceived as very similar (M = 2.1, SD = 1.64). ...
Context 9
... effects held when mind attributes were added but weakened substantially. Perceived mind increased the explained variance to 34% (Table 2, Model 5). Although the similarity of mind was rated low, the VBAs' thinking (M = 2.12, SD = 1.61) was assumed to be more similar than their feeling (M = 1.60, ...
Context 10
... = 1.61). Consistently, distancing from the thing scheme was predicted to a higher degree by similarity in thinking than in feeling (Table 2, Model 5). In contrast, the VBAs' ability to understand the user was rated moderately high (M = 3.64, SD = 1.67) and affected the distancing from the thing scheme to a similar degree. ...
Context 11
... 5.803, p < .001. However, only the number of personality items and the VBAs' creativity predicted a classification distanced from the mere thing scheme (Table 2, Model 5). ...
Context 12
... focus on mediated variables that had a significant impact on the classification in Models 1 to 3, verified by the second sample (see below), and on mediating variables that significantly affected these. Results are reported in Table A2 (OSF). ...
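The stepwise robust regression in these excerpts rests on M-estimation, which downweights outlying residuals rather than discarding cases. The following is a minimal sketch of Huber M-estimation via iteratively reweighted least squares on simulated data; the predictor names ("age", "agency") and all numbers are invented stand-ins for the paper's model blocks, not a reproduction of the original analysis:

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    Xd = np.column_stack([np.ones(len(y)), X])    # add intercept
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]  # OLS starting values
    for _ in range(n_iter):
        r = y - Xd @ beta
        # Robust scale estimate from the median absolute deviation
        scale = np.median(np.abs(r - np.median(r))) / 0.6745
        # Huber weights: 1 inside the threshold, decaying outside
        w = np.minimum(1.0, c * scale / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Xd * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Simulated data with contaminated observations (outliers)
rng = np.random.default_rng(0)
n = 400
age, agency = rng.normal(size=n), rng.normal(size=n)
y = 3.0 - 0.2 * age + 0.6 * agency + rng.normal(scale=0.5, size=n)
y[:30] += 8.0  # outliers that would distort plain OLS

# Stepwise entry of predictor blocks, one model per block
for block in ([age], [age, agency]):
    beta = huber_irls(np.column_stack(block), y)
    print(np.round(beta, 2))
```

Because the Huber weights shrink the influence of the contaminated cases, the coefficients stay close to the simulated values even with the outliers present.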

Similar publications

Article
Deep learning has recently enabled machine learning (ML) to make unparalleled strides. It did so by confronting and, at least to a certain extent, successfully addressing the knowledge bottleneck that paralyzed ML and artificial intelligence for decades. The community is currently basking in deep learning's success, but a question...

Citations

... A second important question in the debate about gender-neutral, ambiguous, or androgynous VBA voices is whether people actually form different stereotypes of them and react to them differently than acoustically male or female VBAs. On this second question, the ambiguous voice was situated between the male and female voices on perceived instrumentality (a masculine stereotype) and expressiveness (a feminine stereotype), which "strengthens previous reflections on the emergence of novel heuristics regarding artificial agents (e.g., Etzrodt & Engesser, 2021;Gambino et al., 2020;Guzman, 2020)" (p. 64). ...
... Continuing with a provocative and valuable interpretation of the finding that respondent gender was not influential to perceptions of VBA gender, Mooshammer and Etzrodt find additional support for the operation of a VBA-specific gender heuristic in the lack of impact of the theoretically indicated over-exclusion of the ambiguous voice from the participants' own gender. "If the VBA was not perceived as a gendered person, but as a gendered 'personified thing' (Etzrodt & Engesser, 2021) or 'social thing' (Guzman, 2015), the VBA is already part of an outgroup, independent of its gender" (p. ...
Article
In this introduction to the fifth volume of the journal Human-Machine Communication, we present and discuss the five articles focusing on gender and human-machine communication. In this essay, we will analyze the theme of gender, including how this notion has historically and politically been set up, and for what reasons. We will start by considering gender in in-person communication, then we will progress to consider what happens to gender when it is mediated by the most important ICTs that preceded HMC: the telephone, mobile phone, and computer-mediated communication (CMC). We outline the historical framework necessary to analyze the last section of the essay, which focuses on gender in HMC. In the conclusion, we will set up some final sociological and political reflections on the social meaning of these technologies for gender and specifically for women.
... Although the impression of something lies in a pre-cognitive dimension, it is essential to explore it since it "often shapes our final appraisal of that object" (de Graaf & Allouch, 2017, p. 28). These findings, which are in line with the studies carried out by Fortunati et al. (2021) and cited above, seem to point mainly to the digital world in which Alexa lives, while, for example, Etzrodt and Engesser (2021) found that VBAs were conceptualized as "personified things. " However, Etzrodt and Engesser's findings and the current study may be the result of an artifact of methodology. ...
... Respondents did not consider Alexa as a mere gadget but as the outcome of the most innovative high-tech industry, with one foot in the future. Respondents did not report any hybridization or uncertain boundaries between Alexa and humans, although an increasing amount of scientific literature reflects the blurring boundaries between humans and machines (Etzrodt & Engesser, 2021;Weidmüller, 2022). Decidedly, these first three categories accounted for 75.1% of the words that form the core of the social representations of Alexa. ...
Article
Mainly, the scholarly debate on Alexa has focused on sexist/anti-woman gender representations in the everyday life of many families, on a cluster of themes such as privacy, insecurity, and trust, and on the world of education and health. This paper takes another stance and explores via online survey methodology how university student respondents in two countries (the United States, n = 333; and Italy, n = 322) perceive Alexa’s image and gender, what they expect from this voice-based assistant, and how they would like Alexa to be. Results of a free association exercise showed that Alexa’s image was scarcely embodied or explicitly gendered. Rather, Alexa was associated with a distinct category of being—the VBA, virtual assistant, or digital helper—with which one talks, and which possesses praiseworthy technical and social traits. Expectations of Alexa and desires regarding Alexa’s ideal performance are presented and compared across the two country samples.
... This indicates that some kind of ambiguity was perceived nonetheless-especially compared to the gender perception for the distinctly gendered voices. In accordance with Piaget (1997), it could be interpreted as an evoked equilibration process due to the uncertainty in gender ascription: Therefore-similar to other ambiguous objects (Etzrodt & Engesser, 2021)-when confronted with ambiguity, people most of the time use the less exhausting strategy of accommodating the voice by modifying an existing category stemming from ...
... Besides the perceived topic gender, age appeared to be the only influential factor, indicating that older people perceived the voice as more ambiguous, whereas younger people tended toward a more male assessment on average. A reason for this might be availability heuristics as described in the theory section: At increasing age, people have had more chances to encounter voices with acoustic parameters that do not fit into the prevailing genderism which might have led to the accommodation (Etzrodt & Engesser, 2021) of their gender scheme, enabling them to classify ambiguity. ...
... Hence, it is plausible that VBAs' application as task-fulfilling assistants in everyday life and their artificiality cause this more instrumental bias. This strengthens previous reflections on the emergence of novel heuristics regarding artificial agents (e.g., Etzrodt & Engesser, 2021; Gambino et al., 2020; Guzman, 2020). If VBAs now have their own heuristics as this indicates, traditional gender stereotypes might not be as relevant for their classification anymore, causing the lack of stereotype effects. ...
Article
Recently emerging synthetic acoustically gender-ambiguous voices could contribute to dissolving the still prevailing genderism. Yet, are we indeed perceiving these voices as “unassignable”? Or are we trying to assimilate them into existing genders? To investigate the perceived ambiguity, we conducted an explorative 3 (male, female, ambiguous voice) × 3 (male, female, ambiguous topic) experiment. We found that, although participants perceived the gender-ambiguous voice as ambiguous, they used a profoundly wide range of the scale, indicating tendencies toward a gender. We uncovered a mild dissolve of gender roles. Neither the listener’s gender nor the personal gender stereotypes impacted the perception. However, the perceived topic gender indicated the perceived voice gender, and younger people tended to perceive a more male-like gender.
... On the one hand, there has been a proliferation of HMC as an object of investigation. Digital interlocutors, such as Artificial Intelligence (e.g., Gunkel 2020; Guzman and Lewis 2020; Schäfer and Wessler 2020; Sundar and Lee 2022), avatars (e.g., Banks and Bowman 2016), chatbots (e.g., Araujo 2018;Edwards et al. 2014;Brandtzaeg and Følstad 2017;Gehl and Bakardjieva 2017), voice-based assistants (e.g., Etzrodt and Engesser 2021;Humphry and Chesher 2021;Natale and Cooke 2021), and social robots (e.g., Hepp 2020; Fortunati 2018; Peter and Kühne 2018) are on the rise. As a result, we are witnessing a profound change, in which communication through technologies is extended by communication with technologies (cf. ...
... Thus, they inhabit crucial primary cues (Lombard and Xu, 2021) that have a high potential to evoke social responses. Previous studies found various (social) reactions to and relationships with different technologies, including computers in general (e.g., Turkle, 1984/2005; Reeves and Nass, 1996), digital agents (e.g., Sundar, 2008; Skjuve et al., 2021), talking computers (e.g., Burgoon et al., 1999; Nass and Brave, 2005), social robots (e.g., Edwards et al., 2019; Laban et al., 2021), and commercial VPAs (e.g., Etzrodt and Engesser, 2021; Guzman, 2015; Shani et al., 2021; Wienrich et al., 2021; Chung et al., 2021). ...
... However, the situations in which more than one person is involved with the artificial agent are becoming increasingly important, especially when technologies integrate into the users' everyday environment. Recent studies on robots (e.g., Diederich et al., 2019;Thompson and Gillan, 2010;Fortunati et al., 2020) and commercial VPAs (Etzrodt and Engesser, 2021;Lopatovska and Williams, 2018;Porcheron et al., 2018;Purington et al., 2017;Raveh et al., 2019) indicate that the social situation regarding how many people interact with the agent alters the interaction with and the perception of the agent as well as how people might relate to it. However, their results remain vague and are primarily collateral findings. ...
... In contrast, Etzrodt and Engesser (2021) uncovered that people who had primarily prior dyadic interactions with a VPA rated the VPA's self-similarity higher than those with prior multi-person interactions, which in turn moved the classification towards subjectivity. The authors suggested that the lower self-similarity in triads may originate from the presence of the second person, emphasizing the difference between the first person and the VPA in terms of subjectivity by serving as a blueprint for subjects and subject-like behavior. ...
Article
As commercial voice-based personal agents (VPAs) like Google Assistant increasingly penetrate people’s private social habitats, sometimes involving more than one user, these social situations are gaining importance for how people define the human-machine relationship (HMR). The paper contributes to the understanding of the situation’s impact on HMR on a theoretical and methodological level. First, Georg Simmel’s theory on the “Quantitative determination of the group” is applied to the HMR. A 2x1 between-subjects quasi-experiment (N = 100) contrasted the defined HMR in dyadic social situations (one human interacting with the Google Assistant) to the defined HMR in triadic social situations (two humans interacting with the Google Assistant). Second, the method of central tendency analysis was extended by the more robust and informative comparison of distributions and quantiles using the two-sample Kolmogorov–Smirnov test and the shift function. The results show that the triadic situation, compared to the dyadic one, led to a more confounded categorization of the VPA’s subjecthood in terms of self-similarity, while simultaneously strengthening a definition of the relationship that resembled those of a business relation through lowered intimacy and feedback, mainly grounded in a more realistic definition of the agent’s inability to understand affects. In contrast to Simmel’s inter-human theory the relationship’s dimension of reciprocity and commitment remained unaffected by the situation. The paper discusses how these effects and non-effects of the triad could be explained by drawing on Simmel as well as peculiarities of HMR and methodology. Finally, it offers preliminary hypotheses about the situation’s implications for the HMR and outlines avenues for future research. Free download until 09/30/2022 at: https://authors.elsevier.com/a/1fZVe3pfaRpm5N
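The distribution-level comparison this abstract describes (two-sample Kolmogorov–Smirnov test plus shift function) can be sketched briefly. Note the assumptions: the data below are invented ratings, not the study's, and the published shift-function method typically uses Harrell–Davis quantile estimators with bootstrap confidence intervals, whereas plain sample deciles are used here for simplicity:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Invented rating-scale data for dyadic vs. triadic situations
dyad = np.clip(rng.normal(3.6, 0.9, size=50), 1, 5)
triad = np.clip(rng.normal(3.0, 1.2, size=50), 1, 5)

# Two-sample KS test: compares the full empirical distributions,
# not just a measure of central tendency.
res = ks_2samp(dyad, triad)
print(f"KS D = {res.statistic:.3f}, p = {res.pvalue:.3f}")

# Shift function: quantile-by-quantile difference across the deciles,
# showing *where* in the distribution the two groups diverge.
deciles = np.arange(0.1, 1.0, 0.1)
shift = np.quantile(dyad, deciles) - np.quantile(triad, deciles)
for q, d in zip(deciles, shift):
    print(f"q{q:.1f}: {d:+.2f}")
```

The appeal of the shift function over a plain mean comparison is visible here: two groups can differ mainly in their tails (e.g., more extreme low ratings in the triadic condition) while their medians stay similar.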
... We know in fact that names as well as pronouns and adjectives are "doing" words, important in the social construction of gender (Pilcher, 2017). In our case, the general consistency between ascriptions of human personhood and subsequent gender pronoun use may be complicated, because virtual agents blur the lines between person and thing, subject and object (Etzrodt & Engesser, 2021). For instance, Purington et al. (2017, May) found that more than half of the Amazon user reviews of Echo/Alexa they analyzed included the personified name "Alexa" but that most reviewers employed object pronouns instead of fully personifying Alexa as a subject. ...
... Or do VBAs represent a new kind of entity? As Etzrodt and Engesser (2021) found in their research, most participants classified VBAs as "personified things," which reflects the hybridization of historically discrete ontological categories. ...
Article
The gendering of machine agents risks further complicating the social framework in ways that reverberate to humans' relationships and identities. This paper explores how users perceive the gender and status of Amazon's Alexa. We argue that voice-based assistants, particularly those with feminine names and voices, may contribute to reinforcing a retrieval ideology of the feminine as the place of social subordination and contempt. We conducted an online survey of women and men in the US (n = 322) and Italy (n = 333). Most (80%) identified Alexa as “female.” However, there was a lack of concordance between the gender respondents ascribed to Alexa and their spontaneous use of pronouns in writing. In terms of status, over half of the sample perceived Alexa as an inferior communicator (for Italian respondents, inferiority was associated with perceiving Alexa as female), while over one-third rated Alexa as equal or superior to humans, evidencing the change happening in the ontological order. The few respondents who noticed gender differences in people's interactions with Alexa perceived women to be more courteous, serious and accommodating in their use. Respondent gender and culture comparisons are presented, implications of the findings are discussed, and future research is proposed to reduce harmful impact.
... This paper provokes the invitation to further explore this theme from a psychological and sociological perspective. The HMC community has already investigated and discussed contemporary ontological boundaries between humans, animals, and machines at a qualitative level (Edwards, 2018;Etzrodt & Engesser, 2021; Guzman, 2020), but there is also the need to go for representative surveys capable of capturing whether the ontological frameworks that affect people's attitudes and behaviors are changing and, if so, in which directions. As a scientific community, we should learn to live with the symptom of which Gunkel talks in his essay and to cultivate it, to understand the strategies with which individuals, groups, and societies cope with the permeation of machines into the social body. ...
... Whereas human-centered definitions of trustworthiness highlight dimensions of integrity, competence, and benevolence (or character, competence, and caring), machine-centered models stress reliability, functionality, and helpfulness. This opens a question as to which of these approaches to assessing trustworthiness (human, machine, or hybrid) best applies to the emergent ontology of "personified things" (Etzrodt & Engesser, 2021). Results of an online survey of German university students (N = 853) and staff (N = 435) demonstrated acceptable model fit for both human and hybrid trustworthiness models, but insufficient fit for the machine model; further, fit was moderated by prior experience with VBAs. ...
Article
In this introduction to the fourth volume of the journal Human-Machine Communication, we present and discuss the nine articles selected for inclusion. In this essay, we aim to frame some crucial psychological, sociological, and cultural aspects of this field of research. In particular, we situate the current scholarship from a historical perspective by (a) discussing humanity’s long walk with hybridity and otherness, at both the cultural and individual development levels, (b) considering how the organization of capital, labor, and gender relations serve as fundamental context for understanding HMC in the present day, and (c) contextualizing the development of the HMC field in light of seismic, contemporary shifts in society and the social sciences. We call on the community of researchers, students, and practitioners to ask the big questions, to ground research and theory in the past as well as the real and unfolding lifeworld of human-machine communication (including what HMC may become), and to claim a seat at the table during the earliest phases in design, testing, implementation, law and policy, and ethics to intervene for social good.
... P. Edwards, 2018;Gambino et al., 2020;Garcia et al., 2018;Nass & Moon, 2000;Reeves & Nass, 1996). In fact, humans often perceive these technologies as "social things" (Guzman, 2015) or "personified things" (Etzrodt & Engesser, 2021), ascribing both human and machine characteristics to them. ...
... As aforementioned, research has shown that VBAs are perceptual hybrids. While undeniably technological devices, their human-like CUI causes uncertainty about whether VBAs are ontologically human or machine and, consequently, people attribute both human and machine characteristics to them (e.g., Etzrodt & Engesser, 2021). Additionally, this causes an overlap between the attributed role in the communication process as source or channel (e.g., Guzman, 2019). ...
Article
This study investigates how people assess the trustworthiness of perceptually hybrid communicative technologies such as voice-based assistants (VBAs). VBAs are often perceived as hybrids between human and machine, which challenges previously distinct definitions of human and machine trustworthiness. Thus, this study explores how the two trustworthiness models can be combined in a hybrid trustworthiness model, which model (human, hybrid, or machine) is most applicable to examine VBA trustworthiness, and whether this differs between respondents with different levels of prior experience with VBAs. Results from two surveys revealed that, overall, the human model exhibited the best model fit; however, the hybrid model also showed acceptable model fit as prior experience increased. Findings are discussed considering the ongoing discourse to establish adequate measures for HMC research.
... Furthermore, the extent to which individual variation by humans' social and cognitive characteristics shapes speech adaptation to voice-AI is a promising area for future research. Prior work has shown variation in how people perceive and personify technological agents, such as robots (Hinz et al., 2019) and voice-AI (Cohn, Raveh, et al., 2020; Etzrodt & Engesser, 2021). Recently, some work has revealed differences in speech alignment toward voice-AI by speaker age (e.g., older vs. college-age adults in Zellou, Cohn, & Ferenc Segedin, 2021) and cognitive processing style (e.g., autistic-like traits in Snyder et al., 2019), suggesting these differences could shape voice-AI speech adaptation as well. ...
Article
Millions of people engage in spoken interactions with voice activated artificially intelligent (voice-AI) systems in their everyday lives. This study explores whether speakers have a voice-AI-specific register, relative to their speech toward an adult human. Furthermore, this study tests if speakers have targeted error correction strategies for voice-AI and human interlocutors. In a pseudo-interactive task with pre-recorded Siri and human voices, participants produced target words in sentences. In each turn, following an initial production and feedback from the interlocutor, participants repeated the sentence in one of three response types: after correct word identification, a coda error, or a vowel error made by the interlocutor. Across two studies, the rate of comprehension errors made by both interlocutors was varied (lower vs. higher error rate). Register differences are found: participants speak louder, with a lower mean f0, and with a smaller f0 range in Siri-DS. Many differences in Siri-DS emerged as dynamic adjustments over the course of the interaction. Additionally, error rate shapes how register differences are realized. One targeted error correction was observed: speakers produce more vowel hyperarticulation in coda repairs in Siri-DS. Taken together, these findings contribute to our understanding of speech register and the dynamic nature of talker-interlocutor interactions.
... Chatbot, one of the most prevalent practical AI examples, is a conversational agent that imitates human-to-human conversations. There are text-based chatbots (e.g., Bank of America's Erica, Capital One's Eno, Geico's Kate, Amtrak's Julie, etc.) (Ali, 2021) and voice-based chatbots (e.g., Amazon Alexa, Apple Siri, Google Assistant, Microsoft Cortana, etc.) (Etzrodt & Engesser, 2021). As alluded to in the previous paragraph, although users directly interact with the chatbot such as Erica and Alexa, the AI algorithms form an integral part of the users' experience with the chatbot. ...
Article
The COVID-19 pandemic is an unprecedented global emergency. Clinicians and medical researchers are suddenly thrown into a situation where they need to keep up with the latest and best evidence for decision-making at work in order to save lives and develop solutions for COVID-19 treatments and preventions. However, a challenge is the overwhelming numbers of online publications with a wide range of quality. We explain a science gateway platform designed to help users to filter the overwhelming amount of literature efficiently (with speed) and effectively (with quality), to find answers to their scientific questions. It is equipped with a chatbot to assist users to overcome infodemic, low usability, and high learning curve. We argue that human-machine communication via a chatbot play a critical role in enabling the diffusion of innovations.