Article

Speciesism: an obstacle to AI and robot adoption

Authors:
B. Schmitt

Abstract

Once artificial intelligence (AI) is indistinguishable from human intelligence, and robots are highly similar in appearance and behavior to humans, there should be no reason to treat AI and robots differently from humans. However, even perfect AI and robots may still be subject to a bias (referred to as speciesism in this article), which will disadvantage them and be a barrier to their commercial adoption as chatbots, decision and recommendation systems, and staff in retail and service settings. The author calls for future research that determines causes and psychological consequences of speciesism, assesses the effect of speciesism on the adoption of new products and technologies, and identifies ways to overcome it.

... With this technique, intelligent machines are able to understand written or spoken language, pictures, or videos; to draw conclusions from them independently; and to interact or communicate with their environment [26]. The latest developments even show that robots equipped with such AI techniques are able to detect, respond to, and display emotions [28]. ...
... Humans also tend to place too much confidence in AI technology, or to rely completely on its judgment, because AI can be very good in some specific areas and can even outperform humans in speed, scalability, and quantitative capabilities [14,28]. Instead, humans should know how to combine their distinctive human skills with those of a smart technology [3]. ...
... 8. Showing an awareness of an AI agent as a sort of virtual colleague. Human-AI collaboration can also be particularly challenging if the AI, on the one hand, has only a virtual appearance but, on the other hand, is expected to show the traits of a great teammate, for example during joint task processing in stressful situations [28,48]. ...
Chapter
Full-text available
Artificial intelligence (AI) has become an integral element of modern machines, devices and materials, and already transforms the way humans interact with technology in business and society. The traditionally more hierarchical interaction, where humans usually control machines, is constantly blurring as machines become more capable of bringing in their own (intelligent) initiatives to the interaction with humans. Thus, nowadays it is more appropriate to consider the interactive processes between humans and machines as a novel form of interdependent learning efforts between both sides, where processes such as critical discourses between humans and machines may take place (hybrid intelligence). However, these developments demand a shift in the understanding about the role of technology at work and about specific competencies required among human actors to collaborate constructively and sustainably with AI systems. This paper seeks to address this issue by identifying human actors’ key competencies, which enable a more constructive collaboration between humans and intelligent technologies at work.
... All the SST-failure scenarios are set in fully automated self-service encounters without any employee involvement. Based on the psychology of speciesism against machines in human-machine relationships (Schmitt, 2020), machines are seen as "less human" and people discriminate against machines when comparing humans and machines. Hence, we propose that customers will get angrier in SST failure when compared with employee failure, which results in more-negative responses in SST failure than in employee failure. ...
... Hence, we propose that customers will get angrier in SST failure when compared with employee failure, which results in more-negative responses in SST failure than in employee failure. Extending the absence of empathy in human-machine interactions (Kummer et al., 2012; Schmitt, 2020), we propose that empathy will alleviate customers' negative responses in employee failure, while this effect will not hold in SST failure. ...
... Even perfect AI and robots may still be subjected to bias, which will disadvantage them relative to humans and be a barrier to their commercial adoption as staff in retail and service settings, which is identified as speciesism against machines (Schmitt, 2020). In sexism and racism, the bias and discrimination occur within the human species; in the case of speciesism, they occur between humans and nonhumans. ...
Full automation and self-service technologies have become popular in service marketing. However, customers often face multiple issues when dealing with self-service technologies. This paper examines the effect of service-failure type (employee failure vs. self-service technology failure) on customers' negative responses (dissatisfaction, forgiveness, willingness to switch between employee and self-service technology, and negative word of mouth). Through four experiments with Amazon Mechanical Turk workers and undergraduate students, this research finds that customers have more negative responses for a self-service technology failure than for an employee failure. This is because they get angrier with machines' mistakes than with those of humans. Moreover, empathy alleviates anger and customers’ negative responses in employee failure, but not in self-service technology failure. This research offers service providers new insights by scrutinizing the flip side of complete automation in service marketing.
... Our research is novel as it provides important insight into the understanding of AI influencers, a topic which, to our knowledge, has yet to be examined in the advertising literature. Specifically, our findings suggest that upon initial implementation, AI influencers are evaluated and benefit a brand in a manner similar to the employment of a novel celebrity endorser; thus, this research answers the call for further exploration of endorser types (Voorveld 2019), brand-building activities involving a wider variety of brand partners (Swaminathan et al. 2020), and consumer reaction to autonomous AI technologies (Schmitt 2020). ...
... Insight into these differences might be similar to findings in research on the uncanny valley, which suggests that humanlike physical appearance and behavior in technology may create a feeling of unease in consumers when robots become too human-like (e.g. Kim, Schmitt, and Thalmann 2019; Schmitt 2020). ...
Article
Brand endorsers can contribute to a brand’s success or failure (in the case of endorser transgressions). Recent advancements in technology have produced new, nonhuman alternatives to traditional celebrity endorsers. These new endorsers rely on artificial intelligence (AI) to interact with and influence consumers. Two studies demonstrate that AI influencers can produce positive brand benefits similar to those produced by human celebrity endorsers. Moreover, just like their human counterparts, AI influencers can also commit transgressions that result in degradation of the endorsed brand. Importantly, though, AI influencers differ from human celebrity endorsers in that consumers are less likely to view them as unique entities (as tested in a pilot study). Thus, consumers are more likely to perceive a transgression committed by an AI influencer as behavior applicable to all AI influencers, but they are less likely to view celebrity endorser behaviors as interchangeable. As such, after an AI influencer has committed a transgression, replacing the AI influencer with a celebrity endorser attenuates negative brand perceptions, an effect which cannot be realized if the replacement is another AI influencer.
... They are otherwise reluctant to use its more advanced features, such as its ability to pay bills, make a donation, or refill their prescriptions. Moreover, consumers do not engage with this highly humanized form of technology in the same meaningful, relational capacity that they do with other non-human agents such as products and brands (Davenport et al., 2020; Honig & Oron-Gilad, 2018; Liao et al., 2019; Schmitt, 2019). This research aims to investigate the reasons for this lack of engagement. ...
Article
As the digital era continues to have a strong influence on how consumers effectively leverage technology, the prospect of introducing artificial intelligence, including smart speakers, into our homes and routines has become largely unavoidable (Bressgott, 2019; Davenport et al., 2020). Consumer use of smart speakers can provide both a competitive advantage for firms (through large amounts of valuable consumer data) and convenience benefits for users. However, the availability of this data requires continued engagement with these devices in a deep, meaningful manner. This paper employs a mixed methods strategy to investigate how individual user, task, and technology characteristics influence deep customer engagement with smart speakers. While much research has been conducted on technology adoption, and on self-service technology adoption in particular, this research seeks to add to the current marketing and IS literature by examining the drivers of actual, continued, and deep engagement with smart speakers in the post-adoption phase. Currently, we see mixed findings between a willingness and a resistance to engage with AI technology, many of which seem to be rooted in (a) user characteristics such as personality, (b) technology characteristics such as perceived anthropomorphism, and/or (c) task characteristics such as willingness to delegate tasks to AI (Serenko, 2007; Swartz, 2003; Waytz et al., 2010a). Therefore, depth interviews in Study 1 examine how user, task, and technology characteristics interact to influence or deter engagement with smart speakers. Study 1 also employs a metaphor analysis technique to identify moderating variables that may strengthen or weaken relationships between user, task, and technology characteristics and engagement. Findings from Study 1 brought forth several user, task, and technology characteristics that were used in the development of a new empirical model. Study 2 tests this model through partial least squares structural equation modeling (PLS-SEM), thereby contributing empirical evidence on the drivers of engagement with smart speakers to the current body of literature (Wagner & Schramm-Klein, 2019).
... Indeed, a growing number of consumers are already becoming comfortable interacting with service providers via smart speakers such as Alexa, and this technology is advancing rapidly (Dawar and Bendle 2018). As recently noted by Schmitt (2020), once robots and AI become as intelligent as humans, "there should be no reason to treat AI and robots differently from humans" (p. 3). ...
Article
Full-text available
In a brief but ambitious Journal of Macromarketing commentary, Lusch (2017) offered a set of observations about the “evolution of economic exchange systems.” His first observation states that over the past 40,000 years, humans have routinely engaged in “exchange with strangers.” Our research complements Lusch’s retrospective commentary by taking a prospective look at how the digital revolution alters the degree to which humans transact with strangers. We specifically focus on the recent emergence of the sharing economy by theoretically and empirically examining how closely this new form of economic exchange conforms with Lusch’s observation. We conclude that, despite its promise of bridging divides between friends and strangers, our new digital world is still replete with transactions with strangers, and may be more similar to our old world than commonly recognized. Thus, transacting with strangers appears to be endemic to not just our past but also our future. We discuss the implications of transacting with strangers in a digital world for the future of macromarketing thought.
... Similarly, perceived service quality and retail patronage will differ for consumers who prefer contact with a human rather than an "avatar" (Lee & Yang, 2013). AI and chatbots may face "speciesism," as some customers may consider them less human, poorer in cognitive abilities, and more automated in nature (Cubric, 2020; Pozzana & Ferrara, 2020; Schmitt, 2020). ...
Article
Digital retail is a technology-driven business. Customers shop with the help of cutting-edge self-service technologies deployed by marketers to enhance customer experience and e-service quality (e-SQ). However, there is a lack of understanding of how customers differentiate between various digital retailers while shopping. We attempt to compare the similarity and dissimilarity between top e-retailers based on customer perception grounded in seven dimensions of e-SQ, using data from an important emerging market. The Multi-Dimensional Scaling (MDS) technique was applied to analyze the similarity judgments of the respondents and draw an aggregate perceptual map of the selected e-retailers. Subsequently, discriminant analysis was carried out and the results were used to create combined spatial maps of e-retailers and e-SQ attributes. It was found that consumers can perceive top e-retailers as similar or as isolated brands. Our findings suggest that all seven e-SQ attributes can create differentiation among leading e-retailing brands. However, we recommend that e-retailers fortify their service recovery dimensions, as consumers attach greater importance to them. Further, we benchmarked fulfilment and contact as critical dimensions for managing e-SQ, drawing on the top two e-retailers (Amazon India and Flipkart), and discussed how they are deploying cutting-edge technologies to strengthen these dimensions.
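To make the mapping step concrete, the following minimal sketch, which is not taken from the paper, shows how metric MDS turns a consumer-judged dissimilarity matrix into two-dimensional perceptual-map coordinates; the retailer names and dissimilarity values are invented for illustration, and the discriminant-analysis step that projects e-SQ attributes onto the map is omitted.

```python
# Minimal MDS perceptual-map sketch on a hypothetical dissimilarity matrix
# (retailer names and values are invented; not the paper's data or code).
import numpy as np
from sklearn.manifold import MDS

retailers = ["RetailerA", "RetailerB", "RetailerC", "RetailerD"]

# Symmetric dissimilarity matrix: higher values = perceived as less similar.
dissimilarity = np.array([
    [0.0, 2.0, 5.0, 4.0],
    [2.0, 0.0, 6.0, 5.0],
    [5.0, 6.0, 0.0, 1.5],
    [4.0, 5.0, 1.5, 0.0],
])

# Two-dimensional metric MDS on the precomputed dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y) in zip(retailers, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")  # coordinates for the perceptual map
```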
... seen as fundamentally human and fundamentally lacking in machines, may enable more deep-rooted explanations behind such preference patterns and design decisions, which are likely to be different from prior perspectives based on anthropomorphism and warmth-competence models (Belanche et al., 2020; Huang & Rust, 2021; Kim et al., 2019; Mende et al., 2019; B. Schmitt, 2020; van Doorn et al., 2017). ...
Article
Full-text available
This guest editorial starts with an introduction to evolutionary psychology (EP) in the marketing domain and delineates some of the building blocks of EP, both generally and when applied to consumer research. While EP is a debated discipline among marketing scholars, with some praising its presence and others perceiving it as patriarchal, politically incorrect, and problematic, a central tenet of this meta-framework is a focus on deep-rooted, ultimate explanations for human behavior. Marketing scholars have traditionally focused on proximate “how” and “what” questions, which are indeed important to address. However, unlike such proximate questions, EP strives to capture the ultimate “why” reasons behind our purchases and product preferences in terms of which adaptive functions they may have in giving us an evolutionary advantage. Having highlighted and exemplified this proximate-ultimate distinction, I then present each state-of-the-art paper included in this special issue. All special issue articles use a variety of EP arguments and theories to elucidate several consumption-relevant phenomena with implications for marketing theory and practice. In closing, I provide a set of suggestions for future EP-based consumer research, meant to make this field flourish further.
... Accordingly, data collection through speech recognition, in which the client's tone of voice when communicating with voice bots is captured along with other data used to improve marketing strategies, requires alignment with the General Data Protection Regulation and the client's approval (Butterworth, 2018). Hence, in order to reduce consumers' skepticism and avoid speciesism toward AI (Schmitt, 2020), practitioners are reminded of ethical codes (Stone et al., 2020) and the importance of data protection (Kolbjørnsrud et al., 2017). ...
Article
An increasing amount of research on Intelligent Systems/Artificial Intelligence (AI) in marketing has shown that AI is capable of mimicking humans and performing activities in an 'intelligent' manner. Considering the growing interest in AI among marketing researchers and practitioners, this review seeks to provide an overview of the trajectory of the marketing and AI research fields. Building upon a review of 164 articles published in Web of Science and Scopus indexed journals, this article develops a context-specific research agenda. Our study of the selected articles by means of a Multiple Correspondence Analysis (MCA) procedure outlines several research avenues related to the adoption, use, and acceptance of AI technology in marketing, the role of data protection and ethics, the role of institutional support for marketing AI, as well as the revolution of the labor market and marketers' competencies. The article is available at: https://www.sciencedirect.com/science/article/pii/S0148296321000643?dgcid=author
... Johnson et al., 2008 in relation to self-service technology), and yet the risk or uneasiness of dealing with it was shown to have some negative impact on expected service quality. The underlying psychological reasons may be found in the (now well-explored) depths of the uncanny valley (Kim et al., 2019; Mori, 1970); or perhaps there is some inherent bias in us against non-human entities that appear/attempt to resemble us (known as speciesism; Schmitt, 2020) manifesting in the negativity towards humanoid robots. Such reasons were enough to prompt one prominent expert to declare "whatever you do, don't humanize [care] robots" (van Doorn, 2020). ...
Article
The accelerated deployment of humanoid robots in hospitality services precipitates the need to understand related consumer reactions. Four scenario-based experiments, building on social presence and social cognition theories, examine how humanoid robots (vs. self-service machines) shape consumer service perceptions and intentions vis-à-vis concurrent presence/absence of human staff. The influence of consumers’ need for human interaction and technology readiness is also examined. We find that anthropomorphizing service robots positively affects expected service quality, first-visit intention, and willingness to pay, as well as increasing warmth/competence inferences. These effects, however, are contingent on the absence of human frontline staff, which can be understood by viewing anthropomorphism as a relative concept. Humanoid robots also increase psychological risk, but this poses no threat to expected service quality when consumers’ need for human interaction is controlled for. Hence, humanoid robots can be a differentiating factor if higher service quality expectations are satisfied. Additionally, we show that a humanoid robot’s effect on expected service quality is positive for all but low levels of technology readiness. Further implications for theory/practice are discussed.
... Our research makes three important theoretical contributions. First, the present research contributes to the incipient stream of research on consumers' receptivity to AI. Extant relevant literature could be classified into two categories, namely, AI aversion/resistance and AI appreciation/acceptance. Research on the former posits that resistance stems from perceptions of AI's inability to detect consumers' unique characteristics (Longoni et al., 2019), beliefs of speciesism (Schmitt, 2019) and distrust (Dietvorst et al., 2015), especially in intuitive and empathetic tasks (Huang and Rust, 2018). However, scholars in favor of AI appreciation argue the opposite, especially when explanations of AI-made recommendations are given (Logg et al., 2019; Marchand and Marx, 2020). ...
... The small evidence found regarding the closeness of the in-group also implies that an intimate work team requiring closer interaction poses a greater threat to people than a loose mutual in-group membership on an organizational level. In addition to threat caused by prejudice, it could be argued that the negative reactions toward robots are due to fear of the unknown (Carleton, 2016), or speciesism which has been noted to be an obstacle to robot adoption (Schmitt, 2020). ...
Article
Full-text available
We investigated how people react emotionally to working with robots in three scenario-based role-playing survey experiments collected in 2019 and 2020 from the United States (Study 1: N = 1003; Study 2: N = 969, Study 3: N = 1059). Participants were randomly assigned to groups and asked to write a short post about a scenario in which we manipulated the number of robot teammates or the size of the social group (work team vs. organization). Emotional content of the corpora was measured using six sentiment analysis tools, and socio-demographic and other factors were assessed through survey questions and LIWC lexicons and further analyzed in Study 4. The results showed that people are less enthusiastic about working with robots than with humans. Our findings suggest these more negative reactions stem from feelings of oddity in an unusual situation and the lack of social interaction.
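To illustrate the kind of corpus scoring described above, the following minimal sketch, which is not the authors' pipeline, applies one off-the-shelf sentiment tool (NLTK's VADER) to two invented posts; the study itself combined six sentiment tools with LIWC lexicons and survey measures.

```python
# Minimal sentiment-scoring sketch using NLTK's VADER lexicon.
# The example posts are invented; this is not the authors' data or code.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "I would be excited to have a robot teammate on our project.",
    "Working with robots instead of people sounds cold and strange.",
]

for post in posts:
    scores = sia.polarity_scores(post)  # neg / neu / pos / compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {post}")
```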
... Despite all the capabilities of ML techniques and their great potential to change the future of marketing and personalized marketing, some researchers argue that AI will be more effective if the computing capabilities of AI augment human insight rather than replace it (Davenport et al., 2020; Ma & Sun, 2020; Schmitt 2020). The question that arises here is how human insight can practically augment ML techniques for effective marketing, especially for offering efficient personalized marketing. ...
Conference Paper
Machine Learning (ML) techniques enable business-to-business (B2B) companies to offer highly personalized services to their customers. Some recent studies argue that ML-based services are more effective when the computing capabilities of ML methods augment human insight rather than replace it. However, no study demonstrates practically how human capabilities can be integrated with the computing abilities of ML techniques for personalized marketing. Thus, the purpose of this paper is to contribute theoretically by demonstrating the need for a hybrid ML-human approach to creating effective personalized marketing in B2B contexts. The paper also contributes practically by providing field experimental evidence that shows how human insight can be integrated with ML power to develop a personalized information service (PIS). Finally, the paper presents an integrated model for creating a PIS for business customers.
... In this research, we study virtual service agents in the form of chatbot agents for the following reason: globally, more than 50% of companies are already adopting chatbot technology or plan to do so, given that chatbots are predicted to account for more than US$8 billion of cost savings per year by 2022 (Sands et al., 2020). Yet customers, in general, are reluctant to use services provided by artificial intelligence (Schmitt, 2020). In response to the threat of a pandemic, however, many customers have shown a preference for services offered by artificial intelligence over those offered by human employees, owing to concerns about social distancing and safety. ...
Article
Health organizations have relied heavily on social distancing to limit the spread of the COVID-19 pandemic. The purpose of this research is to examine what factors can influence customers’ evaluations of social distancing as well as how and when these evaluations drive their usage of chatbot services. Using structural equation modeling to analyze the experimental data from 200 U.S. consumers, we found that when the service situation is utilitarian (hedonic) in nature, customers’ contamination fear influences their chatbot usage during service encounters through their social distancing attitudes (subjective norms) and then perceived usefulness of chatbots. Our findings provide meaningful theoretical contributions and practical implications.
... Humans often show biases against machines and algorithms (e.g., Chen et al. 2021; Thomas and Fowler 2021), even when algorithms produce objectively superior outcomes. Schmitt (2020) attributes this to speciesism, which views humans as a superior species and discriminates against other non-human species. Individuals with a high level of speciesism can categorically reject the notion of AI agents as equal social partners. ...
Article
Full-text available
Artificial intelligence (AI) continues to transform firm-customer interactions. However, current AI marketing agents are often perceived as cold and uncaring and can be poor substitutes for human-based interactions. Addressing this issue, this article argues that artificial empathy needs to become an important design consideration in the next generation of AI marketing applications. Drawing from research in diverse disciplines, we develop a systematic framework for integrating artificial empathy into AI-enabled marketing interactions. We elaborate on the key components of artificial empathy and how each component can be implemented in AI marketing agents. We further explicate and test how artificial empathy generates value for both customers and firms by bridging the AI-human gap in affective and social customer experience. Recognizing that artificial empathy may not always be desirable or relevant, we identify the requirements for artificial empathy to create value and deduce situations where it is unnecessary and, in some cases, harmful.
... For instance, people who are used to different types of AI applications (e.g., robots, chatbots) may develop different lay beliefs about AI. Similarly, individual differences in speciesism (i.e., a fundamental bias toward the human species) may favor the development of different lay beliefs about AI (Schmitt, 2020). ...
Article
Full-text available
There is little research on how consumers decide whether they want to use algorithmic advice or not. In this research, we show that consumers’ lay beliefs about artificial intelligence (AI) serve as a heuristic cue to evaluate accuracy of algorithmic advice in different professional service domains. Three studies provide robust evidence that consumers who believe that AI is higher than human intelligence are more likely to adopt algorithmic advice. We also demonstrate that lay beliefs about AI only influence adoption of algorithmic advice when a decision task is perceived to be complex.
... Likewise, evolutionary theory may provide a useful perspective for understanding the emergence and growth of new technologies such as artificial intelligence and machine learning (Eiben and Smith, 2015). As recently proposed by Schmitt (2020), evolutionary forces appear to be promoting the rise of speciesism, which may present an obstacle to the adoption of new technologies that have human-like qualities, such as artificial intelligence and robotics. ...
Article
Full-text available
Since its founding in 1984, the Journal of Product Innovation Management (JPIM) has published leading‐edge research on a number of important topics in the innovation domain, such as improving the new product development process (e.g., Cooper, 2008; Cooper and Kleinschmidt, 1987; Herstatt and Von Hippel, 1992) and identifying the drivers of innovation success (e.g., Faems, Van Looy, and Debackere, 2005; Kleinschmidt and Cooper, 1991; Montoya‐Weiss and Calatone, 1994). In addition to publishing research that is academically rigorous, JPIM has also focused on topics that are highly relevant to innovation practice (e.g., Brown and Katz, 2011; Griffin and Page, 1996; West and Bogers, 2014).
... However, it is time to act in anticipation and to understand the emergence and development of this form of intelligence, which differs from human intelligence and assumes roles that can no longer be considered inferior to humans. Despite this advancement, we are at the same time confronted with a stream of researchers who reinforce the superiority already mentioned and who support human speciesism (Schmitt, 2020), considering human beings to have a moral status superior to that of non-humans such as animals and computers. ...
... Behind the boom in chatbot applications, consumers' acceptance of AI chatbots is not as high as expected. In most cases, consumers are more willing to accept human staff than AI chatbots, reflecting the phenomenon of "chatbot aversion" (Castelo et al., 2019; Schmitt, 2020). For example, online person-to-person interactions last longer than human-to-machine interactions (Hill et al., 2015). ...
Article
This study examines how the certainty of consumer needs affects consumers' acceptance of artificial intelligence (AI) chatbots in the online pre-purchase stage. Three experiments are conducted to demonstrate that consumers are more likely to choose AI chatbots when their needs are more certain. This effect is mediated by consumers' perceived effectiveness of AI chatbots and moderated by product type. Specifically, when the certainty of needs is higher, consumers perceive AI chatbots to be more effective, ultimately promoting consumers' acceptance of AI chatbots. For search products, higher (vs. lower) certainty of needs increases consumers' acceptance of AI chatbots, while for experience products the certainty of needs does not significantly affect consumers' acceptance of AI chatbots. These findings make important theoretical contributions to the existing literature on AI chatbots and also provide some practical implications for electronic commerce companies to implement AI chatbot strategies more effectively.
... Past marketing and consumer research investigating human-robot interactions (HRI) has mainly portrayed them as a rational, transactional, and interactional relationship between two individuals (e.g., Huang and Rust, 2021; Letheren et al., 2021; Esfahani and Reynolds, 2021; Kim, So and Wirtz, 2022). Precisely because of the current pervasiveness of robots, and the way we human beings create and are created by these technologies (Müller, 2016), fluid, non-static, and collective aspects of the consumption of emerging technologies, such as consumer robots, have been included in calls for research (Belk, Weijo and Kozinets, 2021; Schmitt, 2020; Puntoni et al., 2021; Grewal et al., 2020; Delgosha and Hajiheydari, 2021; Hoffman and Novak, 2018; Gonzalez-Jimenez, 2018; Kang, Diao and Zanini, 2020). This rising interest is also a reflection of the current annual growth rate (i.e., 17.45%) of the global consumer robot market, which was valued at USD 27.73 billion in 2020 and is expected to reach USD 74.1 billion by 2026 (Mordor Intelligence, 2021). ...
Article
Purpose: This research investigates and conceptualizes non-dyadic human–robot interactions (HRI).
Design/methodology/approach: The authors conducted a netnographic study of the Facebook group called "iRobot – Roomba," an online brand community dedicated to Roomba vacuums. Their data analysis employed an abductive approach, which extended the grounded theory method.
Findings: Dyadic portrayals of human–robot interactions can be expanded to consider other actants that are relevant to the consumption experiences of consumer robots. Not only humans but also nonhumans, such as an online brand community, have a meaningful role to play in shaping interactions between humans and robots.
Research limitations/implications: The present study moves theoretical discussions on HRI from the individual level grounded in a purely psychological approach to a more collective and sociocultural approach.
Practical implications: If managers do not have a proper assessment of human–robot interactions that considers different actants and their role in the socio-technical arrangement, they will find it more challenging to design and suggest new consumption experiences.
Originality/value: Unlike most previous marketing and consumer research on human–robot interactions, we show that different actants exert agency in different ways, at different times and with different socio-technical arrangements.
... Technology that uses AI is able to bring a new paradigm to industrial automation in production systems. In addition, AI also has a major influence on the fields of automation, mechatronics, robotics, and human-machine interaction [1]. ...
Article
Full-text available
A human-machine interface (HMI) is a system that connects parts of the human body to the control of a machine using computer technology. Robot technology, especially robot motion control, has been widely researched. This research adopts an accelerometer-based technique that uses wrist movement to control the motion of a wheeled robot via wireless communication: the wheeled robot acts as the data receiver and the hand movements serve as the data transmitter. The MPU6050 sensor is mounted on the back of the hand to read hand movements based on the pitch (y-axis) and roll (x-axis) values, and the robot and the hand-mounted sensor communicate over Bluetooth. The results show that the accelerometer on the MPU6050 successfully identifies wrist motion as a remote navigation input for the robot. When the pitch value is greater than 160 the robot moves backward, and when it is less than 140 it moves forward; when the roll value is less than 40 the robot moves left, and when it is greater than 60 it moves right.
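The reported thresholds translate directly into a small decision rule. The sketch below is a minimal illustration rather than the authors' firmware: it assumes pitch and roll values have already been received from the MPU6050 over Bluetooth, the command names are invented, and the order in which pitch and roll are checked is an assumption.

```python
# Minimal sketch of the pitch/roll-to-motion mapping described above.
# Thresholds follow the abstract (pitch > 160 backward, < 140 forward;
# roll < 40 left, > 60 right); command names and check order are assumptions.
def command_from_orientation(pitch: float, roll: float) -> str:
    if pitch > 160:
        return "BACKWARD"
    if pitch < 140:
        return "FORWARD"
    if roll < 40:
        return "LEFT"
    if roll > 60:
        return "RIGHT"
    return "STOP"  # neutral zone: no movement


if __name__ == "__main__":
    # A few hypothetical MPU6050 readings as (pitch, roll) pairs.
    for pitch, roll in [(170, 50), (130, 50), (150, 30), (150, 70), (150, 50)]:
        print(pitch, roll, "->", command_from_orientation(pitch, roll))
```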
... Our study not only expands the discussion of consumer reception of emerging technologies (e.g. Castelo et al., 2019; Harrigan et al., 2020; Pedersen & Iliadis, 2020; Schmitt, 2020) but helps to understand consumer sentiments (Gopaldas, 2014; Joy et al., 2020), particularly the ways in which love helps to engage and valorise consumer biohacking (Savulescu & Sandberg, 2008; Warwick, 2020). The next section presents key Transhumanism and biohacking premises and the consumer sentiments literature, focusing on love. ...
Article
Just as the mythical Greek Titan Prometheus created humans and brought them the technology of fire, the biohackers studied here seek to recreate humanity and bring us technologies that make us god-like. In this paper, we explore how and why biohackers have been integrating technologies into their bodies. Drawing on Transhumanism, biohacking, and consumer sentiments literature, we depict three avatars of Promethean biohackers. While distinct from one another, their biohackings are tied by a single sentiment: that of love. Here, love is what energises biohackers’ actions on their Promethean journey to become transhumans. This study challenges prior discussions on human-technology relationships framed in largely instrumental terms. Our research also highlights the importance for scholars and practitioners to address ethical issues around eugenics and human commodification.
... According to Rust and Huang (2021), feeling intelligence requires the capability of recognizing, simulating and reacting appropriately to customers in an empathetic manner but robots do not have such natural and biological feelings to understand customers' emotional status and to form appropriate emotional responses during frontline interactions (De Cremer and Kasparov, 2021). This is because robots, in spite of many similarities to humans, have not undergone a biological evolution like humans (Schmitt, 2019). Thus, even if future robots can be more advanced to respond empathetically to customers, the lack of authenticity and sincerity in such expressions might backfire and make the service experience worse (Bove, 2019). ...
Article
Purpose: While robots have increasingly threatened frontline employees' (FLEs) future employment by taking over more mechanical and analytical intelligence tasks, they are still unable to "experience" and "feel," and so cannot take over empathetic intelligence tasks that FLEs handle better. This study therefore aims to empirically develop and validate a scale measuring the new so-called empathetic creativity, defined as being creative in practicing and performing empathetically intelligent skills during service encounters.
Design/methodology/approach: This study adopts a multistage design to develop the scale. Phase 1 combines a literature review with text mining of 3,737 service-robot-related YouTube comments to generate 16 items capturing this new construct. Phase 2 assesses both the face and content validity of those items, while Phase 3 recruits a Prolific sample of FLEs to evaluate construct validity. Phase 4 checks this construct's nomological validity using PLS-SEM, and Phase 5 tests dedicated effort (vs. natural talent) as an effective approach to foster FLEs' perceived empathetic creativity.
Findings: The final scale comprises 13 refined items that capture three dimensions (social, interactive and emotional) of empathetic creativity. This research provides timely implications to help FLEs in high-contact services stay competitive.
Originality/value: This study introduces the new construct of empathetic creativity, which goes beyond the traditional definition of creativity in services and highlights the importance of empathetic intelligence for FLEs in future employment. This study also develops a multi-item scale to measure this construct, which can be applied in future service management research.
... (Kate Letheren, Queensland University of Technology) At the same time, an extensive list of extra theoretical perspectives was put forth by our expert panel. Among the most mentioned theories and models are theory of mind (Banks, 2020), consumer culture theory (Arnould and Thompson, 2005), the job demands-resources model (Bakker and Demerouti, 2007), robot intelligence levels theory (Huang and Rust, 2018), self-determination theory (Deci and Ryan, 1985), speciesism theory (Schmitt, 2020), service-dominant logic (Vargo and Lusch, 2004), construal level theory (Trope and Liberman, 2010), actor network theory (Latour, 2007) and regulatory focus theory (Higgins, 1998). Recent work by Mariana et al. (2022) and Schepers and Streukens (this issue) outline several service robots and AI-related research avenues in relation to most of these theories. ...
Article
Full-text available
Purpose: Service robots are now an integral part of people's living and working environment, making service robots one of the hot topics for service researchers today. Against that background, the paper reviews the recent service robot literature following a Theory-Context-Characteristics-Methodology (TCCM) approach to capture the state of the art of the field. In addition, building on qualitative input from researchers who are active in this field, the authors highlight where opportunities for further development and growth lie.
Design/methodology/approach: The paper identifies and analyzes 88 manuscripts (featuring 173 individual studies) published in academic journals featured on the SERVSIG literature alert. In addition, qualitative input gathered from 79 researchers who are active in the service field and doing research on service robots is infused throughout the manuscript.
Findings: The key research foci of the service robot literature to date include comparing service robots with humans, the role of service robots' look and feel, consumer attitudes toward service robots and the role of service robot conversational skills and behaviors. From a TCCM view, the authors discern dominant theories (anthropomorphism theory), contexts (retail/healthcare, USA samples, Business-to-Consumer (B2C) settings and customer focused), study characteristics (robot types: chatbots, not embodied and text/voice-based; outcome focus: customer intentions) and methodologies (experimental, picture-based scenarios).
Originality/value: The current paper is the first to analyze the service robot literature from a TCCM perspective. Doing so, the study gives (1) a comprehensive picture of the field to date and (2) highlights key pathways to inspire future work.
... At the same time, robots are designed in a way that would arouse positive feelings, such as social belonging, in their human users. The special role that robots have in our culture is shown by the fact that choosing human-human interaction over robots can itself be viewed as a form of speciesism (Lancaster 2019; Schmitt 2020), that is, speciesism against robots in favor of human interaction. In an opposite vein, there are also indications that the human-robot dichotomy is still more rigid than the human-animal dichotomy (Bryson et al., 2020). ...
Article
Full-text available
Social robotics designed to enhance anthropomorphism and zoomorphism seeks to evoke feelings of empathy and other positive emotions in humans. While it is difficult to treat these machines as mere artefacts, the simulated lifelike qualities of robots easily lead to misunderstandings that the machines could be intentional. In this post-anthropocentrically positioned article, we look for a solution to the dilemma by developing a novel concept, "abiozoomorphism." Drawing on Donna Haraway's conceptualization of companion species, we address critical aspects of why robots should not be categorized with animals by showing that the distinction between nonliving beings and living beings is still valid. In our phenomenologically informed approach to social robotics, we propose that the concept of abiozoomorphism makes it possible to transcend the strong ethos of biotism that prevails in both robot design and academic research on social robots.
Article
Few studies on the effects of smartphone assistants' anthropomorphism on consumer behavior have been conducted. This study explored the effects of anthropomorphism on consumers' psychological ownership of smartphone assistants and perceptions of their competence. Moreover, it investigated arousal during smartphone assistant use and examined how relationship norms governing the consumer–smartphone assistant relationship moderate the effects of anthropomorphism on psychological ownership. In study 1, which had a one‐factorial design (high vs. low anthropomorphism), a highly anthropomorphic smartphone assistant was perceived to be more competent. In study 2, which followed the same experimental procedure under a 2 × 2 full factorial design, the anthropomorphism–relationship norms interaction moderated psychological ownership. Psychological ownership fully mediated the effect of anthropomorphism on perceived competence. Study 3 employed a 2 (anthropomorphism: high vs. low) × 2 (arousal: high vs. low) × 2 (relationship norms: exchange vs. communal) full factorial design. In the high arousal condition, anthropomorphism and relationship norms exerted no significant effect on psychological ownership. A conditional indirect effect from anthropomorphism to perceived competence through psychological ownership was only significant under low arousal and adherence to communal relationship norms. In summary, the degree of anthropomorphism influenced perceived competence through psychological ownership. Therefore, companies should incorporate anthropomorphic cues into smartphone assistant design, thereby promoting their personification by consumers and benefiting perceptions of their competence.
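For readers unfamiliar with the mediation logic behind such findings, the following minimal sketch, which is not the authors' analysis, bootstraps a simple indirect effect (anthropomorphism to psychological ownership to perceived competence) on simulated data; all values and variable names are invented, and the moderators (arousal, relationship norms) are left out.

```python
# Minimal bootstrap sketch of an indirect (mediated) effect X -> M -> Y,
# the kind of quantity behind the "conditional indirect effect" reported
# above. All data are simulated; this is not the authors' analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.integers(0, 2, n).astype(float)        # anthropomorphism: low/high
m = 0.5 * x + rng.normal(0, 1, n)              # psychological ownership
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)    # perceived competence


def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                  # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M, controlling X
    return a * b


boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                 # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```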
Article
Multinational corporations (MNCs) are continuing to invest more in expanding into new markets around the world. These firms are faced with determining the optimal go-to-market strategy in these heterogeneous new markets to attract and retain profitable customers. This paper provides an organizing framework to help firms develop profitable customer-level strategies across countries in the digital environment. We start by providing a summary of the marketing literature on a customer-based execution strategy. Next, we discuss how the evolving digital landscape is affecting firms' relationships with customers and describe some of the current digital product and process innovations in the marketplace. We discuss boundary conditions for how these digital product and process innovations might affect profitable customer strategies in a global context. In addition, we discuss implementation challenges that MNCs will likely face in deploying these customer-level strategies and other stakeholders (outside of customers) that will likely play a role in the execution of these customer-level strategies. Finally, we summarize a set of research questions to guide future research on customer-level strategies in a global digital context.
Article
Full-text available
The rise of humanoid robots in hospitality services accelerates the need to understand related consumer reactions. Four scenario-based experiments, building on social presence and social cognition theories, examine how humanoid robots (vs. self-service machines) shape consumer service perceptions vis-à-vis concurrent presence/absence of human staff. The influence of consumers' need for human interaction and technology readiness is also examined. We find that anthropomorphizing service robots positively affects expected service quality, first-visit intention, willingness to pay, as well as increasing warmth/competence inferences. However, these effects are contingent on the absence of human frontline staff, explained by viewing anthropomorphism as a relative concept. Humanoid robots increase psychological risk, but this poses no threat to expected service quality when consumers' need for human interaction is controlled for. Additionally, we show that a humanoid robot's effect on expected service quality is positive for all but low technology readiness levels. Further implications for theory/practice are discussed.
Chapter
Artificial intelligence applications are transforming many service sectors. Research on artificial intelligence in business-to-consumer services suggests approaches for innovative kinds of customer services, enhancement of human service provision, and new customer experiences based on artificial intelligence applications. The chapter provides an overview of scholarly research on the scientific understanding of artificial intelligence in services and of the artificial intelligence algorithms and technologies investigated in concrete service contexts, together with a summary of the study findings and suggestions for future research in this dynamically expanding field.
Article
Full-text available
Purpose: The goal of the paper is to identify the comprehensive trends, practical implications and risks of artificial intelligence (AI) technology in the economy and society, exploring the expectations of Hungarian powerful actors in a global arena.
Design/methodology/approach: The sociology of expectations framed the theoretical considerations. The explorative research design presents an anonymous qualitative online survey. Respondents represent the Hungarian AI Coalition, covering a quarter of its members.
Findings: The key finding is a controversial result. Although AI is interpreted as a decision-supportive and problem-solving technology for the economy, uncertainties and fears for society are clearly formulated. Interpreting the results and the originality of the paper, trust building and responsibility sharing in cross-industrial collaborations are fundamental to reduce social uncertainties, override the popular or science-fiction narratives and increase future well-being.
Research limitations/implications: The length of textual responses did not allow a deeper analysis. However, for professional reasons, participants were committed to completing the survey.
Practical implications: The paper suggests that business and policymaking identify AI technology as a tool, distinguishing it from tech-owners' responsibilities. The implications of the study therefore support reliable AI as well as the potential for cross-industrial collaborations.
Originality/value: The paper highlights the uncertainties of business investment and policymaking to encourage a comparative research project in the EU for trustworthy AI. Similar exploratory studies with the same focus, sample and outcome are not yet available.
Article
Full-text available
Artificial intelligence (AI) has been rapidly reconstructing consumer experiences with brands in recent years. However, there have been unsettled debates on whether humans react to robots (e.g., chatbots) in the same way as they do to other humans, and on how the intrinsic strengths of AI (i.e., autonomous processing and synthesis of information) and of humans (i.e., emotional intelligence) factor into human-AI interactions in brand communication settings. Hence, this study investigates the conditions under which a service entity of a brand can optimize its potential. To this end, the current study conceptualizes and operationalizes two dimensions that define chatbots' capabilities: message contingency (i.e., contingency-based interactivity) and emotional intelligence (i.e., sympathy). Based on two experiments, we found that, regarding the same online customer service of an apparel brand, participants rated the human employee as more competent and warmer than a chatbot. When a human employee expressed sympathy to the afflicted customer, participants considered the employee to be more competent when he/she also exhibited contingency (vs. no contingency) during the conversation, which in turn elicited higher patronage intentions among participants.
Article
The advent of Internet of Things (IoT) technology has revolutionized both the roles and functions of everyday objects and how users interact with them. Using artificial intelligence (AI) and an advanced capacity for communication, smart objects can now function as communication sources and deliver persuasive messages. This study investigates how different types of agency and source cues shape the persuasiveness of a smart object via social presence. When users interacted with a smart object that exerted its own agency, they sensed greater social presence when the object used machine cues rather than human cues. Conversely, when users interacted with a smart object that allowed the user to exercise their own agency, human cues, rather than machine cues, produced greater feelings of social presence, which enhanced the persuasiveness of the messages conveyed by the object. However, the persuasive effects of social presence were reversed when the interaction prompted AI anxiety in the user.
Article
Full-text available
There is a growing need to understand how consumers will interact with artificially intelligent (AI) domestic service robots, which are currently entering consumer homes at increasing rates, yet without a theoretical understanding of the consumer preferences influencing interaction roles such robots may play within the home. Guided by anthropomorphism theory, this study explores how different levels of robot humanness and social interaction opportunities affect consumers' liking for service robots. A review of the extant literature is conducted, yielding three hypotheses that are tested via 953 responses to an online scenario‐based experiment. Findings indicate that while consumers prefer higher levels of humanness and moderate‐to‐high levels of social interaction opportunity, only some participants liked robots more when dialogue (high‐interaction opportunity) was offered. Resulting from this study is the proposed Humanized‐AI Social Interactivity Framework. The framework extends previous studies in marketing and consumer behavior literature by offering an increased understanding of how households will choose to interact with service robots in domestic environments based on humanness and social interaction. Guidelines for practitioners and two overarching themes for future research emerge from this study. This paper contributes to an increased understanding of potential interactions with service robots in domestic environments.
Article
Full-text available
Artificial intelligence (AI) has captured substantial interest from a wide array of marketing scholars in recent years. Our research contributes to this emerging domain by examining AI technologies in marketing via a global lens. Specifically, our lens focuses on three levels of analysis: country, company, and consumer. Our country-level analysis emphasizes the heterogeneity in economic inequality across countries due to the considerable economic resources necessary for AI adoption. Our company-level analysis focuses on glocalization because while the hardware that underlies these technologies may be global in nature, their application necessitates adaptation to local cultures. Our consumer-level analysis examines consumer ethics and privacy concerns, as AI technologies often collect, store and process a cornucopia of personal data across our globe. Through the prism of these three lenses, we focus on two important dimensions of AI technologies in marketing: (1) human-machine interaction and (2) automated analysis of text, audio, images, and video. We then explore the interaction between these two key dimensions of AI across our three-part global lens to develop a set of research questions for future marketing scholarship in this increasingly important domain.
Chapter
Full-text available
The latest shift in industry, known as Industry 4.0, has introduced new challenges in manufacturing. The main characteristic of this transformation is the effect of digital technologies on the way production processes occur. Owing to this technological growth, knowledge and skills related to manufacturing operations are becoming obsolete; hence, the need to upskill and reskill individuals becomes urgent. In collaboration with other key entities, educational institutions are responsible for raising the awareness and interest of young students in order to reach a qualified and equal workforce. Drawing on a thorough literature review focused on key empirical studies on learning factories and fundamental Industry 4.0 concepts, trends, teaching approaches, and required skills, the goal of this paper is to provide a gateway to understanding effective learning factory approaches and a holistic understanding of the role of advanced and collaborative learning practices in so-called Education 4.0.
Article
Full-text available
We introduce and investigate the philosophical concept of ‘speciesism’ —the assignment of different moral worth based on species membership —as a psychological construct. In five studies, using both general population samples online and student samples, we show that speciesism is a measurable, stable construct with high interpersonal differences, that goes along with a cluster of other forms of prejudice, and is able to predict real-world decision-making and behavior. In Study 1 we present the development and empirical validation of a theoretically driven Speciesism Scale, which captures individual differences in speciesist attitudes. In Study 2, we show high test-retest reliability of the scale over a period of four weeks, suggesting that speciesism is stable over time. In Study 3, we present positive correlations between speciesism and prejudicial attitudes such as racism, sexism, homophobia, along with ideological constructs associated with prejudice such as social dominance orientation, system justification, and right-wing authoritarianism. These results suggest that similar mechanisms might underlie both speciesism and other well-researched forms of prejudice. Finally, in Studies 4 and 5, we demonstrate that speciesism is able to predict prosociality towards animals (both in the context of charitable donations and time investment) and behavioral food choices above and beyond existing related constructs. Importantly, our studies show that people morally value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for. We conclude by discussing the implications of a psychological study of speciesism for the psychology of human-animal relationships.
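As a concrete illustration of two of the psychometric checks mentioned above, the sketch below computes test-retest reliability and a correlation with a related prejudice measure on simulated scale scores; it is not the authors' data or code, and all numbers are invented.

```python
# Minimal sketch of two validation checks of the kind described above,
# run on simulated data (not the authors' data or code):
# (1) test-retest reliability as the correlation of scores four weeks apart,
# (2) correlation of speciesism scores with a related prejudice measure.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 200

speciesism_t1 = rng.normal(4.0, 1.0, n)               # time 1 scale scores
speciesism_t2 = speciesism_t1 + rng.normal(0, 0.4, n)  # time 2, mostly stable
sdo = 0.5 * speciesism_t1 + rng.normal(0, 1.0, n)      # e.g., social dominance orientation

r_retest, p_retest = pearsonr(speciesism_t1, speciesism_t2)
r_sdo, p_sdo = pearsonr(speciesism_t1, sdo)

print(f"test-retest r = {r_retest:.2f} (p = {p_retest:.3g})")
print(f"speciesism-SDO r = {r_sdo:.2f} (p = {p_sdo:.3g})")
```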
Article
Full-text available
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Article
Full-text available
The concept of dehumanization lacks a systematic theoretical basis, and research that addresses it has yet to be integrated. Manifestations and theories of dehumanization are reviewed, and a new model is developed. Two forms of dehumanization are proposed, involving the denial to others of 2 distinct senses of humanness: characteristics that are uniquely human and those that constitute human nature. Denying uniquely human attributes to others represents them as animal-like, and denying human nature to others represents them as objects or automata. Cognitive underpinnings of the "animalistic" and "mechanistic" forms of dehumanization are proposed. An expanded sense of dehumanization emerges, in which the phenomenon is not unitary, is not restricted to the intergroup context, and does not occur only under conditions of conflict or extreme negative evaluation. Instead, dehumanization becomes an everyday social phenomenon, rooted in ordinary social-cognitive processes.
Chapter
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.
Reactions to medical artificial intelligence
  • C Longoni
  • A Bonezzi
  • C K Morewedge
Robot or human? Perceptions of human-like robots
  • N Castelo
  • B Schmitt
  • M Sarvary