Australasian Conference on Information Systems Chen, Ciriello, Rubinsztein, and Vaast
2024, Canberra Artificial Emotional Intelligence: A Scoping Review
The Past, Present, and Futures of Artificial Emotional
Intelligence: A Scoping Review
Full research paper
Angelina Ying Chen
Business Information Systems
University of Sydney
Sydney, Australia
angelina.chen@sydney.edu.au
Raffaele F Ciriello
Business Information Systems
University of Sydney
Sydney, Australia
raffaele.ciriello@sydney.edu.au
Zara Rubinsztein
Business Information Systems
University of Sydney
Sydney, Australia
zrub7658@uni.sydney.edu.au
Emmanuelle Vaast
Desautels Faculty of Management
McGill University
Montréal, Canada
emmanuelle.vaast@mcgill.ca
Abstract
Artificial emotional intelligence (AEI) systems, which sense, interpret, and respond to human emotions,
are increasingly utilised across various sectors, enhancing interpersonal interactions while raising
ethical concerns. This scoping review examines the evolving field of AEI, covering its historical
development, current applications, and emerging research opportunities. Our analysis draws from 96
articles spanning multiple disciplines, revealing significant progress from initial scepticism to growing
acceptance of AEI as an interdisciplinary study. We highlight AEI’s applications in healthcare, where it
improves patient care; in marketing, where it enhances customer interactions; and in the love and sex
industries, where it facilitates new forms of romantic and erotic engagement. Each sector demonstrates
AEI’s potential to transform practices and provoke ethical debates. The review provides a framework to
understand AEI’s sociotechnical implications and identifies future research opportunities regarding
trustworthy, privacy-preserving, context-aware, and culturally adaptive AEI systems, with a focus on
their profound impact on human relationships.
Keywords: artificial emotional intelligence, affective computing, AI, scoping review, history, futures,
applications
1 Introduction
Irish writer John Connolly once said: “The nature of humanity, its essence, is to feel another’s pain as
one’s own, and to act to take that pain away. There is a nobility in compassion, a beauty in empathy, a
grace in forgiveness.” This resonates with Meryl Streep’s reflection: “The great gift of human beings is
that we have the power of empathy, we can all sense a mysterious connection to each other.” For most of human history, emotional intelligence, the capacity to comprehend and regulate one’s own emotions while influencing those of others through empathetic communication (Goleman 2012; Liu-Thompkins et al. 2022; Picard 1995), was believed to be quintessentially human: a unique aptitude that differentiates us from other animals and machines.
This belief is now being challenged by artificial emotional intelligence (AEI), a new class of generative
AI systems capable of sensing, interpreting, and responding to human emotions in ways resembling
human emotional intelligence (Krakovsky 2018; Podoletz 2022; Somers 2019). Here, sensing involves
acquiring data from sources like text, speech, facial expressions, physiological signals, and body
language to infer a user’s emotional state (Picard, 2000). Interpreting refers to processing this data
using machine learning algorithms and contextual models to understand emotions (Calvo & D'Mello
2010). Responding signifies the system’s ability to adapt its behavior based on interpreted emotions,
such as adjusting outputs or interaction styles for an emotionally responsive experience (Leite et al.
2013). We use "sensing, interpreting, and responding" in line with technical literature, not to imply
machines experience emotions like humans do.
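For illustration, the sense-interpret-respond loop described above can be sketched as a toy text-only pipeline. The emotion lexicon and response templates below are invented for this example; real AEI systems replace them with multimodal sensing and learned models.

```python
# Illustrative sketch of the sense-interpret-respond loop.
# Lexicon and responses are invented; real systems use learned models.

EMOTION_CUES = {
    "sad": "sadness", "lonely": "sadness", "angry": "anger",
    "furious": "anger", "happy": "joy", "thrilled": "joy",
}

def sense(utterance: str) -> list[str]:
    """Acquire a raw signal: here, just lowercase word tokens from text."""
    return utterance.lower().split()

def interpret(tokens: list[str]) -> str:
    """Map sensed cues to an inferred emotional state (default: neutral)."""
    for token in tokens:
        if token in EMOTION_CUES:
            return EMOTION_CUES[token]
    return "neutral"

def respond(emotion: str) -> str:
    """Adapt the system's output style to the interpreted emotion."""
    styles = {
        "sadness": "I'm sorry you're feeling down. Would you like to talk?",
        "anger": "I hear your frustration. Let's work through this together.",
        "joy": "That's wonderful to hear!",
        "neutral": "Tell me more.",
    }
    return styles[emotion]

print(respond(interpret(sense("I feel so lonely today"))))
```

The point of the sketch is the separation of concerns: each stage can be swapped for a richer implementation (e.g., physiological sensing, a neural classifier) without changing the loop.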
AEI systems exhibiting human-like emotional behaviors are making headlines—from Google’s LaMDA
AI’s alleged sentience (The Guardian 2022) to Microsoft Bing AI’s surprising emotional responses
(Leswing 2023) and users forming strong bonds with Replika’s chatbot (Brooks 2023; Ciriello et al.
2024). This technology arises amid a global ‘loneliness pandemic’ impacting individual and societal well-
being (Ernst et al. 2022; Montemayor et al. 2022). AEI offers potential remedies for loneliness through
human-like conversations and realistic reactions, fostering connection (Belk, 2022). However,
attributing consciousness and empathy to these agents raises ethical challenges, possibly leading to
confusion about their moral status and over-reliance on technology in roles requiring human empathy
(Chen et al. 2023; Montemayor et al. 2022). Despite advancements, a comprehensive synthesis of
interdisciplinary knowledge on AEI’s technological foundations, applications, and trajectories is lacking.
To address this gap, we conduct a scoping review to map AEI’s research landscape, outlining its
technological developments, applications, and future prospects. Our inquiry is guided by three research
questions: (1) What is the state of AEI technology, including its evolution and current capabilities? (2)
How is AEI applied in practice across domains like healthcare, marketing, education, policing, and
sex? (3) What are AEI’s potential future trajectories and research opportunities, especially concerning
societal implications and ethical dilemmas? We employ a scoping literature review (Paré et al. 2016),
grounded in the hermeneutic tradition (Boell & Cecez-Kecmanovic 2014), and enhanced by the rigor of
the PRISMA (2020) guidelines. This composite approach allows for a nuanced exploration of AEI,
capturing both depth and breadth in this interdisciplinary field.
Our contribution is to conceptualise AEI, map the current knowledge landscape, and provide an agenda
for future research from a sociotechnical perspective. By synthesising diverse literature and analysing
96 articles, we offer a framework for understanding AEI’s technological foundations, applications, and
future directions. This paper serves as a foundational text for scholars, practitioners, and policymakers
navigating AEI’s evolving landscape. Key insights include the need for privacy-preserving and
trustworthy AEI to protect emotional data, especially in sensitive contexts. Developing context-aware
and culturally adaptive AEI systems is essential to minimize biases and ensure effective interaction
across diverse environments. The phenomena of human-AI companionship and digisexuality highlight
the necessity of balancing emotional labour automation with ethical considerations to foster trust and
empathy (Ciriello et al. 2024; Rubinsztein & Ciriello 2024).
The paper proceeds with a description of our review methodology, followed by summaries of AEI’s
historical evolution, current applications, and future research opportunities. We conclude with key
takeaways and reflections on limitations.
2 Method
In alignment with Paré et al.’s (2016) typology, our methodology uses a scoping review to develop a comprehensive overview of the extant literature on AEI. The primary aim of this approach is not to build theory but to sketch the bigger picture in a research context, connecting diverse research streams, stimulating new ideas, and guiding future research directions by identifying knowledge gaps and
conceptual frameworks, and pinpointing emerging trends. Unlike reviews that prioritise depth and theory building, scoping reviews emphasise breadth and are characterised by broad research questions that target research trends and generate novel IS research questions.
2.1 Study Selection and Eligibility Criteria
We considered an inclusive range of 244 renowned journals in business and related disciplines, based
on various journal rankings, such as the AIS Senior Scholars’ Basket of Eight, top-ranked journals in the
Australian Business Deans Council list, the Danish BFI list, and the German VHB list, as well as journals
that are likely to publish on AEI. We searched on Scopus using the query: (affective OR emotional OR
sentiment OR empathy) AND (analytics OR computing OR intelligence OR “AI” OR artificial).
The inclusion criteria were that an article had to focus specifically on AEI from a technical, business, or
ethical perspective. Studies that only examined emotional intelligence without AI or AI without
emotions were excluded. Only studies available in English were selected. No restrictions were imposed
on publication dates, as our goal was a comprehensive review of AEI’s history and development. We screened the 986 identified articles by reading titles, abstracts, and keywords, arriving at an initial set of 107 articles for detailed consideration (Appendix A).
To increase inter-rater reliability and reduce bias in the selection process, two authors independently screened all papers, reading the abstract, keywords, and introduction before turning to the conclusion. A paper was excluded if it was deemed out of scope or a duplicate. In the event of a
disagreement regarding the inclusion of a paper, the two authors resolved the dispute by presenting
their arguments and basing a decision on the agreed research scope. During the reading and discovery
phase, additional studies were added through orientational reading or snowballing, considering the
references of the included works (Boell & Cecez-Kecmanovic 2014). As a result, a total of 96 articles were
included in the review, of which 89.22% are peer-reviewed. The remaining non-peer-reviewed articles
were published in high-quality practitioner journals such as IEEE Spectrum, MIT Sloan Management
Review, or Harvard Business Review.
We followed the PRISMA (2020) guidelines to select and analyse the considered publications. While
hermeneutic and systematic approaches are widely regarded as distinct genres of literature review, they
may nonetheless overlap, because striving for systematicity (as per the PRISMA approach) does not
necessarily contradict the more iterative and reflective approach typical for hermeneutic reviews (Paré
et al. 2016). As the hermeneutic review process can be subjective, adopting a semi-structured approach
that blends hermeneutic techniques with elements of the PRISMA method increases structure and
transparency while retaining the ability to reflect critically on the literature.
2.2 Categorization and Analysis of Literature
We analysed the sampled articles by using a concept-centric literature analysis approach (Webster &
Watson 2002) to organise findings into three categories that reflect the chronological development of
AEI: 1) History of AEI Technological Development, 2) AEI Business Applications, and 3) AEI Future
Trajectories, corresponding to our title’s Past, Present, and Futures of AEI. Appendix B presents a trendline of AEI research from 1995 to the present, revealing a chronological evolution: initial technological exploration, followed by increased interest in business applications from 2003, and a more recent focus on future trends and ethical considerations from 2019.
To present the history of AEI’s technical development, we subcategorised articles on AEI technology
according to the particular technology in question, including natural language processing, facial
expression, physiological body sensors, and multimodal affective computing. Within the AEI Business
Applications category, studies were categorised according to industry sectors. The resulting distribution
of analysed papers is illustrated in Appendix C, highlighting healthcare and marketing as the most
researched industries for AEI by the number of publications. The ‘Futures’ category focuses on
highlighting key IS research topics based on gaps and issues we found during the literature review
process, as reflected in the Discussion section. An Excel codebook was used to track and organise the selected literature, supporting data organisation and analysis (Appendix D).
3 The Past: A Brief History of Affective Computing
This section responds to RQ1: What is the state of AEI technology, encompassing its historical evolution
and current capabilities? Since its introduction by Picard in 1995, AEI has grown from a concept met
with scepticism to a respected field integrating multimodal technologies for enhanced emotion
recognition across various sectors. In light of ongoing challenges in bias, data collection, and
standardization, future research opportunities lie in developing culturally adaptive, privacy-preserving
AEI applications. A pivotal moment in the history of AEI was when Picard (1995) coined affective
computing, defined as a computer’s capability to recognise a user’s emotional states, express its own
emotions, and respond to the user’s emotions. This concept aimed to embed emotions into computer
systems to improve their assistance to humans and decision-making capabilities. Picard drew
inspiration from Clynes’s "sentograph," a device intended to objectively measure emotions, highlighting
the early interest in utilising physiological sensors like the electroencephalograph (EEG) introduced by
Hans Berger in 1924 (Picard 2010).
Initially, the integration of emotions into computing faced scepticism and ridicule, being viewed as an
oxymoron by critics who doubted that computing could be affective (Hollnagel 2003; Picard 2010).
Despite initial reluctance from traditional peer-reviewed journals to publish on the topic (Picard 2010),
the growing accessibility of textual data since the mid-2000s, driven by the spread of smartphones, has catalysed research in natural language processing (Shreetim 2024). This period of affective computing
research focused on analysing facial expressions, body gestures, and sentiment analysis, alongside
recognising affective states through physiological signals like skin conductance and heart rate variability
(Batrinca & Treleaven 2015; Pantic & Patras 2006; Zhou et al. 2011). Picard (2010) shares her
remembrances of how affective computing went “from laughter to IEEE” in the inaugural issue of the
IEEE Transactions on Affective Computing, illustrating its growing acceptance.
Post-2010, building on accumulated single-modal affect recognition research, the focus has shifted to multimodal approaches, combining data sources such as facial expressions and vocal cues,
powered by advancements in deep learning and neural networks to enhance emotion recognition
accuracy (Bhatti et al. 2016). This section dissects these modalities in contemporary AEI systems.
3.1 Natural Language Processing for Affective Computing
Natural language processing (NLP) is a subfield of machine learning that focuses on the interpretation
of human language by computers. It encompasses the collection, processing, and analytical examination
of textual data, enabling machines to comprehend, interpret, and generate human language (Kratzwald
et al. 2018). In the AEI realm, NLP has been predominantly applied for sentiment analysis, facilitating
the detection of emotions from spoken language and determining emotive states through textual
analysis (Lee et al. 2002; Salehan & Kim 2016; Schuller 2018). Sentiment analysis involves the
computational study of opinions, sentiments, emotions, and attitudes expressed in texts towards a
particular entity. It plays a vital role in domains such as social media advertising, political sentiment
evaluation, and consumer feedback interpretation (Ravi & Ravi 2015). Initiatives in the early 2000s
propelled the exploration of computational techniques for analysing sentiments within texts, employing
lexicon-based and rule-based systems to pinpoint words and phrases that express emotions or
sentiments (Batrinca & Treleaven 2015; Hercig et al. 2016; Ravi & Ravi 2015; Yadollahi et al. 2017).
Contemporary sentiment analysis techniques harness neural networks, integrating sentiment lexicons
and psychological theories to enhance sentiment prediction accuracy (Xiang et al. 2021).
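As a concrete illustration of the lexicon-based and rule-based systems described above, the following toy scorer sums word polarities and flips the sign after a negator. The lexicon and negation rule are invented for this sketch; production systems use far larger lexicons or learned models.

```python
# Minimal lexicon-based sentiment scorer in the spirit of early-2000s
# rule-based systems. Lexicon and negator list are toy examples.

POLARITY = {"great": 1, "love": 1, "excellent": 1,
            "bad": -1, "hate": -1, "terrible": -1}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text: str) -> int:
    """Sum word polarities; a preceding negator flips the sign (rule-based)."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    score, negate = 0, False
    for token in tokens:
        if token in NEGATORS:
            negate = True
            continue
        if token in POLARITY:
            score += -POLARITY[token] if negate else POLARITY[token]
        negate = False
    return score

print(sentiment_score("I love this, it is not bad at all"))  # positive score
```

The negation rule illustrates why such systems were brittle: a single intervening word ("not very bad") already defeats the flip, motivating the neural approaches mentioned above.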
Data collection for NLP-based sentiment analysis involves amassing extensive text corpora from diverse
sources, including literature, academic articles, social media platforms, websites, and audio media (Dai
et al. 2015; Schuller 2018). This data undergoes processing through methods like tokenization,
stemming, and part-of-speech tagging (Shanahan et al. 2006), facilitating a range of applications besides
sentiment analysis, such as text classification, machine translation, and question-answering. These
processes allow for the identification of patterns and relationships within the data, laying the
groundwork for predictive modelling and the generation of responses that closely mimic human
conversation (Schuller 2018). Deep learning approaches, such as convolutional neural networks, find
applications in healthcare, analysing vast datasets to infer emotional states (Lavanya & Sasikala 2021).
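The processing steps named above can be illustrated with a deliberately simplified sketch. The suffix rules and the tag heuristic below are toy stand-ins for the Porter stemmer and the statistical taggers used in practice.

```python
import re

# Toy preprocessing pipeline: tokenization, stemming, part-of-speech tagging.
# Suffix rules and tag heuristics are drastically simplified stand-ins.

def tokenize(text: str) -> list[str]:
    """Split lowercased text into alphabetic word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def stem(token: str) -> str:
    """Crude suffix stripping (a real system would use Porter stemming)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def pos_tag(token: str) -> str:
    """Heuristic tagger: verbs by suffix, everything else tagged NOUN."""
    if token.endswith(("ing", "ed")):
        return "VERB"
    return "NOUN"

tokens = tokenize("The patients reported feeling sad")
print([(stem(t), pos_tag(t)) for t in tokens])
```

Even this toy version shows how the stages compose: tokens feed the stemmer, and stems plus tags become features for downstream sentiment or classification models.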
The inherent ambiguity and complexity of human language present significant challenges for NLP, as
contextual nuances are essential for accurately interpreting meanings (Shanahan et al. 2006). For
instance, processing ‘noisy’ and unstructured text from social media presents obstacles, given the
prevalence of abbreviations, misspellings, colloquialisms, and emoticons, which can undermine model
precision (Batrinca & Treleaven 2015; Ravi & Ravi 2015). Moreover, NLP models often struggle with the
subtleties of language, including sarcasm and cultural references (Schuller 2018). Although the rapid
advancement of large language models like ChatGPT demonstrates potential for future applications to
detect and respond to human emotions with greater accuracy, the fundamental ambiguity of language
and its propensity for multiple interpretations may ultimately limit the ability of AEI to mimic human
empathy.
3.2 Facial Expression Recognition for Affective Computing
Facial expression recognition analyses facial muscle movements to identify expressions, which is vital
for interpreting emotions and social cues (Pantic & Patras 2006). This technology helps behavioral
scientists and psychologists better understand emotional responses, helps medical professionals
monitor patient distress, enhances security systems by detecting suspicious behaviors, improves
deception detection through analysis of involuntary micro-expressions, and advances computer science
by refining algorithms for more intuitive human-computer interactions (D'Mello & Kory 2015; Dakalbab
et al. 2022; Jardine et al. 2022; Podoletz 2022). It involves the collection of data through tracking facial
points across sequences of face images, measuring changes in expressions. The adoption of facial
recognition technology in affective computing began in 2006, focusing on emotion recognition through
frontal-view facial images (Pantic & Patras, 2006). With the advent of deep neural networks, there has
been a significant enhancement in affect recognition, as these networks facilitate the learning of
expressive features and capture various levels of abstraction, thereby improving the accuracy of emotion
recognition (Rouast et al. 2019).
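To make the tracking step concrete, the following sketch measures how hypothetical facial points move between two frames. The coordinates and the smile heuristic are invented; real systems track dozens of landmarks and classify expressions with deep neural networks.

```python
import math

# Illustrative sketch of tracking facial points across frames and measuring
# expression change. Coordinates and the smile rule are invented examples.

def displacement(frame_a: dict, frame_b: dict) -> dict:
    """Euclidean displacement of each tracked facial point across frames."""
    return {name: math.dist(frame_a[name], frame_b[name]) for name in frame_a}

# (x, y) pixel positions of two mouth-corner landmarks in successive frames
neutral = {"mouth_left": (40, 80), "mouth_right": (60, 80)}
smiling = {"mouth_left": (38, 76), "mouth_right": (62, 76)}

moves = displacement(neutral, smiling)
# Toy rule: both mouth corners moving noticeably suggests an expression change
is_smile = all(d > 2 for d in moves.values())
print(moves, is_smile)
```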
A notable challenge in this domain is the cultural variability of facial expressions of emotion. Jack et al.
(2012) reveal pronounced differences between cultures, with Eastern cultures displaying more reserved
expressions and Western cultures being more overt. This cultural disparity presents a considerable
obstacle for AEI systems, as cultural insensitivity may lead to trust issues, decreased accuracy, and
reduced effectiveness (Jack et al. 2012).
3.3 Physiological Body Sensors for Affective Computing
Body sensors can collect and analyse physiological signals to predict users’ affective states, employing
measures like skin conductance response (SCR), facial electromyography (EMG), and
electroencephalography (EEG) to distinguish basic emotions and estimate affective states in real-time
(Picard et al. 2001; Zhou et al. 2011). SCR, indicative of emotional arousal through sweat gland activity,
finds application in clinical healthcare, and physiological monitoring (Kim et al. 2014). EEG, capturing
brain activity related to the limbic system, plays a central role in human-computer interaction, medical
diagnostics, and military applications, demonstrating effectiveness in emotion recognition, particularly
through responses to stimuli such as music (Bhatti et al. 2016; Liu et al. 2021). Recent advancements
include the integration of motion capture and body sensing technology, utilising non-invasive wearables
for emotion recognition from physiological signals (Lisetti & Nasoz 2004; Noroozi et al. 2018). Such
technologies, alongside the development of wearable systems, facilitate the automatic recognition of
expressive movements. Researchers have explored various modelling approaches for the human body
and emotions, from part-based to kinematic models, enabling real-time emotion identification. For
instance, studies show correlations between head movements and emotions (Purabi et al. 2019).
Challenges include the reliable labelling of affective states amidst cultural and gender differences in the
interpretation of body language and the discomfort associated with EEG electrode placement (Noroozi
et al. 2018; Zhou et al. 2011). The accuracy of EEG-based emotion recognition decreases as the number
of emotions classified increases (Chanel et al. 2009; Suhaimi et al. 2020). Moreover, there is a noted
scarcity of studies on virtual reality-based emotion classification, which could potentially evoke stronger
emotional responses compared to traditional stimuli (Suhaimi et al. 2020). There is a vital need for
accurate, multifunctional sensors that can simultaneously capture diverse physiological signals.
Overcoming these obstacles necessitates richer representations of affective body language and the
availability of high-quality labelled and unlabelled data (Asiain et al. 2022).
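As an illustration of the feature-extraction step underlying such sensors, the sketch below derives two common features, mean skin conductance and heart rate variability, and applies a toy arousal rule. The sample values and the threshold are invented for this example.

```python
import statistics

# Sketch of feature extraction for the physiological signals named above.
# Sample values and thresholds are invented; real systems learn classifiers
# over many such features.

def hrv_sdnn(rr_intervals_ms: list[float]) -> float:
    """Heart rate variability as the standard deviation of RR intervals."""
    return statistics.stdev(rr_intervals_ms)

def mean_scr(scr_microsiemens: list[float]) -> float:
    """Average skin conductance response amplitude (sweat-gland activity)."""
    return statistics.fmean(scr_microsiemens)

def arousal_label(scr: list[float], rr: list[float]) -> str:
    """Toy rule: high sweat activity plus low HRV suggests high arousal."""
    if mean_scr(scr) > 5.0 and hrv_sdnn(rr) < 50.0:
        return "high arousal"
    return "low arousal"

print(arousal_label(scr=[6.1, 6.4, 5.9], rr=[810, 820, 815, 825]))
```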
3.4 From Multimodal Affective Computing to Sociotechnical AEI Systems
Multimodal affective computing integrates the various aforementioned modalities to recognise human
emotions through multiple modalities, including facial expressions, physiology, and sentiment analysis
(Broekens & Brinkman 2013; Calvo & D'Mello 2010; Picard 2008). This approach collects data from
diverse sources, such as cameras for facial expressions, physiological sensors for emotional changes, and
dialogue interactions for linguistic expressions of emotion. Machine learning techniques train models
to identify patterns and predict emotions accurately from this multimodal data (Chen et al. 2020;
D'Mello & Kory 2015; Hudlicka 2003). Research has demonstrated that fusing modalities, like facial
expressions with voice, enhances emotion recognition accuracy (D'Mello & Kory 2015; Kukolja et al.
2014). Platforms like Multisense leverage 3D head positioning, facial tracking, gaze analysis, and audio
analysis to gather comprehensive emotional data (McDuff et al. 2019). Moreover, advancements in deep
learning and neural networks have led to more sophisticated models for affect recognition, improving
the dynamism, robustness, and accuracy of emotion prediction (McDuff et al. 2019; Yang et al. 2022).
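The fusion step described above can be sketched as a weighted late-fusion rule, in which each modality outputs an emotion probability distribution and a weighted average combines them. The weights and distributions below are invented for illustration.

```python
# Sketch of late fusion across modalities: each modality produces an emotion
# probability distribution, combined by a weighted average. All values are
# invented examples.

EMOTIONS = ("joy", "anger", "sadness")

def fuse(predictions: dict[str, dict[str, float]],
         weights: dict[str, float]) -> dict[str, float]:
    """Weighted average of per-modality emotion probabilities."""
    total = sum(weights.values())
    return {
        emotion: sum(weights[m] * predictions[m][emotion]
                     for m in predictions) / total
        for emotion in EMOTIONS
    }

predictions = {
    "face":  {"joy": 0.7, "anger": 0.1, "sadness": 0.2},
    "voice": {"joy": 0.5, "anger": 0.3, "sadness": 0.2},
    "text":  {"joy": 0.6, "anger": 0.2, "sadness": 0.2},
}
weights = {"face": 0.5, "voice": 0.3, "text": 0.2}  # e.g. trust face most

fused = fuse(predictions, weights)
print(max(fused, key=fused.get))  # → joy
```

Late fusion keeps each modality's model independent, which is one reason the fusion literature cited above finds it easier to extend than early (feature-level) fusion, at the cost of ignoring cross-modal correlations.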
Constructing effective multimodal systems poses challenges due to the complexity of integrating
computer vision, audio processing, natural language processing, and psychology (McDuff et al. 2019).
While progress has been made in non-intrusive data collection methods, further research is needed to
enhance the collection of multimodal affective data and the comparability and replicability of studies
and reporting methods (Broekens & Brinkman, 2013; Chen et al., 2020; Hudlicka, 2003). AEI extends
beyond the technical realm of affective computing to encompass a sociotechnical system that weaves
together technology, human users, societal norms, and ethical considerations. While affective
computing is centred on the multimodal technology, AEI broadens this scope to address the interplay
between technology and human elements across diverse applications (Schuller & Schuller 2018). Thus,
we frame AEI as a sociotechnical system (Chen et al. 2023).
4 The Present: Mapping the Landscape of AEI Applications
This section explores AEI’s applications, responding to RQ2: How is AEI applied in practice? Steady
advancements in AI are rapidly improving the ability of systems to sense, interpret, and react to a wide
range of cues, including tone, pitch, facial expressions, eye contact, and body language. These
enhancements pave the way for AEI’s integration into professional roles that rely heavily on
communication skills. Notable applications include predicting real-time anxiety during public speaking,
enabling product managers to deeply understand customer sentiments for better product development,
assisting therapists in recognising subtle emotional cues for targeted interventions, helping educators
tailor teaching approaches by monitoring student engagement, improving language learning with
instant feedback on emotional expression, and supporting medical professionals in assessing patient
emotions for more personalised care (Büschken & Allenby 2016; Kimani & Bickmore 2019; Le Glaz et
al. 2021; Schiff 2021). With the market for emotion detection and conversational AI technologies projected to exceed $55 billion by 2026, the growing demand underscores AEI’s expanding impact in
business (Limon & Plaster 2022). The subsections are ordered by the number of papers per domain.
4.1 AEI in Healthcare
In healthcare, particularly since the COVID-19 pandemic strained healthcare systems worldwide, AEI applications have emerged as a transformative force, promising to augment patient care with personalised, empathetic interactions. At the forefront, AEI-enabled virtual assistants and chatbots enhance patient
care by managing medications and providing emotional support, while its application in clinical
psychology marks a leap towards automating mental health diagnostics (Le Glaz et al. 2021).
Mental health diagnostics. AEI’s major role in mental health is in automating psychological disorder
diagnostics and leveraging multimodal methods—text, voice, and visual cues—as indicators of disorder
severity (X. Zhou et al. 2020). In psychiatric practice and research, AEI offers insights into an
individual’s mental state, speech patterns, and lifestyle factors pertinent for mental health assessment,
augmenting diagnostics with increased efficiency and accuracy (Pampouchidou et al. 2019).
Specifically, AEI can process data about an individual’s use of narrative, subjective, and structured
speech styles, as well as their lifestyle, inferring their educational level, socioeconomic status, living
conditions, and cultural background, all of which are routinely assessed in mental health diagnostics (Le
Glaz et al. 2021). Innovations like voice-based diagnostics and facial expression analysis offer promising
pathways for depression scoring and early detection of psychological disorders such as schizophrenia
(Schuller & Pei 2016; X. Zhou et al. 2020). Despite their potential, these approaches face obstacles such
as expression variability and data scarcity (Walsh et al. 2017; X. Zhou et al. 2020).
Clinician services. In the realm of clinician services, AEI leverages multimodal methods to enrich
healthcare provider-patient interactions. Tools like SimSensei, a virtual human interviewer, exemplify
the shift towards technology-assisted diagnostics, aiming to complement, not replace, clinician expertise
with objective, cost-effective evaluations (Schuller & Pei 2016). Yet, the inherent limitations of AEI in
mirroring human empathy underscore the technology’s role as a supportive tool rather than a substitute
for clinician services (Jardine et al. 2022; Montemayor et al. 2022). Montemayor et al. (2022) argue that
truly empathic AI is impossible to achieve because AI lacks an organic body and thus remains restricted
to cognitive empathy, resembling the kind of empathy that psychopaths are also capable of, without the capacity to genuinely experience bodily emotions.
Challenges and future research. As AEI continues to carve its niche in healthcare, the journey is
fraught with challenges—understanding mental health predictors in machine learning, combating biases
in clinical records, and upholding ethical standards (X. Zhou et al. 2020). Future research opportunities
encompass enhancing model interpretability, exploring Explainable AI (XAI) to overcome acceptance
barriers, and fusing AEI with human expertise for a hybrid, accountable approach to healthcare
technology (Herm et al. 2022). The integration of AEI in healthcare stands not just as an opportunity
for technological progress but as a crossroads between compassionate and dispassionate patient care.
Amidst the technological achievements and ethical challenges, AEI’s path in healthcare necessitates a
collaborative future where technology and human empathy converge to redefine care, ensuring
equitable access to quality care (Le Glaz et al. 2021; Montemayor et al. 2022; Walsh et al. 2017).
4.2 AEI in Marketing & Sales
AEI applications are making strides in marketing, striving to enrich customer experiences and refine
marketing strategies (Liu-Thompkins et al. 2022). Central to AEI’s application in marketing are sales
enhancement, customer sentiment analysis, content marketing optimization, and the deployment of
empathetic chatbots for customer service. Despite its advancements, improving AEI’s grasp of customer
emotions and its ability to mimic social presence remains a regulatory challenge, given the need for large amounts of emotional data that could compromise privacy (Liu-Thompkins et al. 2022). Future endeavours in
AEI for marketing aim to enhance personalization in human-AI interaction, imbue chatbots with more
human-like qualities, and foster empathy in conversational agents (Liu-Thompkins et al., 2022).
Sales. AEI has found significant applications in sales, enabling nuanced customer sentiment analysis and escalation prediction. Utilising multimodal approaches, including NLP and voice recognition, AEI
analyses customer interactions to discern emotions and sentiments during the sales journey (Werner et
al. 2019). AEI applications like BenchSci and Cyrano.AI personalise customer interactions by analysing
emotional cues, while Gong leverages ML and NLP to refine sales pitches based on customer emails and
video chat analyses (Limon & Plaster 2022). This facilitates more effective communication by sales
professionals, ultimately enhancing customer engagement and sales outcomes. Notably, sentiment
analysis of customer support exchanges has demonstrated up to 73% accuracy in identifying escalation
candidates with partial data analysis (Werner et al. 2019).
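In the spirit of the escalation-prediction finding above, the following toy sketch flags a ticket when cumulative sentiment over a partial exchange drops below a threshold. The lexicon and the rule are invented for illustration and bear no relation to Werner et al.'s actual model.

```python
# Toy escalation-candidate detector over a partial support transcript.
# Lexicon and threshold are invented examples, not a real model.

NEGATIVE = {"broken", "refund", "unacceptable", "angry", "worst", "cancel"}

def message_sentiment(message: str) -> int:
    """Negative-cue count per message; 0 means no negative cue detected."""
    return -sum(word in NEGATIVE for word in message.lower().split())

def is_escalation_candidate(messages: list[str], threshold: int = -2) -> bool:
    """Flag a ticket when cumulative sentiment over the partial exchange
    drops to or below the threshold."""
    return sum(message_sentiment(m) for m in messages) <= threshold

ticket = [
    "My device arrived broken",
    "Support has not replied, this is unacceptable",
    "I want a refund or I will cancel",
]
print(is_escalation_candidate(ticket))  # → True
```

Because the score accumulates message by message, the rule can fire on a partial transcript, mirroring the idea of identifying escalation candidates before the exchange concludes.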
Content marketing. In content marketing, AEI uses multimodal methods to track real-time
emotional responses to video content, such as advertisements and trailers. Tools like nViso analyse
viewers’ emotional reactions to optimise content promotion strategies and predict a movie’s box office
success (Liu et al. 2018). Such AEI applications allow marketers to tailor video content more closely to
viewer preferences, enhancing engagement and effectiveness.
Customer service. Conversational agents such as chatbots, recognised as a competitive advantage for
brands, play a crucial role in personalising customer experiences and enhancing service delivery, with
one study finding that over 60% of marketing professionals deploy chatbots for consumer enquiries
(Araújo & Casais 2020). AEI can augment chatbot interactions, enabling a more empathetic and
personalised approach to customer support. Innovations in text-based AEI technologies, such as
Microsoft’s XiaoIce and Meta AI’s BlenderBot, seek to understand and support customer needs
emotionally (Liu-Thompkins et al. 2022). However, challenges persist in terms of the loss of human
touch and the need to establish trust in the chatbot (Liu-Thompkins et al. 2022).
Challenges and future research. A prominent challenge in integrating AEI into marketing is the
gap between chatbot capabilities and customer expectations, particularly in emotional understanding
and response customisation. Future research should aim at enhancing AEI’s problem-solving and
personalisation features, determining the optimal degree of anthropomorphism, and deepening AEI’s
comprehension of complex emotions (Liu-Thompkins et al. 2022). Examining AEI’s impact on
customers’ perceived social presence and the integration of empathic traits into chatbots will further
delineate the boundaries of emotionally intelligent customer interactions in marketing and sales.
4.3 AEI in the Workplace
AEI is finding its way into the modern workplace by facilitating employee well-being monitoring,
customer service enhancement, and workflow optimisation (Gkinko & Elbanna 2022; Whelan et al.
2018). Its applications range from assessing employee stress levels using smartwatches and fitness
trackers to implementing both external customer assistance chatbots and internal enterprise assistants.
Despite its burgeoning application, research into workplace AEI, particularly chatbots, remains nascent.
Workplace Productivity. Human resources (HR) departments can leverage wearable technology to
monitor stress indicators such as heart rate and electrodermal activity, providing insights into employee
well-being. This data enables timely managerial interventions to prevent burnout, recklessness, or
conflicts and to identify organisational “stress hot spots” for targeted action (Whelan et al. 2018).
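The monitoring approach above might be sketched as follows. The thresholds, team names, and readings are hypothetical and not clinically validated; a real system would calibrate baselines per individual:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Reading:
    team: str
    heart_rate: float  # beats per minute
    eda: float         # electrodermal activity, microsiemens

def is_stressed(r: Reading, hr_limit: float = 100.0, eda_limit: float = 8.0) -> bool:
    """Count a reading as stressed only when both signals exceed their limits."""
    return r.heart_rate > hr_limit and r.eda > eda_limit

def stress_hot_spots(readings: list[Reading], ratio: float = 0.5) -> list[str]:
    """Return teams whose share of stressed readings exceeds `ratio`."""
    totals: dict[str, int] = defaultdict(int)
    stressed: dict[str, int] = defaultdict(int)
    for r in readings:
        totals[r.team] += 1
        stressed[r.team] += is_stressed(r)
    return sorted(t for t in totals if stressed[t] / totals[t] > ratio)

readings = [
    Reading("support", 112, 9.5), Reading("support", 105, 8.7),
    Reading("support", 80, 3.1), Reading("finance", 72, 2.4),
]
print(stress_hot_spots(readings))  # ['support']
```

Aggregating to the team level, as here, is one way to surface “hot spots” while avoiding singling out individuals, though the privacy concerns discussed below still apply to the raw data.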
Externally facing chatbots enhance customer interaction across sectors by offering information,
transaction support, advice, and educational assistance (Gkinko & Elbanna 2022). Meanwhile, internal
chatbots can support employees in information retrieval, task management, and system navigation,
promising to significantly boost productivity and reduce costs (Huang et al. 2019). As AEI assumes more
roles, there is a growing emphasis on the empathetic and emotional aspects of human work, raising
questions about the future of emotional intelligence in the AEI-enhanced workplace.
Challenges and future research. The exploration of workplace AEI is still at an early stage,
contrasting with its rapid market expansion (Gkinko & Elbanna 2022). The adoption of AEI technologies
brings privacy, transparency, and data governance concerns to the forefront (Whelan et al. 2018). Future
research should focus on addressing these challenges while exploring the integration of emotional
intelligence into AEI workplace applications.
4.4 AEI in Finance
AEI is progressively being integrated into the finance industry, offering novel insights into stock return
predictions and deception detection (Deng et al. 2018; Hobson et al. 2012). Despite its growing utility,
research gaps persist, particularly in discerning the impact of emotions related to cognitive dissonance
versus other emotions linked to deception (Hobson et al. 2012).
Fraud detection. Hobson et al. (2012) pioneered the use of CEO speech analysis to identify financial
misreporting by focusing on nonverbal vocal cues like vocal dissonance. Utilising voice emotion analysis
software, their study correlated vocal dissonance indicators with misreporting evidence. This approach
echoes earlier studies in the real estate sector, where voice stress detection techniques identified ethical
breaches by real estate agents (Allmon & Grant 1990).
Stock market prediction. More contemporary efforts by Deng et al. (2018) mine microblog
sentiments to forecast stock returns, uncovering a direct, statistically significant correlation between
microblog sentiment and hourly stock returns. This underscores the importance of AEI in financial
analytics, from compliance monitoring to sentiment analysis for investment strategies.
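The sentiment-return association can be illustrated with a hand-computed Pearson correlation over fabricated hourly data; Deng et al. (2018) employ far more sophisticated econometric models:

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One trading day: mean microblog sentiment per hour and the same-hour
# stock return in percent (both series fabricated for illustration).
sentiment = [0.2, 0.5, -0.1, -0.4, 0.3, 0.6]
returns = [0.1, 0.4, -0.2, -0.3, 0.2, 0.5]

print(f"{pearson(sentiment, returns):.2f}")  # 0.99
```

A high in-sample correlation like this says nothing about causality or out-of-sample forecasting power, which is precisely why the manipulation risks noted next deserve scrutiny.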
Challenges and future research. While AEI’s capacity for predicting stock returns from microblog
sentiment is noteworthy (Deng et al. 2018), it harbours risks of exploitation for market manipulation
through emotional contagion. Future research is thus needed to develop mechanisms for identifying and
counteracting social media-based market manipulations to safeguard sentiment-driven stock forecasts.
Moreover, open questions remain regarding the significance of cognitive dissonance-induced emotions
compared to other deception-related emotions in the financial domain (Hobson et al. 2012). Exploring
the interplay between vocal and linguistic indicators of deception could enhance our understanding of
AEI’s application in finance, promoting better decision-making and adherence to regulatory standards.
Caution is needed to safeguard against premature actions based on ‘false positives’, especially when AEI
systems indicate potential fraud solely based on emotions.
4.5 AEI in Love & Sex
In the domain of love and sex, AEI has found applications in emotionally responsive AI companions and
life-sized humanoid sex robots (Belk 2022; Brooks 2023). These innovations, while pushing the
boundaries of human-robot intimacy, also introduce complex ethical tensions.
Sexbots and AI companions. Advances in AEI have made realistic sexbots both more accessible and
sophisticated, capable of sensing and responding to human emotions (Belk 2022). These innovations
have the potential to transform societal dynamics by offering alternatives to human sex workers (Belk
2022) and by enabling therapeutic uses, such as employing child-like robots to treat individuals with
pedophilic tendencies and other psycho-sexual disorders, or providing sexual therapy for elderly and
disabled individuals. These uses raise ethical questions regarding the humanisation of AEI agents
(Eisikovits 2023; Li & Sung 2021; McArthur & Twist 2017). However, the design of most sexbots, which often
features exaggerated feminine traits catering to male fantasies, raises concerns about reinforcing
traditional gender roles and objectifying women. This may influence family dynamics by normalising
such representations (Belk 2022). Additionally, privacy concerns emerge, particularly around the
collection of sensitive data (Y. Zhou et al. 2020). Moreover, the post-COVID era has seen a rise in AI
companions and sexbots for friendly, romantic, and erotic interactions, reflecting increased acceptance
of AEI in intimate settings (Brooks 2023). Furthermore, AI assistants with high emotional intelligence
capabilities can foster feelings of intimacy and passion among users (Song et al. 2022).
Challenges and future research. As people grow more attached to AEI agents, these applications
raise ethical questions about the balance between user autonomy and corporate control, the legal
identity and personhood of AEI companions, and entrenched gender and social stereotypes (Chen &
Burgess 2018; Depounti et al. 2023). As AEI becomes more integrated into human sexual and romantic
needs, addressing these ethical concerns is critical. Further exploration of the uncanny valley effect,
which describes aversion to nearly-but-not-quite realistic avatars (Seymour et al. 2021), and the optimal
degree of human likeness in AI companions will enhance understanding of AEI’s role in intimacy.
4.6 Emerging AEI Applications
Due to space constraints, this section briefly summarises emerging AEI applications in other sectors.
AEI in retail. In the retail sector, AEI applications range from automated shopping recommendations
to emotionally intelligent in-store robots (Bertacchini et al. 2017; Lu et al. 2016). Australian retail giant
Woolworths experimented with a shopping robot to detect and report store hazards (Sadler 2019),
reflecting a trend towards more sophisticated, emotionally intelligent humanoid robots that interpret
customer emotions from social media data for personalised assistance (Bertacchini et al. 2017).
Moreover, Kümpel et al. (2023) discuss linking shopping assistants with a semantic Digital Twin to
improve service quality. Lu et al. (2016) develop a video-based recommender system that employs
computer vision to analyse in-store customer behaviour and generate personalised clothing suggestions.
Still, AEI recommender systems and in-store robots face significant obstacles, primarily the need for
real-world trials to evaluate effectiveness and consumer receptivity. While Bertacchini et al. (2017)
suggest robotics could reduce operational costs, the lack of comprehensive cost and implementation
analysis makes widespread adoption speculative.
AEI in education. Educational applications of AEI include emotionally adaptive assessments,
empathetic virtual mentors, and social-emotional learning to improve self-awareness, self-management,
social awareness, relationship skills, and responsible decision-making (Schiff 2021; Slovák &
Fitzpatrick 2015). Intelligent Tutoring Systems (ITS) can monitor student engagement and recognise
emotions like frustration or boredom. Recent advancements aim to extend ITS capabilities to include
emotional expression, fostering motivation and demonstrating care for students’ progress. Schiff (2021)
introduces “emotional positionality” in educational agents, assigning specific emotional roles—
authoritative for teachers, motivating for mentors, and engaging for learning companions—to enhance
the learning experience. AEI’s growing ability to predict and respond to students’ emotions necessitates
examining its ethical dimensions. Concerns include the potential for AEI systems to inadvertently shame
students, highlighting the need for critical assessment by educational practitioners and policymakers.
AEI in law enforcement. AEI is increasingly incorporated into law enforcement, with applications
such as predictive policing and facial recognition being particularly notable (Podoletz 2022). Key uses
include crime prediction through facial recognition and behavioural data analysis, deception detection,
and enhancing public security by predicting emotional states and intentions using biometric data
(Dakalbab et al. 2022; Podoletz 2022). However, the accuracy of these technologies in crime prediction
and potential privacy infringements raise significant concerns. The reliability of AEI technologies and
their implications for public privacy are contentious (Podoletz 2022). Additionally, concerns about
built-in biases leading to racism and discrimination in crime detection systems call for a critical
evaluation of AEI’s design and implementation (Verma 2022).
5 The Futures: AEI’s Potential Trajectories and Research
Opportunities
Our scoping review synthesises nearly three decades of research on affective computing technology,
showing a growing interest in the practical applications and ethical implications of AEI systems. With
the steady accumulation of knowledge, AEI is poised to become a significant focus of future IS
scholarship. As we move toward this future, it is critical to examine potential conflicts and competing
concerns that may arise with AEI deployment in various contexts. Below, we outline three key research
opportunities that require careful consideration for IS scholarship, addressing RQ3: What are AEI’s
potential future trajectories and research opportunities?
5.1 IS Research Opportunity 1: Privacy-Preserving and Trustworthy AEI
The collection of emotionally sensitive data for commercial purposes carries substantial risks of misuse,
raising concerns about consent, privacy, and autonomy (Behnke et al. 2022; Peter & Ho 2022). The
increasing use of shopping robots, enhanced CCTV surveillance in stores, and wearable technologies for
supervision exemplifies these issues (Behnke et al. 2022; Kümpel et al. 2023; Sadler 2019). A pressing
concern is the potential of detecting emotions in public spaces for public safety and crime prevention
(Dakalbab et al. 2022), adding a layer of complexity to surveillance capitalism (Zuboff 2015). Highly
sensitive emotional data can be exploited by authoritarian regimes or toxic workplaces to monitor and
manipulate emotions on a large scale. Such scenarios evoke Orwellian fears about the erosion of
autonomy, dignity, and freedom of emotional expression. Furthermore, the addictive potential of AI
companions presents an ethical dilemma between optimising user engagement for profit and the risks
of fostering strong emotional dependencies on these technologies (Pentina et al. 2023).
These challenges present opportunities for future IS scholarship to focus on the design, deployment, and
use of trustworthy and privacy-preserving AEI systems. While trust and privacy have long been central
for IS scholarship (Lacity et al. 2024), AEI systems present novel challenges and amplify existing ones
due to the emotionally sensitive nature of the data and interactions involved. These challenges open new
avenues for both behaviour-oriented and design-oriented IS research to explore human trust in AEI
technologies across various sociotechnical contexts (Chen et al. 2023), leveraging affordances theory to
understand the actionable possibilities that these technologies provide, and to develop solutions that
uphold privacy and guide ethical deployment (Rubinsztein & Ciriello 2024). Such IS research should
emphasise the importance of consent, drawing lessons from past controversies involving the non-
consensual use of emotional contagion techniques on Facebook, where hundreds of thousands of users
were unwittingly included in an experiment on the spread of negative emotions through social networks
(Kramer et al. 2014). It is critical to establish stringent ethical safeguards to mitigate these risks, urging
IS researchers to champion consent, autonomy, and dignity in AEI systems (Ciriello et al. 2025).
5.2 IS Research Opportunity 2: Context-Aware and Culturally Adaptive AEI
AEI systems that lack an understanding of the sociotechnical context may not effectively augment
human work and could lead to negative outcomes such as alienating users or misinterpreting emotional
cues. For example, while AEI in marketing aims to amplify consumer emotions (Liu-Thompkins et al.
2022), its overarching goal in healthcare is to attenuate a patient’s emotions (Montemayor et al. 2022).
Such competing concerns necessitate context-aware design to ensure these systems augment human
activities without introducing bias or discrimination (Benbya et al. 2021). Thus, future IS research
should examine the appropriate type and level of emotional display by AEI applications across contexts.
In particular, there is a need for greater cultural adaptivity in AEI systems as different cultures express
emotions like happiness, surprise, fear, disgust, anger, and sadness differently (Jack et al. 2012). For
instance, a study on emotional expressiveness in Chinese and American preschoolers showed that
generally, Eastern cultures are less expressive than Western cultures (Ip et al. 2023). This highlights the
importance of integrating diverse cultural expressions in AEI research to enhance accessibility across
cultures, genders, and age groups.
However, such adaptability can introduce risks of discrimination. For example, AEI applications in
crime prediction have shown biases, such as facial recognition systems unfairly targeting Black and
Latinx individuals (Verma 2022). Such bias can lead to discriminatory outcomes in applications like
human resources, where AEI systems may exclude candidates from minority backgrounds, reinforcing
workplace uniformity and inequality. This underlines the need for culturally adaptive AEI systems
(Reinecke & Bernstein 2013) and provides pointers for IS system design, especially in human-computer
interactions. Behavioural IS researchers can investigate human behaviour, perceptions, and practices
within AEI systems and understand how cultural biases in these systems’ emotional responses affect
perceived empathy and support among diverse user populations. Design researchers should explore the
creation of culturally adaptive AEI systems that authentically recognise and adapt to diverse cultural
expressions, enhancing trust, effectiveness, inclusivity, and diversity.
5.3 IS Research Opportunity 3: AEI’s Impact on Human Relationships
The rapid advancement of AEI has significantly shifted perspectives on the future of human
relationships, challenging traditional notions of romantic and sexual companionship through a broad
spectrum of “radical new sexual technologies” (McArthur & Twist 2017, p. 344). The emerging
sociotechnical phenomenon of digisexuality has evolved in two distinct ‘waves’: the first wave uses
digital technology as a mediative agent for human connection, while the second wave offers an
immersive, reciprocal experience that eliminates the need for a human partner (McArthur & Twist
2017). AEI catalyses this second wave, with recent studies highlighting affordances such as the
customizability, trainability, and humanisability (Depounti et al. 2023; Pentina et al. 2023) of AEI-
enabled digisexuality as key influences on their projected use and impact. Future IS research should
explore how these digisexual affordances shape human-AI and human-human relationships,
particularly their effects on trust, compassion, and attachment, building on affordance theory
(Gibson 1979).
The automation of emotional labour presents opportunities and threats to human relationships. By
simulating social, romantic, or erotic interactions, AEI can alleviate or substitute traditional emotional
or sexual labour. This relates to discourses on family decline and its implications for the sex work
industry (Belk 2022; Langcaster-James & Bentley 2018). AEI opens possibilities to explore utopic sexual
freedom outside traditional patriarchal systems of heteronormative, monogamous relationships and
exploitative practices (Knafo & Bosco 2023; Kubes 2019). The application of sexbots in therapeutic
contexts for individuals with dysfunctional sexual behaviors also requires further IS research on
complex decision-making, to determine whether these efforts effectively treat or inadvertently
exacerbate issues such as paedophilia or sadism (Belk 2022). Therefore, the inherent contradictions
within the current and anticipated state of digisexuality necessitate dialectical examinations to develop
AEI that enhances human empathy and compassion while finding innovative ways to mitigate harmful
sexual or emotional interactions (Ciriello & Kögel 2024). Such research can enrich theories and methods
in dialectics (Ciriello & Mathiassen 2022) and offer further insights into ethical AI applications.
6 Conclusion
AEI systems have steadily evolved over three decades and are now seeing increasing applications across
key industries. The future holds many possibilities for emotionally intelligent machines to enhance
human-machine and human-to-human interactions. The dilemmas between harnessing AEI’s potential
for enhancing human experiences and the ethical, social, and practical issues associated with these
developments create key research opportunities for future IS scholarship. As AEI gains traction, it
becomes important for the IS community to approach this emerging phenomenon with a critical and
reflective perspective, ensuring that we do not blindly pursue the advantages of new technology without
considering the kind of world we might inadvertently shape.
In response to RQ1—What is the state of AEI technology, encompassing its historical evolution and
current capabilities?—we discussed the evolution of AEI from its inception in 1995 to a recognised
multidisciplinary field today. Initially met with doubt, affective computing emerged as a means for
computers to recognise, express, and respond to human emotions. The turning point came with the
integration of multimodal technologies that enhanced emotion recognition capabilities across various
sectors, overcoming early challenges of acceptance. The proliferation of extensive textual data from
smartphones since the mid-2000s has fuelled advancements in sentiment analysis, enabling more
nuanced emotion detection from linguistic cues. This historical progression underscores ongoing
challenges such as cultural and gender biases in emotion interpretation.
In response to RQ2—How is AEI applied in practice?—our literature review reveals AEI’s
transformative impact across sectors, demonstrating its broad applicability and potential to impact
diverse domains. In healthcare, AEI can enhance patient care with empathetic virtual assistants and
chatbots. In marketing and sales, it can improve customer experiences through sentiment analysis and
personalised content. AEI is also used for employee well-being and productivity in the workplace. In
finance, AEI aids in fraud detection and stock market predictions. The domain of love and sex poses
intricate ethical dilemmas with the development of emotionally responsive sexbots and companions.
Each sector encounters challenges that necessitate future research to address ethical, social, and
technical issues and ensure AEI’s responsible integration.
In response to RQ3—What are AEI’s potential future trajectories and research opportunities?—we
showcase that AEI can become a focal point in IS scholarship, requiring careful attention in three key
areas for future research: 1) developing privacy-preserving and trustworthy AEI systems, addressing
risks associated with the commercial use of emotionally sensitive data, and enhancing user autonomy;
2) creating context-aware and culturally adaptive AEI systems to avoid misinterpretation of emotional
cues and ensure effectiveness across diverse sociocultural environments; and 3) exploring AEI’s impact
on human relationships, particularly how emerging phenomena like digisexuality could reshape
traditional interpersonal dynamics. These focal areas directly address crucial IS concerns, encompassing
digital privacy, the design of ethical decision-making systems, and human-computer interaction.
They also necessitate the advancement of theory concerning affordances,
employing sociotechnical frameworks and dialectical analysis to rigorously explore the paradoxes
inherent in these systems.
As with all research, ours has some limitations that open directions for future work. Notably, there is
uneven coverage across fields, with significant gaps in finance, policing, and education—sectors that
warrant deeper investigation to understand their broader impacts. Additionally, the cross-disciplinary
applications of AEI, such as adapting AI companions for nursing or workplace environments, offer
valuable insights for AEI design considerations across various contexts. Moreover, the rapid evolution
of AEI may mean that the latest developments were not included.
7 References
Allmon, D. E., & Grant, J. (1990). Real Estate Sales Agents and the Code of Ethics: A Voice Stress
Analysis. Journal of Business Ethics, 9(10), 807-812.
Araújo, T., & Casais, B. (2020). Customer Acceptance of Shopping-Assistant Chatbots. In (pp. 278-
287). https://doi.org/10.1007/978-981-15-1564-4_26
Asiain, D., Ponce de León, J., & Beltrán, J. R. (2022). Mswh: A Multi-Sensory Hardware Platform for
Capturing and Analyzing Physiological Emotional Signals. Sensors (Basel), 22(15).
https://doi.org/10.3390/s22155775
Batrinca, B., & Treleaven, P. C. (2015). Social Media Analytics: A Survey of Techniques, Tools and
Platforms. AI and Society, 30(1), 89-116. https://doi.org/10.1007/s00146-014-0549-4
Behnke, M., Saganowski, S., Kunc, D., & Kazienko, P. (2022). Ethical Considerations and Checklist for
Affective Research with Wearables. IEEE Transactions on Affective Computing.
Belk, R. (2022). Artificial Emotions and Love and Sex Doll Service Workers. Journal of Service
Research, 25(4), 521-536. https://doi.org/10.1177/10946705211063692
Benbya, H., Pachidi, S., & Jarvenpaa, S. (2021). Special Issue Editorial: Artificial Intelligence in
Organizations: Implications for Information Systems Research. Journal of the Association for
Information Systems, 22(2). https://doi.org/10.17705/1jais.00662
Bertacchini, F., Bilotta, E., & Pantano, P. (2017). Shopping with a Robotic Companion. Computers in
Human Behavior, 77, 382-395. https://doi.org/10.1016/j.chb.2017.02.064
Bhatti, A. M., Majid, M., Anwar, S. M., & Khan, B. (2016). Human Emotion Recognition and Analysis
in Response to Audio Music Using Brain Signals. Computers in Human Behavior, 65, 267-
275. https://doi.org/10.1016/j.chb.2016.08.029
Boell, S. K., & Cecez-Kecmanovic, D. (2014). A Hermeneutic Approach for Conducting Literature
Reviews and Literature Searches. Communications of the Association for Information
Systems, 34(12). https://doi.org/10.17705/1CAIS.03412
Broekens, J., & Brinkman, W. P. (2013). Affectbutton: A Method for Reliable and Valid Affective Self-
Report. International Journal of Human Computer Studies, 71(6), 641-667.
https://doi.org/10.1016/j.ijhcs.2013.02.003
Brooks, R. (2023). I Tried the Replika AI Companion and Can See Why Users Are Falling Hard. The
App Raises Serious Ethical Questions. The Conversation. https://theconversation.com/i-
tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-
serious-ethical-questions-200257
Büschken, J., & Allenby, G. M. (2016). Sentence-Based Text Analysis for Customer Reviews.
Marketing Science, 35(6), 953-975.
Calvo, R. A., & D'Mello, S. (2010). Affect Detection: An Interdisciplinary Review of Models, Methods,
and Their Applications. IEEE Transactions on Affective Computing, 1(1), 18-37.
Chanel, G., Kierkels, J. J. M., Soleymani, M., & Pun, T. (2009). Short-Term Emotion Assessment in a
Recall Paradigm. International Journal of Human Computer Studies, 67(8), 607-627.
https://doi.org/10.1016/j.ijhcs.2009.03.005
Chen, A. Y., Kögel, S. I., Hannon, O., & Ciriello, R. (2023). Feels Like Empathy: How "Emotional" AI
Challenges Human Essence. Australasian Conference on Information Systems (ACIS2023),
Wellington, New Zealand.
Chen, J., & Burgess, P. (2018). The Boundaries of Legal Personhood: How Spontaneous Intelligence
Can Problematise Differences between Humans, Artificial Intelligence, Companies and
Animals. Artificial Intelligence and Law, 27, 73-92. https://doi.org/10.1007/s10506-018-
9229-x
Chen, R., Islam, A., Gedeon, T., & Hossain, M. Z. (2020). Observers’ Pupillary Responses in
Recognising Real and Posed Smiles: A Preliminary Study. Australasian Conference on
Information Systems. https://doi.org/10.48550/arXiv.2102.03994
Ciriello, R., Mathiassen, L., Risius, M., Cheong, M., Scheepers, H., & Vaast, E. (2025). Compassionate
Is Scholarship: Fostering Digital Innovation for the Common Good. Working Paper,
University of Sydney.
Ciriello, R. F., Hannon, O., Chen, A. Y., & Vaast, E. (2024). Ethical Tensions in Human-AI
Companionship: A Dialectical Inquiry into Replika. Hawaii International Conference on
System Sciences (HICSS2024), https://hdl.handle.net/10125/106433
Ciriello, R. F., & Kögel, S. I. (2024). Pluralistic Digital Harm Analysis: Combining Ethical, Legal, and
Historical Views on Corporate Evil. Australasian Conference on Information Systems
(ACIS2024), Canberra, Australia.
Ciriello, R. F., & Mathiassen, L. (2022). Dialectical Inquiry in Information Systems Research: A
Synthesis of Principles. International Conference on Information Systems, Copenhagen,
Denmark. https://aisel.aisnet.org/icis2022/adv_methods/adv_methods/1/
D'Mello, S. K., & Kory, J. (2015). A Review and Meta-Analysis of Multimodal Affect Detection Systems.
ACM Computing Surveys, 47(3), Article 43. https://doi.org/10.1145/2682899
Dai, W., Han, D., Dai, Y., & Xu, D. (2015). Emotion Recognition and Affective Computing on Vocal
Social Media. Information and Management, 52(7), 777-788.
https://doi.org/10.1016/j.im.2015.02.003
Dakalbab, F., Abu Talib, M., Abu Waraga, O., Bou Nassif, A., Abbas, S., & Nasir, Q. (2022). Artificial
Intelligence & Crime Prediction: A Systematic Literature Review. Social Sciences &
Humanities Open, 6(1), 100342. https://doi.org/10.1016/j.ssaho.2022.100342
Deng, S., Huang, Z., Sinha, A. P., & Zhao, H. (2018). The Interaction between Microblog Sentiment
and Stock Returns: An Empirical Examination. MIS Quarterly, 42(3), 895-918.
https://doi.org/10.25300/MISQ/2018/14268
Depounti, I., Saukko, P., & Natale, S. (2023). Ideal Technologies, Ideal Women: AI and Gender
Imaginaries in Redditors’ Discussions on the Replika Bot Girlfriend. Media, Culture & Society,
45(4), 720-736. https://doi.org/10.1177/01634437221119021
Eisikovits, N. (2023). AI Isn’t Close to Becoming Sentient – the Real Danger Lies in How Easily We’re
Prone to Anthropomorphize It. The Conversation. https://theconversation.com/ai-isnt-close-
to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-
it-200525
Ernst, M., Niederer, D., Werner, A. M., Czaja, S. J., Mikton, C., Ong, A. D., Rosen, T., Brähler, E., &
Beutel, M. E. (2022). Loneliness before and During the Covid-19 Pandemic: A Systematic
Review with Meta-Analysis. American Psychologist, 77(5), 660.
https://doi.org/10.1037/amp0001005
Gibson, J. J. (1979). The Ecological Approach to Visual Perception (Chapter 8: The Theory of
Affordances). Houghton Mifflin.
Gkinko, L., & Elbanna, A. (2022). Hope, Tolerance and Empathy: Employees’ Emotions When Using
an AI-Enabled Chatbot in a Digitalised Workplace. Information Technology and People, 35(6),
1714-1743. https://doi.org/10.1108/ITP-04-2021-0328
Goleman, D. (2012). Emotional Intelligence: Why It Can Matter More Than IQ. Bloomsbury.
Hercig, T., Brychcín, T., Svoboda, L., Konkol, M., & Steinberger, J. (2016). Unsupervised Methods to
Improve Aspect-Based Sentiment Analysis in Czech. Computacion y Sistemas, 20, 365-375.
https://doi.org/10.13053/CyS-20-3-2469
Herm, L. V., Steinbach, T., Wanner, J., & Janiesch, C. (2022). A Nascent Design Theory for
Explainable Intelligent Systems. Electronic Markets. https://doi.org/10.1007/s12525-022-
00606-3
Hobson, J. L., Mayew, W. J., & Venkatachalam, M. (2012). Analyzing Speech to Detect Financial
Misreporting. Journal of Accounting Research, 50(2), 349-392.
Hollnagel, E. (2003). Is Affective Computing an Oxymoron? International Journal of Human
Computer Studies, 59(1-2), 65-70. https://doi.org/10.1016/S1071-5819(03)00053-3
Huang, M. H., Rust, R., & Maksimovic, V. (2019). The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI). California Management Review. https://doi.org/10.1177/0008125619863436
Hudlicka, E. (2003). To Feel or Not to Feel: The Role of Affect in Human-Computer Interaction.
International Journal of Human Computer Studies, 59(1-2), 1-32.
https://doi.org/10.1016/S1071-5819(03)00047-8
Ip, K. I., Miller, A. L., Wang, L., Felt, B., Olson, S. L., & Tardif, T. (2023). Emotion Regulation as a
Complex System: A Multi-Contextual and Multi- Level Approach to Understanding Emotion
Expression and Cortisol Reactivity among Chinese and Us Preschoolers. Developmental
Science. https://doi.org/10.1111/desc.13446
Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Beyond Darwin: Revealing Culture-Specificities in the Temporal Dynamics of 4D Facial Expressions. F1000Research, 12, 971-971. https://doi.org/10.1167/12.9.971
Jardine, J., Bowman, R., & Doherty, G. (2022). Digital Interventions to Enhance Readiness for
Psychological Therapy: Scoping Review. Journal of Medical Internet Research, 24(8), Article
e37851. https://doi.org/10.2196/37851
Kim, J., Kwon, S., Seo, S., & Park, K. (2014). Highly Wearable Galvanic Skin Response Sensor Using Flexible and Conductive Polymer Foam. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014, 6631-6634. https://doi.org/10.1109/embc.2014.6945148
Australasian Conference on Information Systems Chen, Ciriello, Rubinsztein, and Vaast
2024, Canberra Artificial Emotional Intelligence: A Scoping Review
14
Kimani, E., & Bickmore, T. (2019). Addressing Public Speaking Anxiety in Real-Time Using a Virtual
Public Speaking Coach and Physiological Sensors. https://doi.org/10.1145/3308532.3329409
Knafo, D., & Bosco, R. L. (2023). Natural-Born Deviants: The Existential Escapades of Sex Tech. American Imago, 80(4), 663-692. https://doi.org/10.1353/aim.2023.a918105
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://doi.org/10.1073/pnas.1320040111
Kratzwald, B., Ilić, S., Kraus, M., Feuerriegel, S., & Prendinger, H. (2018). Deep Learning for Affective
Computing: Text-Based Emotion Recognition in Decision Support. Decision Support Systems,
115, 24-35. https://doi.org/10.1016/j.dss.2018.09.002
Kubes, T. (2019). New Materialist Perspectives on Sex Robots. A Feminist Dystopia/Utopia? Social
Sciences, 8(8), 224.
Kukolja, D., Popović, S., Horvat, M., Kovač, B., & Ćosić, K. (2014). Comparative Analysis of Emotion Estimation Methods Based on Physiological Measurements for Real-Time Applications. International Journal of Human-Computer Studies, 72(10-11), 717-727.
Kümpel, M., Hawkin, A., Dech, J., & Beetz, M. (2023). Robotic Shopping Assistance for Everyone: Dynamic Query Generation on a Semantic Digital Twin as a Basis for Autonomous Shopping Assistance. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). https://doi.org/10.5555/3545946.3598989
Lacity, M. C., Schuetz, S. W., Kuai, L., & Steelman, Z. R. (2024). It’s a Matter of Trust: Literature
Reviews and Analyses of Human Trust in Information Technology. Journal of Information
Technology, 0(0), 02683962231226397. https://doi.org/10.1177/02683962231226397
Langcaster-James, M., & Bentley, G. R. (2018). Beyond the Sex Doll: Post-Human Companionship and
the Rise of the ‘Allodoll’. Robotics, 7(4), 62.
Lavanya, P., & Sasikala, E. (2021). Deep Learning Techniques on Text Classification Using Natural Language Processing (NLP) in Social Healthcare Network: A Comprehensive Survey. 2021 3rd International Conference on Signal Processing and Communication (ICSPC), 603-609. https://doi.org/10.1109/ICSPC51351.2021.9451752
Le Glaz, A., Haralambous, Y., Kim-Dufor, D. H., Lenca, P., Billot, R., Ryan, T. C., Marsh, J., DeVylder,
J., Walter, M., Berrouiguet, S., & Lemey, C. (2021). Machine Learning and Natural Language
Processing in Mental Health: Systematic Review. Journal of Medical Internet Research, 23(5),
Article e15708. https://doi.org/10.2196/15708
Lee, W. Y., Jiang, C. X., & Indro, D. C. (2002). Stock Market Volatility, Excess Returns, and the Role of
Investor Sentiment. Journal of Banking and Finance, 26(12), 2277-2299.
https://doi.org/10.1016/S0378-4266(01)00202-3
Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., & Paiva, A. (2013). The Influence of
Empathy in Human-Robot Relations. International Journal of Human Computer Studies,
71(3), 250-260. https://doi.org/10.1016/j.ijhcs.2012.09.005
Leswing, K. (2023). Microsoft’s Bing A.I. Is Producing Creepy Conversations with Users.
https://www.cnbc.com/2023/02/16/microsofts-bing-ai-is-leading-to-creepy-experiences-for-
users.html
Li, X., & Sung, Y. (2021). Anthropomorphism Brings Us Closer: The Mediating Role of Psychological Distance in User–AI Assistant Interactions. Computers in Human Behavior, 118, 106680. https://doi.org/10.1016/j.chb.2021.106680
Limon, D., & Plaster, B. (2022). Can AI Teach Us How to Become More Emotionally Intelligent? Harvard Business Review. Retrieved 27 December 2022 from https://hbr.org/2022/01/can-ai-teach-us-how-to-become-more-emotionally-intelligent
Lisetti, C. L., & Nasoz, F. (2004). Using Noninvasive Wearable Computers to Recognize Human
Emotions from Physiological Signals. EURASIP Journal on Advances in Signal Processing,
2004(11), 929414. https://doi.org/10.1155/S1110865704406192
Liu, H., Zhang, Y., Li, Y., & Kong, X. (2021). Review on Emotion Recognition Based on Electroencephalography. Frontiers in Computational Neuroscience, 15, 758212. https://doi.org/10.3389/fncom.2021.758212
Liu, X., Shi, S. W., Teixeira, T., & Wedel, M. (2018). Video Content Marketing: The Making of Clips.
Journal of Marketing, 82(4), 86-101.
Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial Empathy in Marketing Interactions: Bridging the Human-AI Gap in Affective and Social Customer Experience. Journal of the Academy of Marketing Science, 50(6), 1198-1218. https://doi.org/10.1007/s11747-022-00892-5
Lu, S., Xiao, L., & Ding, M. (2016). A Video-Based Automated Recommender (Var) System for
Garments. Marketing Science, 35(3), 484-510.
McArthur, N., & Twist, M. L. C. (2017). The Rise of Digisexuality: Therapeutic Challenges and
Possibilities. Sexual and Relationship Therapy, 32(3-4), 334-344.
https://doi.org/10.1080/14681994.2017.1397950
McDuff, D., Rowan, K., Choudhury, P., Wolk, J., Pham, T., & Czerwinski, M. (2019). A Multimodal
Emotion Sensing Platform for Building Emotion-Aware Applications.
https://doi.org/10.48550/arXiv.1903.12133
Montemayor, C., Halpern, J., & Fairweather, A. (2022). In Principle Obstacles for Empathic Ai: Why
We Can’t Replace Human Empathy in Healthcare. AI and Society, 37(4), 1353-1359.
https://doi.org/10.1007/s00146-021-01230-z
Noroozi, F., Corneanu, C. A., Kamińska, D., Sapiński, T., Escalera, S., & Anbarjafari, G. (2018). Survey
on Emotional Body Gesture Recognition. IEEE Transactions on Affective Computing, 12(2),
505-523.
Pampouchidou, A., Simos, P. G., Marias, K., Meriaudeau, F., Yang, F., Pediaditis, M., & Tsiknakis, M. (2019). Automatic Assessment of Depression Based on Visual Cues: A Systematic Review. IEEE Transactions on Affective Computing, 10(4). https://doi.org/10.1109/TAFFC.2017.2724035
Pantic, M., & Patras, I. (2006). Dynamics of Facial Expression: Recognition of Facial Actions and Their
Temporal Segments from Face Profile Image Sequences. IEEE Transactions on Systems, Man,
and Cybernetics, Part B (Cybernetics), 36(2), 433-449.
Paré, G., Tate, M., Johnstone, D., & Kitsiou, S. (2016). Contextualizing the Twin Concepts of
Systematicity and Transparency in Information Systems Literature Reviews. European
Journal of Information Systems, 25(6), 493-508. https://doi.org/10.1057/s41303-016-0020-
3
Pentina, I., Hancock, T., & Xie, T. (2023). Exploring Relationship Development with Social Chatbots:
A Mixed-Method Study of Replika. Computers in Human Behavior, 140, 107600.
https://doi.org/10.1016/j.chb.2022.107600
Peter, M., & Ho, M. T. (2022). Why We Need to Be Weary of Emotional AI. AI and Society. https://doi.org/10.1007/s00146-022-01576-y
Picard, R. W. (1995). Affective Computing. M.I.T. Media Laboratory Perceptual Computing Section
Technical Report(321).
Picard, R. W. (2008). Toward Machines with Emotional Intelligence.
Picard, R. W. (2010). Affective Computing: From Laughter to IEEE. IEEE Transactions on Affective Computing, 1, 11-17. https://doi.org/10.1109/T-AFFC.2010.10
Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward Machine Emotional Intelligence: Analysis of Affective Physiological State. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1175-1191.
Podoletz, L. (2022). We Have to Talk About Emotional Ai and Crime. AI and Society, 38(1), 1067-
1082. https://doi.org/10.1007/s00146-022-01435-w
PRISMA. (2020). PRISMA 2020 Checklist. http://www.prisma-statement.org/
Purabi, S. A., Rashed, R., Islam, M., Uddin, N., Naznin, M., & Islam, A. A. A. (2019). As You Are, So Shall You Move Your Head: A System-Level Analysis between Head Movements and Corresponding Traits and Emotions. Proceedings of the 6th International Conference on Networking, Systems and Security. https://doi.org/10.1145/3362966.3362985
Ravi, K., & Ravi, V. (2015). A Survey on Opinion Mining and Sentiment Analysis: Tasks, Approaches
and Applications. Knowledge-Based Systems, 89, 14-46.
https://doi.org/10.1016/j.knosys.2015.06.015
Reinecke, K., & Bernstein, A. (2013). Knowing What a User Likes: A Design Science Approach to
Interfaces That Automatically Adapt to Culture. MIS Quarterly, 37, 427-454.
https://doi.org/10.5167/uzh-73183
Rouast, P. V., Adam, M. T., & Chiong, R. (2019). Deep Learning for Human Affect Recognition:
Insights and New Developments. IEEE Transactions on Affective Computing, 12(2), 524-543.
Rubinsztein, Z. A., & Ciriello, R. F. (2024). Sexbots and Rock&Roll: A Dialectical Inquiry into
Digisexuality between Progress and Regress. Australasian Conference on Information
Systems (ACIS2024), Canberra, Australia.
Sadler, D. (2019). Woolworths Trials in-Store Robot Latest Technology Being Trialled in Australian
Supermarkets. https://ia.acs.org.au/article/2019/woolworths-trials-in-store-robot.html
Salehan, M., & Kim, D. J. (2016). Predicting the Performance of Online Consumer Reviews: A
Sentiment Mining Approach to Big Data Analytics. Decision Support Systems, 81, 30-40.
https://doi.org/10.1016/j.dss.2015.10.006
Schiff, D. (2021). Out of the Laboratory and into the Classroom: The Future of Artificial Intelligence in
Education. AI and Society, 36(1), 331-348. https://doi.org/10.1007/s00146-020-01033-8
Schuller, B., & Pei, J. (2016). Using Computer Intelligence for Depression Diagnosis and
Crowdsourcing. IEEE Computer, 49(7), 8-9, Article 7503508.
https://doi.org/10.1109/MC.2016.206
Schuller, B. W. (2018). Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and
Ongoing Trends. Communications of the ACM, 61(5), 90-99. https://doi.org/10.1145/3129340
Schuller, D., & Schuller, B. W. (2018). The Age of Artificial Emotional Intelligence. IEEE Computer,
51(9), 38-46, Article 8481266. https://doi.org/10.1109/MC.2018.3620963
Seymour, M., Yuan, L., Dennis, A., & Riemer, K. (2021). Have We Crossed the Uncanny Valley?
Understanding Affinity, Trustworthiness, and Preference for Realistic Digital Humans in
Immersive Environments. Journal of the Association for Information Systems, 22(3).
https://doi.org/10.17705/1jais.00674
Shanahan, J. G., Qu, Y., & Wiebe, J. (2006). Computing Attitude and Affect in Text: Theory and
Applications (Vol. 20). Springer.
Shreetim, V. (2024). Tracing the Adoption of Digital Technologies (Working Paper). Bank for International Settlements. https://ideas.repec.org/p/bis/biswps/1166.html
Slovák, P., & Fitzpatrick, G. (2015). Teaching and Developing Social and Emotional Skills with
Technology. ACM Transactions on Computer-Human Interaction, 22(4), Article 19.
https://doi.org/10.1145/2744195
Suhaimi, N. S., Mountstephens, J., & Teo, J. (2020). EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Computational Intelligence and Neuroscience, 2020, 8875426. https://doi.org/10.1155/2020/8875426
The Guardian. (2022). Google Fires Software Engineer Who Claims AI Chatbot Is Sentient. The Guardian. https://www.theguardian.com/technology/2022/jul/23/google-fires-software-engineer-who-claims-ai-chatbot-is-sentient
Verma, P. (2022). These Robots Were Trained on AI. They Became Racist and Sexist. The Washington Post. https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/
Walsh, C. G., Ribeiro, J. D., & Franklin, J. C. (2017). Predicting Risk of Suicide Attempts over Time
through Machine Learning. Clinical Psychological Science, 5(3), 457-469.
Webster, J., & Watson, R. T. (2002). Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Quarterly, 26(2), xiii-xxiii.
Werner, C., Li, Z. S., & Damian, D. (2019). Can a Machine Learn through Customer Sentiment?: A
Cost-Aware Approach to Predict Support Ticket Escalations. IEEE Software, 36(5), 38-46,
Article 8737665. https://doi.org/10.1109/MS.2019.2923408
Whelan, E., McDuff, D., Gleasure, R., & Vom Brocke, J. (2018). How Emotion-Sensing Technology Can
Reshape the Workplace. MIT Sloan Management Review, 59(3), 7-10.
Xiang, R., Li, J., Wan, M., Gu, J., Lu, Q., Li, W., & Huang, C. R. (2021). Affective Awareness in Neural
Sentiment Analysis. Knowledge-Based Systems, 226, Article 107137.
https://doi.org/10.1016/j.knosys.2021.107137
Yadollahi, A., Shahraki, A. G., & Zaiane, O. R. (2017). Current State of Text Sentiment Analysis from
Opinion to Emotion Mining. ACM Computing Surveys, 50(2), Article a25.
https://doi.org/10.1145/3057270
Yang, H. C., Rahmanti, A. R., Huang, C. W., & Jack Li, Y. C. (2022). How Can Research on Artificial
Empathy Be Enhanced by Applying Deepfakes? Journal of Medical Internet Research, 24(3),
Article e29506. https://doi.org/10.2196/29506
Zhou, F., Qu, X., Helander, M. G., & Jiao, J. (2011). Affect Prediction from Physiological Measures Via
Visual Stimuli. International Journal of Human Computer Studies, 69(12), 801-819.
https://doi.org/10.1016/j.ijhcs.2011.07.005
Zhou, X., Wei, Z., Xu, M., Qu, S., & Guo, G. (2020). Facial Depression Recognition by Deep Joint Label
Distribution and Metric Learning. IEEE Transactions on Affective Computing, 13, 1605-1618.
https://doi.org/10.1109/TAFFC.2020.3022732
Zhou, Y., Lu, S., & Ding, M. (2020). Contour-as-Face Framework: A Method to Preserve Privacy and
Perception. Journal of Marketing Research, 57(4), 617-639.
Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.
Journal of Information Technology, 30(1), 75-89. https://doi.org/10.1057/jit.2015.5
8 Appendices
Appendix A - Literature Selection Process following PRISMA (2020)
Appendix B - Cumulative papers from 1995 to 2022
[Figure: "Trendline of AEI development": cumulative number of papers on AEI per year, 1995 to 2022, categorised as Technology, Applications, and Ethics; y-axis: No. of papers on AEI; x-axis: Year.]
Appendix C - Breakdown of papers per subtopic
[Figure: bar chart of the number of papers per subtopic (Policing & Crime, Body Sensors, Education, Banking & Finance, Facial Expression Recognition, Motion Detection, Discourse, Electroencephalography (EEG), Workplace, Love & Sex, Retail, Marketing, Artificial Emotional Intelligence, Sentiment Analysis, Natural Language Processing, Ethics, Healthcare, Multimodal Affective Computing), grouped into Technology, Business Applications, and Ethics; x-axis: No. of literature.]
Appendix D - Excerpt of Literature Codebook (https://tinyurl.com/3w2y25x5)
Copyright
Copyright © 2024 Angelina Chen, Raffaele Ciriello, Zara Rubinsztein, and Emmanuelle Vaast. This is
an open-access article licensed under a Creative Commons Attribution-Non-Commercial 4.0 Australia
License, which permits non-commercial use, distribution, and reproduction in any medium, provided
the original author and ACIS are credited.