CREATING AN ADAPTIVE VOICE AND
LANGUAGE MODEL CAPABLE OF EMOTIONAL
RESPONSE AND SELF-PROFILING TO
EMULATE USER PERSONALITY
Hernan Isaac Ocana Flores
University of Queensland
DOI: https://doi.org/10.37811/cl_rcm.v9i1.16093
Creating an Adaptive Voice and Language Model Capable of Emotional
Response and Self-Profiling to Emulate User Personality
Hernan Isaac Ocana Flores1
hi.ocana@uq.net.au
https://orcid.org/0000-0001-6258-3828
University of Queensland
Brisbane - Australia
ABSTRACT
This work presents a low-cost alternative for implementing an adaptive voice and language model that can not only react to the interlocutor's emotions but also adapt the bot's personality to the personality of the user. Through a moderated framework and a feedback-loop process, the author explores the feasibility of such a system model. The conversational agent in this framework uses Natural Language Processing (NLP) techniques for psychological profiling in order to be self-directed and self-aware. Moreover, the system is kept current through successive activity sequences, gradually improving its identification of the user's personality, which determines the manner in which the model interacts with the user. The author also presents a plan for how to design such a model, supported by theory, method, feasibility and possible uses.
Keywords: adaptive voice model, psychological profiling, natural language processing (nlp), emotionally
intelligent systems
1 Principal author
Correspondence: hi.ocana@uq.net.au
Article received: 5 January 2025
Accepted for publication: 10 February 2025
INTRODUCTION
The integration of the two modalities of emotional intelligence and personality flexibility into conversational agents is a promising subfield of artificial intelligence research. Recent achievements with large language models have reached high levels of natural language comprehension and production (Flores and Luna, 2024; Ocaña et al., 2023a; Pellert et al., 2024), but these systems still lack genuine emotional intelligence and a personality matched to that of their interlocutor. To achieve this, the author proposes a framework for creating an 'emotional' dialogue system built from existing cost-efficient language models, so that the result is both effective and affordable.
Improving the quality of conversation has replaced the earlier worry that conversational agents merely generate automatic responses (Poggi and Pelachaud, 2000), and current systems now integrate key psychological models. Saha et al. (2024) identified that current dialogue systems are able to comprehend and perceive affective information, while Matsumoto et al. (2022) treat emotion as integral to a system, in line with other studies recognising that interacting with an artificial intelligence system is not only a linguistic activity but also an affective one, ideally carried out by an agent with a stable personality (Dolgikh, 2024; Flores and Luna, 2024; Flores, 2023; Sonlu et al., 2021).
Personality adaptation has thus become a very important factor in developing more realistic interfaces for AI systems. Müller et al. (2019) show that users accept systems with artificial personalities that can be altered according to the user's preferences. Similarly, Guo et al. (2024) report that, compared with other dialogue systems, personality-oriented methods offer considerably higher engagement levels and user satisfaction rates. Recent studies by Ma et al. (2024), Flores (2024) and Wen et al. (2024) also find a strong interaction effect between personality modelling and the added element of emotional intelligence.
These adaptive systems rely on advanced machine learning architectures and depend heavily on their model structures. Work in this discipline began with psychometric profiling via natural language processing, as in Rathi et al. (2022), and with deep learning-based personality assessment frameworks, as in Flores (2023). As Flores (2023) discusses, these technological advances have been accompanied by research on reinforcement learning approaches, in which systems embedded in continuing interaction have been shown to learn and improve their own emotional reactions.
However, developing genuinely adaptive and emotionally intelligent systems has proved just as challenging as building the underlying components. As Flores and Luna (2024) and Llanes et al. (2024) note, modelling human emotion and personality involves wide margins between actual human response patterns and the behaviours machines expect. Moreover, maintaining emotional consistency across different interaction contexts is far from trivial, since erratic system behaviour undermines the perceived personality, an issue also addressed in recent work by Flores (2023). Advancing AI therefore means integrating emotional intelligence, personality adaptation and machine learning in a single design. Fan et al. (2017) argue that emotional intelligence in artificial agents is no longer merely a desirable trait but a requirement for genuine and productive human-AI interaction, a point supported by Kossack and Unger (2023), who show that emotion-aware chatbots markedly enhance user interaction and satisfaction.
The proposed framework builds on these established foundations by combining real-time personality adaptation and emotional intelligence through a mirroring approach embedded in the feedback loop (Flores, 2023; Lee et al., 2023). With the latest advances in natural language processing it is possible to create a complete, more human-centred system (Lee et al., 2023; Flores and Luna, 2024), supported by personality assessment techniques (Flores and Luna, 2024; Sikström et al., 2024). Flores (2023) has shown how such systems can learn and adapt their emotional responses through continued interaction. Compared with simpler modelling approaches, such as that of Llanes et al. (2024), which are less capable of handling small differences in user behaviour and response patterns, human emotion and personality demand sophisticated modelling methods, as Flores and Luna (2024) and Llanes et al. (2024) point out. Extending previous frameworks is further complicated by the need to keep the system consistent across different interaction contexts, as pointed out in recent work by Flores (2023).
METHODOLOGY
The methodology of this work is a base system capable of updating the model continuously: after the information is analysed, the result is saved, processed and distinguished before the cycle completes. As Flores (2023), Sikström et al. (2024) and Saha et al. (2024) have demonstrated, systems that adapt to interactive user feedback are considerably more engaging and reliable. This makes it possible for the language model to change its responses depending on user feedback data, as proposed by Guo et al. (2024) and Wen et al. (2024) for personality-based conversation.
The system consists of four major components:
User Profiling
The foundation of this work is the deep learning model ChatGPT, which uses inputs gathered from the user over time to build a psychological and behavioural profile. Prior work by Flores (2023), Müller et al. (2019) and Rathi et al. (2022) shows that applying artificial intelligence to personality detection and analysis is both feasible and advisable. Furthermore, as the advanced strategies presented by Flores and Luna (2024) and Ocaña et al. (2023b) point out, personality styles can be estimated accurately and reliably with the help of NLP methods.
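To make this profiling step concrete, the following Python sketch shows one low-cost way it could be approximated: user messages are scored against Big Five trait labels with an off-the-shelf zero-shot classifier and averaged into a running profile. The library, model choice, label phrasing and aggregation rule are illustrative assumptions, not the specification of the proposed system.

# Illustrative sketch only: estimate rough Big Five trait scores from user
# messages with zero-shot classification, then keep a running average.
# The model choice and label phrasing are assumptions, not the paper's method.
from collections import defaultdict
from transformers import pipeline

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

profile = defaultdict(list)          # trait -> list of per-message scores

def update_profile(message: str) -> dict:
    """Score one user message against the Big Five labels and update the profile."""
    result = classifier(message, candidate_labels=TRAITS, multi_label=True)
    for label, score in zip(result["labels"], result["scores"]):
        profile[label].append(score)
    # Return the running average per trait as the current profile estimate.
    return {t: sum(v) / len(v) for t, v in profile.items()}

print(update_profile("I love meeting new people and trying wild ideas!"))

In practice, these running averages would feed the personality emulation and feedback components described below.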
Emotional Intelligence
Following Dolgikh (2024) and Wang et al. (2024), frameworks of affective computing for conversational systems are applied to create, or at least emulate, emotional intelligence. This complements the enhanced sentiment analysis methods of Flores (2023), Matsumoto et al. (2022) and Ma et al. (2024), who suggest that systems emulating such emotions can gain deeper insight into user emotions, and accurate sentiment analysis is enabled through the advanced techniques those authors endorse. Earlier work by Kossack and Unger (2023), Fan et al. (2017) and Bilquise et al. (2022) showed that these same capabilities significantly enhance trust and overall user engagement.
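As a minimal illustration of this sentiment analysis step, and not a reproduction of the cited techniques, the sketch below uses the default Hugging Face sentiment-analysis pipeline to label a user turn; the resulting polarity and confidence can then be routed to the empathy and trust mechanisms discussed above.

# Minimal sentiment-analysis step, assuming the transformers library.
# The default pipeline model is a stand-in for the "advanced techniques"
# cited above, not a reproduction of them.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

user_turn = "I've asked this three times already and nothing works."
result = sentiment(user_turn)[0]          # e.g. {'label': 'NEGATIVE', 'score': 0.99}
print(result["label"], round(result["score"], 3))

# A negative, high-confidence result can then be routed to a more
# empathetic response strategy in the personality-emulation component.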
Personality Emulation
Once a profile has been established, the system emulates the user's personality characteristics by dynamically adjusting the language model's response patterns (Flores, 2023; Kubjana, 2024; Velagaleti et al., 2024). Personality-oriented dialogue research (Guo et al., 2024; Wen et al., 2024) indicates that matching the agent's communication style and emotional delivery to the user considerably improves engagement and satisfaction. The specific modulation strategies used by this component are detailed in the Results section.
Feedback Loop and Iteration
Each interaction is treated as feedback: user responses are analysed and used to update the stored profile, so that the identification of the user's personality and emotional state improves gradually over successive activity sequences (Flores, 2023; Ðula et al., 2024; Döring et al., 2024). This iterative refinement, in line with the interactive, user-feedback-adaptive systems demonstrated by Sikström et al. (2024) and Saha et al. (2024), is what allows the model to adjust how it interacts with the user over time; the evaluation criteria applied at each iteration are described in the Results section.
Based on these components, the initial process flow is presented in Figure 1.
Figure 1. System overview for profile creation and emulation
RESULTS
Following the established framework, this study found that the proposed system should be built from the components described below.
User Interaction
1.- Text Inputs: Linguistic analysis of user-generated content to detect sentiment, emotions, and underlying personality traits.
2.- Interaction Patterns: How the user responds to specific conversational prompts, how they engage with certain topics, and the pacing of their responses.
3.- Psychological Models: Applying primary psychological models, including the Five-Factor Personality Model or the Myers-Briggs Type Indicator, to arrive at a theoretical personality profile.
As stipulated before, language models and unsupervised clustering are used to analyse the user's responses and activities to recognise patterns (Dolgikh, 2024; Fan et al., 2017; Flores and Luna, 2024; Ocaña et al., 2023b). These patterns can then be related to a set of psychological characteristics that shape how the model responds to the user's further behaviour (Lee et al., 2024).
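One simplified realisation of the unsupervised clustering mentioned above is sketched below: user utterances are embedded with TF-IDF features and grouped with k-means so that recurring interaction patterns surface. The feature choice, algorithm and number of clusters are arbitrary assumptions made for illustration.

# Sketch of unsupervised clustering of user utterances to surface
# recurring interaction patterns. TF-IDF + k-means is an illustrative
# choice; the paper does not prescribe a specific algorithm.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "Can you explain that again, slowly?",
    "Tell me a joke!",
    "Explain the steps one more time please.",
    "Haha, that was funny, another one!",
]

vectors = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(vectors)

# Each cluster groups utterances with a similar conversational pattern
# (e.g. requests for clarification vs. playful small talk).
for text, label in zip(utterances, labels):
    print(label, text)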
Emotional Intelligence Integration (EMI)
The developed EMI model represents a substantial creative advance in human-computer interaction, built on several layers of affective computing (Dolgikh, 2024; Flores, 2023). According to Ocaña et al. (2023a), AI systems aimed at emotion detection can be used in situations where emotional responses are significant, for example in the diagnosis of autism spectrum disorder (ASD).
The system's ability to detect emotional undertones, including subtle emotions, in user inputs relies on specialised sentiment analysis procedures, which Flores and Luna (2024), Kossack and Unger (2023) and Ocaña et al. (2023b) have shown to be effective for analysing users' behaviour, interactions and affective feedback in digital environments. This emotional awareness is further enhanced through the integration of social-emotional development frameworks, as highlighted by Fan et al. (2017), Le et al. (2024) and Ocaña et al. (2021a), including in ASD assessment, where emotional recognition plays a crucial role.
On this basis, the system retrieves what Guo et al. (2024), Ma et al. (2024), Llanes et al. (2024), Ocaña et al. (2021b), Velagaleti et al. (2024) and Wen et al. (2024) describe as emotional engagement indicators: signals that the user is expressing specific emotions such as sadness, happiness, anger or fear, which the system categorises and responds to. Adaptive sentiment analysis of this kind, as described by Flores and Luna (2024), Kossack and Unger (2023) and Ocaña et al. (2023b), allows user emotions to be perceived in quantified digital interactions and serves as a basis for this work, while the socially and emotionally intelligent interaction approaches of Fan et al. (2017), Le et al. (2024) and Ocaña et al. (2021a) allow a level of engagement that goes beyond responding to basic emotions.
Emotional response is especially noticeable in virtual environments, which Le et al. (2024) and Ocaña et al. (2023c) found to offer positive pathways for interaction when emotional appeals are involved, improving user engagement and learning outcomes. The model's response generator therefore uses these emotional insights to shift the tenor of the conversation from less empathetic to more empathetic, for instance when the user shows frustration, and to maintain appropriate emotional resonance throughout the interaction. Timing is essential to sustain the empathetically relevant, human-bridge part of the conversation, something that Ocaña et al. (2021c) rate highly for user experience in cyberspace and that Le et al. (2024) consider especially crucial for 'learning moments'; similar mechanisms have also been implemented by Yu et al. (2024) and Yadav et al. (2020).
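A rule-based sketch of this tone-shifting behaviour is given below. The emotion categories follow the ones named above, but the keyword triggers and empathetic phrasings are placeholders; a deployed system would rely on a trained emotion classifier rather than keyword matching.

# Toy illustration of shifting conversational tenor toward empathy when
# a negative emotion is detected. Keyword triggers and wording are
# placeholders; a deployed system would use a trained emotion classifier.
EMOTION_KEYWORDS = {
    "sadness":   ["sad", "down", "unhappy"],
    "anger":     ["angry", "furious", "annoyed"],
    "fear":      ["scared", "worried", "afraid"],
    "happiness": ["great", "happy", "excited"],
}

EMPATHETIC_PREFIX = {
    "sadness": "I'm sorry you're feeling this way. ",
    "anger":   "I understand this is frustrating. ",
    "fear":    "That sounds stressful; let's take it step by step. ",
}

def detect_emotion(text: str) -> str:
    lowered = text.lower()
    for emotion, words in EMOTION_KEYWORDS.items():
        if any(w in lowered for w in words):
            return emotion
    return "neutral"

def empathise(draft_reply: str, user_text: str) -> str:
    """Prepend an empathetic opener when the user expresses a negative emotion."""
    emotion = detect_emotion(user_text)
    return EMPATHETIC_PREFIX.get(emotion, "") + draft_reply

print(empathise("Here is the summary you asked for.",
                "I'm really annoyed, this keeps failing."))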
Personality Emulation
After the creation of the user profile, the next goal for the system is to replicate the user's personality characteristics. This is done by dynamically adjusting the language model's response patterns (Flores, 2023; Kubjana, 2024; Velagaleti et al., 2024). For example, if the system recognises that the user has an open and extroverted personality, it interacts with the user in an energetic way or uses more energetic intonation to express its messages.
This paper proposes that the component leverages a personality modulation mechanism in which:
1.- The model adjusts its communication style (formal, casual, direct, nuanced).
2.- The model tailors its emotional delivery (more upbeat, reserved, supportive).
3.- The model can simulate cognitive patterns such as decision-making, preferences, or certain habitual language patterns based on the user's psychological profile.
4.- Emotion Detection: Natural language processing tools such as sentiment lexicons and deep learning models (e.g., BERT, OpenAI models) are used to detect underlying emotions in the text.
5.- Emotion Response Generation: The system selects from a range of pre-trained emotional response templates that match the detected emotional state (a minimal sketch follows this list).
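The sketch referenced in point 5 is shown below: a detected emotion and a stored trait profile jointly select a pre-written response template and a delivery style. The templates, thresholds and trait names are illustrative assumptions rather than the actual parameters of the proposed modulation component.

# Simplified personality-modulation and template-selection step.
# Trait thresholds, styles and templates are illustrative assumptions,
# not the templates or parameters of the proposed system.
RESPONSE_TEMPLATES = {
    "sadness":   "That sounds hard. {content}",
    "happiness": "Love the energy! {content}",
    "neutral":   "{content}",
}

def delivery_style(profile: dict) -> str:
    """Map a Big Five-style profile (0-1 scores) to a coarse delivery style."""
    if profile.get("extraversion", 0.5) > 0.6:
        return "upbeat"
    if profile.get("neuroticism", 0.5) > 0.6:
        return "supportive"
    return "reserved"

def generate_reply(content: str, detected_emotion: str, profile: dict) -> str:
    template = RESPONSE_TEMPLATES.get(detected_emotion, "{content}")
    reply = template.format(content=content)
    style = delivery_style(profile)
    if style == "upbeat":
        reply += " Want to dive straight in?"
    elif style == "supportive":
        reply += " We can go at whatever pace works for you."
    return reply

profile = {"extraversion": 0.8, "neuroticism": 0.3}
print(generate_reply("Here are three options for tonight.", "happiness", profile))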
Feedback Loop and Iteration
Interactions become progressive as the system updates and modifies the user profile, which is why the system can be called adaptive (Flores, 2023; Ðula et al., 2024): this learning helps the model become increasingly adept at recognising the nuances of the user's personality and psychological state (Döring et al., 2024). At each interaction, the model evaluates:
- Has the user's emotional state shifted significantly (become more positive, more negative, or remained the same)?
- Are there emerging personality traits that were previously underrepresented or not seen before?
- What patterns of behaviour (e.g., topic preferences, conversational topics, language and speaking style) have become more or less prominent?
Reinforcement learning is then used to fine-tune the model's responses to these changes in the profile, as sketched below.
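The reinforcement-learning idea can be illustrated with a simple epsilon-greedy scheme over response styles, where explicit or implicit user feedback acts as the reward signal. The styles, exploration rate and reward definition below are assumptions made for this sketch, not the training procedure of the system.

# Toy epsilon-greedy "bandit" over response styles: positive user feedback
# reinforces the style that produced it. This illustrates the feedback-loop
# idea only; reward definition and styles are assumptions.
import random

STYLES = ["formal", "casual", "upbeat", "supportive"]
value = {s: 0.0 for s in STYLES}    # running value estimate per style
count = {s: 0 for s in STYLES}
EPSILON = 0.1                        # exploration rate

def choose_style() -> str:
    if random.random() < EPSILON:
        return random.choice(STYLES)             # explore
    return max(STYLES, key=lambda s: value[s])   # exploit best-known style

def record_feedback(style: str, reward: float) -> None:
    """Reward could be +1 for positive user feedback, -1 for negative."""
    count[style] += 1
    value[style] += (reward - value[style]) / count[style]   # incremental mean

# One simulated interaction turn:
style = choose_style()
record_feedback(style, reward=1.0)   # user reacted positively
print(style, value[style])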
Figure 2. Resulting process flow for the implemented interactive system module
DISCUSSION
System Implementation
The AI industry has demonstrated the efficacy of programs that converse with people in natural language, such as OpenAI's ChatGPT, and these well-known and well-studied models therefore served as the foundation for the proposed implementation, as previously established in the HCI literature on system models (Chakriswaran et al., 2019; Yadav et al., 2020; Yu et al., 2024) and especially by Flores (2023), who established a model for endowing AI with human traits, and in his subsequent research (Flores and Luna, 2024).
The system implementation proposed here is described as follows:
User Interface: The user can give commands vocally or type them out to interact with the system. Interaction happens across two layers, a user layer and a system layer; the system layer receives the input coming from the user layer. Both input and expressed output are stored in the system's memory for near-instantaneous processing.
User Input Layer: This layer captures the user's commands, speech and sentiment, together with any contextual information that might be needed to interpret the command. For the most part, everything is collected and processed in real time so that the system can respond to the user without noticeable delay.
Analysis Layer: This layer works out what the commands mean and how to execute them. Machine learning tools and emotion-recognition components attached to the system interpret sentiment and resolve the commands' meaning.
User Engagement: The system builds its "not-a-human" persona from past interactions with the current user and with other users. These interactions refine and update the user profile, which makes it almost lifelike. The system "thinks" using past interactions, which supports cognitive and communication-style analysis, and the resulting personality profile makes it almost humanistic in nature.
Emotional Intelligence: The system notices emotional changes in the input from the user. Based on this analysis, it always tries to "stay with the user", meaning it aligns its tone and content with the user's emotional state. This includes deciding what to "say" and what not to "say": the system has to be good at both, saying the right things and not saying the wrong things.
Feedback Loop and Iteration: The system keeps gathering feedback from every interaction it has with the user and uses it to continuously refine the user profile. Continuous interaction between the system and the user provides a solid base of data from which the profile can be enhanced; in a system as complex and adaptive as this, even poor starting data can quickly be turned into a profile that is decent and usable. Two layers of the system come into play where the profile is concerned.
Response Generation: The system uses the data gathered by the preceding layers to create an output that is personalised and therefore more likely to elicit the kind of response needed to engage the user. The response can be rendered as text or voice, depending on which kind of input the user provided to initiate the interaction.
User Feedback: After the system's output has been rendered and the interaction is, in theory, complete, the user provides the system with fresh data, be it verbal or non-verbal, that will have a subsequent impact on the next interaction.
Continuous Adjustment and Learning: The model continuously learns and adjusts its user profile based on the most recent interactions, echoes of which can be heard in the next command issued by the user. Using reinforcement learning, the system becomes increasingly responsive and anticipatory of user needs while mirroring what the user is feeling.
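To show how the layers described above could fit together, the following skeleton traces one interaction turn through input analysis, profile update, response generation and feedback. All class and method names are invented for this sketch and do not correspond to a released implementation.

# Illustrative skeleton of the layered architecture described above.
# All class and method names are invented for this sketch.
class AdaptiveAgent:
    def __init__(self):
        self.profile = {}        # evolving psychological/behavioural profile
        self.history = []        # stored interactions (system "memory")

    def analyse(self, user_input: str) -> dict:
        # Analysis layer: sentiment/emotion and intent would be inferred here.
        return {"text": user_input, "emotion": "neutral", "intent": "chat"}

    def update_profile(self, analysis: dict) -> None:
        # User engagement layer: refine the profile from the latest turn.
        self.history.append(analysis)
        self.profile["turns"] = len(self.history)

    def respond(self, analysis: dict) -> str:
        # Response generation layer: adapt wording to emotion and profile.
        return f"(turn {self.profile['turns']}) I hear you: {analysis['text']}"

    def feedback(self, reward: float) -> None:
        # Feedback loop: reinforcement signal used for continuous adjustment.
        self.profile["last_reward"] = reward

agent = AdaptiveAgent()
analysis = agent.analyse("I'd like something more upbeat today.")
agent.update_profile(analysis)
print(agent.respond(analysis))
agent.feedback(1.0)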
CONCLUSION
This paper has primarily mapped the HCI basis, elements and proposed theories, detailing the potential use cases for adaptive conversational agents, in the author's view mostly promising ones. Efficient, smart applications of adaptive conversational agents do not in themselves amount to a revolution. What this paper strives for is to provide a foundation for implementing the proposed system and to consider how these technologies, used in entirely novel capacities, might be turned toward the greater good of humanity. This could even point toward self-aware AI, if these technologies are to yield a truly transformative adaptive conversational agent, which they currently do not possess.
Adaptive conversational agents represent a new foundational area of AI, and there are still questions to ask, such as: why are they "social"? Because they interact with us in a way that, until now, only humans have. At least, that is the ideal. Chen and Xiao (2024) assert that pairing these agents with large language models is changing the landscape of AI, but true social behaviour might require extensive experimentation with the language models themselves. If that claim is taken at face value, the main significance of these agents doing something that an LLM (Large Language Model) by itself does not do is "acting human", or, to put it another way, "interacting socially" (i.e., in a human-like manner and without the necessity of a human on the other end). However, a conversational agent that works with an LLM is still just a powerful agent for various tasks even without emotional capabilities. Why? Because the LLM is essentially an enormous statistical machine that finds, with varying levels of success, the "next best word" to produce given the "previous words" and "context" it has been cued into seeing, as a result of being trained on an unfathomable amount of relationship data.
Ethical issues may arise not only from user data collection but also from the emotionality of AI, as Flores and Luna (2024) describe extensively. This remains a novel proposal that overlaps Human-Computer Interaction with behavioural and perhaps even political science, since the "humanization" of AI will bring even harder existential challenges for humanity. Changes to the adjustment of AI modelling might also come to be seen as a potential intrusion on autonomous AI. For now, this paper encourages future research on the proposed testing, aiming first to achieve an AI that is, if not perfectly accurate, definitely emotional in a way comparable to humans.
REFERENCES
Bilquise, G., Ibrahim, S., & Shaalan, K. (2022). Emotionally intelligent chatbots: A systematic literature
review. Human Behavior and Emerging Technologies, 2022. https://doi.org/10.1155/2022/9601630
Chakriswaran, P., Vincent, D. R., Srinivasan, K., & Sharma, V. (2019). Emotion AI-driven sentiment
analysis: A survey, future research directions, and open issues. Applied Sciences.
https://doi.org/10.3390/app9245462
Chen, Y., & Xiao, Y. (2024). Recent advancement of emotion cognition in large language models. arXiv preprint arXiv:2409.13354. https://doi.org/10.48550/arXiv.2409.13354
Dolgikh, S. (2024). Self-awareness in natural and artificial intelligent systems: A unified information-
based approach. Evolutionary Intelligence. https://doi.org/10.1007/s12065-024-00974-z
Döring, N., Le, T. D., Vowels, L. M., & Vowels, M. J. (2024). The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020-2024. Current Sexual Health Reports. https://doi.org/10.1007/s11930-024-00397-y
Ðula, I., Berberena, T., & Keplinger, K. (2024). From challenges to opportunities: navigating the human
response to automated agents in the workplace. Humanities and Social Sciences
Communications, 11(1). https://doi.org/10.1057/s41599-024-03962-x
Guo, Y., Smith, A. B., & Johnson, M. L. (2024). Personality prediction from task-oriented and open-domain human-machine dialogues. Scientific Reports, 14(1), Article 53989. https://doi.org/10.1038/s41598-024-53989-y
Kossack, S., & Unger, H. (2023). Emotion-aware chatbots: Understanding, reacting, and adapting to human emotions. In Advances in Human-Computer Interaction (pp. 123-145). Springer. https://doi.org/10.1007/978-3-031-61418-7_8
Kubjana, L., Adekunle, P., & Aigbavboa, C. (2024). Analyzing the Impact of Emotional Intelligence on
Leadership in Construction 4.0. Proceedings of the Future Technologies Conference.
https://doi.org/10.1007/978-3-031-73128-0_30
Le, U. P. N., Nguyen, A. T. T., Nguyen, A. V., Huynh, V. K., Bui, C. T. L., & Nguyen (2024). How do emotional support and emotional exhaustion affect the relationship between incivility and students' subjective well-being? In Disruptive Technology and Business Continuity: Proceedings of the 5th International Conference on Business (ICB 2023) (pp. 237-248). Springer. https://doi.org/10.1007/978-981-97-5452-6_18
Lee, J., Park, S., & Kim, H. (2023). A paradigm shift from 'human writing' to 'machine generation' in personality test development. Journal of Business and Psychology, 38(1), 45-62. https://doi.org/10.1007/s10869-022-09864-6
Fan, X., Luo, Y., Zhao, Y., & Li, J. (2017). Do we need emotionally intelligent artificial agents? In International Conference on Human-Computer Interaction (pp. 194-205). Springer. https://doi.org/10.1007/978-3-319-67401-8_15
Lee, G. H., Lee, K. J., Jeong, B., & Kim, T. K. (2024). Developing Personalized Marketing Service Using
Generative AI. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3361946
Flores, H. I. O., & Luna, A. (2024). AI for psychological profiles: Advances, challenges, and future directions. Ciencia Latina Revista Científica Multidisciplinar, 8(3), 10592-10609. https://doi.org/10.37811/cl_rcm.v8i3.12221
Flores, H. I. O. (2023). Human computer interaction's insights into the recognition of love: A comprehensive framework. Tierra Infinita, 9(1), 228-245. https://doi.org/10.32645/26028131.1254
Llanes, J., García-Sánchez, F., Moreno-Izquierdo, M., & Torres-Ramos, R. (2024). Developing conversational virtual humans for social emotion elicitation. Expert Systems with Applications, 231, Article 122647. https://doi.org/10.1016/j.eswa.2024.122647
Ma, L., Chen, Y., & Zhao, X. (2024). Personality-enhanced emotion generation modeling for dialogue systems. Cognitive Computation, 16(1), 45-62. https://doi.org/10.1007/s12559-023-10204-w
Matsumoto, S., Washburn, A., & Riek, L. D. (2022). A framework to explore proximate human-robot coordination. ACM Transactions on Human-Robot Interaction (THRI), 11(3), Article 32, 1-34. https://doi.org/10.1145/3526101
Müller, L., Mattke, J., Maier, C., & Weitzel, T. (2019). Chatbot acceptance: A latent profile analysis on individuals' trust in conversational agents. Proceedings of the 2019 ACM SIGCHI Conference, 1-5. https://doi.org/10.1145/3322385.3322392
Ocaña, M., Villamarín, A., Chumaña, J., Narváez, M., Guallichico, G., & Luna, A. (2023a). Artificial Intelligence in the Detection of Autism Spectrum Disorders (ASD): A Systematic Review. In Intelligent Vision and Computing. Springer, Cham. https://doi.org/10.1007/978-3-031-71388-0_3
Ocaña, M., Luna, A., & Guallichico, G. (2023b). Aplicaciones móviles en el desarrollo del lenguaje: Un enfoque comparativo entre padres y educadores. Revista ALPHA OMEGA. Retrieved from https://link.springer.com/chapter/10.1007/978-3-030-68083-1_30
Ocaña, M., Mejía, R., Larrea, C., Analuisa, C. (2021a). Informal learning in social networks during the
COVID-19 pandemic: an empirical analysis. Artificial Intelligence. Retrieved from:
https://link.springer.com/chapter/10.1007/978-3-030-68083-1_30
Ocaña, M., Mejía, R., Larrea, C., Cruz, E., Santana, L. (2021b). Investigating the importance of student
location and time spent online in academic performance and self-regulation. Artificial
Intelligence. Retrieved from: https://link.springer.com/chapter/10.1007/978-3-030-68083-1_31
Ocaña, M., Luna, A., Jeada, V. Y., & Carrillo, H. C. (2023c). Are VR and AR really viable in military education?: A position paper. Advances in Computing. https://doi.org/10.1007/978-981-19-7689-6_15
Ocaña, M., Almeida, E., Albán, S. (2021c). How Did Children Learn in an Online Course During
Lockdown?: A Piagetian Approximation. XV Multidisciplinary International Congress. Retrieved
from: https://link.springer.com/chapter/10.1007/978-3-030-96046-9_20
Pellert, M., Lechner, C. M., & Strohmaier, M. (2024). AI psychometrics: Assessing the psychological profiles of large language models through psychometric inventories. Perspectives on Psychological Science, 19(5). https://doi.org/10.1177/17456916231214460
Poggi, I., & Pelachaud, C. (2000). Emotion and personality in a conversational agent. In J. Cassell (Ed.), Embodied conversational agents (pp. 189-219). MIT Press. https://doi.org/10.7551/mitpress/2697.003.0008
Rathi et al. (2022). Psychometric profiling of individuals using Twitter profiles: A psychological natural language processing-based approach. Concurrency and Computation: Practice and Experience, 34(15), Article e7029. https://doi.org/10.1002/cpe.7029
Romero, P., Fitz, S., & Nakatsuma, T. (2024). Do GPT language models suffer from split personality disorder? The advent of substrate-free psychometrics. arXiv preprint arXiv:2408.07377. https://doi.org/10.48550/arXiv.2408.07377
Saha, R., Neogi, D., & Chaudhuri, R. (2024). L-BFGS optimization-based human body posture rectification: A smart interaction for computer-guided workout. In Proceedings of the 12th International Conference on Soft Computing for Problem Solving (pp. 61-76). Springer. https://doi.org/10.1007/978-981-97-3292-0_4
Sikström, S., Johansson, B., & Larsson, M. (2024). Personality in just a few words: Assessment using natural language processing. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4933048
Sonlu, S., Güdükbay, U., & Durupinar, F. (2021). A conversational agent framework with multi-modal personality expression. ACM Transactions on Graphics (TOG), 40(1), Article 7, 1-16. https://doi.org/10.1145/3439795
Velagaleti, S. B., Choukaier, D., & Nuthakki, R. (2024). Empathetic Algorithms: The Role of AI in Understanding and Enhancing Human Emotional Intelligence. Journal of Electrical and Computer Engineering. Retrieved from https://www.proquest.com/openview/ebdccf03c2979c138444061f01dd87df/1?pq-origsite=gscholar&cbl=4433095
Wen, Q., Li, J., & Xu, P. (2024). Personality-affected emotion generation in dialog systems. Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies, 8(1), Article 7, 1-24. https://doi.org/10.48550/arXiv.2404.07229
Yadav, A., & Vishwakarma, D. K. (2020). Sentiment analysis using deep learning architectures: A review. Artificial Intelligence Review. https://doi.org/10.1007/s10462-019-09794-5
Yu, J., Dickinger, A., So, K. K. F., & Egger, R. (2024). Artificial intelligence-generated virtual
influencer: Examining the effects of emotional display on user engagement. Journal of Retailing
and Consumer Services. https://doi.org/10.1016/j.jretconser.2023.103560