Fig. 3: Russell's two-dimensional framework of pleasantness (pleasure/displeasure) and activation (arousal/non-arousal).
Source publication
It has been recognized that human behavior is an observable consequence of the interactions between cognitive and affective functions. This perception has motivated the study of human emotions in disciplines such as psychology and neuroscience and led to the formulation of a number of theories and models that attempt to explain the mechanisms under...
Context in source publication
Context 1
... main contribution of dimensional theories to CMEs is that they provide a suitable framework for representing emotions from a structural perspective [30,50]. This psychological approach establishes that emotions can be differentiated on the basis of dimensional parameters, such as arousal and valence. Russell [55] proposes a two-dimensional framework consisting of pleasantness (pleasure/displeasure) and activation (arousal/non-arousal) to characterize a variety of affective phenomena such as emotions, mood, and feelings (see Fig. 3). In this model, in order to generate emotions, perceived events are evaluated and represented in terms of their level of pleasantness and activation and situated within a two-dimensional space in which a number of emotion labels have been previously identified. In this manner, the particular position of the events in the two-dimensional space defines the emotion to be derived. Russell [55] suggests the concept of core affect to explain and represent the combination of these two dimensions. The core affect is considered the essence of all affective experience; it is defined as a consciously accessible neurophysiological state that continuously represents the feelings generated by the assessment of the individual-environment relationship. Furthermore, Russell's model comprises other aspects related to the core affect, such as affect regulation, affective quality, and attributed affect, and considers emotions as emotional episodes that consist of causative events, including the antecedent event, the core affect, attributions, psychological and expressive changes, subjective conscious experiences, and emotion regulation [56,57]. Another instance of this psychological approach is the three-dimensional framework proposed by Russell and Mehrabian [58]. This model describes emotions based on their level of pleasantness, arousal, and dominance. In this model, known as the PAD model, a series of emotions are also identified within a three-dimensional space formed by the three mentioned dimensions. For example, happiness is located at (P = 0.81, A = 0.51, D = 0.46) and anger at (P = −0.51, A = 0.59, D = 0.25). In this manner, when perceived events are evaluated in terms of the PAD dimensions, they can be mapped into the PAD space in order to trigger a corresponding emotion. Mehrabian [50] employs the PAD model to additionally represent temperament scales, which makes it possible to define and describe personality types. In this context, points in the PAD space determine individual traits, regions define personality types, and lines that cross the intersection of the axes define particular dimensions of personality. Furthermore, by separating each axis into positive and negative, the eight resulting regions in ...
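To make this mapping concrete, here is a minimal Python sketch (not taken from the cited works) of assigning an emotion label to an appraised event by nearest-neighbor lookup in the PAD space. Only the happiness and anger coordinates quoted above come from Russell and Mehrabian [58]; the Euclidean-distance rule and the example event are illustrative assumptions, since the sources do not prescribe a specific lookup procedure.

```python
import math

# Emotion prototypes in PAD space (pleasure, arousal, dominance).
# The two coordinates below are those quoted from Russell and
# Mehrabian [58]; a real model would place many more labels.
PAD_LABELS = {
    "happiness": (0.81, 0.51, 0.46),
    "anger": (-0.51, 0.59, 0.25),
}

def nearest_emotion(pleasure: float, arousal: float, dominance: float) -> str:
    """Map an appraised event to the closest labeled emotion by
    Euclidean distance in the three-dimensional PAD space."""
    point = (pleasure, arousal, dominance)
    return min(PAD_LABELS, key=lambda label: math.dist(point, PAD_LABELS[label]))

# An event appraised as pleasant, moderately arousing, and empowering
# falls nearest to the "happiness" prototype.
print(nearest_emotion(0.7, 0.4, 0.5))  # -> happiness
```

The same lookup generalizes to Russell's two-dimensional model by dropping the dominance coordinate.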
Similar publications
Cognitive Robotics aims to develop robots that can perform tasks, learn from experiences, and adapt to new situations using cognitive skills. Rooted in neuroscience theories, Cognitive Robotics provides a unique opportunity for NeuroIS researchers to theorize and imagine intelligent autonomous agents as natural cognitive systems. By translating Cog...
Citations
... Emotions are linked to transient physiological changes in the body, driven by underlying cognitive processes and emotional responses. The capacity to accurately detect and interpret these emotional states is vital across various domains, including psychology [2], physiology [3], healthcare [4], safe driving [5], education [6], and marketing [7]. ... electroencephalogram (EEG) [25], electrocardiogram (ECG) [26], electromyogram (EMG) [27], and galvanic skin response (GSR) [22]. ...
Our research systematically investigates the cognitive and emotional processes revealed through eye movements within the context of virtual reality (VR) environments. We assess the utility of eye-tracking data for predicting emotional states in VR, employing explainable artificial intelligence (XAI) to advance the interpretability and transparency of our findings. Utilizing the VR Eyes: Emotions dataset (VREED) alongside an Extra Trees classifier enhanced by SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME), we rigorously evaluate the importance of various eye-tracking metrics. Our results identify significant correlations between metrics such as saccades, micro-saccades, blinks, and fixations and specific emotional states. The application of SHAP and LIME elucidates these relationships, providing deeper insights into the emotional responses triggered by VR. These findings suggest that variations in eye-feature patterns serve as indicators of heightened emotional arousal. Not only do these insights advance our understanding of affective computing within VR, but they also highlight the potential for developing more responsive VR systems capable of adapting to user emotions in real time. This research contributes significantly to the fields of human-computer interaction and psychological research, showcasing how XAI can bridge the gap between complex machine-learning models and practical applications, thereby facilitating the creation of reliable, user-sensitive VR experiences. Future research may explore the integration of multiple physiological signals to enhance emotion detection and interactive dynamics in VR.
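As a rough illustration of the pipeline this abstract describes (an Extra Trees classifier explained with SHAP over eye-tracking metrics), here is a hedged Python sketch. The feature names, synthetic data, and binary label are placeholders, not the actual VREED columns; only the classifier/explainer pairing follows the abstract.

```python
import numpy as np
import shap  # third-party SHAP library
from sklearn.ensemble import ExtraTreesClassifier

# Placeholder eye-tracking features; the names are illustrative
# stand-ins, not the actual VREED column names.
feature_names = ["fixation_count", "saccade_rate", "microsaccade_rate",
                 "blink_rate", "mean_pupil_diameter"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))
y = rng.integers(0, 2, size=200)  # placeholder binary arousal label

model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature attributions for tree ensembles.
# Depending on the shap version, shap_values is a list with one array
# per class or a single 3-D array, so locate the feature axis by size.
sv = np.abs(np.asarray(shap.TreeExplainer(model).shap_values(X)))
feat_axis = sv.shape.index(len(feature_names))
mean_abs = sv.mean(axis=tuple(a for a in range(sv.ndim) if a != feat_axis))

for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")  # global importance ranking
```

On real data, this ranking would indicate which eye metrics drive the predicted emotional state, which is the kind of result the study reports.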
... We therefore need to ensure some ethical alignment first, even if these agents do not share some affective components of pain or other negative emotions. Besides this, research in neuroscience and artificial intelligence continues to strive for understanding and, as part of this, recreating feelings and emotions [116]. In fact, artificial agents with the capacity for empathy may be of high clinical relevance, as revealed by therapeutic bots, artificial pets, or our hypothetical enTwin. ...
Critics of Artificial Intelligence posit that artificial agents cannot achieve consciousness even in principle, because they lack certain necessary conditions for consciousness present in biological agents. Here we highlight arguments from a neuroscientific and neuromorphic engineering perspective as to why such a strict denial of consciousness in artificial agents is not compelling. We argue that the differences between biological and artificial brains are not fundamental and are vanishing with progress in neuromorphic architecture designs mimicking the human blueprint. To characterise this blueprint, we propose the conductor model of consciousness (CMoC) that builds on neuronal implementations of an external and internal world model while gating and labelling information flows. An extended Turing test (eTT) lists criteria on how to separate the information flow for learning an internal world model, both for biological and artificial agents. While the classic Turing test only assesses external observables (i.e., behaviour), the eTT also evaluates internal variables of artificial brains and tests for the presence of neuronal circuitries necessary to act on representations of the self, the internal and the external world, and potentially, some neural correlates of consciousness. Finally, we address ethical issues for the design of such artificial agents, formulated as an alignment dilemma: if artificial agents share aspects of consciousness, while they (partially) overtake human intelligence, how can humans justify their own rights against growing claims of their artificial counterpart? We suggest a tentative human-AI deal according to which artificial agents are designed not to suffer negative affective states but in exchange are not granted equal rights to humans.
... Agents are increasingly able to work closely with humans in a variety of ways (Cila 2022). Norms operate on the behavior of a human (Frantz and Pigozzi 2018) as well as on their emotions (Rodríguez and Ramos 2014). Moral norms play an essential role in the regulation of human interactions (Malle et al. 2015). ...
... The behavioral architecture developed in this research is based on the cognitive agent architecture (Bratman 1991), the normative life cycle (Frantz and Pigozzi 2018), and the life cycle of emotional functioning (Rodríguez and Ramos 2014), applied to an evacuation situation. The human agent can accept a norm, thus becoming an executor of the norm itself. ...
Agent-based modeling and simulation can provide a powerful test environment for crisis management scenarios. However, human-agent interaction has limitations in representing norms issued by an agent to a human agent that has emotions. In this study, we present an approach to the interaction between a virtual normative agent and a human agent in an evacuation scenario. Through simulation comparisons, it is shown that the method used in this study can more fully simulate the real-life outcome of an emergency situation and also improves the authenticity of the agent interaction.
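The following minimal Python sketch shows one way norm acceptance could be modulated by emotion in an architecture of this kind. The attribute names, the multiplicative rule, and the threshold are illustrative assumptions, not the mechanism of the published architecture.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    action: str       # e.g. "follow_marked_exit" (illustrative)
    authority: float  # perceived legitimacy of the issuer, 0..1

@dataclass
class HumanAgent:
    fear: float        # current fear intensity, 0..1
    compliance: float  # dispositional tendency to comply, 0..1

    def accepts(self, norm: Norm) -> bool:
        # Assumption: high fear lowers the chance that an externally
        # issued norm is internalized and executed by the agent.
        effective = self.compliance * norm.authority * (1.0 - self.fear)
        return effective > 0.25  # assumed acceptance threshold

agent = HumanAgent(fear=0.8, compliance=0.9)
norm = Norm(action="follow_marked_exit", authority=0.9)
print(agent.accepts(norm))  # False: panic overrides the issued norm
```

An agent that does accept the norm would then, per the cited life cycle, become an executor of the norm itself.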
... Furthermore, although the literature reports a number of proposals aimed at facilitating the construction of CMEs, there is still no agreement on what issues need to be addressed in the short, medium, and long term. In particular, the main challenges in the construction of CMEs are related to their dual nature: theoretical and computational aspects (Rodríguez & Ramos, 2014). While their computational basis ensures the technical quality of the system and the inclusion of algorithms for emotional information processing (Hudlicka, 2011; Plaut, 2000), the theoretical aspects are interpreted by researchers to design, compose, and validate the emotional processes within the system, as well as the cognitive processes that are external to the system (Marsella et al., 2010; Fellous & Arbib, 2005; Samsonovich, 2013). ...
Computational models of emotion (CMEs) are software systems designed to emulate the diverse aspects of the human emotion process. This type of model is commonly incorporated into cognitive agent architectures to provide mechanisms underlying affective behavior. The construction of CMEs involves theories that explain human emotion as well as computational artifacts related to the design and implementation of software systems. Although most CMEs reported in the literature provide details on their theoretical foundations, it is uncommon to find details about the computational practices and artifacts utilized during their design and implementation phases. This paper presents and discusses some challenges associated with the computational nature of this type of model: (i) software quality attributes in CMEs, (ii) interoperability between CMEs and cognitive components, (iii) formal procedures for the design of CMEs, and (iv) reference schemes to validate CMEs. Software engineering is used as a reference to propose and discuss these challenges. In addition, a reference architecture designed by following software engineering practices and artifacts is proposed to discuss the implications of addressing these challenges. The present research sits at the intersection of human emotion modeling and software engineering and contributes to the software development process of affective systems.
... These emotions can be gathered, for instance, from facial expression configurations (Ko, 2018) displayed during interaction. As such, if implemented in virtual agents, these agents can adapt their behavior, discourse, and facial expressions according to the user's mood and induced emotional states, allowing them to adapt autonomously in order to motivate users and obtain a more engaging social interaction (Rodríguez and Ramos, 2014; Barrett et al., 2019), which could lead to more successful therapeutic outcomes. ...
While the debate regarding the embodied nature of human cognition is still a research interest in cognitive science and epistemology, recent findings in neuroscience suggest that cognitive processes involved in social interaction are based on the simulation of others' cognitive states as well as our own. However, until recently most research in social cognition has continued to study mental processes in social interaction deliberately isolated from each other, following the scientific reductionism of the 19th century. Lately, it has been proposed that social cognition, being emergent in interactive situations, cannot be fully understood with experimental paradigms and stimuli that put subjects in a passive stance towards social stimuli. Moreover, social neuroscience seems to concur with the idea that a simulation process of possible outcomes of social interaction occurs before the action can take place. In this "perspective" article, we propose that, in light of past and current research in social neuroscience regarding the implications of the mirror neuron system and empathy, these findings can be interpreted as a framework for embodied social cognition. We also propose that if the simulation process for the mentalization network works in concert with the mirror neuron system, human experiments on facial recognition and empathy need a new kind of stimulus. After a presentation of embodied social cognition, we will discuss the methodological prerequisites of future social cognition studies in this area. On this matter, we will argue that affective and reactive virtual agents are central to conducting such research.
... The perception system collects important features from the environment. The motivation system and the perception system keep an internal state in the form of emotions and drives (Rodríguez and Ramos, 2014). The attention system shapes the signals based on motivation and perception and provides motivation to the behavior system, which implements multiple types of behaviors, and the motor system realizes these behaviors in the form of motor skills and facial expressions. ...
Research in the last few decades has focused on the aim of developing intelligent agents capable of simulating human behavior. Artificial General Intelligence (AGI) made it possible to build such agents, and ongoing research in this area mainly focuses on utilizing cognitive abilities. A cognitive model is a system with processes such as perception, attention, emotions, memory, metacognition, reasoning, and learning. Emotions are cognitively based states that provide heuristic solutions along with automatic solutions to certain problems, mediate between plans, motivation, goals, and attention shifting, and may play a vital part in metacognition. Metacognition is an executive controller used for the regulation of cognition in order to obtain intelligent behavior. Research regarding the mutual influence of emotion and metacognition is very limited: researchers have mainly focused on psychopathology (emotion disorders) and on the generation of emotions. In view of this relationship, and given the lack of effort focusing specifically on emotion and metacognition, an "Emotion Based Regulatory for Metacognition" has been designed. The cognitive model mainly discusses the relation between emotion and metacognition and, more specifically, the role of emotion in metacognition.
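The perception–motivation–attention–behavior–motor flow described in the citing passage above can be sketched as a simple Python data pipeline. The function names, signal fields, and thresholds are illustrative assumptions, not the cited architecture's actual interfaces.

```python
def perceive(environment: dict) -> dict:
    """Perception: collect salient features from the environment."""
    return {"threat": environment.get("threat", 0.0)}

def motivate(percept: dict, state: dict) -> dict:
    """Motivation: update internal state (emotions and drives)."""
    state["fear"] = min(1.0, state.get("fear", 0.0) + percept["threat"])
    return state

def attend(state: dict) -> str:
    """Attention: weight signals by motivation and pick a behavior."""
    return "flee" if state["fear"] > 0.5 else "explore"

def act(behavior: str) -> str:
    """Motor: realize the behavior as motor skills and expressions."""
    return {"flee": "run + fearful face", "explore": "walk + neutral face"}[behavior]

state: dict = {}
state = motivate(perceive({"threat": 0.7}), state)
print(act(attend(state)))  # -> run + fearful face
```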
... If the technology for recognizing the guide lines can be improved, the extraction of other information will also be solved [3]. In addition, the identification and extraction of guide wires can not only provide motion guidance for AGVs but can also be applied to the active safety control systems of other manned vehicles [4]. However, with the nonuniformity of pavement structures, damaged and stained guide wires, and changes in light, the detection of guide wires has become more and more difficult. ...
During actual operations, Automatic Guided Vehicles (AGVs) will inevitably encounter overexposure or shadowy areas and unclear or even damaged guide wires, which interfere with the identification of guide wires. Therefore, this paper aims to address the shortcomings of existing technology at the software level. First, a Fast Guide Filter (FGF) is adopted together with a two-dimensional gamma function with variable parameters, and an image preprocessing algorithm for complex illumination environments is designed to remove the interference of illumination. Second, an ant colony edge detection algorithm is proposed, and the guide wire is accurately extracted by secondary screening combined with the guide wire's characteristics. Third, a variable-universe Fuzzy Sliding Mode Control (FSMC) algorithm is designed as a lateral motion control method to realize accurate tracking by the AGV. Finally, an experimental platform is used to comprehensively verify the series of algorithms designed in this paper. The experimental results show that the maximum deviation can be limited to 1.2 mm, and the variance of the deviation is less than 0.2688 mm².
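For readers unfamiliar with the kind of variable-parameter two-dimensional gamma correction the abstract mentions, here is a hedged Python/OpenCV sketch: the gamma exponent varies per pixel with a blurred illumination estimate, brightening shadows and taming overexposure. The Gaussian illumination estimate, the base of 0.5, and the file names are conventional or illustrative choices, not the paper's exact algorithm or parameters.

```python
import cv2
import numpy as np

def adaptive_gamma(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gamma correction driven by an illumination map."""
    f = gray.astype(np.float32) / 255.0
    # A large Gaussian blur approximates slowly varying illumination.
    illum = cv2.GaussianBlur(f, (0, 0), min(gray.shape) / 8)
    mean = max(float(illum.mean()), 1e-6)
    # Dark regions (illum < mean) get gamma < 1 and are brightened;
    # bright regions get gamma > 1 and are darkened.
    gamma = np.power(0.5, (mean - illum) / mean)
    return (np.clip(np.power(f, gamma), 0.0, 1.0) * 255).astype(np.uint8)

img = cv2.imread("guide_wire.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
if img is not None:
    cv2.imwrite("corrected.png", adaptive_gamma(img))
```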
... Today the literature on socially intelligent agents, emotionally intelligent virtual agents, social robots, etc., is immense. The variety of approaches combines cognitive models, in particular cognitive architectures [16-20], and connectionist methods. This is why general theories of emotions have become so important today [21]. ...
A cognitive model of the virtual clownery paradigm is developed, in which two clowns perform on a virtual stage, interacting with each other. Agents controlling virtual clown behavior were designed and implemented based on the eBICA cognitive architecture and embedded into the virtual environment, where they controlled two avatars. Clownery action was generated using the paradigm of ad hoc improvisation, without any given scenario. In this study, the output was produced as text, and the action was not visualized. In parallel, a number of scenarios were written by a group of experts using the same paradigm and behavioral repertoire. The two sets of scenarios were compared to each other in a number of characteristics. Results show which particular features of the model result in believable and socially attractive behavior. Implications concern socially-emotional intelligent agents for practically useful domains.
... This type of computational model is usually developed to be included in the cognitive architecture of virtual agents so that this type of intelligent system is capable of exhibiting affective behaviors in specific application domains (Caro et al., 2019; Rath et al., 2021). In general, CMEs are designed and implemented to provide virtual agents with mechanisms for evaluating a stimulus, eliciting synthetic emotions, and generating emotional behaviors (Huang et al., 2017; Rodríguez & Ramos, 2014). It is common practice that the internal mechanisms of CMEs are inspired by theories about human emotions that originated in areas such as psychology and neuroscience. ...
... Second, computational artifacts and practices from areas such as software engineering are utilized to turn such a human emotion model into working software and to ensure correct technical functioning. The development process of contemporary CMEs reported in the literature follows, in general, the procedure depicted in Figure 1, which reflects researchers' effort to obtain requirements from emotion theories and to generate a functional model (Rodríguez & Ramos, 2014). ...
Computational models of emotion (CMEs) are software systems designed to emulate specific aspects of the human emotion process. The underlying components of CMEs interact with cognitive components of cognitive agent architectures to produce realistic behaviors in intelligent agents. However, in contemporary CMEs, the interaction between affective and cognitive components occurs in an ad-hoc manner, which leads to difficulties when new affective or cognitive components must be added to the CME. This paper presents a framework that facilitates taking into account, within CMEs, the cognitive information generated by cognitive components implemented in cognitive agent architectures. The framework is designed to allow researchers to define how cognitive information biases the internal workings of affective components. The framework is inspired by software interoperability practices: it enables the communication and interpretation of cognitive information and standardizes the cognitive-affective communication process by ensuring semantic communication channels used to modulate the affective mechanisms of CMEs.
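In the spirit of the semantic communication channels the abstract describes, the following minimal publish/subscribe sketch in Python shows how a cognitive component might modulate an affective mechanism. The topic names, message fields, and modulator signature are illustrative assumptions, not the framework's actual interface.

```python
from typing import Callable, Dict, List

class CognitiveAffectiveChannel:
    """A toy semantic channel between cognitive components and a CME."""

    def __init__(self) -> None:
        self._modulators: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, modulator: Callable[[dict], None]) -> None:
        """An affective mechanism registers interest in a cognitive topic."""
        self._modulators.setdefault(topic, []).append(modulator)

    def publish(self, topic: str, message: dict) -> None:
        """A cognitive component publishes interpretable content."""
        for modulate in self._modulators.get(topic, []):
            modulate(message)

# Example: appraisal biased by a memory component's recall (topic and
# fields assumed); negative, relevant memories lower desirability.
channel = CognitiveAffectiveChannel()
appraisal = {"desirability": 0.0}

def bias_appraisal(msg: dict) -> None:
    appraisal["desirability"] += msg["valence"] * msg["relevance"]

channel.subscribe("memory/recall", bias_appraisal)
channel.publish("memory/recall", {"valence": -0.6, "relevance": 0.8})
print(appraisal)  # {'desirability': -0.48}
```

Routing every cognitive-affective exchange through one typed channel is what lets new components be added without rewiring the CME, which is the interoperability problem the abstract raises.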
... The theory-driven approach uses observations, theories, and models from various disciplines, e.g., psychology and social science, and creates computational models, as has been done for the related topic of computational models of affect and empathy since the 1980s [102]. For overviews, see [83,111]. Concerning the emotional behavior of agents that consider cultural values, first systems have emerged [43,107]. ...
This chapter examines how notions of trust and empathy can be applied to human–robot interaction and how they can be used to create the next generation of empathic agents, which address some of the pressing issues in multicultural aging societies by harnessing cutting-edge AI research to develop solutions and new services. First, some examples of possible future services are given. Then it is discussed how existing descriptive models of trust can be operationalized to allow robot agents to elicit trust in their human partners, and how robot agents can detect and alter trust levels so that transparency is included in the design of the interaction. Next, with regard to future computational models of empathy, fundamental challenges are formulated, and psychological models of empathy are discussed with respect to their application in computational models and their use in interactive empathic agents. For trust and empathy, a research agenda for the next steps is sketched, including an overview of suitable evaluation methods. The chapter concludes with a set of open research questions.