Recent publications
Humans perceive the physical properties of objects through active touch to acquire information that is unavailable by passive observation (e.g., pinching an object to estimate its stiffness). Previous functional neuroimaging studies have investigated neural representations of multisensory shape and texture perception using active touch and static images of objects. However, in active visuo-haptic perception, in addition to static visual information from the object itself, dynamic visual feedback of exploratory actions is a crucial cue. To integrate multisensory signals into a unitary percept, the brain must determine whether somatosensory sensation and dynamic visual feedback are congruent and caused by the same exploratory action. The influence of dynamic visual feedback during exploratory actions has not yet been examined, and the neural substrates for multisensory stiffness perception are still unknown. Here, we developed a functional magnetic resonance imaging-compatible device that enables users to perceive the stiffness of a virtual spring by pinching two finger plates and obtaining real-time visual feedback from the finger plate movements. After confirming the integration of visual and haptic cues in behavioral experiments, we investigated neural regions for multisensory stiffness signal processing and action-feedback congruency separately in two functional magnetic resonance imaging experiments. Modulating the stiffness level and contrasting bimodal/unimodal conditions revealed that multisensory stiffness information converged to the bilateral superior parietal lobules and supramarginal gyri, while congruent action-feedback conditions elicited significantly stronger neural responses in the left mid-cingulate cortex, left postcentral gyrus, and bilateral parietal opercula. 
Further analysis using dynamic causal modeling suggested top-down modulatory connections from the left mid-cingulate cortex to the bilateral parietal opercula and left postcentral gyrus when visual feedback was consistent with finger movements. Our results shed light on the neural mechanisms involved in estimating object properties from active touch and dynamic visual feedback, which may involve two distinct neural networks: one for multisensory signal processing and the other for action-feedback congruency judgment.
Using social robots is a promising approach for supporting senior citizens in super-aging societies. Essential design factors for achieving socially acceptable robots include effective emotional expressions and cuteness. Past studies have reported that robot-initiated touching behaviors toward interaction partners enhance these two factors in interactions with adults, but the effects of such touch behaviors in seniors are unknown. Therefore, in this study, we investigated the effects of robot-initiated touch behaviors on perceived emotions (valence and arousal) and the feeling of kawaii, a common Japanese adjective meaning cute, lovely, or adorable. In experiments with Japanese participants (adults aged 21–49, seniors aged 65–79) using a baby-type robot, our results showed that the robot’s touch significantly increased perceived valence regardless of the expressed emotions and the ages of the participants. Our results also showed that the robot’s touch was effective for adults in terms of arousal and the feeling of kawaii, but not for seniors. We discuss these differential effects of robot-initiated touch between adults and seniors by focusing on emotional processing in the latter. The findings of this study have implications for designing social robots capable of physical interaction with seniors.
Pain management is a critical challenge in healthcare, often exacerbated by loneliness and emotional distress. This study investigated the potential of a communication robot, Moffuly, to reduce pain perception and influence hormonal responses in a controlled experimental setting. Nineteen healthy participants underwent heat pain stimulation under two conditions: with and without robotic interaction. Pain levels were assessed using the Short-form McGill Pain Questionnaire and the Visual Analogue Scale, while mood and mental states were evaluated through established questionnaires including the Profile of Mood States, Hospital Anxiety and Depression Scale, and Self-Rating Depression Scale. Hormonal changes, including cortisol, growth hormone, oxytocin, estradiol, and dehydroepiandrosterone-sulfate, were measured from blood samples collected at key time points. The results demonstrated significant reductions in subjective pain and improvements in mood following robotic interaction. These effects were accompanied by favorable hormonal changes, including increased oxytocin and decreased cortisol and growth hormone levels. The findings suggest that robotic interaction may serve as an innovative approach to pain management by addressing both physiological and psychological factors. This study highlights the potential of robotics to complement traditional therapies in alleviating pain and enhancing emotional well-being. By mitigating emotional distress and loneliness, robotic interventions may enhance existing pain therapies and offer innovative solutions for resource-limited healthcare systems.
Individual differences in pain sensitivity are thought to relate to personality traits, but the underlying mechanisms remain unclear. Exercise influences hormonal secretion via the hypothalamic–pituitary system, which may link personality, hormonal responses, and pain perception. This study investigated these relationships in 14 healthy participants (3 females, 11 males, aged 20–50 years, mean 28 ± 9.25 years). Participants rated thermal pain stimuli and completed the NEO Personality Inventory-Revised (NEO-PI-R) to identify their personality. Each participant engaged in personal and group training sessions, with blood samples collected to measure cortisol, growth hormone, and other indicators. Participants were clustered into cortisol hypersecretors and hyposecretors based on their hormonal response. Hypersecretors exhibited significantly lower neuroticism scores and pain ratings than hyposecretors. These findings suggest a potential association between cortisol responsiveness during exercise, neuroticism, and pain sensitivity. This study highlights potential links between personality traits and reactive hormonal patterns, offering insights into the psychophysiological mechanisms underlying pain expression.
There is a growing expectation that deep reinforcement learning will enable multi-degree-of-freedom robots to acquire policies suitable for real-world applications. However, a robot system with a variety of components requires many learning trials for each different combination of robot modules. In this study, we propose a hierarchical policy design to segment tasks according to different robot components. The tasks of the multi-module robot are performed by skill sets trained on a component-by-component basis. In our learning approach, each module learns reusable skills, which are then integrated to control the whole robotic system. By adopting component-based learning and reusing previously acquired policies, we transform the action space from continuous to discrete. This transformation reduces the complexity of exploration across the entire robotic system. We validated our proposed method by applying it to a valve rotation task using a combination of a robotic arm and a robotic gripper. Evaluation based on physical simulations showed that hierarchical policy construction improved sample efficiency, achieving performance comparable to the baseline with 46.3% fewer samples.
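The component-by-component skill reuse described above can be sketched in a few lines of Python. This is a minimal illustration with invented skills and a hand-written high-level policy, not the paper's trained system:

```python
import numpy as np

def make_skill_library():
    """Hypothetical pre-trained per-module skills, each mapping state -> command.
    Two arm skills (hold / approach) and two gripper skills (open / close)."""
    arm_skills = [lambda s: s * 0.0,               # hold position
                  lambda s: s + 0.1]               # approach target
    gripper_skills = [lambda s: np.zeros_like(s),  # open
                      lambda s: np.ones_like(s)]   # close
    return arm_skills, gripper_skills

def hierarchical_step(state, high_level_policy, arm_skills, gripper_skills):
    """The high-level policy outputs one discrete skill index per module,
    so exploration happens over a small discrete space instead of the
    continuous joint space of the whole robot."""
    arm_idx, grip_idx = high_level_policy(state)
    return arm_skills[arm_idx](state), gripper_skills[grip_idx](state)
```

Here the high-level policy's action space is just the 2 × 2 grid of skill indices, which is the discretization that reduces exploration complexity across the whole system.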
Autonomous robots that rely on sensors require fail-soft strategies to continue their tasks despite partial sensor failures. We propose a sensor anomaly detection method that monitors changes in the correlations among sensor data. Our method eliminates the need for pre-defined programming to specify abnormal states for each individual sensor, and sparse structure learning makes real-time anomaly detection possible. In our experiment, we evaluated the method on a quadruped robot in a simulated environment, perturbing the sensor readings by adding one of two noise magnitudes (large or small) at one of the robot’s leg joints. When an anomaly was detected, the robot estimated the actual value of the noisy joint using a pre-trained multiple regression model. With the proposed anomaly detection, the robot successfully completed the walking task in most trials. Specifically, without anomaly detection, adding large noise to any of the twelve joints resulted in a 0% success rate; with anomaly detection, the success rate improved to over 89% for seven of the twelve joints.
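The idea of flagging a sensor whose correlation structure drifts can be sketched in Python. This toy version monitors raw correlation shifts against a baseline rather than the sparse structure learning used in the study; the signals, threshold, and sensor count are invented for illustration:

```python
import numpy as np

def fit_baseline(data):
    """Correlation structure of normal operation (rows: time, cols: sensors)."""
    return np.corrcoef(data, rowvar=False)

def anomaly_scores(window, baseline_corr):
    """Per-sensor score: how much its correlations with the others shifted."""
    corr = np.corrcoef(window, rowvar=False)
    diff = np.abs(corr - baseline_corr)
    np.fill_diagonal(diff, 0.0)   # ignore self-correlation
    return diff.sum(axis=1)

def detect(window, baseline_corr, threshold=2.0):
    """Indices of sensors whose correlation pattern drifted past the threshold."""
    scores = anomaly_scores(window, baseline_corr)
    return [i for i, s in enumerate(scores) if s > threshold]
```

A sensor corrupted by large noise decorrelates from all of its peers at once, so its row-sum score spikes while healthy sensors stay near their baseline.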
Introduction
This study focused on the psychological evaluation of an avatar robot in two distinct regions, Dubai in the Middle East and Japan in the Far East. Dubai has experienced remarkable development in advanced technology, while Japan boasts a culture that embraces robotics. These regions are distinctively characterized by their respective relationships with robotics. In addition, the use of robots as avatars is anticipated to increase, and this research aimed to compare the psychological impressions of people from these regions when interacting with an avatar as opposed to a human.
Methods
Considering that avatars can be presented on screens or as physical robots, two methodologies were employed: a video presentation survey (Study 1, Dubai: n = 120, Japan: n = 120) and an experiment involving live interactions with a physical robot avatar (Study 2, Dubai: n = 28, Japan: n = 30).
Results and discussion
Results from the video presentations indicated that participants from Dubai experienced significantly lower levels of discomfort towards the avatar compared to their Japanese counterparts. In contrast, during live interactions, Japanese participants showed a notably positive evaluation towards a Japanese human operator. The findings suggest that screen-presented avatars may be more readily accepted in Dubai, while humans were generally preferred over avatars in terms of positive evaluations when physical robots were used as avatars. The study also discusses the implications of these findings for the appropriate tasks for avatars and the relationship between cultural backgrounds and avatar evaluations.
After the COVID-19 pandemic, the adoption of distance learning accelerated in educational institutions in many countries. In addition to videoconferencing systems with camera images, avatars can also be used for remote classes. In particular, an android avatar with a sense of presence has the potential to provide higher-quality education than a video-recorded lecture. To investigate the specific educational effects of android avatars, we used a Geminoid, an android with the appearance of a specific individual, and conducted both a laboratory experiment and a large-scale field experiment. The first compared an android-avatar lecture with a videoconferencing lecture. We found that the use of an android avatar for the lecture led to significantly higher subjective feelings of being seen, of being motivated, and of being focused on the lecture compared to the video lecture. We then conducted a large-scale field experiment with an android avatar to clarify what contributes to such educational effects. The results suggest that students’ perceptions of the android’s anthropomorphism and competence have a positive impact, and discomfort a negative impact, on the subjective experience of educational effect. These results indicate the role of embodied anthropomorphization in a positive educational experience. An important point of this study is that both a laboratory experiment and a large-scale field experiment were conducted to clarify the educational effects of androids. These results support several related studies and are examined in detail. Based on them, the potential future use of androids in education is discussed.
This study investigated the differences between human and robot gaze in influencing preference formation, and examined the role of Theory of Mind (ToM) abilities in this process. Human eye gaze is one of the most important sources of information for social interaction and research has demonstrated its effectiveness in influencing people's preference. With increasing technological development, we will interact with robots that can exhibit gaze behavior and influence people's preference. It is unclear whether there are any differences between humans and robots in this process. The present study aimed to analyze the role of the gaze of a robot and a human in influencing the ascription of a preference to the gazer and the participants' preference. Furthermore, we examined the role of ToM abilities in preference formation. The results showed that the gaze has a greater effect on the gazer preference compared to participants' preference regardless of the agent (human or robot). In addition, ToM abilities predict both gazer and individual preferences in the robot's condition only even though different socio-cognitive mechanisms are involved. The study suggests that adults are cognitively able to process the gaze of a robot similar to a human, recognizing the underlying mental state. However, only for the robot, different cognitive mechanisms are involved in the gazer (i.e., perspective taking) and participants' preference formation (i.e., advanced ToM).
Mainstream use of AI has enabled human-like conversations with machines. Despite the perceived naturalness of these interactions, they are limited to 2D screens. Androids offer us an opportunity to explore the third dimension, touch, with a human-centered approach. In this work we developed and evaluated a system that lays some of the groundwork for future dialogues with these androids. While there have been robot-to-human studies that focus on behavior modification, handshakes, and hugs, many do not use human-looking androids, do not adapt to humans' positions, or do not explore face-touch. We demonstrate these shortcomings are surmountable with a system that can direct an android to touch along someone's arm or cheek. Our research goals were to create a system capable of subjective accuracy approaching human touch perception while maintaining participant rated naturalness scores no worse than in nonadaptive android-to-human touch systems. To evaluate the system, 25 participants selected 30 touch locations while seated in front of a realistic, male-looking android. Coders found our mean contact-to-target arm accuracy to be 4.9 cm, which is only 4 mm over the 4.5 cm upper range for two-point discrimination tests of adults' arms. Participants rated the naturalness of our touches as 3.9/7, which is no worse than the 3.0/7 found in a previous work with a mechanically identical android that nonadaptively touched people's forearms. With both goals being met, we have demonstrated that natural adaptive touch with a realistic android, even for the sensitive face, is within reach of scientific studies.
In this study, we propose a multitask reinforcement learning algorithm for foundational policy acquisition to generate novel motor skills. Learning a rich representation of the multitask policy is challenging in dynamic movement generation tasks because the policy must cope with changes in goals or environments that have different reward functions or physical parameters. Inspired by human sensorimotor adaptation mechanisms, we developed a learning pipeline that constructs encoder–decoder networks and performs network selection to facilitate foundational policy acquisition across multiple situations. First, we compared the proposed method with previous multitask reinforcement learning methods on standard multi-locomotion tasks; the proposed approach outperformed the baseline methods. We then applied the method to a ball-heading task with a monopod robot model to evaluate skill generation performance. The results showed that the proposed method was able not only to adapt to novel target positions or unfamiliar ball restitution coefficients, but also to exploit the foundational policy network, originally learned for heading motions, to generate an entirely new overhead kicking skill.
Foreseeing future outcomes is the art of decision-making. Substantial evidence shows that, during choice deliberation, the brain can retrieve prospective decision outcomes. However, decisions are seldom made in a vacuum. Context carries information that can radically affect the outcomes of a choice. Nevertheless, most investigations of retrieval processes have examined decisions in isolation, disregarding the context in which they occur. Here, we studied how context shapes prospective outcome retrieval during deliberation. We designed a decision-making task in which participants were presented with object–context pairs and made decisions that led to a certain outcome. We show that, during deliberation, likely outcomes were retrieved in transient patterns of neural activity as early as 3 s before participants decided. The strength of prospective outcome retrieval explains participants’ behavioral efficiency, but only when context affects the decision outcome. Our results suggest that context imposes strong constraints on retrieval processes and on how neural representations are shaped during decision-making.
This study explores the potential benefits of robots having the capability to anticipate people’s mental states in an exercise context. We designed 80 utterances for a robot with associated gestures that exhibit a range of emotional characteristics and then performed a 23-person data collection to investigate the effects of these robot behaviors on human mental states during exercise. The results of cluster analysis revealed that 1) utterances with similar meanings had the same effect and 2) the effects of a certain cluster on different people depend on their emotional state. On the basis of these findings, we proposed a robotic system that anticipates the effect of utterances on the individual’s future mental state, thereby choosing utterances that can positively impact the individual. This system incorporates three main features: 1) associating the relevant events detected by sensors with a user’s emotional state; 2) anticipating the effects of robot behavior on the user’s future mental state to choose the next behavior that maximizes the anticipated gain; and 3) determining appropriate times to provide coaching feedback, using predefined rules in the motion module for timing decisions. To evaluate the proposed system’s overall performance comprehensively, we compare robots equipped with the system’s unique features to those lacking these features. We design the baseline condition that lacks these unique features, opting for periodic random selection of utterances for interaction based on the current context. We conducted a 21-person experiment to evaluate the system’s performance. We found that participants perceived the robot to have a good understanding of their mental states and that they enjoyed the exercises more and put in more effort due to the robot’s encouragement.
In this study, we developed a light source position estimation system for controlling a wirelessly powered drone. An algorithm was designed to estimate the light source position by minimizing an error function consisting of the difference between the measured and estimated light-receiving intensities. The received-light intensity distribution of a light source with a beam angle of 2.5° was modeled using Gaussian and inverse-square functions based on actual measurements. A detection device was fabricated from a photodiode array substrate and a Raspberry Pi. The detection time of the developed system is 97 ms, confirming that real-time light source position estimation is feasible.
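The estimation principle — minimizing the squared error between measured and modeled received-light intensities — can be sketched as follows. The detector layout, beam-model parameters, and grid-search solver are illustrative assumptions, not the actual system:

```python
import numpy as np

# Hypothetical photodiode positions on the detector board (metres)
DETECTORS = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])

def model_intensity(source_xy, detectors, amplitude=1.0, sigma=0.05, height=0.2):
    """Gaussian beam profile combined with inverse-square distance falloff."""
    lateral = np.linalg.norm(detectors - source_xy, axis=1)
    r2 = lateral**2 + height**2
    return amplitude / r2 * np.exp(-lateral**2 / (2 * sigma**2))

def estimate_source(measured, detectors, grid=np.linspace(-0.05, 0.15, 81)):
    """Grid search minimising the squared error between measured and
    modelled intensities; returns the best-fitting source position."""
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            cand = np.array([x, y])
            err = np.sum((model_intensity(cand, detectors) - measured) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best
```

A real-time system would replace the brute-force grid with a gradient-based or coarse-to-fine solver, but the error function being minimized is the same.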
While it is crucial for human-like avatars to perform co-speech gestures, existing approaches struggle to generate natural and realistic movements. In the present study, a novel transformer-based denoising diffusion model is proposed to generate co-speech gestures. Moreover, we introduce a practical sampling trick for diffusion models to maintain the continuity between the generated motion segments while improving the within-segment motion likelihood and naturalness. Our model can be used for online generation since it generates gestures for a short segment of speech, e.g., 2 s. We evaluate our model on two large-scale speech-gesture datasets with finger movements using objective measurements and a user study, showing that our model outperforms all other baselines. Our user study is based on the Metahuman platform in the Unreal Engine, a popular tool for creating human-like avatars and motions.
Background: As the Internet of Things (IoT) expands, it enables new forms of communication, including interactions mediated by teleoperated robots such as avatars. While extensive research exists on the effects of these devices on communication partners, there is limited research on their impact on the operators themselves. This study aimed to objectively assess the psychological and physiological effects of operating a teleoperated robot, specifically Telenoid, on its human operator. Methods: Twelve healthy participants (2 women and 10 men, aged 18–23 years) were recruited from Osaka University. Participants engaged in two communication sessions with a first-time partner: face-to-face and Telenoid-mediated. Telenoid is a minimalist humanoid robot teleoperated by a participant. Blood samples were collected before and after each session to measure hormonal and oxidative markers, including cortisol, diacron reactive oxygen metabolites (d-ROMs), and the biological antioxidant potential of plasma (BAP). Psychological stress was assessed using validated questionnaires (POMS-2, HADS, and SRS-18). Results: A trend toward decreased cortisol levels was observed during Telenoid-mediated communication, whereas face-to-face interactions showed no significant changes. Oxidative stress, measured by d-ROMs, significantly increased after face-to-face interactions but not in Telenoid-mediated sessions. Significant correlations were found between oxytocin and d-ROMs and psychological stress scores, particularly for helplessness and total stress measures. However, no significant changes were observed in other biomarkers or between the two conditions for most psychological measures. Conclusions: These findings suggest that cortisol and d-ROMs may serve as objective biomarkers for assessing psychophysiological stress during robot-mediated communication. Telenoid’s minimalist design may help reduce social pressures and mitigate stress compared to face-to-face interactions.
Further research with larger, more diverse samples and longitudinal designs is needed to validate these findings and explore the broader impacts of teleoperated robots.
Author summary
Single cell RNA sequencing (scRNA-seq) is a powerful method to unveil the gene-expression landscape at single-cell resolution. However, scRNA-seq, particularly in the analysis of highly heterogeneous solid organs, fails to account for the apparent heterogeneity of cellular RNA content across different cell types. In addition, cell-dissociation-induced cryptic gene expression is often problematic. To overcome these shortcomings, we describe herein the concept of a “cell type-specific weighting-factor (cWF)” and a computational method to calculate the cWFs of diverse cell types using intact (i.e., without cell dissociation) whole-organ RNA-seq. Importantly, we show that cWFs are necessary both for accurately reconstituting whole-organ RNA-seq data from their composite scRNA-seq data and for deconvolving whole-organ RNA-seq data into their composite scRNA-seq data. We also show that cWFs quantitatively reflect experimentally determined differences in cellular RNA content. These benchmarks demonstrate that cWFs indeed represent differential cellular RNA content and/or offset cell-dissociation-induced cryptic gene expression. Furthermore, we illustrate a medical application of cWFs by showing that differential cWFs can effectively predict an aging clock. In conclusion, our study reports an important methodology that addresses critical limitations of scRNA-seq analysis, as well as its potential diagnostic application.
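The reconstitution idea — expressing whole-organ RNA-seq as a weighted combination of cell-type expression profiles, with the weights playing the role of cWFs — can be sketched as a least-squares fit. This is an illustrative simplification with synthetic matrices, not the paper's actual computation:

```python
import numpy as np

def estimate_weighting_factors(bulk, cluster_means, cell_fractions):
    """Fit bulk ~= cluster_means @ (w * cell_fractions) for per-type weights w.

    bulk:           (genes,)        whole-organ expression vector
    cluster_means:  (genes, types)  per-cell-type mean expression from scRNA-seq
    cell_fractions: (types,)        fraction of cells in each type
    Returns w, the per-type weighting factors (relative RNA content)."""
    design = cluster_means * cell_fractions          # scale columns by abundance
    w, *_ = np.linalg.lstsq(design, bulk, rcond=None)
    return np.clip(w, 0.0, None)                     # RNA content is non-negative
```

If every cell type contributed the same RNA amount, the fitted weights would all be 1; departures from 1 capture the differential cellular RNA content the abstract describes.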
Introduction
The new ICD-11 code for chronic pain indicates a direction to divide chronic pain into two categories: chronic secondary pain, which has a clear underlying disease, and chronic primary pain, which is associated with significant emotional distress or functional disability and cannot be explained by another chronic condition. Until now, epidemiological studies have been hampered by the lack of a clear classification, but we believe that this new code system will provide a new perspective on the diagnosis and treatment of chronic pain, and we have begun work on this code system.
Methods
We studied 2,360 patients at Aichi Medical University, the largest pain center in Japan. Patients answered questionnaires on pain severity (NRS), pain-related functional impairment (PDAS, Locomo25), quality of life (EQ-5D), and psychological state and pain cognition (HADS, PCS, PSEQ, AIS), while their attending physicians made diagnoses according to ICD-11. The results were used to determine the coding of pain severity.
Results and discussion
The ratio of chronic primary to chronic secondary pain was almost 50:50, and the group of patients under the MG30.01 classification, which includes fibromyalgia, had the highest severity among the chronic primary pain classifications. Patients in the MG30.01 classification were also found to experience more severe pain than patients in the other chronic primary pain classifications. The classification of patients with a major psychiatric component was not always clear, and some patients in the secondary category also had a clear psychiatric component, suggesting the need to develop complementary tools to support pain diagnosis.
Information
Address
Kyoto, Japan