Chuang Yu | University College London (UCL) · Department of Computer Science
Chuang Yu
Doctor of Engineering
Research Fellow @ University College London
About
41
Publications
5,957
Reads
188
Citations
Introduction
Postdoctoral Researcher at Prof. Angelo Cangelosi's Cognitive Robotics Lab (COROLAB) at the University of Manchester, working on the UKRI TAS Node in Trust project.
Before joining UoM, I received my Ph.D. at the ASR (Autonomous Systems and Robotics) Group of U2IS, ENSTA-Paris, IP-Paris (Institut Polytechnique de Paris), supervised by Prof. Adriana Tapus. My past work in Paris was mostly about multimodal human behaviour understanding and robot behaviour synthesis with deep learning.
Additional affiliations
September 2014 - March 2017
Publications (41)
Constrained by the lack of model interpretability and a deep understanding of human movement in traditional movement recognition machine learning methods, this study introduces a novel representation learning method based on causal inference to better understand human joint dynamics and complex behaviors. We propose a two-stage framework that combi...
Theory of Mind (ToM) is a fundamental cognitive architecture that endows humans with the ability to attribute mental states to others. Humans infer the desires, beliefs, and intentions of others by observing their behavior and, in turn, adjust their actions to facilitate better interpersonal communication and team collaboration. In this paper, we i...
In situations where both deaf and non-deaf individuals are present in a public setting, it would be advantageous for a robot to communicate using both sign and natural languages simultaneously. This would not only address the needs of diverse users but also pave the way for a richer and more inclusive spectrum of human-robot interactions. To achie...
The fashion industry's negative environmental impact and overconsumption require urgent action to reduce fashion consumption. Tactile gestures play a vital role in understanding, selecting, and feeling attached to clothes. In this paper, we introduce the FabricTouch II dataset with multimodal information, which focuses on fabric assessment touch ges...
Speech-enabled interaction between human users and artificial agents (e.g., social robots) has become more common. To improve the effectiveness of human-robot interaction, researchers have focused on the design of the robot's voice, language and non-verbal behaviour. In the last regard, researchers are interested in how nonverbal sounds, such as em...
Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquisition of ToM skills is crucial to obtain a natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who pl...
Human skeleton-based gesture classification plays a dominant role in social robotics. Learning the variety of human skeleton-based gestures can help the robot to continuously interact in an appropriate manner in natural human-robot interaction (HRI). In this paper, we propose a Flow-based model to classify human gesture actions with skeletal dat...
In uncertain social scenarios, the self-awareness of facial expressions helps a person to understand, predict, and control his/her states better. Self-awareness gives animals the ability to distinguish self from others and to self-recognize themselves. For cognitive robots, the ability to be aware of their actions and the effects of actions on self...
Natural co-speech facial action, as a kind of non-verbal behavior, plays an essential role in human communication and also leads to natural and friendly human-robot interaction. However, many previous works on robot speech-based behaviour generation use rule-based or handcrafted methods, which are time-consuming and offer limited synchro...
Facial expressions are one of the most practical and straightforward ways to communicate emotions. Facial Expression Recognition has been used in lots of fields such as human behaviour understanding and health monitoring. Deep learning models can achieve excellent performance in facial expression recognition tasks. As these deep neural networks hav...
The robot's ability to express semantic co-speech gestures in an appropriate manner is needed to enhance the interaction between humans and social robots. However, most learning-based methods for robot gesture generation are unsatisfactory at expressing semantic gestures. Many generated gestures are ambiguous, making them difficult...
Most deep learning-based acoustic scene classification (ASC) approaches identify scenes based on acoustic features converted from audio clips containing mixed information entangled by polyphonic audio events (AEs). However, these approaches have difficulties in explaining what cues they use to identify scenes. This paper conducts the first study on...
In uncertain social scenarios, the self-awareness of facial expressions helps a person to understand, predict, and control his/her states better. Self-awareness gives animals the ability to distinguish self from others and to self-recognize themselves. For cognitive robots, the ability to be aware of their actions and the effects of actions on self...
Most existing deep learning-based acoustic scene classification (ASC) approaches directly utilize representations extracted from spectrograms to identify target scenes. However, these approaches pay little attention to the audio events occurring in the scene even though they provide crucial semantic information. This paper conducts the first study to i...
Robots with multimodal social cues can be widely applied for natural human-robot interaction. The physical presence of those robots can be used to explore whether or how the robot can relieve the loneliness and social isolation of older adults. Natural and trustworthy interpersonal communication involves multimodal social cues with verbal and nonve...
In uncertain social scenarios, the self-awareness of facial expressions helps a person to understand, predict, and control his/her states better. Self-awareness gives animals the ability to distinguish self from others and to self-recognize themselves. For cognitive robots, the ability to be aware of their actions and the effects of actions on self...
Facial expressions are one of the most practical and straightforward ways to communicate emotions. Facial Expression Recognition has been used in lots of fields such as human behaviour understanding and health monitoring. Deep learning models can achieve excellent performance in facial expression recognition tasks. As these deep neural networks ha...
Natural co-speech facial action, as a kind of non-verbal behavior, plays an essential role in human communication and also leads to natural and friendly human-robot interaction. However, many previous works on robot speech-based behaviour generation use rule-based or handcrafted methods, which are time-consuming and offer limited synchro...
The robot's ability to express semantic co-speech gestures in an appropriate manner is needed to enhance the interaction between humans and social robots. However, most learning-based methods for robot gesture generation are unsatisfactory at expressing semantic gestures. Many generated gestures are ambiguous, making them difficult...
Humidity measurement has been of extreme importance in both conventional environment monitoring and emerging digital health management. State-of-the-art flexible humidity sensors with combined structures, however, lack sensing reliability when subjected to high humidity with condensation and/or liquid water invasion. Here, we report a free-stand...
In this paper, we mainly focus on two research questions: 'What is self-awareness in cognitive sciences?' and 'How to achieve self-awareness in robots?'. A definition of self-awareness and its influence in cognitive sciences is presented. Our methodology proposes the first attempt towards robot facial expression self-awareness. We consider four abi...
The pipeline of gender-free robot speech synthesis. The text-to-speech (TTS) synthesizer takes the genderless speech style embedding and text as inputs to output the gender-free voice, which can be used in the genderless robot, for example, the Pepper robot. The genderless speech style embedding is a function of the male and female speech style emb...
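The summary above describes the genderless speech style embedding as "a function of the male and female speech style embeddings" without specifying the function. A minimal sketch of one plausible choice, assuming a simple convex interpolation of the two style vectors (an assumption for illustration, not necessarily the paper's exact method):

```python
# Hypothetical sketch: blend a male and a female TTS style embedding into a
# gender-neutral one by element-wise convex interpolation. The function name,
# dimensions, and values are illustrative assumptions, not the paper's code.

def blend_style_embeddings(male, female, alpha=0.5):
    """Interpolate two style-embedding vectors element-wise.

    alpha=0.5 gives the midpoint, i.e. a gender-neutral style vector;
    other alphas trace a continuum between the two voices.
    """
    if len(male) != len(female):
        raise ValueError("embeddings must have the same dimension")
    return [(1.0 - alpha) * m + alpha * f for m, f in zip(male, female)]

# Toy 4-dimensional style embeddings (illustrative values only).
male_style = [0.8, -0.2, 0.5, 1.0]
female_style = [0.2, 0.6, 0.1, -1.0]

neutral_style = blend_style_embeddings(male_style, female_style)
print(neutral_style)  # midpoint of the two style vectors
```

The blended vector would then condition the TTS synthesizer in place of a single-speaker style embedding.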
With the improvement of computing power and the availability of large datasets, deep learning models can achieve excellent performance in facial expression recognition tasks. As these deep neural networks have very complex nonlinear structures, when the model makes a prediction, it is difficult to understand what is the basis for the model's predic...
Having a natural interaction makes a significant difference in a successful human-robot interaction (HRI). Natural HRI involves both human multimodal behavior understanding and robot verbal or non-verbal behavior generation. Humans can naturally communicate through spoken dialogue and non-verbal behaviors. Hence, a robot should perceive and un...
Purpose
Many work conditions require manipulators to open cabinet doors and then gain access to the desired workspace. However, after opening, the unlocked doors can easily close, interrupt a task and potentially break the operating end-effectors. This paper aims to address a manipulator's behavior planning problem for responding to a dynamic works...
Telemanipulation in power stations commonly requires robots first to open doors and then gain access to a new workspace. However, the opened doors can easily be closed by disturbances, interrupting operations and potentially leading to collision damage. Although existing telemanipulation is a highly efficient master–slave work pattern due to human-in-th...
Human gestures occur spontaneously and are usually aligned with speech, which leads to natural and expressive interaction. Speech-driven gesture generation is important to enable a social robot to exhibit social cues and conduct a successful human-robot interaction. In this paper, the generation process involves mapping acoustic...
Human emotion detection is an important aspect in social robotics and HRI. In this paper, we propose a vision-based multimodal emotion recognition method based on gait data and facial thermal images designed for social robots. Our method can detect 4 human emotional states (i.e., neutral, happiness, anger, and sadness). We gathered data from 25 par...
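The summary above combines two modalities (gait data and facial thermal images) to predict one of four emotional states, but does not spell out the fusion. A hedged sketch of one common scheme, late fusion by concatenation followed by a linear scorer (an illustrative assumption; the paper's actual architecture is not given in this summary):

```python
# Illustrative late-fusion sketch: concatenate per-modality feature vectors,
# then score the four emotion classes with a linear layer. All feature values
# and weights below are toy numbers, not learned parameters from the paper.

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]

def fuse_features(gait_feat, thermal_feat):
    """Late fusion by simple concatenation of the two modality vectors."""
    return list(gait_feat) + list(thermal_feat)

def classify(features, weights, biases):
    """Linear scoring: return the emotion with the highest score."""
    scores = [
        sum(w * x for w, x in zip(row, features)) + b
        for row, b in zip(weights, biases)
    ]
    return EMOTIONS[scores.index(max(scores))]

# Toy 2-D gait + 2-D thermal features and hand-picked weights (illustrative).
fused = fuse_features([0.9, 0.1], [0.7, 0.3])
weights = [
    [0.1, 0.1, 0.1, 0.1],   # neutral
    [1.0, 0.0, 1.0, 0.0],   # happiness
    [0.0, 1.0, 0.0, 1.0],   # anger
    [0.2, 0.2, 0.2, 0.2],   # sadness
]
biases = [0.0, 0.0, 0.0, 0.0]
print(classify(fused, weights, biases))  # "happiness"
```

In practice the linear scorer would be replaced by a trained network, but the fusion point (feature concatenation before classification) is the same.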
Interaction plays a critical role in skills learning for natural communication. In human-robot interaction (HRI), robots can get feedback during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework using multimodal data from thermal facial images and human gait data for online emotio...
Interaction plays a critical role in skills learning for natural communication. In human-robot interaction (HRI), robots can get feedback during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework using multimodal data from thermal facial images and human gait data for online emoti...
I received the "Best Poster Prize" at the "Journée de l'ED Interfaces", where all second-year Ph.D. students from four universities (École Polytechnique, ENSTA-ParisTech, CentraleSupélec, and UVSQ - Université de Versailles Saint-Quentin-en-Yvelines) and about three disciplines (Information and Computer Science, Biology, and Chemistry) joined the poster session.
Human emo...
Emotion detection is very important for human-robot interaction (HRI) in social contexts. In this paper, we use both RGB-D and thermal images to get gait and facial thermal data. The joint angle features from the gait data and the facial region features are used to help the robot understand the human emotions, i.e., neutral, happy, angry and sad sta...
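The "joint angle features" mentioned above can be made concrete: one standard gait feature is the angle at a joint formed by its two adjacent skeleton keypoints, e.g. the knee angle from hip-knee-ankle positions. A minimal sketch, assuming 2-D keypoint coordinates (the paper's exact feature set is not given in this summary):

```python
# Illustrative joint-angle computation from skeleton keypoints. The keypoint
# coordinates below are toy values; real pipelines would read them from an
# RGB-D pose estimator.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Hip, knee, ankle in image coordinates (toy values): a fully straight leg.
print(joint_angle((0, 0), (0, 1), (0, 2)))  # 180.0
```

Collecting such angles per frame over a walking sequence yields the time series of joint-angle features a classifier can consume.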
The invention discloses a lower limb gait rehabilitation assessment system based on visual acquisition equipment, which comprises a gait assessment algorithm program, a gait assessment human-computer interaction interface and a gait assessment database, wherein the gait assessment algorithm portion mainly comprises original gait data acquisition, da...
The invention discloses an intelligent sensing shoe gait analysis system based on plantar pressure. The intelligent sensing shoe gait analysis system based on plantar pressure mainly comprises an intelligent sensing insole capable of measuring the plantar pressure of a human body, a controller capable of continuously collecting plantar pressure dat...
Questions
Question (1)
Are there any datasets of dancing paired with music?
It is intended for music-driven dance motion generation.