Yi Yang's research while affiliated with University of Macau and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (10)
Emotion analysis has been employed in many fields such as human-computer interaction, rehabilitation, and neuroscience. However, most emotion analysis methods mainly focus on healthy controls or depression patients. This paper aims to classify the emotional...
Recent research on emotion recognition suggests that deep network-based adversarial learning has an ability to solve the cross-subject problem of emotion recognition. This study constructed a hearing-impaired electroencephalography (EEG) emotion dataset containing three emotions (positive, neutral, and negative) in 15 subjects. The emotional domain...
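As background on the adversarial approach mentioned in this abstract: domain-adversarial training usually hinges on a gradient-reversal layer, which lets the feature extractor learn features that confuse a domain (here, subject) discriminator. The PyTorch snippet below is a minimal sketch of that layer only, an illustration and not the implementation from the paper.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; negates (and scales) gradients on the way
    back, so upstream features are trained to fool the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    """Apply gradient reversal to features before the domain classifier."""
    return GradReverse.apply(x, lambd)
```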
In this work, the Feature Pyramid Network (FPN) is proposed for improving the performance of electroencephalography (EEG) emotion recognition. Differential Entropy (DE) is extracted from each EEG channel as the basic feature. Then the feature matrix is constructed through biharmonic spline interpolation to obtain the correlation information betwee...
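For reference, differential entropy in EEG emotion work is commonly computed band by band under a Gaussian assumption, in which case DE reduces to 0.5·ln(2πeσ²). The sketch below illustrates that per-channel computation; the band edges, filter order, and sampling rate are assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative band edges (Hz); the paper's exact bands are not given in the excerpt.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 45)}

def differential_entropy(x):
    """DE of a roughly Gaussian 1-D signal: 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x) + 1e-12)

def de_features(eeg, fs=200.0):
    """eeg: (n_channels, n_samples) array -> (n_channels, n_bands) DE feature matrix."""
    feats = np.zeros((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats
```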
With the rapid development of Human-computer interaction, automatic emotion recognition based on multichannel electroencephalography (EEG) signals has attracted much attention in recent years. However, many existing studies on EEG-based emotion recognition ignore the correlation information between different EEG channels and cannot fully capture th...
Zekun Tian · Dahua Li · Yu Song · [...] · Yi Yang
In recent years, many researchers have explored different methods to obtain discriminative features for electroencephalogram-based (EEG-based) emotion recognition, but few studies have investigated deaf subjects. In this study, we have established a deaf EEG emotion data set, which contains three kinds of emotion (positive, neutral, and n...
Emotion recognition based on electroencephalography (EEG) signals has become an interesting research topic in the fields of neuroscience, psychology, neural engineering, and computer science. However, existing studies mainly focus on normal or depression subjects, with few reports on deaf subjects. In this work, we have collected the EEG s...
With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Many existing studies on emotion recognition have mainly focused on normal people. Besides, due to hearing loss, deaf people cannot express emotions in words and may therefore have a greater need for emotion recognition. In this...
Emotion recognition has received increasing attention in human-computer interaction (HCI) and psychological assessment. Compared with single-modal emotion recognition, the multimodal paradigm performs better because it introduces complementary information for emotion recognition. However, current research is mainly focused on normal people,...
Citations
... For example, the difference in feature distribution between source and target domains is narrowed by the deep domain confusion model for cross-subject recognition [176]. Recent algorithms like Deep CORAL [177], Deep Adaptation Networks [178], Deep Subdomain Associate Adaptation Network (DSAAN) [179] and Emotional Domain Adversarial Neural Network (EDANN) [180] are recommended to reduce domain differences. Despite the advances in DA models, few algorithms have been applied to medical data [175]. ...
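For context on the domain-adaptation losses named in this excerpt, the CORAL loss behind Deep CORAL simply penalizes the distance between the second-order statistics of source and target feature batches. The PyTorch sketch below is illustrative and not taken from any of the cited works.

```python
import torch

def coral_loss(source, target):
    """CORAL loss between source/target feature batches of shape (n, d):
    squared Frobenius distance between their covariance matrices,
    scaled by 1 / (4 * d^2)."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4 * d * d)
```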
... Due to the loss of a key channel of emotional communication, individuals with hearing impairment can only compensate for changes in the outside world through senses such as vision and touch. They are therefore more sensitive to emotional perception and may differ from healthy controls in how they recognize emotion [18]-[20]. ...
... However, the base learners of Boosting and Bagging ensemble learning are generally generated by the same learning algorithm, which cannot reflect the advantages of different algorithms. Stacking ensemble methods are applied in android malware detection [11] and emotion recognition [12], etc., which can effectively integrate different kinds of base learners, thus effectively improving the prediction accuracy. However, the selection of base learners has a large impact on the prediction results of Stacking ensemble models, and the performance of poor base learners can easily affect the combined results if the performance of base learners differs significantly. ...
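A minimal stacking setup along the lines described in this excerpt might look like the scikit-learn sketch below; the particular base learners and meta-learner are illustrative choices, not those of the cited studies.

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Heterogeneous base learners; the meta-learner combines their
# cross-validated predictions, which is what lets stacking exploit
# the complementary strengths of different algorithms.
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
# Usage (with your own feature matrix X and labels y):
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```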
... Time-domain features comprise simple statistical features [32][33][34] such as the mean, standard deviation, skewness, and kurtosis. They also include more complex features such as the Hjorth parameters [5,32,[35][36][37][38][39][40][41], High Order Crossings (HOC) [5,33,38,40,42], Fractal Dimensions [43][44][45], Recurrence Quantification Analysis (RQA) [46,47], and entropy-based features [5,34,35,45,48]. (2) Frequency-domain features are also handcrafted features, yet they are computed from the EEG signal's frequency representation. ...
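To make the time-domain features listed in this excerpt concrete, the sketch below computes the simple statistical features and the Hjorth parameters for a single EEG channel (an illustration, not code from the cited references).

```python
import numpy as np
from scipy.stats import skew, kurtosis

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def time_domain_features(x):
    """Simple statistics plus Hjorth parameters for one channel."""
    return np.array([x.mean(), x.std(), skew(x), kurtosis(x),
                     *hjorth_parameters(x)])
```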
... The non-physiological signals include facial expression [4], vocal pattern, and text data [5][6][7]. Emotional status identification via facial expressions requires uninterrupted facial cues within each frame of the video. The overall framework includes face detection, localization, feature extraction, and facial landmark tracking using a certain machine learning mechanism with adequate training data and a stable evaluation strategy [8]. ...
... WPD tree. Generally, the depth of decomposition has to be chosen based on the frequency content of the processed signal. Here, we selected a 5-level decomposition to achieve the frequency resolution required to construct the EEG frequency bands: delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-49 Hz). Thus, for the delta and theta bands the subspaces U^0_4 and U^1_4 are considered. ...
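As an illustration of the band reconstruction described in this excerpt, the PyWavelets sketch below selects wavelet packet nodes at a chosen level and reconstructs the corresponding subband. The wavelet, the node paths, and the assumption that each level-5 node spans 4 Hz (e.g. fs = 256 Hz) are illustrative choices, not details from the cited work.

```python
import pywt

def wpd_subband(x, node_paths, wavelet="db4", level=5):
    """Reconstruct the part of signal x covered by the given WPD node paths.

    With fs = 256 Hz each level-5 node spans 4 Hz, so in natural order the
    paths 'aaaaa' and 'aaaad' roughly cover delta (0-4 Hz) and theta (4-8 Hz).
    """
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    # Build an empty packet tree, copy in only the selected nodes,
    # then reconstruct to obtain the band-limited signal.
    out = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric",
                             maxlevel=level)
    for path in node_paths:
        out[path] = wp[path].data
    return out.reconstruct(update=False)

# Example (hypothetical): delta + theta portion of a 1-D EEG channel
# delta_theta = wpd_subband(eeg_channel, node_paths=("aaaaa", "aaaad"))
```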