Zemin Mao's research while affiliated with Tianjin University of Technology and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (10)
Emotion analysis has been employed in many fields such as human-computer interaction, rehabilitation, and neuroscience. But most emotion analysis methods mainly focus on healthy controls or depression patients. This paper aims to classify the emotional...
In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted plenty of attention. Most of the existing works focused on normal or depressed people. Due to the lack of hearing ability, it is difficult for hearing-impaired people to express their emotions through language in their social activities. In this work, w...
Emotion recognition based on electro-encephalography (EEG) signals has become an interesting research topic in the field of neuroscience, psychology, neural engineering, and computer science. However, the existing studies are mainly focused on normal or depression subjects, and few reports on deaf subjects. In this work, we have collected the EEG s...
With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Many existing studies on emotion recognition mainly focused on normal people. Besides, due to hearing loss, deaf people cannot express emotions by words, which may have a greater need for emotion recognition. In this...
Emotion recognition has received increasing attention in human-computer interaction (HCI) and psychological assessment. Compared with single modal emotion recognition, the multimodal paradigm has an outperformance because of introducing complementary information for emotion recognition. However, current research is mainly focused on normal people,...
Citations
... While the second are more frequently related to passive methods, like mono-vision [21], stereovision [22], and multi-camera setups [23]. One well-known method that combines the principles of active and passive methods is based on structured light [24]. From these fundamentals, specialized methods emerge, each developing or applying techniques such as signal, image, and data processing through algorithms: filtering, enhancement, sharpening, restoration, segmentation, object detection, compression, manipulation, augmentation, registration, clustering, and outlier removal, to mention some. ...
... However, the base learners of Boosting and Bagging ensemble learning are generally generated by the same learning algorithm, which cannot reflect the advantages of different algorithms. Stacking ensemble methods are applied in Android malware detection [11] and emotion recognition [12], among other areas, and can effectively integrate different kinds of base learners, thus improving prediction accuracy. However, the selection of base learners has a large impact on the prediction results of Stacking ensemble models, and poorly performing base learners can easily degrade the combined results when base-learner performance differs significantly. ...
... Due to the loss of a key channel during the process of emotional communication, individuals with hearing impairment can only compensate for changes in the outside world through senses such as vision and touch. Therefore, individuals with hearing impairment are more sensitive to emotional perception, and may differ from healthy controls in emotion recognition [18] - [20]. ...
... The non-physiological signals include facial expression [4], vocal pattern, and text data [5][6][7]. Emotional status identification via facial expressions requires uninterrupted facial cues within each frame of the video. The overall framework includes face detection, localization, feature extraction, and facial landmark tracking using a certain machine learning mechanism with adequate training data and a stable evaluation strategy [8]. ...