About
383 Publications
85,778 Reads
16,450 Citations
Introduction
Transfer learning, Multimodal learning, Multimodal emotion recognition, Multimodal vigilance estimation, Affective brain-computer interface and its applications
Skills and Expertise
Current institution
Additional affiliations
September 2002 - present
Publications (383)
Emotion is one of the main psychological factors that affect human behaviour. Neural network models trained with electroencephalography (EEG)-based frequency features have been widely used to accurately recognize human emotions. However, utilizing EEG-based spatial information with popular two-dimensional kernels of convolutional neural netwo...
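To make the spatial-kernel idea above concrete, here is a hypothetical minimal sketch (not the model proposed in this publication) of the common setup it refers to: per-channel EEG frequency-band features are placed on a 2D grid that roughly follows the electrode layout, and two-dimensional convolution kernels are applied over that grid. The grid size (9x9), band count, class count, and network are illustrative assumptions only.

# Illustrative only: map per-channel EEG frequency features onto a 2D grid that
# roughly preserves the electrode layout, then classify with 2D convolutions.
import torch
import torch.nn as nn

class Grid2DEmotionNet(nn.Module):
    def __init__(self, n_bands=5, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),  # 2D kernels over the scalp grid
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, n_bands, height, width) scalp grid
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Example: 5 frequency-band features placed on a hypothetical 9x9 electrode grid.
model = Grid2DEmotionNet()
logits = model(torch.randn(8, 5, 9, 9))
print(logits.shape)                  # torch.Size([8, 3])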
Electroencephalogram (EEG) brain network embodies the brain’s coordination and interaction mechanism, and the transformations of emotional states are usually accompanied with changes in brain network spatial topologies. To effectively characterize emotions, in this work, we propose a cognition-inspired graph embedding model in the L1-norm space (L1...
While electroencephalogram (EEG) based brain-computer interface (BCI) has been widely used for medical diagnosis, health care, and device control, the safety of EEG BCI has long been neglected. In this paper, we propose Professor X, an invisible and robust "mind-controller" that can arbitrarily manipulate the outputs of EEG BCI through backdoor att...
Recent advancements for large-scale pre-training with neural signals such as electroencephalogram (EEG) have shown promising results, significantly boosting the development of brain-computer interfaces (BCIs) and healthcare. However, these pre-trained models often require full fine-tuning on each downstream task to achieve substantial improvements,...
Retrosynthesis analysis is pivotal yet challenging in drug discovery and organic chemistry. Despite the proliferation of computational tools over the past decade, AI-based systems often fall short in generalizing across diverse reaction types and exploring alternative synthetic pathways. This paper presents BatGPT-Chem, a large language model with...
With the rapid advancement in machine learning, the recognition and analysis of brain activity based on EEG and eye movement signals have attained a high level of sophistication. Utilizing deep learning models for learning EEG and eye movement features proves effective in classifying brain activities. A focused state indicates intense concentration...
The current electroencephalogram (EEG) based deep learning models are typically designed for specific datasets and applications in brain-computer interaction (BCI), limiting the scale of the models and thus diminishing their perceptual capabilities and generalizability. Recently, Large Language Models (LLMs) have achieved unprecedented success in t...
Recognizing emotions from physiological signals is a topic that has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive and high-quality emotional datasets that enable the accurate decoding of human emotions. To s...
Emotion recognition is a fundamental part of affective computing, obtaining performance gain from multimodal methods. Electroencephalography (EEG) and eye movements are extensively used as they contain complementary information. However, the inconvenient acquisition of EEG is hindering the extensive adoption of multimodal emotion recognition in dai...
Most of the existing graph-based clustering models performed clustering by adopting a two-stage strategy which first completes the spectral embedding from a given fixed graph and then resorts to other clustering methods such as $k$-means to achieve discrete cluster results. On one hand, such a discretization operation easily causes that the obtai...
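For reference, a minimal sketch of the two-stage strategy this abstract criticizes (spectral embedding from a fixed affinity graph, followed by k-means discretization) might look like the following; the toy graph and helper name are assumptions, not code from the paper.

# Sketch of the two-stage strategy: spectral embedding, then k-means discretization.
import numpy as np
from sklearn.cluster import KMeans

def two_stage_spectral_clustering(W, n_clusters):
    """W: symmetric affinity (adjacency) matrix of the fixed graph."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt    # normalized Laplacian
    # Stage 1: spectral embedding from the eigenvectors of the smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    embedding = eigvecs[:, :n_clusters]
    # Stage 2: discretize the continuous embedding with k-means.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)

# Toy example: two dense blocks weakly connected to each other.
W = np.kron(np.eye(2), np.ones((5, 5))) + 0.01
np.fill_diagonal(W, 0)
print(two_stage_spectral_clustering(W, 2))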
The emergence of domain adaptation has brought remarkable advancement to EEG-based emotion recognition by reducing subject variability and thus increasing the accuracy of cross-subject tasks. A wide variety of materials have been employed to elicit emotions in experiments; however, artistic works that aim to evoke the emotional resonance of observers are r...
Emotion recognition based on electroencephalography (EEG) is attracting more and more interest in affective computing. Previous studies have predominantly relied on manually extracted features from EEG signals. The utilization of raw EEG signals remains largely unexplored; they contain more temporal information but present a significant chal...
Emotion recognition in affective brain-computer interfaces (aBCI) has emerged as a prominent research area. However, existing experimental paradigms for collecting emotional data often rely on stimuli-based elicitation, which may not accurately reflect emotions experienced in everyday life. Moreover, these paradigms are limited in terms of stimulus...
Objective. Sex differences in emotions have been widely perceived via self-reports, peripheral physiological signals and brain imaging techniques. However, how sex differences are reflected in the electroencephalography (EEG) neural patterns of emotions remains unresolved. In this paper, we detect sex differences in emotional EEG patterns, investig...
A brain–computer interface (BCI) enables a user to communicate directly with a computer using only the central nervous system. An affective BCI (aBCI) monitors and/or regulates the emotional state of the brain, which could facilitate human cognition, communication, decision-making, and health. The last decade has witnessed rapid progress in aBCI re...
Seeing is believing, however, the underlying mechanism of how human visual perceptions are intertwined with our cognitions is still a mystery. Thanks to the recent advances in both neuroscience and artificial intelligence, we have been able to record the visually evoked brain activities and mimic the visual perception ability through computational...
Current advanced deep neural networks can greatly improve the performance of emotion recognition tasks in affective Brain-Computer Interfaces (aBCI). Basic human emotions could be induced and electroencephalographic (EEG) signals could be simultaneously recorded. While data of basic common emotions are easier to collect, some complex emotions are l...
Decision confidence can reflect the correctness of people’s decisions to some extent. To measure the reliability of human decisions in an objective way, we introduce a spectral-spatial-temporal adaptive graph convolutional neural network (SST-AGCN) for recognizing decision confidence levels based on EEG signals in this paper. The advantage of our p...
Since Electroencephalogram (EEG) is resistant to camouflage, it has been a reliable data source for objective emotion recognition. EEG is naturally multi-rhythm and multi-channel, based on which we can extract multiple features for further processing. In EEG-based emotion recognition, it is important to investigate whether there exist some common f...
Fuzzy $k$-means (FKM) is a popular clustering method that assigns data points to respective clusters with uncertainty measured by the membership degree. Usually, FKM performs clustering according to the distance between data points in the original space, which might contain undesirable noise and redundant features; therefore, the underlying d...
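A minimal NumPy sketch of the standard FKM updates described above (soft membership degrees and weighted cluster centers, computed in the original feature space) is given below; it illustrates the baseline the abstract starts from, not the proposed method.

# Standard fuzzy k-means: each point gets a soft membership degree for every cluster.
import numpy as np

def fuzzy_k_means(X, k, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)                       # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # membership-weighted centers
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                   # membership update from distances
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
U, centers = fuzzy_k_means(X, k=2)
print(U.argmax(axis=1))                                     # hard assignments from soft memberships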
Due to its weak and non-stationary properties, electroencephalogram (EEG) data presents significant individual differences. To align the data distributions of different subjects, transfer learning has shown promising performance in cross-subject EEG emotion recognition. However, most of the existing models sequentially learned the domain-invariant feature...
Recently, electroencephalogram (EEG) has been receiving increasing attention in driving fatigue detection because it is generated by the neural activities of the central nervous system and has been regarded as the gold standard for measuring fatigue. However, most existing studies for EEG-based driving fatigue detection have some common limitations such a...
Affective brain-computer interfaces have achieved considerable advances, and researchers can successfully interpret labeled and flawless EEG data collected in laboratory settings. However, the annotation of EEG data is time-consuming and requires a vast workforce, which limits the application in practical scenarios. Furthermore, daily collected EEG d...
Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain computer interface (BCI) in practice. We attempt to use the multi-modal data from the past session to realize emotion recognition in the case...
Electroencephalography (EEG) signals can effectively measure the level of human decision confidence. However, it is difficult to acquire EEG signals in practice due to the expensive cost and complex operation, while eye movement signals are much easier to acquire and process. To tackle this problem, we propose a cross-modality deep learning method...
Previous studies have demonstrated the existence of sex differences in emotion recognition by comparing the performance of same-sex and cross-sex training strategies. However, the EEG properties behind the sex differences have not been fully explored. To fill this research gap, we aim to investigate the sex differences in key frequency bands and ch...
Most previous affective studies use facial expression pictures, music or movie clips as emotional stimuli, which are either too simplified without contexts or too dynamic for emotion annotations. In this work, we evaluate the effectiveness of oil paintings as stimuli. We develop an emotion stimuli dataset with 114 oil paintings selected from subjec...
Though electroencephalogram (EEG) can objectively reflect the emotional states of human beings, its weak, non-stationary, and low signal-to-noise properties easily cause individual differences. To enhance the universality of affective brain-computer interface systems, transfer learning has been widely used to alleviate the data distribution d...
Recently, electroencephalogram (EEG)-based emotion recognition has attracted increasing interest in the research community. The weak, non-stationary, multi-rhythm and multi-channel properties of EEG data easily cause the extracted EEG samples and features to contribute differently in recognizing emotional states. However, existing studies either failed t...
Multimodal signals are powerful for emotion recognition since they can represent emotions comprehensively. In this article, we compare the recognition performance and robustness of two multimodal emotion recognition models: 1) deep canonical correlation analysis (DCCA) and 2) bimodal deep autoencoder (BDAE). The contributions of this article are th...
In recent years, the research on dependency parsing focuses on improving the accuracy of the domain-specific (in-domain) test datasets and has made remarkable progress. However, there are innumerable scenarios in the real world that are not covered by the dataset, namely, the out-of-domain dataset. As a result, parsers that perform well on the in-d...
Electroencephalogram (EEG) signals are generated by the central nervous system and are difficult to disguise, leading to their popularity in emotion recognition. Recently, semi-supervised learning has exhibited promising emotion recognition performance by involving unlabeled EEG data in model training. However, if we first build a graph to characterize...
Emotion recognition from electroencephalogram (EEG) data has been a research spotlight in both academic and industrial communities, which lays a solid foundation to achieve harmonious human–machine interaction. However, most of the existing studies either directly performed classification on primary EEG features or employed a two-stage paradigm of “f...
Objective. Cultures have essential influences on emotions. However, most studies on cultural influences on emotions are in the areas of psychology and neuroscience, while the existing affective models are mostly built with data from the same culture. In this paper, we identify the similarities and differences among Chinese, German, and French indiv...
Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
Objective. Previous studies on emotion recognition from electroencephalography (EEG) mainly rely on single-channel-based feature extraction methods, which ignore the functional connectivity between brain regions. Hence, in this paper, we propose a novel emotion-relevant critical subnetwork selection algorithm and investigate three EEG functional co...
The combination of eye movements and electroencephalography (EEG) signals, representing the external subconscious behaviors and internal physiological responses, respectively, has been proved to be a dependable approach with high interpretability. However, EEG is infeasible for practical applications due to the inconvenience of data acqu...
Most of the studies on decision confidence are from the fields of neuroscience and cognitive science, and existing studies based on deep neural networks do not exploit the topology of multi-channel EEG signals. In this paper, we propose an attentive simple graph convolutional network (ASGC) for EEG-based human decision confidence measurement. ASGC...
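As background for the "simple graph convolution" component in the name ASGC, here is a hedged sketch of plain SGC-style propagation over an EEG channel graph (normalized adjacency powers applied to node features, followed by one linear layer); the attention mechanism and the actual electrode adjacency used in the paper are not shown, and all names and dimensions are assumptions.

# Simple graph convolution over a (toy) EEG channel graph: smooth node features
# with a power of the normalized adjacency, then apply a single linear layer.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, adj, in_dim, n_classes, k_hops=2):
        super().__init__()
        A = adj + torch.eye(adj.size(0))                    # add self-loops
        d_inv_sqrt = A.sum(1).clamp(min=1e-12).pow(-0.5)
        S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # D^-1/2 (A+I) D^-1/2
        self.register_buffer("S_k", torch.matrix_power(S, k_hops))
        self.fc = nn.Linear(adj.size(0) * in_dim, n_classes)

    def forward(self, x):               # x: (batch, n_channels, in_dim)
        h = torch.einsum("ij,bjf->bif", self.S_k, x)        # k-hop feature smoothing
        return self.fc(h.flatten(1))

adj = (torch.rand(62, 62) > 0.8).float()                    # toy 62-channel adjacency
adj = ((adj + adj.T) > 0).float()
adj.fill_diagonal_(0)
model = SimpleGraphConv(adj, in_dim=5, n_classes=2)
print(model(torch.randn(4, 62, 5)).shape)                   # torch.Size([4, 2])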
In electroencephalography (EEG)-based affective brain-computer interfaces (aBCIs), there is a consensus that EEG features extracted from different frequency bands and channels have different abilities in emotion expression. Besides, EEG is so weak and non-stationary that it easily causes distribution discrepancies for EEG data collected at different t...
Recently, cross-subject emotion recognition attracts widespread attention. The current emotional experiments mainly use video clips of different emotions as stimulus materials, but the videos watched by different subjects are the same, which may introduce the same noise pattern in the collected data. However, the traditional experiment settings for...
Many psychiatric disorders are accompanied with sleep abnormalities, having significant influence on emotions which might worsen the disorder conditions. Previous studies discovered that the emotion recognition task with objective physiological signals, such as electroencephalography (EEG) and eye movements, provides a reliable way to figure out th...
Standard neural machine translation (NMT) assumes that document-level context is independent. Most existing document-level NMT approaches settle for only a smattering of global document-level information, while this work focuses on exploiting detailed document-level context in terms of a memory network. The capacity of the...
This paper explores the application of multimodal affective brain-computer interfaces (aBCI) in diagnosis based on the objective assessment of depression and in the treatment of refractory depression with deep brain stimulation. In the objective assessment of depression, the traditional depression scales are transformed into the interactive affective ta...
Simplified Molecular Input Line Entry System (SMILES) provides a text-based encoding method to describe the structure of chemical species and formulize general chemical reactions. Considering that chemical reactions have been represented in a language form, we present a symbol only model to generally predict the yield of organic synthesis reaction...
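As a loose illustration of the symbol-only idea (treating the reaction purely as a character sequence and regressing a yield), a minimal hypothetical sketch could look like the following; the tokenization, architecture, and example reaction are placeholders, not the model presented in this work.

# Hypothetical character-level yield regression over a reaction SMILES string.
import torch
import torch.nn as nn

VOCAB = list("()[]=#@+-.>0123456789BCNOFPSIclnosrH ")
CHAR2ID = {c: i + 1 for i, c in enumerate(VOCAB)}            # 0 is reserved for padding

def encode(smiles, max_len=128):
    ids = [CHAR2ID.get(c, 0) for c in smiles[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

class YieldRegressor(nn.Module):
    def __init__(self, vocab_size=len(VOCAB) + 1, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):
        _, h = self.rnn(self.emb(tokens))
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)   # predicted yield fraction in [0, 1]

rxn = "CC(=O)O.OCC>>CC(=O)OCC"                                # toy esterification reaction SMILES
model = YieldRegressor()
print(model(encode(rxn).unsqueeze(0)))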
Human emotion decoding in affective brain-computer interfaces suffers a major setback due to the inter-subject variability of electroencephalography (EEG) signals. Existing approaches usually require amassing extensive EEG data of each new subject, which is prohibitively time-consuming along with poor user experience. To tackle this issue, we divid...
Objective. Adaptive deep brain stimulation (aDBS) based on subthalamic nucleus (STN) electrophysiology has recently been proposed to improve clinical outcomes of DBS for Parkinson’s disease (PD) patients. Many current models for aDBS are based on one or two electrophysiological features of STN activity, such as beta or gamma activity. Although thes...
Current graph neural networks (GNNs) lack generalizability with respect to scales (graph sizes, graph diameters, edge weights, etc.) when solving many graph analysis problems. Taking the perspective of synthesizing graph theory programs, we propose several extensions to address the issue. First, inspired by the dependency of the iteration number o...
Accidents caused by reduced vigilance are increasing, and highly accurate vigilance estimation will play a significant role in public transportation safety. We propose a multimodal regression network that consists of multichannel deep autoencoders with subnetwork neurons (MCDAE$_{sn}$). After we define tw...
Standard neural machine translation (NMT) assumes that document-level context is independent. Most existing document-level NMT methods settle for only a smattering of brief document-level information, while this work focuses on exploiting detailed document-level context in terms of multiple forms of document embeddings, which is ca...
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifacts and suffers from between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an...
With the rapid development of dry-electrode electroencephalography (EEG) acquisition technology, EEG-based sleep quality evaluation is attracting more attention for its objective and quantitative merits. However, there has been no standard experimental paradigm, which hinders the development of sleep quality evaluation methods and techniques. I...
The data scarcity problem in emotion recognition from electroencephalography (EEG) leads to difficulty in building an affective model with high accuracy using machine learning algorithms or deep neural networks. Inspired by emerging deep generative models, we propose three methods for augmenting EEG training data to enhance the performance of emoti...
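The three augmentation methods are not detailed in this snippet; as one generic illustration of generative augmentation for EEG features, a small conditional VAE over extracted feature vectors could be sampled for extra class-conditioned training data, as sketched below (feature dimension 310 and class count 3 are assumptions, not the paper's specification).

# Generic conditional VAE over EEG feature vectors; sampling it yields synthetic
# class-conditioned training features for augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, feat_dim=310, n_classes=3, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + n_classes, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, latent), nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent + n_classes, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
        self.n_classes = n_classes

    def forward(self, x, y):
        y1h = F.one_hot(y, self.n_classes).float()
        h = self.enc(torch.cat([x, y1h], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(torch.cat([z, y1h], dim=1)), mu, logvar

    @torch.no_grad()
    def augment(self, y):                                      # sample synthetic features per label
        y1h = F.one_hot(y, self.n_classes).float()
        z = torch.randn(len(y), self.mu.out_features)
        return self.dec(torch.cat([z, y1h], dim=1))

model = CVAE()
fake = model.augment(torch.tensor([0, 1, 2]))                  # 3 synthetic feature vectors
print(fake.shape)                                              # torch.Size([3, 310])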
Compared with the rich studies on the motor brain-computer interface (BCI), the recently emerging affective BCI presents distinct challenges since the brain functional connectivity networks involving emotion are not well investigated. Previous studies on emotion recognition based on electroencephalography (EEG) signals mainly rely on single-channel...
In recent years, sleepiness during driving has become a main cause of traffic accidents. However, we still know very little about electrophysiological markers for assessing driver sleepiness. Previous studies and our own research have shown that the alpha blocking phenomenon and the alpha wave attenuation-disappearance phenomenon represent...
Various reports have shown that the rate of road traffic accidents has increased due to reduced driver vigilance. Therefore, an accurate estimation of the driver's alertness status plays an important part. To estimate vigilance, we adopt a novel strategy that is a deep autoencoder with subnetwork nodes (DAE...
Lu Gan · Wei Liu · Yun Luo · [...] · Bao-Liang Lu
In this paper, we aim to investigate the similarities and differences of multimodal signals between Chinese and French subjects on a three-emotion recognition task using deep learning. We use videos including positive, neutral and negative emotions as stimulus materials. Both Chinese and French subjects wear electrode caps and eye-tracking glasses while doing ex...
A major obstacle in generalizing brain-computer interface (BCI) systems to previously unseen subjects is the subject variability of electroencephalography (EEG) signals. To deal with this problem, the existing methods focus on domain adaptation with subject-specific EEG data, which are expensive and time consuming to collect. In this paper, domain...
Standard neural machine translation (NMT) assumes that document-level context is independent. Most existing document-level NMT methods only focus on briefly introducing document-level information but fail to select the most related part inside the document context. The capacity of a memory network for detecting the most relevant...
Various studies have shown that the temporal information captured by conventional long short-term memory (LSTM) networks is very useful for enhancing multimodal emotion recognition using electroencephalography (EEG) and other physiological signals. However, the dependency among multiple modalities and high-level temporal-feature learning using deeper LSTM...
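For context, a hedged sketch of the conventional multimodal LSTM setup the abstract refers to is shown below: one LSTM per modality over time, with the final hidden states concatenated for classification. The modality dimensions and fusion choice are illustrative assumptions, not the deeper architecture this work investigates.

# One LSTM per modality (EEG features, eye-movement features), late fusion by
# concatenating the final hidden states, then a linear classifier.
import torch
import torch.nn as nn

class MultimodalLSTM(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=33, hidden=64, n_classes=3):
        super().__init__()
        self.eeg_lstm = nn.LSTM(eeg_dim, hidden, batch_first=True)
        self.eye_lstm = nn.LSTM(eye_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg_seq, eye_seq):      # (batch, time, features) for each modality
        _, (h_eeg, _) = self.eeg_lstm(eeg_seq)
        _, (h_eye, _) = self.eye_lstm(eye_seq)
        fused = torch.cat([h_eeg[-1], h_eye[-1]], dim=1)   # late feature-level fusion
        return self.classifier(fused)

model = MultimodalLSTM()
logits = model(torch.randn(8, 30, 310), torch.randn(8, 30, 33))
print(logits.shape)                           # torch.Size([8, 3])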
Multimodal signals are more powerful than unimodal data for emotion recognition since they can represent emotions more comprehensively. In this paper, we introduce deep canonical correlation analysis (DCCA) to multimodal emotion recognition. The basic idea behind DCCA is to transform each modality separately and coordinate different modalities into...
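A minimal two-branch sketch of the coordinated-representation idea behind DCCA follows: each modality is transformed by its own network, and training encourages the projected views to be correlated. For brevity it uses a simplified per-dimension correlation objective rather than the full CCA loss used by DCCA; all dimensions are assumptions.

# Two modality-specific projection networks trained to produce correlated views.
import torch
import torch.nn as nn

class TwoBranch(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=33, out_dim=20):
        super().__init__()
        self.f_eeg = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
        self.f_eye = nn.Sequential(nn.Linear(eye_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, eeg, eye):
        return self.f_eeg(eeg), self.f_eye(eye)

def correlation_loss(a, b, eps=1e-8):
    a = (a - a.mean(0)) / (a.std(0) + eps)    # standardize each projected dimension
    b = (b - b.mean(0)) / (b.std(0) + eps)
    return -(a * b).mean()                    # negative mean per-dimension correlation

model = TwoBranch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg, eye = torch.randn(64, 310), torch.randn(64, 33)   # one toy training batch
za, zb = model(eeg, eye)
loss = correlation_loss(za, zb)
loss.backward()
opt.step()
print(float(loss))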
People generally agree that emotion processing differs between males and females. However, current hypotheses of sex differences need more objective evidence and quantitative assessment. In this paper, we investigate the sex difference in classifying five emotions from electroencephalograph and eye movement signals. We adopt two neural-network-based cl...