Jianling Tan’s research while affiliated with Chongqing University of Posts and Telecommunications and other places


Publications (6)


Top-down modulation of DLPFC in visual search: a study based on fMRI and TMS
  • Article

January 2024 · 28 Reads · 1 Citation · Cerebral Cortex

Congming Tan · Jianling Tan · [...] · Yi Tang

Effective visual search is essential for daily life, and attention orientation as well as inhibition of return play a significant role in it. Research has established the involvement of the dorsolateral prefrontal cortex in cognitive control during selective attention. However, neural evidence on how the dorsolateral prefrontal cortex modulates inhibition of return in visual search is still insufficient. In this study, we employed event-related functional magnetic resonance imaging and dynamic causal modeling to develop modulation models for two types of visual search tasks. In the region-of-interest analyses, we found that the right dorsolateral prefrontal cortex and temporoparietal junction were selectively activated in the main effect of search type. Dynamic causal modeling results indicated that the temporoparietal junction received sensory inputs and that only the dorsolateral prefrontal cortex → temporoparietal junction connection was modulated in serial search. This neural modulation showed a significant positive correlation with behavioral reaction time. Furthermore, theta burst stimulation via transcranial magnetic stimulation was applied to the dorsolateral prefrontal cortex, and the inhibition of return effect during serial search disappeared after continuous theta burst stimulation. Our findings provide a new line of causal evidence that top-down modulation by the dorsolateral prefrontal cortex influences the inhibition of return effect during serial search, possibly through the retention of inhibitory tagging via working memory storage.
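The brain-behavior link reported here is a per-subject correlation between a DCM coupling parameter and reaction time. As a rough, purely illustrative sketch of that analysis step (all values below are simulated placeholders, not data from the study, and scipy is an assumed tool), such a correlation might be computed like this:

```python
# Illustrative only: correlate a hypothetical DLPFC -> TPJ modulation strength
# (e.g., a DCM B-matrix entry) with mean reaction time across subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20

# Placeholder per-subject modulation parameters (not the study's data)
modulation = rng.normal(loc=0.3, scale=0.1, size=n_subjects)
# Placeholder mean reaction times (s) in the serial-search condition
reaction_time = 0.9 + 0.5 * modulation + rng.normal(scale=0.05, size=n_subjects)

r, p = stats.pearsonr(modulation, reaction_time)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```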


Regions showing FC differences between aHipp and pHipp. (a) Statistical results in the left hemisphere. (b) Statistical results in the right hemisphere. Red: FC of aHipp significantly greater than that of pHipp; blue: FC of pHipp significantly greater than that of aHipp (p < 0.05, FWE corrected).
Trends of hippocampal white matter FA with age. Inverted U-shaped trajectories of FA for aHipp.L, pHipp.L, and pHipp.R. Orange plots represent aHipp and blue plots pHipp.
Trends of hippocampal functional connectivity with age. (a) FC results of the GLM in the left hemisphere: U-shaped trajectories for aHipp.L-ACC.L, pHipp.L-ACC.L, and aHipp.L-Calcarine.L; linear downward trajectory for pHipp.L-Calcarine.L. (b) FC results of the GLM in the right hemisphere: inverted U-shaped trajectories for pHipp.R-MCC.R and pHipp.R-FFA.R; linear upward trajectories for aHipp.R-lOFC.R and pHipp.R-lOFC.R; linear downward trajectory for pHipp.R-mOFC.R. Orange plots represent aHipp and blue plots pHipp (".L": left hemisphere; ".R": right hemisphere).

Alterations in Human Hippocampus Subregions across the Lifespan: Reflections on White Matter Structure and Functional Connectivity
  • Article
  • Full-text available

March 2023 · 59 Reads · 2 Citations

During growth and aging, the role of the hippocampus in memory depends on its interactions with related brain regions. In particular, two subregions, the anterior hippocampus (aHipp) and posterior hippocampus (pHipp), play distinct and critical roles in memory processing. However, age-related structural and functional changes in these hippocampal subregions remain unclear. Here, we investigated age-related structural and functional characteristics of 106 participants (7-85 years old) in the resting state, based on fractional anisotropy (FA) and functional connectivity (FC) of aHipp and pHipp across the lifespan. The correlation between FA and FC was also explored to identify structure-function coupling. Furthermore, the Wechsler Abbreviated Scale of Intelligence (WASI) was used to explore the relationship between cognitive ability and hippocampal changes. Results showed functional separation and integration in aHipp and pHipp, with pHipp exhibiting more functional connections than aHipp across the lifespan. The age-related FC changes followed four different trends (U-shaped, inverted U-shaped, linear upward, linear downward), and around age 40 was a critical transition period. FA analyses indicated that all effects of age on hippocampal structure were nonlinear, and the white matter integrity of pHipp was higher than that of aHipp. For functional-structural coupling, the age-related FA of the right aHipp (aHipp.R) was negatively related to its FC. Finally, using the WASI, we found that the age-related FA of the left aHipp (aHipp.L) was positively correlated with verbal IQ (VERB) and vocabulary comprehension (VOCAB.T), the FA of aHipp.R was positively correlated with VERB only, and the FA of the left pHipp (pHipp.L) was positively correlated with VOCAB.T only. These FC and FA results support the view that normal age-related memory changes are closely related to the hippocampal subregions. We also provide empirical evidence that memory ability changes with the hippocampus and that its efficiency tends to decline after age 40.
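The four age trajectories described above (U-shaped, inverted U-shaped, linear upward, linear downward) are the kinds of shapes one can recover by comparing linear and quadratic age models for each connection. The sketch below is illustrative only, with simulated data and statsmodels as an assumed tool; it is not the authors' analysis code:

```python
# Compare a linear and a quadratic (U-shaped / inverted U-shaped) age model for a
# single FC value and keep whichever fits better by AIC. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = rng.uniform(7, 85, size=106)
# Simulated FC with an inverted U-shape peaking near age 40 (placeholder data)
fc = -0.0004 * (age - 40) ** 2 + 0.5 + rng.normal(scale=0.05, size=age.size)

X_lin = sm.add_constant(age)                                # intercept + age
X_quad = sm.add_constant(np.column_stack([age, age ** 2]))  # intercept + age + age^2

fit_lin = sm.OLS(fc, X_lin).fit()
fit_quad = sm.OLS(fc, X_quad).fit()

best = "quadratic" if fit_quad.aic < fit_lin.aic else "linear"
print(f"linear AIC = {fit_lin.aic:.1f}, quadratic AIC = {fit_quad.aic:.1f} -> {best}")
if best == "quadratic":
    b0, b1, b2 = fit_quad.params
    print(f"estimated turning point at age {-b1 / (2 * b2):.1f}")
```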


Diagram of the experimental composition of one session
Architecture of EEGNet in the present study. White boxes displayed the information of the corresponding layers. The number before "@" was the number of filters per layer; the size after "@" was the input data size for each layer. Rectangles indicated convolutions and circles indicated the result after flattening. Different sizes represented different input sizes of these layers, and different colors indicated that different convolution kernel parameters were used
The within-subject average saliency maps. a Saliency map in non-social scenarios before joint attention training. b Saliency map in non-social scenarios after joint attention training. c Saliency map in social scenarios before joint attention training. d Saliency map in social scenarios after joint attention training. The maps showed the positive gradients in red and the negative gradients in blue
The results of the quantified spatio-temporal properties. a Spatial distribution of discriminative electrodes in decoding. The color bar represented the average gradient value; the darker the color, the larger the gradient value, i.e., the more discriminative that electrode. b Temporal properties before and after attention training in two scenarios. c ERP of the Pz electrode in two scenarios
The most varied interval of the gradient. a The 100 ms interval in non-social scenarios. b The 100 ms interval in social scenarios. c Correlation between training times and P300 latency in non-social scenarios. d Correlation between training times and P300 latency in social scenarios. The dashed area was the 95% confidence interval
EEG decoding for effects of visual joint attention training on ASD patients with interpretable and lightweight convolutional neural network

March 2023 · 79 Reads · 5 Citations · Cognitive Neurodynamics

Visual joint attention, the ability to track gaze and recognize intent, plays a key role in the development of social and language skills in healthy humans, and is markedly impaired in autism spectrum disorder (ASD). The compact convolutional neural network EEGNet is an effective decoding model, but few studies have used it to study attention training in ASD patients. In this study, EEGNet was used to decode the P300 signal elicited during training, and the saliency map method was used to visualize the cognitive properties of ASD patients during visual attention. The results showed that, in the spatial distribution, the parietal lobe was the main region contributing to classification, especially the Pz electrode. In the temporal dimension, the period from 300 to 500 ms produced the greatest contribution to electroencephalogram (EEG) classification, especially around 300 ms. After training, the gradient contribution of ASD patients was significantly enhanced at 300 ms, an effect observed only in social scenarios. Meanwhile, as joint attention training progressed, the P300 latency of ASD patients gradually shifted forward in social scenarios, whereas this shift was not evident in non-social scenarios. Our results indicate that joint attention training can improve the cognitive ability and responsiveness to social characteristics in ASD patients.
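EEGNet is a published compact CNN design (temporal convolution, depthwise spatial convolution, separable convolution). The PyTorch sketch below follows that general layout; the hyperparameters (channel count, sample count, kernel sizes, class count) are illustrative assumptions, not the authors' settings:

```python
# Minimal EEGNet-style model: temporal conv -> depthwise spatial conv -> separable conv.
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    def __init__(self, n_channels=60, n_samples=154, n_classes=2,
                 F1=8, D=2, F2=16, dropout=0.5):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),            # temporal filters
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),     # depthwise spatial filters
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8), groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, (1, 1), bias=False),                         # separable conv
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        self.classify = nn.LazyLinear(n_classes)   # infers the flattened size at first call

    def forward(self, x):                          # x: (batch, 1, channels, samples)
        x = self.block2(self.block1(x))
        return self.classify(torch.flatten(x, start_dim=1))

model = EEGNetSketch()
logits = model(torch.randn(4, 1, 60, 154))         # 4 trials of 60-channel EEG
print(logits.shape)                                # torch.Size([4, 2])
```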


Figure 1. Classification accuracy for 42 subjects; the black line represents the chance level of 28.8%.
Figure 2. Confusion matrix showing the classification results for four conditions.
Figure 3. Visualization of deep learning for the four categories. (a) Saliency map over time. The greater the brightness, the greater the classification contribution of the current channel. The x-axis is 154 time points (−200 to 1000 ms), and the y-axis is channels 1-60. (b) Topography of the saliency map during 300-450 ms. The average value of the saliency map over 300-450 ms is used to draw the scalp topography. After normalization, a threshold of 0.75 is used to obtain the channels with the largest contribution; channels such as FCz, FC1, FC2, and Fz contribute the most to classification. (c) ERP waveform of the largest contributing channel. The red line indicates MDD, the blue line indicates HS, solid lines indicate correct feedback, and dashed lines indicate incorrect feedback. The shaded part of the figure is the most significant time period of the channel, 300-450 ms after stimulus onset. (d) Mean EEG scalp topography of the four categories over the most significant time period, 300-450 ms.
Using deep learning to decode abnormal brain neural activity in MDD from single-trial EEG signals

May 2022 · 74 Reads · 5 Citations · Brain-Apparatus Communication: A Journal of Bacomics

Objectives: The application of electroencephalography (EEG) to the study of major depressive disorder (MDD) is a common approach. However, there is no one-to-one correspondence between EEG and brain neural activity, and it is unclear whether single-trial EEG signals capture the cognitive neural activity of MDD. Methods: Here, we used deep learning to explore this issue. The deep learning approach adopted in this paper was an end-to-end classification model, EEGNet, which updates model parameters automatically based on the characteristics of the data and classifies MDD patients and healthy subjects (HS) from single-trial EEG signals at above-chance accuracy. Furthermore, the saliency map method was used to analyze the trained network and visualize the channels and time periods that contributed most to classification. Results: EEGNet achieved an average classification accuracy of 61.4% across the four categories, and the feature visualization was consistent with the cognitive-neural interpretation of existing studies. Conclusion: The findings suggest that deep learning can help cognitive neuroscience explore neural activity.
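The saliency map used in these decoding studies is the standard gradient-based one: backpropagate the predicted class score to the single-trial input and inspect the gradients per channel and time point. A minimal sketch, assuming some trained PyTorch decoder (a placeholder linear model stands in for it below, and absolute gradients are used for simplicity):

```python
# Gradient-based saliency for a single EEG trial: d(score)/d(input), one value
# per channel x time point.
import torch
import torch.nn as nn

# Placeholder stand-in for a trained decoder (e.g., an EEGNet-style model).
model = nn.Sequential(nn.Flatten(), nn.Linear(60 * 154, 2))

def saliency_map(model, trial):
    """trial: (1, 1, n_channels, n_samples) single-trial EEG tensor."""
    model.eval()
    trial = trial.clone().requires_grad_(True)
    score = model(trial).max(dim=1).values.sum()   # score of the predicted class
    score.backward()
    return trial.grad.abs().squeeze()              # (n_channels, n_samples)

sal = saliency_map(model, torch.randn(1, 1, 60, 154))
channel_importance = sal.mean(dim=1)               # average over time -> per-channel weight
peak_time = sal.mean(dim=0).argmax()               # time point with the largest mean gradient
print(channel_importance.shape, int(peak_time))
```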


Figure 1. Design of each block of the experiment.
Figure 2. BLCNN model structure.
Figure 6. Visualization of features extracted from laboratory dataset by BLCNN.
Training results and evaluation on the laboratory dataset (%).
Training results and evaluation on the DEAP dataset (%).
Deep Learning with Convolutional Neural Networks for EEG-based Music Emotion Decoding and Visualization

May 2022 · 80 Reads · 12 Citations · Brain-Apparatus Communication: A Journal of Bacomics

Purpose: Emotion reflects an individual's perception and understanding of various things and requires the synergy of multiple brain regions. A large number of emotion decoding methods based on electroencephalogram (EEG) have been proposed, but how to extract the most discriminative and cognitively meaningful features for model construction remains unresolved. This paper aims to construct a model that can extract such features. Materials and methods: We collected EEG signals from 24 subjects in a musical emotion induction experiment. Then, an end-to-end branch LSTM-CNN (BLCNN) was used to extract emotion features from the laboratory dataset and the DEAP dataset for emotion decoding. Finally, the extracted features were visualized on the laboratory dataset using saliency maps. Results: The three-class accuracy on the laboratory dataset was 95.78% ± 1.70%, and the four-class accuracy on the DEAP dataset was 80.97% ± 7.99%. Discriminative features for positive emotion were distributed in the left hemisphere, while features for negative emotion were distributed in the right hemisphere, mainly in the frontal, parietal, and occipital lobes. Conclusion: We proposed a neural network model, BLCNN, which obtained good results on both the laboratory dataset and the DEAP dataset. Visual analysis of the features extracted by BLCNN showed that they were consistent with emotional cognition. This paper therefore provides a new perspective for practical human-computer emotional interaction.
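The abstract does not spell out the BLCNN layer configuration, so the following is only a hedged sketch of a two-branch LSTM-CNN: one branch applies convolutions across channels, the other runs an LSTM over time, and the concatenated features feed a classifier. All layer sizes, channel counts, and class counts are illustrative assumptions, not the authors' architecture:

```python
# Two-branch LSTM-CNN sketch: CNN branch for spatial/temporal filtering, LSTM
# branch for temporal dynamics, concatenated features for classification.
import torch
import torch.nn as nn

class BranchLSTMCNNSketch(nn.Module):
    def __init__(self, n_channels=32, n_samples=512, n_classes=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # convolutional branch
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(8),                     # -> (batch, 32, 8)
            nn.Flatten(),                                # -> (batch, 256)
        )
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(32 * 8 + hidden, n_classes)

    def forward(self, x):                                # x: (batch, channels, samples)
        cnn_feat = self.cnn(x)
        _, (h_n, _) = self.lstm(x.transpose(1, 2))       # LSTM over time steps
        return self.head(torch.cat([cnn_feat, h_n[-1]], dim=1))

model = BranchLSTMCNNSketch()
print(model(torch.randn(4, 32, 512)).shape)              # torch.Size([4, 3])
```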


Citations (4)


... The human brain undergoes substantial transformations throughout the lifespan, with each phase characterized by unique functional connectivity (FC) patterns. These stages, closely associated with cognitive development and structural changes in the brain (Song et al. 2014;Tan et al. 2023), have not been thoroughly described through electrophysiology. While previous works have primarily investigated FC changes using functional magnetic resonance imaging (fMRI), few research works employing electrophysiology have focused on specific age transitions within limited sample sizes. ...

Reference:

Functional connectivity across the lifespan: a cross-sectional analysis of changes
Alterations in Human Hippocampus Subregions across the Lifespan: Reflections on White Matter Structure and Functional Connectivity

... The model assumes that there is a great relevance between temporal and spatial domains, and we further propose that the relevance between spectral and time domains is even greater. Specifically for visual attention tasks, it was shown that both time domain [12,13] and spectral domain features are used [14]. Although there is great progress made on neglect detection using both event-related potentials (ERPs) and resting-state data [15], there hasn't been significant work done on neglect severity mapping. ...

EEG decoding for effects of visual joint attention training on ASD patients with interpretable and lightweight convolutional neural network

Cognitive Neurodynamics

... Brain-computer interface (BCI) is an emerging field attracting research institutions and industries in the areas of motor imagery classification [34], emotion recognition [29], disease diagnosis and detection [33], music imagery [32,38], and other tasks [3]. As one of the BCIs, non-invasive electroencephalography (EEG) has become popular and is commonly used owing to its convenience and mobility. ...

Deep Learning with Convolutional Neural Networks for EEG-based Music Emotion Decoding and Visualization

Brain-Apparatus Communication: A Journal of Bacomics

... Classifying ALS on a single-trial basis involves training a machine learning model with multiple samples/trials of a quantifiable objective marker that can efficiently predict a sample/trial as ALS or healthy after proper training. Single-trial detection using machine learning has shown great potential in several neural disorders including major depressive disorder (MDD) (Liu et al., 2022), autism spectrum disorder (ASD) (Ezabadi and Moradi, 2021), post-traumatic stress disorder (PTSD) (Georgopoulos et al., 2010), schizophrenia (Xu et al., 2013), amongst other neurologic disorders (Aoe et al., 2019). ...

Using deep learning to decode abnormal brain neural activity in MDD from single-trial EEG signals

Brain-Apparatus Communication: A Journal of Bacomics