Yanan Zhou’s research while affiliated with Beijing Foreign Studies University and other places


Publications (9)


Corrigendum: Injecting competition into online programming and Chinese-English translation classrooms
  • Article
  • Full-text available

June 2025

Yinjia Wan

·

Jian Lian

·

Yanan Zhou

Figure caption: The leader-board in the online learning platform.
Injecting competition into online programming and Chinese-English translation classrooms

September 2024

·

31 Reads

Introducing competition has the potential to enhance students' learning performance. Nevertheless, findings on the impact of intergroup competition on students' learning performance and engagement have been contradictory, so further comprehensive investigation of this problem is necessary. To bridge this gap, the present study examines the efficacy of intergroup competition with respect to students' academic performance and motivation. We introduce an intergroup competition mechanism and implement it in an online programming course and an online Chinese-English translation course. The participants were sophomore students majoring in Computer Science and English: 108 Computer Science sophomores participated first, followed by 100 English sophomores. A quasi-experimental study was then undertaken comparing students from the two courses, online programming and Chinese-English translation, with students assigned to an experimental group and a comparison group, respectively. Independent samples t-tests were then conducted to measure the difference in academic performance between the two groups of students from the two courses. The results indicate that the groups exposed to the intergroup competition mechanism demonstrated considerably higher academic performance and engagement than the other groups. The findings suggest that the competition mechanism has the potential to be a beneficial instrument for enhancing both students' learning performance and motivation.
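
A minimal sketch of the kind of independent samples t-test described above, using SciPy; the group sizes, score values, and the Welch variant are illustrative assumptions, not the study's data or exact procedure.

```python
# Hypothetical comparison of exam scores between an experimental group
# (intergroup competition) and a comparison group; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(loc=82.0, scale=6.0, size=54)  # placeholder scores
comparison = rng.normal(loc=76.0, scale=6.0, size=54)

# Welch's t-test (equal_var=False) avoids assuming equal group variances.
t_stat, p_value = stats.ttest_ind(experimental, comparison, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```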




Figure captions:
  • The data collection process for classifying music-evoked emotions using EEG equipment based on the 10–20 system (Homan et al., 1987).
  • The architectural design of the proposed transformer model. The symbol L denotes the number of encoder blocks; the model has several variants, with L set to 12, 18, or 24. GAP denotes global average pooling; FC denotes fully connected.
  • The encoder block utilized in the proposed transformer model. MSA refers to multi-head self-attention.
  • The MLP block used in the proposed transformer model. GELU denotes the activation function (Lee, 2023).
  • The proposed model's error in (top) binary classification and (bottom) ternary classification.
Music-evoked emotions classification using vision transformer in EEG signals

April 2024

·

74 Reads

·

4 Citations

Introduction: The field of electroencephalogram (EEG)-based emotion identification has received significant attention and has been widely utilized in both human-computer interaction and therapeutic settings. Manually analyzing EEG signals requires a significant investment of time and effort. While machine learning methods have shown promising results in classifying emotions from EEG data, extracting distinctive characteristics from these signals still poses a considerable difficulty.
Methods: In this study, we provide a unique deep learning model that incorporates an attention mechanism to effectively extract spatial and temporal information from emotion EEG recordings, with the aim of addressing this gap. Emotion EEG classification is implemented with a global average pooling layer and a fully connected layer, which are employed to leverage the discernible characteristics. To assess the effectiveness of the suggested methodology, we first gathered a dataset of EEG recordings related to music-induced emotions.
Experiments: We then ran comparative tests between state-of-the-art algorithms and the method given in this study on this proprietary dataset. Furthermore, a publicly accessible dataset was included in subsequent comparative trials.
Discussion: The experimental findings provide evidence that the suggested methodology outperforms existing approaches in categorizing emotion EEG signals in both binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.
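
The components named in the figure captions (encoder blocks with multi-head self-attention, an MLP block with GELU, global average pooling, and a fully connected classifier) can be sketched as below; this is an assumed PyTorch outline, not the authors' implementation, and every size (embedding dimension, depth, channel count, clip length) is a placeholder.

```python
# Sketch only: a ViT-style encoder over EEG time steps with the captioned
# pieces (MSA, MLP + GELU, GAP, FC). Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=128, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim)
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.msa(h, h, h, need_weights=False)[0]  # residual around MSA
        return x + self.mlp(self.norm2(x))                # residual around MLP

class EEGTransformer(nn.Module):
    def __init__(self, channels=32, dim=128, depth=12, classes=2):
        super().__init__()
        self.embed = nn.Linear(channels, dim)             # per-time-step embedding
        self.blocks = nn.Sequential(*[EncoderBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, classes)               # FC classifier

    def forward(self, x):                                 # x: (batch, time, channels)
        x = self.blocks(self.embed(x))
        return self.head(x.mean(dim=1))                   # GAP over time, then FC

logits = EEGTransformer()(torch.randn(4, 256, 32))        # 4 clips, 256 samples, 32 electrodes
```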


The effect of music training on students’ mathematics and physics development at middle schools in China: A longitudinal study

March 2024

·

20 Reads

·

1 Citation

Heliyon

As a descriptive-inferential study, this research aimed to reveal the relationship between music training and Chinese middle school students' academic development in mathematics and physics. The participants were students from two middle schools located in two cities in Shandong province, China. From each school, 250 students were selected, and statistical analyses were applied both to the students' academic performance and to the data obtained from a scale designed by the authors. The results show that non-music students outperformed music students in both mathematics and physics development. In addition, music training did not contribute to academic achievement independently but rather in combination with several factors such as parents' education and out-of-school engagement. The findings suggest a positive influence of music training on non-musical cognitive learning, with potential implications for Chinese middle school education.
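
A hedged illustration of the reported interplay between music training and background factors is an ordinary least squares model with interaction terms; the variable names, synthetic data, and effect sizes below are invented for demonstration and are not drawn from the study's scale or records.

```python
# Illustrative OLS with interactions between music training and background
# factors; all data are simulated placeholders, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 250  # matches one school's sample size in the abstract
df = pd.DataFrame({
    "music_training":   rng.integers(0, 2, n),       # 1 = receives music training
    "parent_education": rng.integers(9, 19, n),      # years of schooling
    "out_of_school":    rng.integers(0, 8, n),       # weekly engagement hours
})
# Synthetic outcome with a small interaction effect, purely for demonstration.
df["math_score"] = (60 + 1.2 * df["parent_education"] + 0.8 * df["out_of_school"]
                    + 0.5 * df["music_training"] * df["out_of_school"]
                    + rng.normal(0, 5, n))

model = smf.ols(
    "math_score ~ music_training * (parent_education + out_of_school)", data=df
).fit()
print(model.summary())
```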


Identification of emotions evoked by music via spatial-temporal transformer in multi-channel EEG signals

July 2023

·

86 Reads

·

3 Citations

Introduction: Emotion plays a vital role in understanding activities and associations. Because it is non-invasive, many experts have employed EEG as a reliable technique for emotion recognition. Identifying emotions from multi-channel EEG signals is evolving into a crucial task for diagnosing emotional disorders in neuroscience. One challenge in automated emotion recognition from EEG signals is extracting and selecting discriminative features to classify different emotions accurately.
Methods: In this study, we propose a novel Transformer model for identifying emotions from multi-channel EEG signals. Note that we feed the raw EEG signal directly into the proposed Transformer, which aims to eliminate the issues caused by the local receptive fields of convolutional neural networks. The presented deep learning model consists of two separate channels that address the spatial and temporal information in the EEG signals, respectively.
Results: In the experiments, we first collected EEG recordings from 20 subjects while they listened to music. The proposed approach achieved accuracies of 97.3% and 97.1% for binary (positive and negative) and ternary (positive, negative, and neutral) emotion classification, respectively. We conducted comparison experiments on the same dataset using the proposed method and state-of-the-art techniques, and achieved a promising outcome in comparison with these approaches.
Discussion: Given its performance, the proposed approach can be a potentially valuable instrument for human-computer interface systems.
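
A minimal sketch of a two-branch (spatial and temporal) transformer over raw multi-channel EEG, assuming standard PyTorch encoder layers; the token definitions, fusion strategy, and dimensions are illustrative guesses rather than the paper's exact design.

```python
# Sketch: one branch attends across time steps, the other across electrodes,
# and the two pooled summaries are fused for classification. Sizes are assumed.
import torch
import torch.nn as nn

def encoder(dim, heads=4, layers=2):
    layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                       activation="gelu", batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=layers)

class SpatialTemporalEEG(nn.Module):
    def __init__(self, channels=32, time_steps=256, dim=64, classes=2):
        super().__init__()
        self.temporal_proj = nn.Linear(channels, dim)     # tokens = time steps
        self.spatial_proj = nn.Linear(time_steps, dim)    # tokens = electrodes
        self.temporal_enc = encoder(dim)
        self.spatial_enc = encoder(dim)
        self.head = nn.Linear(2 * dim, classes)

    def forward(self, x):                                 # x: (batch, time, channels)
        t = self.temporal_enc(self.temporal_proj(x)).mean(dim=1)
        s = self.spatial_enc(self.spatial_proj(x.transpose(1, 2))).mean(dim=1)
        return self.head(torch.cat([t, s], dim=-1))       # fuse the two branches

logits = SpatialTemporalEEG()(torch.randn(8, 256, 32))    # 8 clips, 256 samples, 32 electrodes
```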


Aurora Classification in All-Sky Images via CNN–Transformer

May 2023

·

85 Reads

·

10 Citations

An aurora is a unique geophysical phenomenon with polar characteristics that can be directly observed with the naked eye. It is the most concentrated manifestation of solar–terrestrial physical processes (especially magnetospheric–ionospheric interactions) in polar regions and is also the best window for studying solar storms. Owing to the rich morphological information in aurora images, increasing attention is being paid to studying aurora phenomena from the perspective of images. Recently, some machine learning and deep learning methods have been applied to this field and have achieved preliminary results. However, owing to the limitations of these learning models, they still fall short of the recognition accuracy required for classifying and predicting auroral images. To solve this problem, this study introduces a convolutional neural network–transformer solution based on vision transformers. Comparative experiments show that the proposed method can effectively improve the accuracy of aurora image classification, and its performance exceeds that of state-of-the-art deep learning methods. The experimental results show that the algorithm presented in this study is an effective instrument for classifying auroral images and can provide practical assistance for related research.
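
A hedged sketch of a CNN–transformer hybrid of the kind described: a small convolutional stem extracts local features whose spatial positions become tokens for a transformer encoder that supplies global context. The layer counts, widths, and four-class output are assumptions, not the paper's configuration.

```python
# Sketch only: CNN stem -> tokens -> transformer encoder -> pooled classifier.
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    def __init__(self, classes=4, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                          # local feature extraction
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)  # global context
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                                  # x: (batch, 3, H, W)
        f = self.cnn(x)                                    # (batch, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)              # (batch, h*w, dim)
        return self.head(self.transformer(tokens).mean(dim=1))

logits = CNNTransformer()(torch.randn(2, 3, 224, 224))     # e.g., 4 assumed aurora categories
```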


Figure captions:
  • The pipeline from the online music learning platform with VR and IoT sensors to the non-Euclidean data structure.
  • The structure of the proposed GNN.
  • The optimal threshold value set in the proposed GNN.
  • The experimental outcome (sensitivity (%), specificity (%), and accuracy (%)) of the proposed approach for each student.
  • The comparison between the state-of-the-art and our work in terms of sensitivity (%), specificity (%), and accuracy (%).
Virtual Reality and Internet of Things-Based Music Online Learning via the Graph Neural Network

October 2022

·

62 Reads

·

6 Citations

Computational Intelligence and Neuroscience

Virtual reality and the Internet of Things have shown their capability in a variety of tasks. However, their applicability to online learning remains an unresolved issue. To bridge this gap, we propose a virtual reality and Internet of Things-based pipeline for online music learning. A graph neural network is used to generate an automated evaluation of learning performance that was traditionally given by teachers. Specifically, a graph neural network-based algorithm is employed to identify the real-time status of each student within an online class. In the proposed algorithm, the characteristics of each student, collected from the multiple sensors deployed on their bodies, are taken as the input features for each node in the presented graph neural network. With the adoption of convolutional layers and dense layers, as well as the similarity between each pair of students, the proposed approach can predict the future circumstances of the entire class. To evaluate the performance of our work, comparison experiments between several state-of-the-art algorithms and the proposed algorithm were conducted. The results demonstrate that the graph neural network-based algorithm achieved competitive performance (sensitivity 91.24%, specificity 93.58%, and accuracy 89.79%) over the state-of-the-art.
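
A minimal sketch of the graph setup described above, assuming nodes are students carrying sensor-derived features and edges come from pairwise feature similarity; the feature size, similarity threshold, and two-layer message passing are illustrative choices, not the authors' implementation.

```python
# Sketch: similarity-thresholded student graph, simple graph convolutions,
# and a dense head that predicts each student's status. Data are simulated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentGNN(nn.Module):
    def __init__(self, in_dim=16, hidden=32, classes=2):
        super().__init__()
        self.gc1 = nn.Linear(in_dim, hidden)   # graph convolution weights
        self.gc2 = nn.Linear(hidden, hidden)
        self.dense = nn.Linear(hidden, classes)

    def forward(self, x, adj):                 # x: (students, in_dim), adj: (students, students)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        norm_adj = adj / deg                   # row-normalized neighbourhood averaging
        h = F.relu(self.gc1(norm_adj @ x))
        h = F.relu(self.gc2(norm_adj @ h))
        return self.dense(h)                   # per-student prediction

x = torch.randn(30, 16)                        # 30 students, 16 sensor-derived features
sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)
adj = (sim > 0.5).float()                      # threshold similarity to define edges
logits = StudentGNN()(x, adj)
```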

Citations (5)


... During training, both accuracy and loss on the training and validation datasets are continuously monitored. Similar methodologies have been applied in prior research to ensure effective learning without overfitting [34]. Before training, the hyperparameters were carefully determined to achieve optimal performance. ...

Reference:

Implementation of Vision Transformer for Early Detection of Autism Based on EEG Signal Heatmap Visualization
Music-evoked emotions classification using vision transformer in EEG signals

... Boyd (2013) conducted a study on the relationship between music participation and mathematics achievement in middle school students and found no significant difference in mean mathematics scores between students who were taught using songs and students who were taught without song activities. Lian and Zhou (2024) found a significant influence of song on students' mathematics development. Holmes and Hallam (2017) studied the impact of music participation on learning mathematics and found a significant influence of song on mathematics achievement. ...

The effect of music training on students’ mathematics and physics development at middle schools in China: A longitudinal study
  • Citing Article
  • March 2024

Heliyon

... In recent years, scholars have discovered that the spatiotemporal characteristics of EEG play a crucial role in emotion recognition. Many studies have introduced novel spatiotemporal features based on self-attention mechanisms (Zhou and Lian, 2023). As our comprehension of the neural mechanisms underlying emotional responses deepens, these new features are critical for enhancing the accuracy of emotion recognition. ...

Identification of emotions evoked by music via spatial-temporal transformer in multi-channel EEG signals

... However, it is worth noting that the inherent disadvantages of CNNs, which are limited by local receptive fields, may limit further performance improvement, whereas the Transformer shows potential to overcome this limitation. Lian et al. [26] combined a Transformer with Inception-ResNet-V2, which can identify aurora borealis images by combining local and global receptive fields. Their performance outperformed the current state-of-the-art deep learning methods. ...

Aurora Classification in All-Sky Images via CNN–Transformer

... It supports the development of smart music education and recommendation systems based on real-time audio analysis. The subject of graph neural networks (GNNs) powering adaptive learning environments in VR-based music education is approached by Lian et al. (2022). This paper introduces an online music education platform using graph neural networks (GNNs) for adaptive feedback. ...

Virtual Reality and Internet of Things-Based Music Online Learning via the Graph Neural Network

Computational Intelligence and Neuroscience