Yong Wang’s research while affiliated with China University of Geosciences and other places

Publications (3)


GEVEHGAN architecture diagram, comprising five components: (1) construct a heterogeneous graph with two types of nodes and three types of edges from LBSN user check-in trajectory information; (2) learn POI embeddings with the Skip-Gram model and user embeddings with Lite-GRU to initialize the nodes; (3) use the VAE module for dimensionality reduction and denoising; (4) apply edge-level attention, a multi-head attention mechanism, and residual connections for feature aggregation and updating; (5) perform friend link prediction using cosine similarity. The relationships shown are as follows: a, b, and c are friends; a and t, as well as c and t, are not friends.
Schematic of user trajectories. Different colors represent different users, while solid and dashed lines represent different sub-trajectories of the same user. On one day, a went to school, b went to the shop and the bar, and c went to school. On another day, a went to the gym, b went to the restaurant and the gym, and c went to the hospital and the company; t has not gone anywhere in the past two days.
Basic framework of the VAE, consisting of three parts: encoder, latent variables, and decoder. The encoder maps the input data to the latent space, the decoder reconstructs data from the latent variables, and the whole process is trained by optimizing the variational lower bound (written out after these captions).
Performance comparison of eight recommendation models on six different datasets—analysis of AUC, AP, and TOP@10 metrics. (a) Shows the AUC performance of the eight recommendation models across six datasets. (b) Shows the AP performance of each model on the same datasets. (c) Shows the TOP@10 hit rate performance of these models across six datasets. With these metrics, we can comprehensively evaluate the strengths and weaknesses of each recommendation model in different scenarios.
Performance evaluation metrics of GEVEHGAN and its derived models on the NYC and TKY datasets. (a) Shows the AUC scores of GEVEHGAN and its derived models on the NYC and TKY datasets; (b) displays the average precision scores of these models on the two city datasets; (c) presents the TOP@10 hit rate of the models on the NYC and TKY datasets. Through these detailed metric comparisons, we can gain a deeper understanding of the performance differences of GEVEHGAN and its variants in different data environments.
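For reference, the variational lower bound mentioned in the VAE caption above is the standard evidence lower bound (ELBO); the notation here is generic rather than taken from the paper:

```latex
% Standard VAE objective: reconstruction term plus a KL regularizer toward the prior p(z).
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```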

The Application of Lite-GRU Embedding and VAE-Augmented Heterogeneous Graph Attention Network in Friend Link Prediction for LBSNs
  • Article
  • Full-text available

April 2025 · 2 Reads

Ziteng Yang · Boyu Li · Yong Wang · Aoxue Liu

Friend link prediction is an important issue in recommendation systems and social network analysis. In Location-Based Social Networks (LBSNs), predicting potential friend relationships faces significant challenges due to the diversity of user behaviors, along with the high dimensionality, sparsity, and complex noise in the data. To address these issues, this paper proposes a Heterogeneous Graph Attention Network (GEVEHGAN) model based on Lite Gated Recurrent Unit (Lite-GRU) embedding and Variational Autoencoder (VAE) enhancement. The model constructs a heterogeneous graph with two types of nodes and three types of edges; combines Skip-Gram and Lite-GRU to learn Point of Interest (POI) and user node embeddings; introduces a VAE for dimensionality reduction and denoising of the embeddings; and employs edge-level attention mechanisms to enhance information propagation and feature aggregation. Experiments are conducted on the publicly available Foursquare dataset. The results show that the GEVEHGAN model outperforms other comparative models in evaluation metrics such as AUC, AP, and Top@K accuracy, demonstrating its superior performance in the friend link prediction task.
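The following is a minimal sketch, not the authors' released code, of two stages the abstract describes: a small VAE that compresses and denoises pre-computed node embeddings, and friend-link scoring by cosine similarity of the resulting latent vectors. The class names, layer sizes, and synthetic inputs are illustrative assumptions.

```python
# Hypothetical sketch of VAE-based embedding denoising + cosine-similarity link scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingVAE(nn.Module):
    def __init__(self, in_dim=128, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Negative variational lower bound: reconstruction error + KL divergence to N(0, I).
    recon_term = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl

def friend_score(z_u, z_v):
    # Candidate friend links are ranked by cosine similarity of latent embeddings.
    return F.cosine_similarity(z_u, z_v, dim=-1)

# Usage on synthetic data standing in for Lite-GRU user embeddings.
raw = torch.randn(100, 128)
vae = EmbeddingVAE()
recon, mu, logvar = vae(raw)
loss = vae_loss(raw, recon, mu, logvar)          # train by minimizing this
scores = friend_score(mu[:1], mu[1:])            # rank all users against user 0
```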

Friend Link Prediction Method Based on Heterogeneous Multigraph and Hierarchical Attention

February 2025 · 16 Reads

With the rapid growth of location-based social networks (LBSNs), rich data comprising users' social behaviors and location information has emerged. Accurately predicting potential friendships from this abundant information has become a pivotal research area. While graph neural networks (GNNs) have shown significant promise in link prediction, existing approaches often fail to fully exploit the heterogeneous data characteristics of LBSNs. Key challenges include inadequate modeling of the intricate relationships between users and points of interest (POIs), overlooking the significance of spatial-temporal information in user trajectories, and underutilizing rich edge features. To address these challenges, we design GEHMAN, a novel GRU-enhanced Heterogeneous Multigraph Attention Network. We construct a heterogeneous multigraph to comprehensively capture user-POI relationships, then employ a skip-gram model to embed POI nodes from user sub-trajectories and an RNN with GRU units to embed user nodes. GEHMAN uses a hierarchical attention mechanism to consolidate node information by aggregating diverse types of neighboring nodes and their connecting edges. Experiments on six real-world city datasets show that, compared with the best results of six benchmark methods including LBSN2vec++, Metapath2vec, and HAN, GEHMAN achieves average improvements of 2.225%, 1.948%, and 6.353% in AUC, AP, and Top@K, respectively.
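Below is a minimal sketch, under assumed dimensions and class names rather than the paper's implementation, of one step the abstract describes: embedding a user node by running a GRU over the sequence of POI embeddings from that user's check-in sub-trajectory.

```python
# Hypothetical sketch: user-node embedding from a POI-embedding trajectory via a GRU.
import torch
import torch.nn as nn

class TrajectoryUserEncoder(nn.Module):
    def __init__(self, poi_dim=64, user_dim=64):
        super().__init__()
        self.gru = nn.GRU(input_size=poi_dim, hidden_size=user_dim, batch_first=True)

    def forward(self, poi_seq):
        # poi_seq: (batch, seq_len, poi_dim) sequence of pre-trained POI embeddings
        _, h_n = self.gru(poi_seq)
        return h_n.squeeze(0)   # (batch, user_dim): final hidden state as the user embedding

# Usage with synthetic data: 8 users, each with a 20-step check-in sequence.
poi_seqs = torch.randn(8, 20, 64)   # in practice these would come from the skip-gram model
user_emb = TrajectoryUserEncoder()(poi_seqs)
```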


A Spatio-Temporal Encoding Neural Network for Semantic Segmentation of Satellite Image Time Series

November 2023 · 179 Reads

Remote sensing image semantic segmentation plays a crucial role in various fields, such as environmental monitoring, urban planning, and agricultural land classification. However, most current research primarily focuses on utilizing the spatial and spectral information of single-temporal remote sensing images, neglecting the valuable temporal information present in historical image sequences. In fact, historical images often contain valuable phenological variations in land features, which exhibit diverse patterns and can significantly benefit semantic segmentation tasks. This paper introduces a semantic segmentation framework for satellite image time series (SITS) based on dilated convolution and a Transformer encoder. The framework includes spatial encoding and temporal encoding. Spatial encoding, utilizing dilated convolutions exclusively, mitigates the loss of spatial accuracy and the need for up-sampling, while allowing for the extraction of rich multi-scale features through a combination of different dilation rates and dense connections. Temporal encoding leverages a Transformer encoder to extract temporal features for each pixel in the image. To better capture the annual periodic patterns of phenological phenomena in land features, position encoding is calculated from each image's acquisition date within the year. To assess the performance of this framework, comparative and ablation experiments were conducted using the PASTIS dataset. The experiments indicate that the framework achieves highly competitive performance with relatively few trainable parameters, resulting in an improvement of 8 percentage points in mean Intersection over Union (mIoU).
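The sketch below illustrates, under assumed shapes and layer sizes rather than the paper's actual network, the general structure the abstract describes: a dilated-convolution spatial encoder that preserves resolution, followed by a Transformer encoder applied along the time axis with position encoding computed from each image's day of year.

```python
# Hypothetical sketch of spatial (dilated conv) + temporal (Transformer) encoding for SITS.
import torch
import torch.nn as nn

class SpatialEncoder(nn.Module):
    def __init__(self, in_ch=10, feat=64):
        super().__init__()
        # Stacked dilated convolutions keep full spatial resolution (no pooling/up-sampling).
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=4, dilation=4), nn.ReLU(),
        )

    def forward(self, x):              # x: (B*T, C, H, W)
        return self.net(x)

def day_of_year_encoding(days, dim=64):
    # Sinusoidal encoding of acquisition day-of-year, so annual phenological
    # cycles map to consistent positions.  days: (T,) integer tensor.
    pos = days.float().unsqueeze(1)                    # (T, 1)
    i = torch.arange(0, dim, 2).float()
    angle = pos / (10000 ** (i / dim))
    enc = torch.zeros(len(days), dim)
    enc[:, 0::2] = torch.sin(angle)
    enc[:, 1::2] = torch.cos(angle)
    return enc                                         # (T, dim)

class TemporalEncoder(nn.Module):
    def __init__(self, feat=64, n_classes=20):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat, n_classes)

    def forward(self, seq, day_enc):   # seq: (B*H*W, T, feat), one sequence per pixel
        out = self.transformer(seq + day_enc)          # add date-based position encoding
        return self.head(out.mean(dim=1))              # per-pixel class logits

# Usage on a toy series: B=1, T=6 acquisition dates, 10 bands, 32x32 pixels.
B, T, C, H, W = 1, 6, 10, 32, 32
images = torch.randn(B, T, C, H, W)
days = torch.tensor([30, 90, 150, 210, 270, 330])
sp = SpatialEncoder(C)(images.view(B * T, C, H, W))                 # (B*T, 64, H, W)
seq = sp.view(B, T, 64, H, W).permute(0, 3, 4, 1, 2).reshape(-1, T, 64)
logits = TemporalEncoder()(seq, day_of_year_encoding(days))         # (B*H*W, n_classes)
```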