Jianping Yin’s research while affiliated with Dongguan University of Technology and other places


Publications (9)


Figures (captions from the article below; 6 additional figures not shown):
  • Unlabeled images are fed into the model to obtain predictions. Each bar chart shows the prediction for one image, with per-class probabilities in different colors. The maximum prediction is compared to a fixed threshold γ; if it exceeds γ, the corresponding class is kept as a pseudo-label.
  • (a) Three typical mapping functions: linear, convex, and concave; the shape of the curve guides how the threshold is adjusted. (b) Our mapping function, given in Equation (11), for different λ. (c) The resulting thresholds when the mapping function of (b) is applied.
  • FldtMatch: DSSL with Self-adaptive Dynamic Threshold. The yellow dashed box on the right uses one labeled and one unlabeled image as an example; any Convolutional Neural Network can serve as the backbone.
  • The change in learning difficulty σ and dynamic thresholds Γ over training under FlexMatch, with mapping function M(x) = x/(2 − x) and γ = 0.95.
  • (a) Additional threshold curves under our mapping function for different λ; when λ is very large, the thresholds vary only slightly as λ changes further. (b) Macro F1-Score on DFUC2021 for varying λ, with the best result at λ = 10.

FldtMatch: Improving Unbalanced Data Classification via Deep Semi-Supervised Learning with Self-Adaptive Dynamic Threshold
  • Article
  • Full-text available

January 2025 · 7 Reads · 1 Citation

Xin Wu · Jingjing Xu · Kuan Li · [...] · Jian Xiong

Among the many methods of deep semi-supervised learning (DSSL), the holistic approach, which combines ideas from other methods such as consistency regularization and pseudo-labeling, has been highly successful. It typically introduces a threshold to exploit unlabeled data: if the highest predicted probability for an unlabeled sample exceeds the threshold, the associated class is assigned as the sample's pseudo-label. However, current methods use fixed or dynamic thresholds that disregard the varying learning difficulties across categories in unbalanced datasets. To overcome this, we first design Cumulative Effective Labeling (CEL) to reflect the learning difficulty of a particular class. Unlike previous measures, CEL uses both effective pseudo-labels and ground truth, which jointly influence the model's capacity to acquire category knowledge. Based on CEL, we then propose a simple but effective way to compute the threshold, the Self-adaptive Dynamic Threshold (SDT). It requires only a single hyperparameter to adapt to various scenarios, eliminating the need for a bespoke threshold-adjustment scheme for each case. SDT employs a mapping function that counteracts the differing learning difficulties across categories of an unbalanced image dataset, which otherwise degrade dynamic thresholding. Finally, we propose a deep semi-supervised method with SDT called FldtMatch. Through theoretical analysis and extensive experiments, we show that FldtMatch overcomes the negative impact of unbalanced data. Regardless of the choice of backbone network, our method achieves the best results on multiple datasets, with maximum improvements in macro F1-Score of about 5.6% on DFUC2021 and 2.2% on ISIC2018.
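A minimal sketch of the per-class dynamic thresholding that FldtMatch builds on is shown below. The abstract does not reproduce the CEL update or the λ-parameterized mapping of Equation (11), so the normalized 1 − exp(−λβ) mapping and the function names here are illustrative assumptions rather than the paper's exact formulas; only γ = 0.95 and λ = 10 are values mentioned in the source.

```python
# Illustrative per-class dynamic thresholding for pseudo-labeling
# (assumed sketch, not the published FldtMatch implementation).
import torch
import torch.nn.functional as F


def dynamic_thresholds(effective_labels, gamma=0.95, lam=10.0):
    """Map per-class cumulative effective labels (CEL) to per-class thresholds.

    effective_labels: tensor [C], cumulative effective labels per class.
    gamma: base confidence threshold.
    lam:   single hyperparameter shaping the mapping (stand-in for Equation (11)).
    """
    sigma = effective_labels.float()
    beta = sigma / sigma.max().clamp(min=1.0)   # normalized learning difficulty in [0, 1]
    # Concave, lambda-controlled mapping; for large lam the thresholds flatten
    # towards gamma, mirroring the behaviour described for the paper's curves.
    mapping = (1.0 - torch.exp(-lam * beta)) / (1.0 - torch.exp(torch.tensor(-lam)))
    return gamma * mapping                      # per-class thresholds


def pseudo_label(unlabeled_logits, thresholds):
    """Keep a pseudo-label only when the max class probability beats its class threshold."""
    probs = F.softmax(unlabeled_logits, dim=-1)
    confidence, labels = probs.max(dim=-1)
    mask = confidence > thresholds[labels]
    return labels, mask
```

Under such a scheme, well-learned classes keep thresholds near γ while harder classes receive lower thresholds, so more of their unlabeled samples contribute pseudo-labels; this is the same kind of adjustment the FlexMatch mapping M(x) = x/(2 − x) shown in the figures performs.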


Exploring feature sparsity for out-of-distribution detection

November 2024 · 31 Reads

Out-of-distribution (OOD) detection is a crucial problem in practice, especially for the safe deployment of machine learning models in industrial settings. Previous work has used free energy as a score function and proposed a fine-tuning method that uses OOD data during the training phase of the classification model, achieving higher performance on the OOD detection task than traditional methods. One key drawback, however, is that the loss-function parameters depend strongly on the datasets involved, so the method cannot be adapted dynamically to other settings; in other words, the generalization ability of the energy score is considerably limited. In this work, our point of departure is to enlarge the distinguishability between in-distribution features and OOD data. We present a simple yet effective sparsity-regularized (SR) tuning framework for this purpose. The framework has two workflows, depending on whether external OOD data are available; this modification sharply reduces the complexity of the original training loss while improving adaptability and detection performance. We also contribute a mini dataset as a light and efficient alternative to the previous large-scale one. In experiments, we verify the effectiveness of our framework on a wide range of typical datasets and common network architectures.
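For reference, the free-energy score the abstract refers to reduces to a temperature-scaled log-sum-exp over the logits; a minimal sketch is below. The sparsity-regularized (SR) term itself is not specified in the abstract, so `sr_penalty` here is only an assumed L1 penalty on penultimate-layer features, illustrating where such a regularizer could enter the fine-tuning loss.

```python
# Energy-score OOD detection sketch; the SR term is an assumed placeholder.
import torch


def energy_score(logits, temperature=1.0):
    """Negative free energy of the logits; higher values suggest in-distribution."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)


def sr_penalty(features, weight=1e-4):
    """Assumed sparsity regularizer: mean L1 norm of penultimate-layer features."""
    return weight * features.abs().sum(dim=-1).mean()


def flag_in_distribution(logits, threshold):
    """Treat inputs whose energy score exceeds a validation-chosen threshold as ID."""
    return energy_score(logits) > threshold
```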

MS-UNet-v2: Adaptive Denoising Method and Training Strategy for Medical Image Segmentation with Small Training Data

September 2023 · 29 Reads

Models based on U-like structures have improved the performance of medical image segmentation. However, the single-layer decoder structure of U-Net is too "thin" to exploit enough information, resulting in large semantic differences between the encoder and decoder parts. The problem worsens when the amount of training data is small, which is common in medical image processing, where annotated data are more difficult to obtain than in other tasks. Based on this observation, we propose a novel U-Net model named MS-UNet for medical image segmentation. Instead of the single-layer U-Net decoder structure used in Swin-UNet and TransUnet, we design a multi-scale nested decoder based on the Swin Transformer for U-Net. The proposed multi-scale nested decoder structure brings the feature maps of the decoder and encoder semantically closer, enabling the network to learn more detailed features. In addition, we propose a novel Edge loss and a plug-and-play fine-tuning Denoising module, which not only effectively improve the segmentation performance of MS-UNet but can also be applied to other models individually. Experimental results show that MS-UNet improves network performance through more efficient feature learning and performs especially well in the extreme case of very little training data, and that the proposed Edge loss and Denoising module significantly enhance its segmentation performance.
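The Edge loss is not detailed in the abstract, so the sketch below is an assumed stand-in rather than the paper's formulation: it compares Sobel edge maps of the predicted and ground-truth masks with an L1 term, one common way to build an edge-aware segmentation loss.

```python
# Assumed edge-aware loss sketch (Sobel-based); not the published MS-UNet Edge loss.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)


def _edge_map(mask):
    """Approximate edge magnitude of a [N, 1, H, W] probability mask."""
    gx = F.conv2d(mask, _SOBEL_X.to(mask), padding=1)
    gy = F.conv2d(mask, _SOBEL_Y.to(mask), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def edge_loss(pred_probs, target_mask):
    """L1 distance between the edge maps of the prediction and the ground truth."""
    return F.l1_loss(_edge_map(pred_probs), _edge_map(target_mask.float()))
```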

Hybrid Pre-training Based on Masked Autoencoders for Medical Image Segmentation

December 2022 · 20 Reads · 1 Citation

Communications in Computer and Information Science

The application of deep learning to medical images has been gaining attention in recent years. However, training is difficult because medical images are scarce, cannot be generated manually, and are expensive to annotate. Masked Autoencoders (MAE), a recent self-supervised approach built on the Vision Transformer (ViT) architecture, increases the learning difficulty by masking part of the non-overlapping patches cut from the input image, so that the encoder learns deeper image representations. In this paper, we pre-train with MAE's pre-training method on three different small-scale datasets and select medical image segmentation as the downstream performance test. We find experimentally that pre-training on a new hybrid dataset, built from the training set of the downstream task combined with other datasets, performs well: the results improve over those obtained without pre-training and over those obtained with a single dataset, showing the potential of self-supervised hybrid pre-training for medical segmentation tasks.

Keywords: Masked autoencoders · Hybrid pre-training · Medical image segmentation
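The masking step MAE applies before encoding can be sketched as follows; this follows the standard MAE recipe of shuffling patch tokens and keeping a fixed fraction visible, with patch embedding and the ViT encoder/decoder omitted.

```python
# MAE-style random masking of non-overlapping patch tokens (standard recipe sketch).
import torch


def random_masking(patch_tokens, mask_ratio=0.75):
    """patch_tokens: [N, L, D] patch embeddings; returns the visible tokens,
    a binary mask (1 = masked) in the original patch order, and the index
    permutation needed to restore that order after decoding."""
    n, l, d = patch_tokens.shape
    len_keep = int(l * (1 - mask_ratio))

    noise = torch.rand(n, l, device=patch_tokens.device)   # one score per patch
    ids_shuffle = noise.argsort(dim=1)                      # random permutation
    ids_restore = ids_shuffle.argsort(dim=1)

    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(patch_tokens, 1,
                           ids_keep.unsqueeze(-1).expand(-1, -1, d))

    mask = torch.ones(n, l, device=patch_tokens.device)
    mask[:, :len_keep] = 0                                  # kept positions
    mask = torch.gather(mask, 1, ids_restore)               # back to input order
    return visible, mask, ids_restore
```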

Citations (4)


... Non-OOD-Exposure OOD Detection. OOD detection is a fundamental task extensively studied in diverse machine learning domains (Lee et al., 2018a;Lafon et al., 2023;Jiang et al., 2023;Huang et al., 2022c;Hendrycks & Gimpel, 2017b;Lee et al., 2018b;Koo et al., 2024;Papadopoulos et al., 2021;Chen et al., 2023). A representative line of work that relies on purely ID data is based on the model's output including using softmax score (Hendrycks & Gimpel, 2017a;Liang et al., 2018), using energy score (Liu et al., 2020), and activation pruning-based methods (Djurisic et al., 2023;Sun & Li, 2022;Sun et al., 2021). ...

Reference:

GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation
Outlier Exposure with Focal Loss for Out-of-distribution Detection
  • Citing Conference Paper
  • February 2024

... K-Nearest Neighbors classifier is employed to iteratively optimize the value of K during predictions on test data. Wu et al. [26] introduced an ensemble model for DFU classification by combining three neural architectures: EfficientNet-B3, DenseNet-201 and BiT-M-R101. Their approach deviates from the conventional plurality voting method and focuses on a novel strategy called "voting with expertise". ...

Improving DFU Image Classification by an Adaptive Augmentation Pool and Voting with Expertise
  • Citing Conference Paper
  • April 2023

... Neuropathy, ischaemia, or infection can cause changes in the temperature of diabetics' plantar feet [13][14]. While the normal difference is usually less than 1 °C, temperature disparities between the right and left feet of more than 2.2 °C (4 °F) are deemed abnormal [15]. By using a thermal-imaging camera, issues may be found early on, thus saving time and money. ...

Ensemble Learning for Diabetic Foot Ulcer Segmentation based on DFUC2022 Dataset
  • Citing Conference Paper
  • March 2023

... Methods like MAE, which utilize masked image modeling (MIM), have opened up new possibilities for medical image segmentation. In terms of MAE's strengths, a team from Dongguan University of Technology [92] (2022.7) finds that MAE performs well in hybrid pre-training for medical segmentation tasks. ...

Hybrid Pre-training Based on Masked Autoencoders for Medical Image Segmentation
  • Citing Chapter
  • December 2022

Communications in Computer and Information Science