July 2023
Lecture Notes in Electrical Engineering
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from labeled source domains to unlabeled target domains whose data or label distributions differ. Previous UDA methods have achieved great success when the source-domain labels are clean. However, acquiring large-scale clean labels in the source domain is itself costly. When the source domain contains label noise, traditional UDA methods degrade severely because they take no measures against it. In this paper, we propose an approach named Robust Self-training with Label Refinement (RSLR) to address this issue. RSLR adopts a self-training framework that maintains two Labeling Networks (LNets) on the source domain, which provide confident pseudo-labels to target samples, and a Target-specific Network (TNet) trained on the pseudo-labeled samples. To combat the effect of label noise, the LNets progressively distinguish and refine mislabeled source samples. RSLR leads to significant improvements over state-of-the-art methods on extensive benchmark datasets.

Keywords: Machine learning, Unsupervised domain adaptation, Noisy learning, Self-training
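
The sketch below illustrates the kind of self-training loop the abstract describes: two labeling networks trained on noisy source data, agreement-based pseudo-labeling of target samples, and a target-specific network trained on those pseudo-labels. The network shapes, the small-loss selection rule, and the confidence/agreement threshold are assumptions made for illustration; the paper's actual architectures and label-refinement strategy may differ.

```python
# Minimal illustrative sketch (assumed details, not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(in_dim=64, n_classes=10):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

# Two Labeling Networks (LNets) trained on the noisily labeled source domain,
# and one Target-specific Network (TNet) trained on pseudo-labeled target samples.
lnet1, lnet2, tnet = make_net(), make_net(), make_net()
opt1 = torch.optim.SGD(lnet1.parameters(), lr=1e-2)
opt2 = torch.optim.SGD(lnet2.parameters(), lr=1e-2)
opt_t = torch.optim.SGD(tnet.parameters(), lr=1e-2)

# Placeholder batches (random tensors) standing in for real source/target loaders.
xs, ys = torch.randn(256, 64), torch.randint(0, 10, (256,))  # noisy source labels
xt = torch.randn(256, 64)                                     # unlabeled target data

keep_ratio, conf_thresh = 0.7, 0.9  # assumed hyper-parameters

for step in range(100):
    # LNet update: small-loss selection as a stand-in for "distinguish and refine
    # mislabeled source samples". Each LNet trains on the samples its peer finds
    # low-loss (a co-teaching-style heuristic, assumed here for illustration).
    for net, opt, peer in ((lnet1, opt1, lnet2), (lnet2, opt2, lnet1)):
        with torch.no_grad():
            peer_loss = F.cross_entropy(peer(xs), ys, reduction="none")
            keep = peer_loss.argsort()[: int(keep_ratio * len(ys))]
        loss = F.cross_entropy(net(xs[keep]), ys[keep])
        opt.zero_grad(); loss.backward(); opt.step()

    # Pseudo-labeling: label target samples only where both LNets agree with
    # high confidence, then train TNet on those pseudo-labeled samples.
    with torch.no_grad():
        p1, p2 = F.softmax(lnet1(xt), dim=1), F.softmax(lnet2(xt), dim=1)
        conf, pseudo = ((p1 + p2) / 2).max(dim=1)
        mask = (p1.argmax(1) == p2.argmax(1)) & (conf > conf_thresh)
    if mask.any():
        t_loss = F.cross_entropy(tnet(xt[mask]), pseudo[mask])
        opt_t.zero_grad(); t_loss.backward(); opt_t.step()
```

In a real setting the random tensors would be replaced by source and target data loaders, and the sample-selection and refinement steps would follow the procedure detailed in the paper.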