March 2025
Signal, Image and Video Processing
Unsupervised domain adaptation (UDA) is a technique for learning from a label-rich source domain and transferring the learned knowledge to an unlabeled target domain. Current research on feature-based UDA methods usually exploits pseudo labels to find new feature representations that minimize the distribution discrepancy between the two domains. However, inaccurate pseudo labels may prevent the precise intrinsic structures from being exploited, leading to poor performance. In addition, existing theoretical results reveal that the transferability of features might be compromised while the feature representations are being learned. To address these problems, we propose a hybrid structure with label consistency (HSLC) for UDA. First, in a dynamically updated low-dimensional space, HSLC adaptively captures the local connectivity of the target data with a local manifold self-learning strategy and exploits the discriminative information of the source domain by minimizing the intra-class distance. Then, pseudo labels for the target domain are obtained by class centroid matching. Furthermore, we enforce between-domain and within-domain label consistency by training multiple class-wise domain classifiers that reweight target samples, which improves the quality of the pseudo labels by accounting for both between-domain sample correlation and the geometric structure of the target domain. Finally, the refined pseudo labels are used to maximize the inter-class distance across the two domains, which not only reduces the impact of inaccurate pseudo labels on preserving the discriminative structure but also helps to explore various intrinsic properties. Extensive experiments on benchmark datasets demonstrate that our method is competitive with state-of-the-art UDA methods.
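As a rough illustration of the pseudo-labeling and reweighting steps described in the abstract, the sketch below assigns target pseudo labels by class centroid matching and then reweights them with class-wise domain classifiers. This is not the authors' implementation: the function names, the choice of logistic regression as the domain classifier, and the weighting formula are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the authors' code) of centroid-matching
# pseudo labels and class-wise domain-classifier reweighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

def centroid_match_pseudo_labels(Xs, ys, Xt):
    """Assign each target sample the label of the nearest source class centroid."""
    classes = np.unique(ys)
    centroids = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    # Euclidean distance from every target sample to every class centroid
    d = np.linalg.norm(Xt[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def classwise_domain_weights(Xs, ys, Xt, yt_pseudo):
    """Per class, train a source-vs-target classifier; target samples that it
    confuses with source samples keep high weight, while samples it confidently
    marks as target-only are down-weighted (illustrative weighting rule)."""
    w = np.ones(len(Xt))
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) == 0 or len(Xt_c) == 0:
            continue
        X = np.vstack([Xs_c, Xt_c])
        dom = np.concatenate([np.zeros(len(Xs_c)), np.ones(len(Xt_c))])  # 0=source, 1=target
        clf = LogisticRegression(max_iter=1000).fit(X, dom)
        p_target = clf.predict_proba(Xt_c)[:, 1]
        w[yt_pseudo == c] = 1.0 - p_target + 1e-6
    return w

# Toy usage with random features standing in for a learned low-dimensional space
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 16)); ys = rng.integers(0, 3, size=100)
Xt = rng.normal(size=(80, 16))
yt = centroid_match_pseudo_labels(Xs, ys, Xt)
weights = classwise_domain_weights(Xs, ys, Xt, yt)
```

In this reading, the weights play the role of the between-domain label-consistency signal: refined (high-weight) pseudo labels would then feed the inter-class distance maximization step mentioned in the abstract.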