February 2025
Expert Systems with Applications
January 2025
IEEE Transactions on Circuits and Systems for Video Technology
Federated domain adaptation (FDA) aims to transfer knowledge collaboratively from multiple source domains to related but different unlabelled target domains. The data of each domain are locally maintained, and various domain gaps exist among them, resulting in extreme challenges in simultaneously mitigating diverse distribution shifts and preserving discriminative knowledge without accessing the source data. Many existing works have failed to fully explore different source models to measure domain shifts and leverage semantic knowledge, resulting in skewed alignment and partial preservation of discriminative information. In this paper, we propose a novel approach named FDAC to address Federated Domain Adaptation by thoroughly investigating source models via dual Contrastive mechanisms. FDAC contrastively increases the data diversity to align features across domains in a fine-grained manner by manipulating the latent deep architecture and compensating for knowledge from each source domain; simultaneously, it contrastively utilizes the comprehensive semantic knowledge of different source domains to guide the adaptation process. Extensive experiments on several real datasets demonstrate that FDAC outperforms all comparative methods under most conditions. Furthermore, FDAC only needs approximately half of the communication rounds compared with the state-of-the-art methods, indicating that FDAC can significantly improve communication efficiency, which is another key factor in the federated setting. The source code is publicly available at https://github.com/ycarobot/FDAC.
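The dual contrastive mechanisms described above build on a standard contrastive objective. As a hedged illustration only (not the paper's actual loss; `info_nce_loss` and its inputs are hypothetical), a minimal NumPy sketch of the InfoNCE building block that pulls matched cross-domain feature pairs together while pushing mismatched pairs apart:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor row should be most similar
    to its own positive row among all positives in the batch."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching positive for anchor i sits on the diagonal
    return -np.mean(np.diag(log_probs))
```

Aligned pairs (e.g., two views of the same sample) yield a near-zero loss, while shuffled pairings yield a large one, which is what drives feature alignment in contrastive adaptation methods.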
December 2024
October 2024
January 2024
IEEE Transactions on Consumer Electronics
Federated heterogeneous graph learning helps users mine distributed heterogeneous data from consumer electronics while protecting data privacy. However, traditional federated graph learning methods often concentrate solely on aggregating model parameters, overlooking the global structure inherent in the intricate heterogeneous graphs of various consumer electronics. This limitation hinders performance. In this setting, where each client (a consumer electronics institution) holds only a fragmentary local subgraph, metapaths between these subgraphs are severed, and restoring the missing connections could significantly boost performance. To reconstruct the cross-client global structure of heterogeneous subgraphs while maintaining robust privacy protection, this paper introduces a novel approach: a Pseudo-Metapath-based Federated learning framework for Heterogeneous Graph learning, dubbed PM-FedHG. Using metapaths as a guide, we devise a relation-based technique to generate pseudo neighbor nodes and employ these pseudo metapaths for information exchange. Furthermore, our federated graph fusion and pseudo-metapath allocation algorithms facilitate the recovery of missing cross-client subgraph information, thereby enhancing performance through collaborative training. Crucially, because all uploaded metapaths are pseudo, the privacy of the original data remains securely protected. Comprehensive experiments on two datasets, across various client configurations, underscore the effectiveness of PM-FedHG. Additional ablation studies confirm the necessity and efficacy of each component within our framework.
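The pseudo-neighbor idea can be illustrated with a toy sketch. Assuming node features are plain vectors and each relation is an edge list (all names here, including `make_pseudo_neighbors`, are hypothetical and not PM-FedHG's actual procedure), one way to synthesize shareable pseudo neighbors is to aggregate each node's real neighbors per relation and perturb the result, so raw features never leave the client:

```python
import numpy as np

def make_pseudo_neighbors(node_feats, edges, noise_scale=0.1, seed=0):
    """For each relation, synthesize one pseudo neighbor per node by
    averaging its real neighbors' features and adding noise, so only
    synthetic vectors (never raw data) are shared across clients."""
    rng = np.random.default_rng(seed)
    pseudo = {}
    for rel, pairs in edges.items():
        agg = {}
        for u, v in pairs:                      # collect neighbor features
            agg.setdefault(u, []).append(node_feats[v])
        pseudo[rel] = {
            u: np.mean(feats, axis=0)
               + rng.normal(scale=noise_scale, size=len(feats[0]))
            for u, feats in agg.items()
        }
    return pseudo
```

The perturbed aggregate stands in for a severed cross-client metapath endpoint without exposing any single node's original features.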
January 2024
October 2023
Expert Systems with Applications
May 2023
IEEE Transactions on Neural Networks and Learning Systems
Domain adaptation (DA) aims to transfer knowledge from a source domain to a different but related target domain. The mainstream approach embeds adversarial learning into deep neural networks (DNNs) either to learn domain-invariant features that reduce the domain discrepancy or to generate data that fill in the domain gap. However, these adversarial DA (ADA) approaches mainly consider domain-level data distributions while ignoring the differences among components contained in different domains. Components that are not related to the target domain are therefore not filtered out, which can cause negative transfer. In addition, it is difficult to make full use of the components shared between the source and target domains to enhance DA. To address these limitations, we propose a general two-stage framework named multicomponent ADA (MCADA). This framework trains the target model by first learning a domain-level model and then fine-tuning that model at the component level. In particular, MCADA constructs a bipartite graph to find the most relevant source-domain component for each component in the target domain. Because the nonrelevant components are filtered out for each target component, fine-tuning the domain-level model enhances positive transfer. Extensive experiments on several real-world datasets demonstrate that MCADA has significant advantages over state-of-the-art methods.
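The component-matching step can be sketched minimally. Assuming each component is summarized by a feature centroid (a simplification; `match_components` is a hypothetical helper, not MCADA's actual algorithm), finding the most relevant source component for each target component reduces to a nearest-centroid lookup over the bipartite distance matrix:

```python
import numpy as np

def match_components(source_centroids, target_centroids):
    """For every target component, pick the most relevant source
    component: the one with the smallest centroid distance (a
    bipartite matching reduced to per-target nearest neighbor)."""
    # (T, S) pairwise Euclidean distances between centroids
    diff = target_centroids[:, None, :] - source_centroids[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    return dist.argmin(axis=1)  # best source component index per target
```

Fine-tuning then uses only each target component's matched source component, so unrelated source components cannot inject negative transfer.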
May 2023
Federated domain adaptation (FDA) aims to collaboratively transfer knowledge from source clients (domains) to a related but different target client without communicating the local data of any client. Moreover, the source clients have different data distributions, which makes knowledge transfer extremely challenging. Despite recent progress in FDA, we empirically find that existing methods cannot leverage models of heterogeneous domains and thus fail to achieve excellent performance. In this paper, we propose a model-based method named FDAC, which addresses Federated Domain Adaptation based on Contrastive learning and the Vision Transformer (ViT). In particular, contrastive learning can leverage unlabeled data to train excellent models, and the ViT architecture extracts adaptable features better than convolutional neural networks (CNNs). To the best of our knowledge, FDAC is the first attempt to learn transferable representations by manipulating the latent architecture of ViT under the federated setting. Furthermore, FDAC can increase target data diversity by compensating for each source model's insufficient knowledge of samples and features, based on domain augmentation and semantic matching. Extensive experiments on several real datasets demonstrate that FDAC outperforms all the comparative methods in most conditions. Moreover, FDAC can also improve communication efficiency, which is another key factor in the federated setting.
May 2022
Knowledge-Based Systems
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to a related but unlabeled target domain. Most existing approaches either adversarially reduce the domain shift or use pseudo-labels to provide category information during adaptation. However, an adversarial training method may sacrifice the discriminability of the target data, since no category information is available. Moreover, it is difficult for a pseudo-labeling method to produce high-confidence samples, since the classifier is typically trained on source data and a domain discrepancy exists; pseudo-labeling may therefore have a negative influence on learning target representations. A potential solution is to make the two approaches compensate for each other, simultaneously guaranteeing feature transferability and discriminability, the two key criteria of feature representations in domain adaptation. In this paper, we propose a novel method named ATPL, which mutually promotes Adversarial Training and Pseudo Labeling for unsupervised domain adaptation. ATPL produces high-confidence pseudo-labels by adversarial training and, in turn, uses the pseudo-labeled information to improve the adversarial training process, which guarantees feature transferability by generating adversarial data to fill in the domain gap. The pseudo-labels also boost feature discriminability. Extensive experiments on real datasets demonstrate that ATPL outperforms state-of-the-art unsupervised domain adaptation methods.
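The pseudo-labeling half of this loop hinges on selecting high-confidence target samples. A minimal sketch, assuming plain softmax confidence thresholding (`select_confident_pseudo_labels` is a hypothetical helper, not the ATPL implementation, which derives confidence via adversarial training):

```python
import numpy as np

def select_confident_pseudo_labels(logits, threshold=0.9):
    """Keep only target samples whose softmax confidence exceeds the
    threshold; the surviving pseudo-labels supervise the next round."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)                         # per-sample confidence
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]
```

Thresholding trades coverage for label quality: a high threshold keeps few but reliable pseudo-labels, which is exactly the failure mode ATPL's adversarial refinement is meant to relax.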
... The overall framework consists of three parts, as shown in Figure 3. First, the complex components (i.e., potential data distributions or clusters of features) [12] in the insulator defect detection dataset are mined to extract the potential scene semantics. Then, adversarial training generates diverse insulator datasets under different scene semantics by relying on multiple generators, while a unified discriminator ensures semantic consistency between foreground and background in the generated data. ...
May 2023
IEEE Transactions on Neural Networks and Learning Systems
... Unlike previous work, Xie et al. [39] guided model training with a consistency loss to ensure similar data representations from different perspectives. Another recent study [40] applied adversarial training to enhance the confidence of pseudo-labels and used these pseudo-labels to improve the adversarial training process. The two processes complement each other, strengthening transfer from the source domain to the target domain. ...
May 2022
Knowledge-Based Systems
... This kind of adaptation can be called domain adaptation. Domain adaptation methods for robotic perception enable robots to acquire knowledge from public-domain datasets and transfer it to the current work environment to complete specified tasks [26]. ...
November 2020