In recent years, thanks to the rapid development of computing power and datasets, deep neural networks have been widely applied in various fields. However, their tremendous success largely relies on large-scale annotated data. Domain adaptation, a key research direction in machine learning, studies how to transfer a model trained on a well-annotated source domain to an unlabeled target domain, thereby reducing the target-domain model's dependence on annotated data. Because of the distribution gap between the source and target domains, directly applying a model trained on the source domain to the target domain may lead to performance degradation. The central goal of domain adaptation research is to propose algorithms and techniques that reduce this distribution gap and achieve adaptive transfer, thereby improving the model's performance on the target domain. This is of great significance for alleviating the limited annotation resources faced by deep neural networks in practical applications, reducing annotation costs, and adapting to continuously changing real-world scenarios.

Although current domain adaptation research has achieved considerable success, it still faces four major challenges in real-world applications: (1) Significant distribution gaps under ideal annotations: distribution gaps between source- and target-domain images prevent a model that performs well on the source domain from achieving good performance when transferred to the target domain. (2) Model overfitting under limited annotations: when samples are difficult to obtain and annotations are therefore scarce, the model overfits to the few labeled samples and cannot sufficiently learn discriminative features, resulting in inadequate feature transferability. (3) Introduction of incorrect knowledge under noisy annotations: when data is coarsely annotated, the annotations inevitably contain noise. These incorrectly
annotated samples inject incorrect knowledge into the model trained on the source domain, further degrading its performance when transferred to the target domain. (4) Information imbalance under multi-source annotations: when multiple source domains are available, different source domains affect the target domain to different degrees; source domains dissimilar to the target domain may cause negative transfer and thereby harm the model's performance on the target domain.

To address these challenges, this paper investigates domain adaptation algorithms oriented towards annotation diversity, covering four research tasks: ideal-annotation domain adaptation, limited-annotation domain adaptation, noisy-annotation domain adaptation, and multi-source-annotation domain adaptation. The main contributions and innovations of the paper are summarized as follows:

1. Margin Adversarial Joint Alignment for Domain Adaptation. In the ideal-annotation domain adaptation task, the core challenge is to reduce the distribution gap between the source and target domains. Existing methods align only the marginal feature distributions of the two domains while ignoring the rich semantic information within categories, which confuses features of different categories across domains. In addition, they pay insufficient attention to the discriminability of category features, so the classifier fails to output high-confidence predictions and easily misclassifies. To address these issues, this paper proposes a Margin Adversarial Joint Alignment method, which leverages both feature and category information in the source and target domains to jointly align their distributions, better reducing the distribution gap. Moreover, the method performs category-margin adversarial training with virtual and real samples, continuously enlarging the category margin of real samples and thereby helping to adjust the class
decision boundaries to obtain more discriminative features.

2. Cross-Domain Knowledge Interaction for Generalized Few-Shot Domain Adaptation. In the limited-annotation domain adaptation task, because annotated data is scarce, features learned by direct supervised training on the annotated samples usually lack discriminability and transferability. To obtain pseudo-labels, existing clustering algorithms cannot be applied directly to domain adaptation, because samples of the same category follow different distributions in different domains; current clustering methods therefore easily produce low-quality, imbalanced clusters and severely erroneous pseudo-labels. To address this issue, we propose a Cross-Domain Knowledge Interaction framework. The method clusters unlabeled samples according to feature similarity and transferability, assigns pseudo-labels to the resulting clusters using the annotated samples and high-confidence samples, and performs self-supervised learning. It further conducts cross-domain semantic contrastive learning between the source and target domains, pulling samples of the same class towards their class center while pushing samples of other classes away from it, thus achieving discriminative and aligned representation learning.

3. Seek Common Ground While Reserve Differences Strategy for Noise Domain Adaptation. In the noisy-annotation domain adaptation task, existing methods generally assume that samples whose supervised loss falls below a threshold (high-confidence samples) are more likely to be clean, and they use two symmetric networks that select prediction-consistent or prediction-inconsistent high-confidence samples to supervise each other. However, selecting prediction-consistent high-confidence samples causes the two networks to converge prematurely, while the prediction-inconsistent high-confidence samples often include many noisy samples. To solve this problem, we propose a model-agnostic
Seek Common Ground While Reserve Differences method. Inspired by ensemble learning, the method enables mutual learning between the two networks: using their predictions, samples are divided into prediction-consistent and prediction-inconsistent subsets. For prediction-inconsistent samples, the differences between the two networks are maintained, while for prediction-consistent samples, self-supervised learning is performed to improve the networks' discriminability. In this way, the method effectively transfers knowledge from correctly labeled samples and achieves domain alignment.

4. Attention Mechanism for Multi-Source Domain Adaptation. In the multi-source-annotation domain adaptation task, existing methods mainly focus on reducing discrepancies between domains but often overlook the negative impact of knowledge transferred from certain source domains. To address this issue, we propose an Attention Mechanism Multi-Source Domain Adaptation method. The method measures the distribution similarity between each source domain and the target domain and assigns different weights to the source domains accordingly. Using these weights, it computes the weighted domain discrepancy between the multiple source domains and the target domain, as well as the weighted classification loss of each source domain. In this way, the method better exploits the knowledge of source domains similar to the target domain while avoiding the negative impact of dissimilar ones, enabling more accurate and efficient knowledge transfer in multi-source domain adaptation tasks.
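The similarity-based weighting idea behind the multi-source method can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the function names (`mmd`, `source_weights`, `weighted_objective`) are hypothetical, and a linear-kernel maximum mean discrepancy stands in for whatever discrepancy measure the method actually uses.

```python
import numpy as np

def mmd(x, y):
    """Squared MMD with a linear kernel: ||mean(x) - mean(y)||^2.
    A simple stand-in for a domain-discrepancy measure."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def source_weights(source_feats, target_feats, temperature=1.0):
    """Give higher weight to source domains whose feature distribution
    is closer to the target domain (i.e., smaller discrepancy)."""
    gaps = np.array([mmd(s, target_feats) for s in source_feats])
    return softmax(-gaps / temperature)

def weighted_objective(source_feats, target_feats, cls_losses):
    """Combine the weighted domain discrepancy with the weighted
    per-source classification losses, as described in contribution 4."""
    w = source_weights(source_feats, target_feats)
    disc = sum(wi * mmd(s, target_feats) for wi, s in zip(w, source_feats))
    cls = sum(wi * li for wi, li in zip(w, cls_losses))
    return disc + cls, w
```

With two synthetic source domains, one close to the target and one far from it, the closer source receives the larger weight, so its classification loss and discrepancy dominate the objective while the dissimilar source is suppressed.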