Abstract: Domain generalization aims to train a model that generalizes to unknown domains after training on several source domains. When the training and testing domains follow similar distributions, a well-trained model achieves comparable performance on both. When applied to unseen domains, however, its performance drops significantly because of domain shift. To solve domain generalization problems, it is important to extract rich and general features from data. To improve generalization ability, many contrastive-learning-based feature alignment approaches have recently been proposed. Instance-level contrastive learning approaches force the model to learn domain-invariant features by pushing positive pairs closer in the representation space, but they ignore the class-level feature distribution in that space. In this study, we propose a simple method, Maximum Intra-class Average Diameter Minimization (MIADM), to regularize the class-level feature distribution. We evaluate our method on three benchmark datasets: PACS, VLCS, and OfficeHome. Extensive experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art methods.
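As a rough illustration of the idea (not the paper's implementation), a minimal PyTorch sketch of such a regularizer is given below. The function name `miadm_loss`, the use of Euclidean distance, and the exact averaging are assumptions inferred from the method's name; the paper's formulation may differ.

```python
import torch

def miadm_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a Maximum Intra-class Average Diameter
    (MIADM) regularizer: for each class in the batch, compute the average
    pairwise distance among its feature vectors (its "average diameter"),
    then penalize the largest such diameter across classes.
    Assumed loss form; not taken from the paper."""
    diameters = []
    for c in labels.unique():
        feats = features[labels == c]       # features belonging to one class
        n = feats.size(0)
        if n < 2:
            continue                        # diameter undefined for a single sample
        dists = torch.cdist(feats, feats)   # pairwise Euclidean distances
        # Average over the off-diagonal entries, i.e. all distinct pairs.
        diameters.append(dists.sum() / (n * (n - 1)))
    if not diameters:
        return features.new_zeros(())
    # Minimizing the maximum average diameter tightens the loosest class cluster.
    return torch.stack(diameters).max()
```

In training, such a term would plausibly be added to the task loss as `loss = task_loss + lam * miadm_loss(features, labels)`, where the weight `lam` is a hypothetical hyperparameter.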