Towards Unified and Effective Domain Generalization

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Domain Generalization; Foundation Models; Test-Time Adaptation
Abstract: We propose \textbf{UniDG}, a novel and \textbf{Uni}fied framework for \textbf{D}omain \textbf{G}eneralization that significantly enhances the out-of-distribution performance of foundation models regardless of their architectures. The core idea of UniDG is to fine-tune models at inference time, which saves the cost of iterative training. Specifically, we encourage models to learn the distribution of the test data in an unsupervised manner and impose a penalty on the update step of the model parameters. The penalty term effectively mitigates catastrophic forgetting, as we aim to maximally preserve the valuable knowledge in the original model. Empirically, across 12 visual backbones, including CNN-, MLP-, and Transformer-based models ranging from 1.89M to 303M parameters, UniDG shows an average accuracy improvement of 5.4\% on DomainBed. We believe these results demonstrate the superiority and versatility of UniDG.
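To make the described mechanism concrete, below is a minimal sketch of inference-time fine-tuning with an update-step penalty, in the spirit of the abstract. It is not the authors' exact method: the unsupervised objective is assumed to be prediction-entropy minimization, the penalty is assumed to be an L2 anchor to the frozen source weights, and the function name `test_time_adapt` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F


@torch.enable_grad()
def test_time_adapt(model, test_loader, lr=1e-4, penalty_weight=1.0, device="cuda"):
    """Sketch: for each unlabeled test batch, minimize prediction entropy
    (learning the test distribution without labels) while penalizing the
    parameter update step (L2 distance to the frozen source weights) so the
    model retains the knowledge of the original foundation model."""
    # Snapshot the source (pre-adaptation) parameters once; hypothetical choice.
    source_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    predictions = []
    model.train()
    for x in test_loader:
        x = x.to(device)
        logits = model(x)
        probs = F.softmax(logits, dim=1)
        # Unsupervised objective: entropy of predictions on the test batch.
        entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()
        # Penalty on the update step: stay close to the source parameters,
        # which limits drift and mitigates catastrophic forgetting.
        drift = sum(((p - source_params[n]) ** 2).sum()
                    for n, p in model.named_parameters())
        loss = entropy + penalty_weight * drift
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        predictions.append(logits.detach().argmax(dim=1).cpu())
    return torch.cat(predictions)
```

In this sketch the penalty weight trades off adaptation to the test distribution against preservation of the source model; which parameters are adapted (all, or only normalization layers, for example) is left open here.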
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1883