SBM: Smoothness-Based Minimization for Domain Generalization

Published: 2024, Last Modified: 28 Sept 2024. ICASSP 2024. License: CC BY-SA 4.0.
Abstract: In typical domain generalization (DG), a trained model is asked to perform well on an unknown target domain with different data statistics. Adversarial learning has proven to be one of the most effective methods for improving domain generalization. Existing approaches, however, rely primarily on adversarial learning alone, which generalizes only within a limited range of domains. We argue that smoothness-based minimization (SBM) is a more promising direction for adversarial domain generalization. Our findings indicate that a smoothness-based minimization of the task loss stabilizes adversarial training, resulting in better domain generalization performance. The method achieves strong domain generalization performance on three publicly available benchmarks: PACS, Office-Home, and DomainNet.
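
The abstract does not spell out how the smoothness-based minimization of the task loss is realized. The sketch below is one plausible reading, assuming a sharpness-aware-style perturb-then-descend update on the task loss; the function name `sbm_step`, the perturbation radius `rho`, and the PyTorch framing are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a smoothness-based minimization (SBM) step in the
# spirit of sharpness-aware minimization. Names and hyperparameters are
# illustrative; the paper's exact formulation is not given in the abstract.
import torch


def sbm_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One smoothness-based update: ascend to a nearby worst-case weight
    perturbation, then descend using the gradient taken at that point."""
    # 1) Gradient of the task loss at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Perturb weights along the normalized gradient direction (radius rho).
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # 3) Gradient of the task loss at the perturbed weights.
    loss_perturbed = loss_fn(model(x), y)
    loss_perturbed.backward()

    # 4) Restore the original weights and step with the smoothness-aware
    #    gradient, which favors flat minima of the task loss.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In a DG training loop, such a step would replace the plain task-loss update while any adversarial domain-alignment objective is optimized as usual; the intuition stated in the abstract is that the flatter task-loss landscape stabilizes the adversarial component.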