Diversity Sampling Regularization for Multi-Domain Generalization

TMLR Paper 6478 Authors

12 Nov 2025 (modified: 14 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Domain Generalization (DG) seeks to create models that can successfully generalize to new, unseen target domains without the need for target-domain data during training. Traditional approaches often rely on data augmentation or feature-mixing techniques, such as MixUp; however, these methods may fall short in capturing the essential diversity within the feature space, resulting in limited robustness against domain shifts. In this research, we revisit the importance of diversity in DG tasks and propose a simple yet effective method to improve DG performance through diversity-sampling regularization. Specifically, we calculate entropy values for input data to assess their prediction uncertainty, and use these values to guide sampling through a Determinantal Point Process (DPP), which prioritizes selecting data subsets with high diversity. By incorporating DPP-based diversity sampling as a regularization strategy, our framework enhances the standard Empirical Risk Minimization (ERM) objective, promoting the learning of domain-agnostic features without relying on explicit data augmentation. We empirically validate the effectiveness of our method on standard DG benchmarks, including PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet, and show through extensive experiments that it consistently improves generalization to unseen domains and outperforms widely used baselines and state-of-the-art methods without relying on any task-specific heuristics.
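
The paper itself is not reproduced on this page, so the following is only a minimal sketch of the pipeline the abstract describes: per-example prediction entropy as an uncertainty/quality score, an entropy-weighted DPP (L-ensemble) kernel over the batch, greedy MAP subset selection, and the selected diverse subset used as an extra loss term on top of ERM. All names (prediction_entropy, greedy_dpp_select, lambda_reg) and design choices (cosine-similarity kernel, greedy determinant maximization, reusing logits as features) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """Shannon entropy of softmax predictions: a per-example uncertainty score."""
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def build_dpp_kernel(features, quality):
    """L-ensemble kernel L = diag(sqrt(q)) S diag(sqrt(q)).

    S is the (PSD) cosine-similarity Gram matrix of the features; q is the
    per-item quality score (here, entropy). Both choices are assumptions.
    """
    feats = F.normalize(features, dim=1)
    S = feats @ feats.t()
    q = quality.sqrt()
    return q[:, None] * S * q[None, :]

def greedy_dpp_select(L, k):
    """Greedy MAP approximation to DPP sampling: repeatedly add the item that
    maximizes the log-determinant of the selected principal submatrix."""
    n = L.size(0)
    selected, remaining = [], list(range(n))
    for _ in range(min(k, n)):
        best, best_gain = None, -float("inf")
        for i in remaining:
            idx = selected + [i]
            # Small jitter keeps the submatrix non-singular.
            sub = L[idx][:, idx] + 1e-6 * torch.eye(len(idx), device=L.device)
            gain = torch.logdet(sub).item()
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        remaining.remove(best)
    return selected

def training_step(model, x, y, k=16, lambda_reg=0.5):
    """ERM loss plus a regularization term computed on a diverse subset."""
    logits = model(x)
    erm_loss = F.cross_entropy(logits, y)
    with torch.no_grad():
        ent = prediction_entropy(logits)
        # Logits stand in for features here only to keep the sketch
        # self-contained; penultimate-layer features would be more typical.
        kernel = build_dpp_kernel(logits.detach(), ent + 1e-3)
        idx = greedy_dpp_select(kernel, k)
    reg_loss = F.cross_entropy(logits[idx], y[idx])
    return erm_loss + lambda_reg * reg_loss
```

The subset selection runs under torch.no_grad() so that only the re-weighted loss on the selected examples, not the selection itself, contributes gradients; lambda_reg balancing the two terms is a hypothetical hyperparameter.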
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Qi_CHEN6
Submission Number: 6478