Keywords: Domain Generalization, Multi-Domain Long-Tailed Learning, Prior Shift, Feature Shift, Meta-Learning
TL;DR: Robust DG under concurrent prior and feature shifts. Building on a theoretical analysis, we introduce RC-Align, a meta-learning framework that leverages a DA loss, achieving state-of-the-art results in both standard DG and MDLT settings.
Abstract: Domain generalization (DG) aims to learn predictive models that can generalize to unseen domains.
Most existing DG approaches focus on learning domain-invariant representations under the assumption of conditional distribution shift (i.e., they primarily address changes in $P(X|Y)$ while assuming the label marginal $P(Y)$ remains stable).
However, real-world data seldom satisfy this assumption.
Multiple domains often differ in more complex ways, where both the label distribution $P(Y)$ and the conditional distribution $P(X|Y)$ vary simultaneously.
In this work, we propose a new framework for robust domain generalization under divergent marginal and conditional distributions.
We introduce a novel risk bound for unseen domains by explicitly decomposing the joint distribution into marginal and conditional components and characterizing risk gaps arising from both sources of divergence.
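For intuition, one schematic form such a bound can take, using the decomposition $P(X,Y) = P(Y)\,P(X|Y)$, is shown below; the notation here ($R_S$, $R_T$, the divergence $d$, and the constants $C_1$, $C_2$) is illustrative and not the paper's exact statement:

$$R_T(h) \;\le\; R_S(h) \;+\; C_1\, d\big(P_T(Y),\, P_S(Y)\big) \;+\; C_2\, \mathbb{E}_{Y \sim P_S}\Big[ d\big(P_T(X|Y),\, P_S(X|Y)\big) \Big],$$

where the second term captures the prior (label) shift and the third the conditional (feature) shift between a seen domain $S$ and an unseen domain $T$.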
To operationalize this bound, we design a meta-learning procedure that episodically minimizes the proposed risk bound on a subset of the seen domains while validating it on the held-out remainder, encouraging strong generalization to unseen domains.
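A minimal, first-order sketch of such an episodic loop follows. Everything here is an illustrative assumption rather than the authors' released method: the toy `Net`, the `bound_surrogate` (empirical risk plus a mean-feature alignment penalty standing in for the bound's divergence terms), and the joint train+validation objective.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Toy featurizer + classifier; returns logits and features."""
    def __init__(self, in_dim=32, n_cls=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.head = nn.Linear(64, n_cls)
    def forward(self, x):
        f = self.body(x)
        return self.head(f), f

def bound_surrogate(model, batch, ref_batch, lam=1.0):
    # Empirical risk plus a mean-feature alignment penalty that stands in
    # for the bound's marginal/conditional divergence terms (an assumption).
    x, y = batch
    xr, _ = ref_batch
    logits, f = model(x)
    _, fr = model(xr)
    return F.cross_entropy(logits, y) + lam * (f.mean(0) - fr.mean(0)).pow(2).sum()

# One toy batch per seen domain.
domains = [(torch.randn(16, 32), torch.randint(0, 4, (16,))) for _ in range(4)]
model = Net()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(100):
    random.shuffle(domains)  # episodic split of the seen domains
    meta_train, meta_test = domains[:-1], domains[-1]
    anchor = meta_train[0]
    # Minimize the bound surrogate across meta-train domains ...
    train_loss = sum(bound_surrogate(model, d, anchor)
                     for d in meta_train[1:]) / (len(meta_train) - 1)
    # ... and validate the same surrogate on the held-out domain.
    val_loss = bound_surrogate(model, meta_test, anchor)
    opt.zero_grad()
    (train_loss + val_loss).backward()
    opt.step()

A full bi-level variant would take an inner gradient step on the meta-train loss before evaluating the meta-test loss; the joint objective above is the simplest stand-in.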
Empirical evaluations demonstrate that our method achieves state-of-the-art performance not only on conventional DG benchmarks but also in challenging Multi-Domain Long-Tailed Recognition (MDLT) settings where both marginal and conditional shifts are pronounced.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 8175