Keywords: domain generalization, distribution shift, robustness
TL;DR: We introduce the theory of moment alignment for domain generalization, unifying Invariant Risk Minimization, gradient matching, and Hessian matching.
Abstract: Domain generalization (DG) seeks to develop models that generalize well to unseen target domains, addressing distribution shifts in real-world applications. One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization. However, existing methods are computationally inefficient, and the principles underlying these approaches are not well understood. In this paper, we develop a theory of moment alignment for DG. Grounded in transfer measures, a principled framework for quantifying generalizability between domains, we prove that aligning derivatives across domains improves transfer measures. Moment alignment provides a unifying understanding of Invariant Risk Minimization, gradient matching, and Hessian matching, three previously disconnected approaches. We further establish a duality between feature moments and the derivatives of the classifier head. Building on our theory, we introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed form. Our method overcomes the computational inefficiencies of existing gradient- and Hessian-based techniques by eliminating the need for repeated backpropagation or sampling-based Hessian estimation. We validate our theory and algorithm through quantitative and qualitative experiments.
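The abstract describes penalizing mismatches between domain-level gradients and Hessians. As an illustrative sketch only, and not the authors' CMA algorithm, the snippet below computes a generic gradient- and Hessian-matching penalty for a logistic-regression head, where both moments have closed-form expressions; the function names and the exact penalty (squared deviation from the cross-domain mean) are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_grad_hess(X, y, w):
    """Closed-form gradient and Hessian of the mean logistic loss on one domain."""
    p = sigmoid(X @ w)
    n = len(y)
    grad = X.T @ (p - y) / n                # d/dw of mean log-loss
    d = p * (1.0 - p)                       # per-sample curvature weights
    hess = (X.T * d) @ X / n                # X^T diag(d) X / n
    return grad, hess

def moment_alignment_penalty(domains, w):
    """Squared deviation of each domain's gradient and Hessian from the
    cross-domain mean; zero iff all domains share both moments at w."""
    grads, hesses = zip(*(domain_grad_hess(X, y, w) for X, y in domains))
    g_bar = np.mean(grads, axis=0)
    h_bar = np.mean(hesses, axis=0)
    g_pen = sum(np.sum((g - g_bar) ** 2) for g in grads)
    h_pen = sum(np.sum((h - h_bar) ** 2) for h in hesses)
    return g_pen + h_pen
```

In a DG training loop, such a penalty would typically be added to the averaged empirical risk with a trade-off coefficient; because both moments are closed-form here, no repeated backpropagation is needed to evaluate it.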
Supplementary Material: zip
Latex Source Code: gz
Code Link: https://github.com/chenyuen0103/CMA.git
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission403/Authors, auai.org/UAI/2025/Conference/Submission403/Reproducibility_Reviewers
Submission Number: 403