Keywords: invariant risk minimization, OOD generalization, sparse optimization
TL;DR: Existing sparse IRM algorithms rely on an expensive combinatorial search over feature subsets; we provide invariant-feature recovery guarantees for a computationally efficient method, along with non-asymptotic bounds for different IRM optimization penalty schemes.
Abstract: Invariant Risk Minimization (IRM) (Arjovsky et al., 2020) proposes an optimization scheme that uses causal features to improve generalization. However, most realizations of IRM lack an explicit feature selection strategy. Prior investigations (Rosenfeld et al., 2020; Zhang et al., 2023) reveal failure cases when searching for causal features, and in light of these concerns, recent work has demonstrated the promise of using sparsity (Zhou et al., 2022; Fan et al., 2024) in IRM. We make two specific contributions on that theme. First, for the original sparse IRM formulation, we present the first correct non-asymptotic analysis of the effectiveness of sparsity for selecting invariant features. We show that sparse IRM with an $L_0$ constraint can select invariant features while ignoring spurious and random ones, with a sample complexity that depends polynomially on the number of invariant features and only logarithmically on the ambient dimension. Second, we present the first invariant feature recovery guarantees for a computationally efficient implementation of sparse IRM based on iterative hard thresholding. Whereas prior methods require a combinatorial search over the space of all sparse models, we propose a modified loss function and show that optimizing it recovers the invariant features under standard assumptions. We present empirical results on standard benchmark datasets to demonstrate the effectiveness and efficiency of the proposed sparse IRM models.
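The abstract's efficient alternative to combinatorial subset search is iterative hard thresholding (IHT). As a point of reference, here is a minimal NumPy sketch of generic IHT for a least-squares objective under an $L_0$ constraint (not the paper's modified IRM loss, and all names here are illustrative): each step takes a gradient step and then projects onto the set of $k$-sparse vectors by keeping the $k$ largest-magnitude coordinates.

```python
import numpy as np

def hard_threshold(v, k):
    """Project v onto k-sparse vectors: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def iht(X, y, k, iters=300):
    """Iterative hard thresholding for min_w ||Xw - y||^2 / n  s.t. ||w||_0 <= k."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = hard_threshold(w - grad / L, k)  # gradient step + sparse projection
    return w

# Toy sparse-recovery problem: 3 true features among 50 ambient dimensions.
rng = np.random.default_rng(0)
n, d, k = 200, 50, 3
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = iht(X, y, k)
```

Each iteration costs only a matrix-vector product plus a sort, which is the source of the computational advantage over enumerating all $\binom{d}{k}$ supports.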
Supplementary Material: zip
Latex Source Code: zip
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission711/Authors, auai.org/UAI/2025/Conference/Submission711/Reproducibility_Reviewers
Submission Number: 711