Abstract: Domain generalization (DG) techniques classify data from unseen domains by leveraging data from multiple source domains. Most DG methods focus on improving predictive performance in the unseen domain. Recent studies have begun to additionally enhance fairness measures in the unseen domain. However, these studies assume that every domain, including the unseen one, has the same single sensitive attribute. In practice, each domain may be required to satisfy fairness on its own set of sensitive attributes. Given a set of sensitive attributes $\mathcal{S}$ with $n = \vert \mathcal{S} \vert$, current methods need to train $2^n$ models to ensure fairness on every subset of $\mathcal{S}$. We propose a single-model solution to this new problem setting. We learn two feature representations: one to generalize the model's predictive performance, and another to generalize its fairness. The first representation is made invariant across all domains to generalize predictive performance. The second is kept selectively invariant, i.e., invariant only across domains that share the same sensitive attributes. On multiple real-world datasets, our single model achieves superior predictive performance and fairness measures on unseen domains compared with the current alternative of $2^n$ models. Our code is available at https://github.com/ragjapk/SISA.
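To make the selective-invariance idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation; see the linked repository for that). It assumes linear encoders and a simple mean-discrepancy penalty: the predictive representation is penalized for discrepancies across all domain pairs, while the fairness representation is penalized only across pairs of domains whose sensitive-attribute sets match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy source domains with 8-dimensional features; domains 0 and 1
# share the sensitive-attribute set {"gender"}, while domain 2 uses {"race"}.
# These attribute names are illustrative placeholders.
domains = [rng.normal(size=(50, 8)) for _ in range(3)]
sens_sets = [frozenset({"gender"}), frozenset({"gender"}), frozenset({"race"})]

W_pred = rng.normal(size=(8, 4))  # encoder for the predictive representation
W_fair = rng.normal(size=(8, 4))  # encoder for the fairness representation

def mean_discrepancy(z_a, z_b):
    """Squared distance between the per-domain mean representations."""
    return float(np.sum((z_a.mean(axis=0) - z_b.mean(axis=0)) ** 2))

pred_penalty = 0.0  # enforced across ALL domain pairs
fair_penalty = 0.0  # enforced only across pairs with equal sensitive sets
pred_pairs, fair_pairs = 0, 0
for i in range(len(domains)):
    for j in range(i + 1, len(domains)):
        pred_penalty += mean_discrepancy(domains[i] @ W_pred, domains[j] @ W_pred)
        pred_pairs += 1
        if sens_sets[i] == sens_sets[j]:  # the "selective" part of the invariance
            fair_penalty += mean_discrepancy(domains[i] @ W_fair, domains[j] @ W_fair)
            fair_pairs += 1
```

In a full training loop these penalties would be added to the task loss and minimized jointly; here only one of the three domain pairs contributes to the fairness penalty, which is what allows a single model to cover domains with different sensitive attributes.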