Fair Domain Generalization with Arbitrary Sensitive Attributes

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: domain generalization, fairness, multiple sensitive attributes
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We consider the problem of fairness transfer in domain generalization with sensitive attributes whose sensitivity changes across domains.
Abstract: We consider the problem of fairness transfer in domain generalization. Traditional domain generalization methods are designed to generalize a model to unseen domains. Recent work has extended this capability to incorporate fairness as an additional requirement, but it applies only to a single sensitive attribute that remains fixed across all domains. A naive extension to the multi-attribute setting would train one model for each subset of the potential sensitive attributes, requiring $2^n$ models for $n$ attributes. We propose a novel approach that handles any combination of sensitive attributes in the target domain. We learn two representations: a domain-invariant representation to generalize the model's performance, and a selective domain-invariant representation to transfer the model's fairness to unseen domains. Since each domain can have a different set of sensitive attributes, we transfer fairness by enforcing similar representations only among those domains that share similar sensitive attributes. We demonstrate that our method reduces the number of models required for this task from $2^n$ to $1$. Moreover, it outperforms the state-of-the-art on unseen target domains across multiple experimental settings.
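To make the idea of a selective domain-invariant representation concrete, below is a minimal, hypothetical sketch of the kind of alignment objective the abstract describes: invariance is enforced only between source domains that share the same set of sensitive attributes. The function name, the use of first-moment (mean) matching, and the representation of attribute sets as Python sets are all illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def selective_alignment_loss(features, sensitive_sets):
    """Illustrative sketch: align representations only between domains
    that carry the same sensitive attributes.

    features: list of (batch, dim) tensors, one per source domain
    sensitive_sets: list of frozensets naming each domain's sensitive attributes
    """
    loss = features[0].new_zeros(())
    pairs = 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            # Enforce invariance only for "similar" domains, i.e. domains
            # with identical sensitive-attribute sets.
            if sensitive_sets[i] == sensitive_sets[j]:
                # Simple mean matching as a stand-in; the paper's actual
                # alignment criterion may differ (e.g. adversarial or MMD).
                loss = loss + F.mse_loss(features[i].mean(0),
                                         features[j].mean(0))
                pairs += 1
    return loss / max(pairs, 1)
```

In this sketch, a single model can be trained over all source domains while the performance-oriented (fully domain-invariant) representation is aligned across every domain pair, avoiding the $2^n$-model enumeration described above.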
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9395