Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Algorithmic fairness, Model uncertainty
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: In light of AI's growing ubiquity, concerns about its societal impact have prompted extensive efforts to mitigate different types of bias, often relying on the assumption of complete information regarding individuals' sensitive attributes. In this work, we tackle the problem of algorithmic fairness under partially annotated sensitive attributes. Previous approaches often rely on an attribute classifier as a proxy model to infer "hard" pseudo labels, which are then used to optimize the final model via fairness-aware regularization. In contrast, we propose a novel regularization approach that leverages the output probabilities of the attribute classifier as "soft" pseudo labels, derived directly from the definition of the fairness criteria. Additionally, we study the effect of uncertainty over the attribute classifier's parameters, which naturally arises when sensitive attribute annotations are limited. Adopting a Bayesian viewpoint, we propose to optimize our model with respect to the marginal model of the attribute classifier, while our second approach optimizes the fairness objective with respect to each model in the decision maker's belief. To validate our approach, we conduct extensive experiments on the Adult and CelebA datasets, covering tabular and image modalities, respectively. The results highlight the effectiveness of our method and the importance of incorporating uncertainty in improving both utility and fairness compared to a variety of baselines.
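To make the "soft" pseudo-label idea concrete, below is a minimal sketch of a fairness regularizer in which the attribute classifier's output probabilities replace hard group labels. It assumes demographic parity as the fairness criterion, a binary sensitive attribute, and a binary decision task; the function names and exact formulation are illustrative assumptions, not the paper's precise objective.

```python
import torch


def soft_demographic_parity_gap(y_logits: torch.Tensor,
                                group_probs: torch.Tensor) -> torch.Tensor:
    """Demographic-parity gap with hard group labels replaced by the
    attribute classifier's output probabilities ("soft" pseudo labels).

    y_logits    : (N,) logits of the downstream decision model
    group_probs : (N,) estimated P(A = 1 | x) from the attribute classifier
    """
    y_prob = torch.sigmoid(y_logits)  # P(Y_hat = 1 | x)
    # Soft group-conditional acceptance rates: expectations weighted by
    # the probability of belonging to each group.
    rate_g1 = (y_prob * group_probs).sum() / group_probs.sum()
    rate_g0 = (y_prob * (1.0 - group_probs)).sum() / (1.0 - group_probs).sum()
    return (rate_g1 - rate_g0).abs()


# Hypothetical usage: add the regularizer to the task loss, with
# lambda_fair trading off utility against fairness.
# loss = task_loss + lambda_fair * soft_demographic_parity_gap(y_logits, group_probs)
```

Under the Bayesian variants described in the abstract, `group_probs` would be obtained either from the marginal (posterior-averaged) attribute classifier or the gap would be averaged over sampled classifier parameters; both are assumptions about how the objective could be instantiated.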
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7328