Fairness Through Matching for better group fairness

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Fairness, Matched demographic parity
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: This work proposes a new algorithm for learning group-fair models that discriminate less against subsets of, and individuals within, the same group.
Abstract: Group unfairness, which refers to socially unacceptable bias favoring certain groups (e.g., white, male), is a frequently observed ethical concern in AI. Various algorithms have been developed to mitigate such group unfairness in trained models. However, a significant limitation of existing group-fairness algorithms is that the trained group-fair models can still discriminate against specific subsets of, or fail to be fair to individuals within, the same sensitive group. The primary goal of this research is to develop a method for finding a good group-fair model, in the sense that it discriminates less against subsets and treats individuals in the same sensitive group more fairly. For this purpose, we introduce a new measure of group fairness called Matched Demographic Parity (MDP). An interesting feature of MDP is that it associates a matching function (a function matching two individuals from two different sensitive groups) with each group-fair model. We then propose a learning algorithm that seeks a group-fair model whose corresponding matching function matches similar individuals well. Theoretical justifications are fully provided, and experiments are conducted to illustrate the superiority of the proposed algorithm.
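The abstract does not spell out how the matching-based fairness measure is computed, so the sketch below is only a hedged illustration of the general idea, not the paper's method: match each individual in one sensitive group to a similar individual in the other group (here, by a hypothetical nearest-neighbor rule on non-sensitive features) and measure how much the model's outputs differ across matched pairs.

```python
# Illustrative sketch only; the paper's actual MDP definition and learning
# algorithm may differ. The matching rule and score-gap measure below are
# assumptions made for exposition.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_parity_gap(model_scores, X, s):
    """Average |score difference| over matched pairs across two sensitive groups.

    model_scores : (n,) array of model outputs (e.g., predicted probabilities)
    X            : (n, d) array of non-sensitive features used for matching
    s            : (n,) binary array of sensitive-group membership (0 or 1)
    """
    idx0, idx1 = np.where(s == 0)[0], np.where(s == 1)[0]
    # Hypothetical matching function: nearest neighbor in feature space.
    nn = NearestNeighbors(n_neighbors=1).fit(X[idx1])
    _, match = nn.kneighbors(X[idx0])
    matched_idx1 = idx1[match[:, 0]]
    # Small values mean matched (similar) individuals from the two groups
    # receive similar predictions.
    return np.mean(np.abs(model_scores[idx0] - model_scores[matched_idx1]))
```

In such a reading, this quantity could be used as a regularizer during training so that the learned model treats matched similar individuals alike; the authors' algorithm is likely more refined than this toy version.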
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4999