Adversarial Fairness with Elastic Weight Consolidation

TMLR Paper 1558 Authors

07 Sept 2023 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: A central goal of algorithmic fairness is to develop approaches that do not discriminate against protected groups. We study methods to improve worst-group accuracy, particularly when the data are unevenly distributed across groups. We propose a method that enhances both accuracy and fairness for the worst group using regularization based on Elastic Weight Consolidation (EWC). We mitigate socially undesirable biases in binary classification tasks by applying an adversarial model, and we preserve the parameters critical for predicting the target attribute by regularizing the model with the Fisher information, i.e., EWC. Experiments on the UCI Adult (Census), CelebA, and Waterbirds datasets show that our method achieves a better trade-off between accuracy and fairness than previous studies: on both tabular and image datasets, it yields greater fairness improvements than prior methods while maintaining accuracy under widely used fairness criteria.
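The abstract combines two standard components: an adversarial debiasing objective and an EWC penalty that anchors parameters weighted by their Fisher information. The paper's actual implementation is not shown on this page; the sketch below is a minimal, assumed illustration of how such a penalty is typically computed in PyTorch, with all function and variable names being hypothetical.

```python
import torch
import torch.nn.functional as F

def estimate_fisher(model, loader):
    """Diagonal Fisher information, estimated as the average squared
    gradient of the task log-likelihood (an assumed, common estimator)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor_params):
    """EWC regularizer: sum_i F_i * (theta_i - theta_i^*)^2.
    Parameters with high Fisher information (important for the target
    task) are kept close to their anchor values theta^*."""
    loss = torch.zeros(())
    for n, p in model.named_parameters():
        if n in fisher:
            loss = loss + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return loss

# A hypothetical combined objective during adversarial training:
#   total = task_loss - alpha * adversary_loss + (lam / 2) * ewc_penalty(...)
# where the adversary predicts the protected attribute and the EWC term
# preserves target-task accuracy while fairness is being optimized.
```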
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Aditya_Menon1
Submission Number: 1558