Controllable Pareto Trade-off between Fairness and Accuracy

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Multi-objective optimization, Fairness-accuracy trade-off
Abstract: The fairness-accuracy trade-off is a fundamental challenge in machine learning. While simply combining the two objectives can result in mediocre or extreme solutions, multi-objective optimization (MOO) could potentially provide diverse trade-offs by visiting different regions of the Pareto front. However, MOO methods usually lack precise control over the trade-offs. They rely on the full gradient of each objective and the inner products between these gradients to determine the update direction, which scales poorly with data size and suffers from the curse of dimensionality when training neural networks with millions of parameters. Moreover, the trade-off is usually sensitive to naive stochastic gradients, due to the imbalance of groups within each batch and the existence of various trivial directions that improve fairness. To address these challenges, we propose "Controllable Pareto Trade-off (CPT)", which can effectively train models realizing different trade-offs defined by reference vectors. CPT begins with a correction stage that solely moves the model toward the reference vector, and then includes the discrepancy between the reference vector and the two objectives as a third objective for the rest of training. To overcome the issues caused by high-dimensional stochastic gradients, CPT (1) uses a moving average of stochastic gradients to determine the update direction; and (2) prunes the gradients by comparing different objectives' gradients only on the critical parameters. Experiments show that CPT can achieve a higher-quality set of diverse models on the Pareto front, realizing different and better trade-offs between fairness and accuracy than existing MOO approaches. It also exhibits better controllability and can precisely follow human-defined reference vectors.
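The two gradient-stabilization ideas in the abstract can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the authors' implementation: it assumes the "moving average" is an exponential moving average (the decay `beta` is hypothetical), and it assumes "critical parameters" are selected by combined gradient magnitude with a hypothetical budget `k`.

```python
import numpy as np

def ema_update(avg_grad, new_grad, beta=0.9):
    # Exponential moving average of stochastic gradients; smooths the
    # batch-to-batch noise before the update direction is computed.
    # (Assumption: the paper's "moving average" is an EMA; beta is hypothetical.)
    return beta * avg_grad + (1.0 - beta) * new_grad

def critical_mask(grads, k):
    # Select the k parameters with the largest combined gradient magnitude
    # across objectives. ("Critical parameters" is the paper's term; this
    # magnitude-based selection rule is an assumption.)
    score = np.sum(np.abs(grads), axis=0)
    idx = np.argsort(score)[-k:]
    mask = np.zeros(score.shape, dtype=bool)
    mask[idx] = True
    return mask

def pruned_inner_product(g1, g2, k):
    # Compare two objectives' gradients only on the critical parameters,
    # ignoring the many dimensions that contribute mostly noise.
    mask = critical_mask(np.stack([g1, g2]), k)
    return float(g1[mask] @ g2[mask])
```

For example, with `g1 = [1, 0, 2, 0]` and `g2 = [1, 5, -1, 0]` and a budget of `k = 2`, only the two coordinates with the largest combined magnitude enter the inner product, so conflict between objectives is measured on the parameters that matter most.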
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7913