Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks

ICLR 2026 Conference Submission 24989 Authors

20 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Fair Machine Learning, stochastic approximation, Augmented Lagrangian, Sequential Quadratic Programming, benchmarking
TL;DR: We provide a benchmark of real-world fairness-constrained learning problems for comparing stochastic approximation algorithms.
Abstract: The ability to train Deep Neural Networks (DNNs) under constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, yet there is still no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challenging benchmark of real-world, large-scale fairness-constrained learning tasks, built on top of US Census data via the Folktables package (Ding et al., 2021). We point out the theoretical challenges of such tasks and review the main families of stochastic approximation algorithms. Finally, we demonstrate the use of the benchmark by implementing and comparing three recently proposed, but as yet unimplemented, algorithms in terms of both optimization performance and fairness improvement. We will release the code of the benchmark as a Python package after peer review.
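
To make the setting concrete, here is a minimal sketch of the kind of task the benchmark targets: a small DNN trained on a Folktables prediction task under a fairness constraint, using a stochastic augmented-Lagrangian-style update. The data-loading calls (ACSDataSource, ACSIncome, df_to_numpy) follow the public Folktables API; everything else — the demographic-parity surrogate, the binary group split, the simplified multiplier/penalty form, and all hyperparameters — is our own illustrative choice, not the paper's benchmark or any of the three compared algorithms.

```python
import torch
import torch.nn as nn
from folktables import ACSDataSource, ACSIncome

# Load one state of the ACS Income task (download=True fetches the raw
# US Census data on first use).
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)
features, label, group = ACSIncome.df_to_numpy(acs_data)

X = torch.tensor(features, dtype=torch.float32)
y = torch.tensor(label, dtype=torch.float32)
g = torch.tensor(group)  # protected attribute (RAC1P) for ACSIncome

model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# Illustrative hyperparameters: multiplier, penalty weight, fairness slack.
lam, rho, eps = 0.0, 10.0, 0.02

for step in range(1000):
    idx = torch.randint(0, X.shape[0], (512,))  # stochastic minibatch
    xb, yb, gb = X[idx], y[idx], g[idx]
    logits = model(xb).squeeze(-1)
    probs = torch.sigmoid(logits)

    # Smooth demographic-parity surrogate: gap in mean predicted positive
    # rate between one group and the rest (an illustrative binary split).
    mask = gb == 1
    if mask.any() and (~mask).any():
        gap = probs[mask].mean() - probs[~mask].mean()
    else:
        gap = probs.mean() * 0.0  # degenerate batch: keep graph, zero gradient

    c = gap.abs() - eps  # inequality constraint c <= 0

    # Primal step on a simplified augmented-Lagrangian objective ...
    loss = bce(logits, yb) + lam * torch.relu(c) + 0.5 * rho * torch.relu(c) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

    # ... followed by projected dual ascent on the multiplier.
    lam = max(0.0, lam + rho * float(c.detach()))
```

The per-batch constraint estimate is biased for the population-level fairness gap, which is precisely the kind of theoretical difficulty the abstract alludes to; the sketch is meant only to fix ideas about the problem class, not to stand in for the benchmarked algorithms.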
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 24989