Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale with MinDiff Loss

Published: 05 May 2023, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Prior work has observed that the test error of state-of-the-art deep neural networks often continues to decrease with increasing over-parameterization, a phenomenon referred to as double descent. This allows deep learning engineers to instantiate large models without having to worry about over-fitting. Despite its benefits, however, prior work has shown that over-parameterization can exacerbate bias against minority subgroups. Several fairness-constrained DNN training methods have been proposed to address this concern. Here, we critically examine MinDiff, a fairness-constrained training procedure implemented within TensorFlow's Responsible AI Toolkit that aims to achieve Equality of Opportunity. We show that although MinDiff improves fairness for under-parameterized models, it is likely to be ineffective in the over-parameterized regime. This is because an overfit model with zero training loss is trivially group-wise fair on training data, creating an "illusion of fairness" that deactivates the MinDiff penalty (this applies to any disparity measure based on errors or accuracy, though not to demographic parity). We find that, within specified fairness constraints, under-parameterized MinDiff models can even have lower error than their over-parameterized counterparts (despite baseline over-parameterized models having lower error than their under-parameterized counterparts). We further show that MinDiff optimization is very sensitive to the choice of batch size in the under-parameterized regime. Thus, fair model training using MinDiff requires time-consuming hyper-parameter searches. Finally, we suggest using previously proposed regularization techniques, viz. L2, early stopping, and flooding, in conjunction with MinDiff to train fair over-parameterized models. In our results, over-parameterized models trained using MinDiff+regularization with standard batch sizes are fairer than their under-parameterized counterparts, suggesting that, at the very least, regularizers should be integrated into fair deep learning pipelines such as MinDiff.
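To make the mechanism concrete, below is a minimal sketch (not the paper's implementation and not the TensorFlow Model Remediation API) of how a MinDiff-style MMD penalty between group score distributions might be combined with flooding, so that the primary loss never collapses to zero and the fairness penalty stays active even for over-parameterized models. The function names (`mmd_penalty`, `train_step`) and hyper-parameters (`mindiff_weight`, `flood_level`, the label slice used for the penalty) are illustrative assumptions.

```python
import tensorflow as tf

def mmd_penalty(scores_a, scores_b, kernel_bandwidth=0.1):
    """Gaussian-kernel MMD^2 between two 1-D score distributions (illustrative)."""
    def pairwise_kernel(x, y):
        diff = tf.expand_dims(x, 1) - tf.expand_dims(y, 0)
        return tf.exp(-tf.square(diff) / (2.0 * kernel_bandwidth ** 2))
    k_aa = tf.reduce_mean(pairwise_kernel(scores_a, scores_a))
    k_bb = tf.reduce_mean(pairwise_kernel(scores_b, scores_b))
    k_ab = tf.reduce_mean(pairwise_kernel(scores_a, scores_b))
    return k_aa + k_bb - 2.0 * k_ab

def train_step(model, optimizer, x, y, group, mindiff_weight=1.5, flood_level=0.05):
    """One training step with a MinDiff-style penalty and a flooding floor on the primary loss."""
    bce = tf.keras.losses.BinaryCrossentropy()
    with tf.GradientTape() as tape:
        scores = tf.squeeze(model(x, training=True), axis=-1)
        primary = bce(y, scores)
        # Flooding (Ishida et al., 2020): keep the training loss near `flood_level`
        # so it never reaches zero and the MinDiff term is not switched off.
        flooded = tf.abs(primary - flood_level) + flood_level
        # MinDiff-style penalty: match the score distributions of the two groups
        # on the label slice relevant to the fairness metric (here, y == 0).
        mask = tf.equal(y, 0.0)
        s_a = tf.boolean_mask(scores, tf.logical_and(mask, tf.equal(group, 0)))
        s_b = tf.boolean_mask(scores, tf.logical_and(mask, tf.equal(group, 1)))
        penalty = tf.cond(
            tf.logical_and(tf.size(s_a) > 0, tf.size(s_b) > 0),
            lambda: mmd_penalty(s_a, s_b),
            lambda: tf.constant(0.0),
        )
        loss = flooded + mindiff_weight * penalty
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return primary, penalty
```

Without the flooding term, `primary` can reach zero for an interpolating model, at which point per-group training errors are identical and the gradient pressure from the penalty vanishes, which is the "illusion of fairness" the abstract describes.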
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/akshajkumarv/MinDiff
Assigned Action Editor: ~Kangwook_Lee1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 815