Dr-Fairness: Dynamic Data Ratio Adjustment for Fair Training on Real and Generated Data

Published: 04 Jun 2023, Last Modified: 04 Jun 2023. Accepted by TMLR.
Abstract: Fair visual recognition has become critical for preventing demographic disparity. A major cause of model unfairness is the imbalanced representation of different groups in the training data. Recently, several works have aimed to alleviate this issue using generated data. However, these approaches often use generated data simply to equalize the amount of data across groups, which is suboptimal for fairness because learning difficulty and generated-data quality differ across groups. To address this issue, we propose a novel adaptive sampling approach that leverages both real and generated data for fairness. We design a bilevel optimization that finds the optimal data sampling ratios among groups, and between real and generated data, while training a model. The ratios are dynamically adjusted considering both the model's accuracy and its fairness. To solve our non-convex bilevel optimization efficiently, we propose a simple approximation to the solution given by the implicit function theorem. Extensive experiments show that our framework achieves state-of-the-art fairness and accuracy on the CelebA and ImageNet People Subtree datasets. We also observe that our method adaptively relies less on generated data when its quality is poor. Our work shows the importance of using generated data together with real data to improve model fairness.
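The abstract's core mechanism, adjusting per-group and per-source (real vs. generated) sampling ratios during training, can be pictured with a minimal sketch. The code below is not the authors' implementation (see the linked repository for Dr-Fairness itself); it assumes hypothetical validation losses for each (group, source) cell and a hypothetical `update_sampling_ratios` helper, and it simply shifts sampling probability toward cells with higher validation loss before renormalizing.

```python
# Minimal, illustrative sketch only; the actual Dr-Fairness update solves a
# bilevel optimization and approximates the implicit-function-theorem solution.
import numpy as np

def update_sampling_ratios(ratios, val_losses, step_size=0.1):
    """ratios, val_losses: arrays of shape (num_groups, 2) for (real, generated)."""
    # Raise ratios where validation loss is high (under-served groups/sources).
    logits = np.log(ratios + 1e-12) + step_size * val_losses
    new_ratios = np.exp(logits - logits.max())
    return new_ratios / new_ratios.sum()  # renormalize to a probability simplex

# Toy usage: two demographic groups; columns are (real data, generated data).
ratios = np.full((2, 2), 0.25)                   # start from uniform sampling
val_losses = np.array([[0.9, 1.2], [0.4, 0.5]])  # group 0 is under-served
ratios = update_sampling_ratios(ratios, val_losses)
print(ratios)  # sampling mass shifts toward group 0, weighted per data source
```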
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/NVlabs/Dr-Fairness
Assigned Action Editor: ~Yingzhen_Li1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 885