On Fairness Measurement for Generative Models

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Fairness, Generative models, GAN
Abstract: Deep generative models have made significant progress in improving the diversity and quality of generated data. Recently, there has been increased interest in fair generative models. Fairness in generative models is important, as bias in the sensitive attributes of generated samples can have severe consequences in high-stakes applications (e.g., criminal justice, healthcare). In this work, we conduct, for the first time, an in-depth study of fairness measurement, a critical component for gauging research progress on fair generative models. Our work makes two contributions. As our first contribution, we reveal that considerable errors exist in the existing fairness measurement framework. We attribute this to the lack of consideration for errors in the sensitive attribute classifiers. Contrary to prior assumptions, even highly accurate attribute classifiers can result in large fairness measurement errors; e.g., a ResNet-18 for Gender with $\sim$97% accuracy can still lead to a 4.98% estimation error when measuring the fairness of a StyleGAN2 trained on CelebA-HQ. As our second (major) contribution, we address this error in the existing fairness measurement framework by proposing a CLassifier Error-Aware Measurement (CLEAM). CLEAM applies a statistical model that accounts for the error in the attribute classifiers, leading to significantly more accurate fairness measurement. Our experimental results on evaluating the fairness of state-of-the-art GANs (StyleGAN2 and StyleSwin) show that CLEAM significantly reduces fairness measurement errors, e.g., by 7.78% for StyleGAN2 (8.68%$\rightarrow$0.90%) and by 7.16% for StyleSwin (8.23%$\rightarrow$1.07%) when targeting the Gender attribute. Furthermore, CLEAM incurs minimal additional overhead compared to the existing baseline. Code and instructions to reproduce the results are included in the Supplementary.
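To make the abstract's core idea concrete, the sketch below illustrates the general principle of classifier-error-aware correction: when the attribute classifier's sensitivity and specificity are known, the naive classifier-based proportion estimate can be de-biased before computing a fairness gap. This is a minimal sketch of a classical Rogan-Gladen-style point correction, not the paper's actual CLEAM estimator (which the abstract describes as a statistical model over the classifier error); the function name and all numbers are hypothetical.

    def corrected_proportion(p_hat, alpha, beta):
        """De-bias a classifier-based proportion estimate.

        p_hat : fraction of generated samples the attribute classifier
                labels as class 1 (e.g., one Gender class).
        alpha : classifier accuracy on true class-1 samples (sensitivity).
        beta  : classifier accuracy on true class-0 samples (specificity).
        """
        # In expectation, the naive estimate satisfies:
        #   E[p_hat] = alpha * p + (1 - beta) * (1 - p)
        # Solving for the true class-1 proportion p gives:
        return (p_hat + beta - 1.0) / (alpha + beta - 1.0)

    # Hypothetical numbers for a ~97%-accurate attribute classifier.
    alpha, beta = 0.98, 0.96
    p_hat = 0.57  # naive estimate from classifier labels on generated samples
    p_star = corrected_proportion(p_hat, alpha, beta)

    # Fairness gap w.r.t. a uniform (50/50) target distribution.
    print(f"naive gap:     {abs(p_hat - 0.5):.4f}")   # 0.0700
    print(f"corrected gap: {abs(p_star - 0.5):.4f}")  # ~0.0638

Even this simple point correction shifts the measured fairness gap; per the abstract, CLEAM goes further by modeling the classifier error statistically rather than relying on a single point estimate.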
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Generative models