Fair Synthetic Data Does not Necessarily Lead to Fair Models

03 Oct 2022, 16:58 (modified: 03 Nov 2022, 14:35) · NeurIPS 2022 SyntheticData4ML · Readers: Everyone
Keywords: Fairness, GANs, demographic parity
TL;DR: We show GANs trained with fairness penalties do not necessarily lead to fair downstream classifiers.
Abstract: The Wasserstein GAN (WGAN) is a well-established model for generating high-quality synthetic data that approximates a given real dataset. We study TabFairGAN, a known tabular variant of WGAN in which a custom penalty term is added to the generator's loss, forcing it to produce fair data. Here we measure the fairness of synthetic data using demographic parity, i.e., the gap in the proportion of positive outcomes between different sensitive groups. We reproduce some results from the paper and highlight empirically that although the synthetic data achieves a low demographic parity gap, a classification model trained on said data and evaluated on real data may still output predictions with a high demographic parity gap -- and hence be unfair. In particular, we show empirically that this discrepancy holds across most of the fairness-accuracy tradeoff spectrum, except for the large-penalty case, where the generator mode-collapses to the most frequent target outcome, and the low-penalty case, where the data is not constrained to be fair.
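The demographic parity gap referred to in the abstract can be sketched as follows; this is a minimal illustration assuming binary predictions and a binary sensitive attribute (the function name and encoding are not from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two sensitive groups.

    y_pred: binary predictions (0/1); sensitive: binary group labels (0/1).
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Group 0 receives positives at rate 0.75, group 1 at rate 0.25.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, group))  # 0.5
```

A gap near 0 indicates the two groups receive positive predictions at similar rates; the paper's point is that a low gap on the synthetic data need not transfer to a classifier's predictions on real data.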