Keywords: Image Generation, Fairness, Probabilistic Modeling
Abstract: The production of high-fidelity images by generative models has been transformative for artificial intelligence. Yet while the generated images are of high quality, they tend to mirror biases present in the datasets on which the models are trained. Although there has been an influx of work tackling fair ML broadly, existing work on fair image generation typically relies on modifying the model architecture or fine-tuning an existing generative model, which requires costly retraining. In this paper, we use a family of tractable probabilistic models called probabilistic circuits (PCs), which can be attached to a pre-trained generative model to produce fair images without retraining. For a given pre-trained generative model, our method requires only a small fair reference dataset to train the PC, removing the need to collect a large (fair) dataset to retrain the generative model. Our experimental results show that the proposed method balances training cost against the fairness and quality of generated images.
Submission Number: 184