Equalized Generative Treatment: Matching $f$-divergences for Fairness in Generative Models

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Fairness, Generative Models
TL;DR: Extending fairness criteria for generative models to ensure equal treatment across sensitive groups
Abstract: Fairness is a crucial concern for generative models, which not only reflect but can also amplify societal and cultural biases. Existing fairness notions for generative models are largely adapted from classification and focus on balancing the probability of generating each sensitive group. We show, both theoretically and empirically, that such criteria are brittle: they can be satisfied even when different groups are modeled with widely varying quality. To address this gap, we introduce a new fairness definition for generative models, *equalized generative treatment* (EGT), which requires comparable generation quality across all sensitive groups, where quality is measured via a reference $f$-divergence. We further analyze the trade-offs induced by EGT, showing that fairness constraints necessarily tie global model quality to the group that is hardest to approximate. Finally, we benchmark several strategies that directly target this criterion, including min-max optimization and group-conditional training, and demonstrate through image generation experiments that EGT yields fairer outcomes without prohibitive losses in overall performance.
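The abstract's core idea, that generation quality should be comparable across groups as measured by a reference $f$-divergence, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes discrete distributions represented as histograms, uses KL divergence as the reference $f$-divergence, and the helper names (`kl_divergence`, `egt_gap`) are hypothetical. The "EGT gap" here is simply the spread between the best- and worst-modeled groups; a gap of zero would mean all groups are approximated equally well.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as histograms.

    eps guards against log(0) / division by zero; both inputs are
    renormalized so they sum to 1.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def egt_gap(group_data_hists, group_model_hists):
    """Spread of per-group divergences between data and model.

    Returns (max - min) over groups plus the per-group divergences.
    A gap near 0 indicates equalized generative treatment under the
    chosen reference divergence.
    """
    divs = [kl_divergence(p, q)
            for p, q in zip(group_data_hists, group_model_hists)]
    return max(divs) - min(divs), divs

# Two groups generated with equal probability, but only group A is
# modeled faithfully -- demographic balance alone would call this fair,
# while the EGT gap exposes the quality disparity.
data_hists  = [[0.5, 0.5], [0.9, 0.1]]
model_hists = [[0.5, 0.5], [0.5, 0.5]]
gap, divs = egt_gap(data_hists, model_hists)
```

In this toy example the model matches group A exactly (divergence near 0) while badly misfitting group B, so the gap is large even though both groups are "balanced" in generation frequency, which is exactly the brittleness of probability-balancing criteria that the abstract describes.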
Supplementary Material: zip
Primary Area: generative models
Submission Number: 24511