Keywords: Reinforcement Learning, Multi-Human Generation, Text-to-Image Generation
Abstract: State-of-the-art text-to-image models excel at realism but collapse on multi-human prompts—duplicating faces, merging identities, and miscounting individuals. We introduce DisCo (Reinforcement with DiverSity Constraints), the first RL-based framework to directly optimize identity diversity in multi-human generation. DisCo fine-tunes flow-matching models via Group-Relative Policy Optimization (GRPO) with a compositional reward that (i) penalizes intra-image facial similarity, (ii) discourages cross-sample identity repetition, (iii) enforces accurate person counts, and (iv) preserves visual fidelity through human preference scores. A single-stage curriculum stabilizes training as complexity scales, requiring no extra annotations. On the DiverseHumans Testset, DisCo achieves 98.6% Unique Face Accuracy and near-perfect Global Identity Spread—surpassing both open-source and proprietary methods (e.g., Gemini, GPT-Image) while maintaining competitive perceptual quality. Our results establish DisCo as a scalable, annotation-free solution that resolves the long-standing identity crisis in generative models and sets a new benchmark for compositional multi-human generation.
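To make the compositional reward described above concrete, here is a minimal sketch of how its four terms could be combined. All function names, weights, and signatures are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def compositional_reward(face_embeddings, group_embeddings, n_detected,
                         n_prompted, preference_score,
                         w=(1.0, 1.0, 1.0, 1.0)):
    """Sketch of a compositional reward with the four terms from the abstract.

    face_embeddings:  (k, d) L2-normalized face embeddings for one image
    group_embeddings: list of (k_i, d) arrays from other samples in the GRPO group
    n_detected / n_prompted: detected vs. requested person counts
    preference_score: scalar human-preference score for visual fidelity
    """
    # (i) Penalize intra-image facial similarity: mean pairwise cosine similarity.
    k = len(face_embeddings)
    if k > 1:
        sim = face_embeddings @ face_embeddings.T
        intra = (sim.sum() - np.trace(sim)) / (k * (k - 1))
    else:
        intra = 0.0

    # (ii) Discourage cross-sample identity repetition: highest similarity
    # between this image's faces and faces from other samples in the group.
    cross = 0.0
    for other in group_embeddings:
        if k > 0 and len(other) > 0:
            cross = max(cross, float((face_embeddings @ other.T).max()))

    # (iii) Enforce accurate person counts.
    count_reward = 1.0 if n_detected == n_prompted else 0.0

    # (iv) Preserve visual fidelity through a human preference score.
    return (-w[0] * intra - w[1] * cross
            + w[2] * count_reward + w[3] * preference_score)
```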
Supplementary Material: pdf
Primary Area: generative models
Submission Number: 2859