Keywords: Autoregressive Image Generation; RL; GRPO
Abstract: Although chain-of-thought (CoT) reasoning and reinforcement learning (RL) have driven breakthroughs in large language models (LLMs), their integration into generative vision models remains underexplored. We introduce ReasonGen-R1, a two-stage framework that first imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning (SFT) on a newly generated reasoning dataset of written rationales, and then refines its outputs using Group Relative Policy Optimization (GRPO).
To enable the model to reason through text before generating images, we automatically generate and release a corpus of model-crafted rationales paired with input prompts, enabling controlled planning of object layouts, styles, and scene compositions.
Our GRPO algorithm uses reward signals from a pretrained vision–language model to assess overall visual quality and optimizes the policy with these rewards at every update. We further design an adaptive entropy loss to prevent model collapse in this relatively complex task.
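To make the GRPO update concrete, the sketch below shows group-relative advantage normalization and a clipped surrogate objective with an entropy bonus whose coefficient adapts toward a target entropy. This is a minimal illustration, not the paper's exact formulation: the function names, the target-entropy schedule, and all hyperparameter values are assumptions introduced here for clarity.

```python
import torch


def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: for each prompt, normalize its sampled
    rewards by the group mean and std (rewards shape: [num_prompts, group_size])."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)


def grpo_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
              advantages: torch.Tensor, entropy: torch.Tensor,
              clip_eps: float = 0.2, ent_coef: float = 0.01) -> torch.Tensor:
    """Clipped policy-gradient surrogate plus an entropy bonus.
    logp_new / logp_old: per-sample log-probs of the generated tokens, shape [N].
    advantages: per-sample group-relative advantages, shape [N].
    entropy: mean policy entropy over generated tokens (scalar tensor)."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    # Entropy bonus discourages collapse to a near-deterministic policy.
    return policy_loss - ent_coef * entropy


def adaptive_entropy_coef(current_entropy: float, target_entropy: float,
                          coef: float, lr: float = 0.01,
                          min_coef: float = 0.0, max_coef: float = 0.1) -> float:
    """Hypothetical adaptive schedule (assumption, not the paper's rule):
    raise the entropy coefficient when entropy drops below a target,
    lower it when entropy exceeds the target, and clamp to a safe range."""
    coef = coef + lr * (target_entropy - current_entropy)
    return float(min(max(coef, min_coef), max_coef))
```

In this sketch the reward tensor would be filled by scoring each sampled image with the pretrained vision–language model; the adaptive coefficient stands in for the paper's adaptive entropy loss, whose precise form is not specified in the abstract.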
Evaluations on GenEval, DPG, and the T2I benchmark demonstrate that ReasonGen-R1 consistently outperforms strong baselines and prior state-of-the-art models.
Supplementary Material: zip
Primary Area: generative models
Submission Number: 13184