ReasonGen-R1: CoT for Autoregressive Image Generation Models Through SFT and RL

ICLR 2026 Conference Submission 13184 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Autoregressive Image Generation; RL; GRPO
Abstract: Although chain-of-thought (CoT) reasoning and reinforcement learning (RL) have driven breakthroughs in large language models (LLMs), their integration into generative vision models remains underexplored. We introduce ReasonGen-R1, a two-stage framework that first imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning (SFT) on a newly generated reasoning dataset of written rationales, and then refines its outputs using Group Relative Policy Optimization (GRPO). To enable the model to reason through text before generating images, we automatically generate and release a corpus of model-crafted rationales paired with input prompts, enabling controlled planning of object layouts, styles, and scene compositions. Our GRPO algorithm uses reward signals from a pretrained vision–language model to assess overall visual quality, optimizing the policy in each update. We further design an adaptive entropy loss to prevent model collapse in this relatively complex task. Evaluations on GenEval, DPG, and the T2I benchmark demonstrate that ReasonGen-R1 consistently outperforms strong baselines and prior state-of-the-art models.
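To make the abstract's GRPO objective concrete, the sketch below shows a group-relative advantage computed from VLM rewards plus an entropy regularizer that strengthens only when entropy falls below a floor. This is a minimal illustration, not the paper's exact formulation: the function name `grpo_loss`, the `target_entropy` floor, and the clamp-based adaptive schedule are all assumptions, since the submission text only names the components.

```python
import torch

def grpo_loss(logprobs, rewards, entropy,
              target_entropy=2.0, ent_coef=0.01):
    """Illustrative GRPO-style loss with an adaptive entropy term.

    logprobs: (G, T) log-probs of each sampled image-token sequence
    rewards:  (G,) scalar rewards, e.g. from a pretrained VLM judge
    entropy:  (G, T) per-token policy entropy
    (Hypothetical shapes and schedule; not the paper's exact form.)
    """
    # Group-relative advantage: normalize rewards within the group of G
    # samples drawn for the same prompt.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)

    # Policy-gradient term: raise log-probability of above-average samples.
    pg_loss = -(adv.unsqueeze(-1) * logprobs).mean()

    # Adaptive entropy bonus: regularize harder only when mean entropy
    # drops below the target, guarding against collapse. The coefficient
    # is detached so only the entropy term itself receives gradient.
    ent = entropy.mean()
    coef = ent_coef * torch.clamp(target_entropy - ent.detach(), min=0.0)
    return pg_loss - coef * ent
```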
Supplementary Material: zip
Primary Area: generative models
Submission Number: 13184