Keywords: Image editing, Autoregressive, MLLM, GRPO, RL, Reasoning
Abstract: While image generation techniques can now produce high-quality images that respect prompts spanning multiple sentences, text-guided image editing remains a challenge: even edit requests of only a few words often fail to be executed correctly. We explore three strategies to enhance performance on a wide range of image editing tasks: supervised fine-tuning (SFT), reinforcement learning (RL), and Chain-of-Thought (CoT) reasoning. To study all these components in one consistent framework, we adopt an autoregressive multimodal model that processes textual and visual tokens in a unified manner.
We find RL combined with a large multimodal LLM verifier to be the most effective of these strategies.
As a result, we release **EARL**: **E**diting with **A**utoregression and **RL**, a strong RL-based image editing model that performs competitively on a diverse range of edits compared to strong baselines, despite using much less training data. Thus, EARL pushes the frontier of autoregressive multimodal models on image editing. We release our code, training data, and trained models at [https://github.com/mair-lab/EARL](https://github.com/mair-lab/EARL).
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 14065