Keywords: Autoregressive Models, Inference-Time Scaling, Beam Search, Image Generation, Compositional Generation, Verifiers, Computational Efficiency
TL;DR: This work shows that a 2B autoregressive model with beam search generates better compositional images than a 12B diffusion model, suggesting that architecture, not just scale, drives efficient inference-time search.
Abstract: While inference-time scaling through search has revolutionized Large Language Models, translating these gains to image generation has proven difficult. Recent attempts to apply search strategies to continuous diffusion models show limited benefits, with simple random sampling often performing best. We demonstrate that the discrete, sequential nature of visual autoregressive models enables effective search for image generation. Beam search substantially improves text-to-image generation, enabling a 2B-parameter autoregressive model to outperform a 12B-parameter diffusion model across benchmarks. Systematic ablations show that this advantage comes from the discrete token space, which allows early pruning and computational reuse, and our verifier analysis highlights trade-offs between speed and reasoning capability. These findings suggest that model architecture, not just scale, is critical for inference-time optimization in visual generation.
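To make the mechanism concrete, below is a minimal sketch of verifier-guided beam search over discrete image tokens, the setting the abstract describes. The interfaces are hypothetical stand-ins, not the paper's actual API: `model.next_token_logprobs` is assumed to return a per-token log-probability map, and `verifier.score` is assumed to rate a partial token sequence against the prompt.

```python
import heapq

def beam_search(model, verifier, prompt, beam_width=4, num_tokens=256):
    """Verifier-guided beam search over discrete image tokens (sketch).

    Assumed, illustrative interfaces:
      model.next_token_logprobs(prompt, tokens) -> dict {token_id: logprob}
      verifier.score(prompt, tokens) -> float, higher is better
    """
    beams = [(0.0, [])]  # (cumulative log-prob, token sequence)
    for _ in range(num_tokens):
        candidates = []
        for logp, tokens in beams:
            # Expand each beam with its top-k next tokens. Because tokens
            # are discrete, candidates share prefixes, so the model's KV
            # cache can be reused across expansions (omitted for brevity).
            dist = model.next_token_logprobs(prompt, tokens)
            top = heapq.nlargest(beam_width, dist.items(), key=lambda kv: kv[1])
            for tok, tok_logp in top:
                candidates.append((logp + tok_logp, tokens + [tok]))
        # Early pruning: rank partial sequences with the verifier and keep
        # only beam_width of them, discarding weak candidates before they
        # consume any further decoding compute.
        beams = heapq.nlargest(
            beam_width, candidates,
            key=lambda c: verifier.score(prompt, c[1]),
        )
    return max(beams, key=lambda b: b[0])[1]  # highest-likelihood survivor
```

This illustrates why the discrete setting suits search: pruning happens per token step rather than only on fully rendered images, which is the early-pruning and computational-reuse advantage the ablations attribute to the token space.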
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 13846