Performance Plateaus in Inference-Time Scaling for Text-to-Image Diffusion Without External Models

Published: 10 Jun 2025 · Last Modified: 15 Jul 2025 · MOSS@ICML2025 · License: CC BY 4.0
Keywords: Text-to-Image Diffusion Models, Inference-Time Scaling, Initial Noise Optimization, VRAM-Limited GPUs
TL;DR: Applying Best-of-N inference-time scaling to training-free initial-noise optimization for text-to-image diffusion on VRAM-limited GPUs rapidly reaches a performance plateau.
Abstract: Recent work has shown that investing compute in the search for a good initial noise improves the performance of text-to-image diffusion models. However, previous studies require external models to evaluate the generated images, which is infeasible on GPUs with limited VRAM. We therefore apply Best-of-N inference-time scaling to algorithms that optimize the initial noise of a diffusion model without external models, across multiple datasets and backbones. We demonstrate that inference-time scaling for text-to-image diffusion models in this setting quickly reaches a performance plateau, and that a relatively small number of optimization steps suffices to reach the maximum performance achievable with each algorithm.
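The Best-of-N procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_fn` stands in for whichever training-free (external-model-free) noise-scoring criterion an algorithm uses, and `gaussianity_score` below is a hypothetical placeholder objective.

```python
import numpy as np

def best_of_n_initial_noise(n_candidates, shape, score_fn, seed=0):
    """Best-of-N search over initial noises: draw N Gaussian noise
    tensors, score each with a training-free criterion, keep the best."""
    rng = np.random.default_rng(seed)
    best_noise, best_score = None, -np.inf
    for _ in range(n_candidates):
        noise = rng.standard_normal(shape)
        s = score_fn(noise)
        if s > best_score:
            best_noise, best_score = noise, s
    return best_noise, best_score

# Hypothetical placeholder criterion: prefer noise whose empirical mean
# and standard deviation are closest to those of a standard Gaussian.
# The paper's actual algorithms would supply their own objective here.
def gaussianity_score(noise):
    return -(abs(noise.mean()) + abs(noise.std() - 1.0))

# Example: search over 8 candidate noises for a 4x64x64 latent.
noise, score = best_of_n_initial_noise(8, (4, 64, 64), gaussianity_score)
```

The selected `noise` would then be passed to the diffusion sampler as its initial latent; the paper's finding is that increasing `n_candidates` (or the number of optimization steps) yields quickly diminishing returns in this external-model-free setting.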
Code: zip
Submission Number: 54