Abstract: The performance of text-to-image diffusion models can be improved at test time by scaling computation to search for a generated image that maximizes a given reward function. While existing trajectory-level exploration methods improve the effectiveness of test-time scaling for standard diffusion models, they are largely incompatible with modern flow matching models, whose sampling is deterministic. This incompatibility imposes significant computational overhead on local trajectory search, making its trade-offs less favorable than those of global search. However, global search strategies such as trajectory pruning face two critical challenges: the sharp, low-diversity distributions characteristic of scaled flow models, which restrict the candidate search space, and the bias of reward models early in the denoising process. To overcome these limitations, we propose Repel, a token-level mechanism that encourages sample diversity, and NARF, a noise-aware reward fine-tuning strategy that yields more accurate reward rankings at early denoising stages. Together, these enable more effective allocation of test-time scaling resources. We name the overall pipeline \textbf{DOG-Trim}: \textbf{D}iversity-enhanced \textbf{O}rder-aligned \textbf{G}lobal flow Trimming. Experiments demonstrate that, at the same compute cost, our approach achieves roughly twice the improvement over the scaling-free baseline attained by the best existing method.