Reflective Flow Sampling Enhancement

ICLR 2026 Conference Submission 10843 Authors

18 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Diffusion Models, Inference Enhancement, Training-free Algorithm, Image Synthesis
Abstract: The growing demand for text-to-image generation has driven rapid advances in generative modeling. Recently, flow models trained with flow matching algorithms, such as FLUX, have achieved remarkable progress and emerged as strong alternatives to conventional diffusion models. At the same time, inference-time enhancement strategies have been shown to improve the generation quality and text–prompt alignment of text-to-image diffusion models. However, these techniques are mainly designed for diffusion models and usually fail to perform well on flow models. To bridge this gap, we propose Reflective Flow Sampling (RF-Sampling), a novel training-free inference enhancement framework explicitly designed for flow models, especially CFG-distilled variants (i.e., models distilled from CFG guidance) such as FLUX. RF-Sampling leverages a linear combination of textual representations and integrates it with flow inversion, allowing the model to explore noise regions that are more consistent with the input prompt. This provides a flexible and effective means of enhancing inference without relying on CFG-specific mechanisms. Extensive experiments across multiple benchmarks demonstrate that RF-Sampling consistently improves both generation quality and prompt alignment, whereas existing state-of-the-art inference enhancement methods such as Z-Sampling fail to transfer. Moreover, RF-Sampling is also the first inference enhancement method to exhibit a degree of test-time scaling on FLUX.
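The abstract describes a pipeline of three ingredients: a linear combination of textual representations, flow inversion back into the noise space, and a final re-sampling pass. The toy script below sketches that structure under explicit assumptions: the velocity field, the Euler integrator, the blending coefficient `alpha`, and the function names are all illustrative stand-ins, not the authors' actual RF-Sampling algorithm or the FLUX model.

```python
import numpy as np

def velocity(x, t, text_emb):
    # Toy stand-in for a flow model's learned velocity field v(x, t; c).
    # A real model (e.g. FLUX) is a neural network; a simple linear field
    # keeps this script self-contained and runnable.
    return text_emb - x

def euler_sample(x, text_emb, t_grid):
    # Integrate the flow ODE forward (noise -> image) with Euler steps.
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        x = x + (t1 - t0) * velocity(x, t0, text_emb)
    return x

def euler_invert(x, text_emb, t_grid):
    # Run the same ODE in reverse (image -> noise): flow inversion.
    rev = t_grid[::-1]
    for t0, t1 in zip(rev[:-1], rev[1:]):
        x = x + (t1 - t0) * velocity(x, t0, text_emb)
    return x

def reflective_sample(noise, emb_cond, emb_null, alpha, t_grid):
    # Hypothetical enhancement loop in the spirit of the abstract:
    # 1) blend textual representations linearly,
    # 2) denoise, then invert back to the noise space,
    # 3) denoise again from the refined noise.
    emb_mix = alpha * emb_cond + (1.0 - alpha) * emb_null  # linear combination
    x0 = euler_sample(noise, emb_mix, t_grid)              # first pass
    refined_noise = euler_invert(x0, emb_mix, t_grid)      # flow inversion
    return euler_sample(refined_noise, emb_cond, t_grid)   # final pass

rng = np.random.default_rng(0)
noise = rng.standard_normal(4)
emb_cond = np.ones(4)    # stand-in prompt embedding
emb_null = np.zeros(4)   # stand-in null/unconditional embedding
t_grid = np.linspace(0.0, 1.0, 11)
out = reflective_sample(noise, emb_cond, emb_null, alpha=1.2, t_grid=t_grid)
print(out.shape)  # (4,)
```

The training-free aspect shows up in the fact that only sampling-time quantities (embeddings, noise, ODE steps) are manipulated; no model weights change.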
Supplementary Material: zip
Primary Area: generative models
Submission Number: 10843