Free Lunch at Inference: Test-Time Refinement for Diffusion Models

01 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Diffusion Model, Test-Time, Training-free
Abstract: Diffusion probabilistic models (DPMs) have recently achieved state-of-the-art performance in generative tasks, surpassing traditional approaches such as GANs and VAEs in both sample quality and training stability. Despite their success, DPMs suffer from high computational cost and slow sampling, since they require sequential denoising across many timesteps. Existing acceleration methods primarily focus on reformulating the reverse process as an ODE/SDE and applying advanced numerical solvers. While effective, these approaches largely overlook the geometric properties inherently induced by the Gaussian formulation of DPMs. In this work, we investigate the geometric behavior of DPMs on the latent variable manifold, revealing a previously overlooked isotropic property derived from their Gaussian formulation. Building on this characteristic, we introduce a lightweight test-time refinement that can be seamlessly embedded into existing samplers. Our method reduces the discretization error of sequential sampling methods and accelerates the convergence of parallel sampling strategies, without requiring extra training or additional model evaluations. Extensive experiments across multiple datasets demonstrate that our approach consistently improves both generation quality and efficiency, while remaining fully compatible with existing methods. By uncovering and exploiting this isotropic property, this work provides a new perspective on the geometric foundations of DPMs and offers a complementary direction for advancing their efficiency. As a snapshot result, when integrated into UniPC, our method improves the FID score on LSUN Bedroom from 39.89 to 20.08 with 4 function evaluations.
Supplementary Material: pdf
Primary Area: generative models
Submission Number: 569