Keywords: one-step generative model from scratch, diffusion, flow matching
Abstract: We propose Terminal Velocity Matching (TVM), a generalization of flow matching that enables high-fidelity one- and few-step generative modeling. TVM models the transition between any two diffusion timesteps and regularizes its behavior at its terminal time rather than at the initial time. We prove that TVM provides an upper bound on the $2$-Wasserstein distance between data and model distributions when the model is Lipschitz continuous. However, since Diffusion Transformers lack this property, we introduce minimal architectural changes that achieve stable, single-stage training. To make TVM efficient in practice, we develop a fused attention kernel that supports backward passes on Jacobian-vector products and scales well with transformer architectures. On ImageNet 256×256, TVM achieves 3.30 FID with a single function evaluation, representing state-of-the-art performance for one-step diffusion models. TVM also establishes a new Pareto frontier for performance versus inference compute in the few-step regime.
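The terminal-time regularization described above requires differentiating the model with respect to its terminal time, which forward-mode autodiff (a Jacobian-vector product) computes in a single extra forward pass. The sketch below illustrates that computation with `torch.func.jvp`; it is a minimal, hypothetical example, and `TinyVelocityNet`, `terminal_time_jvp`, and the time parameterization are illustrative stand-ins, not the paper's actual architecture or objective.

```python
import torch
from torch import nn
from torch.func import jvp

class TinyVelocityNet(nn.Module):
    """Stand-in for a Diffusion Transformer: predicts a velocity field u(x, t, s)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 2, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x, t, s):
        # Condition on both the initial time t and the terminal time s.
        ts = torch.stack([t, s], dim=-1)
        return self.net(torch.cat([x, ts], dim=-1))

def terminal_time_jvp(model, x, t, s):
    """Compute u(x, t, s) and its derivative w.r.t. the terminal time s.

    jvp evaluates d/d(eps) model(x, t, s + eps) at eps = 0 in one
    forward-mode pass, without materializing a full Jacobian.
    """
    fn = lambda s_: model(x, t, s_)
    out, dout_ds = jvp(fn, (s,), (torch.ones_like(s),))
    return out, dout_ds

batch, dim = 8, 64
model = TinyVelocityNet(dim)
x_t = torch.randn(batch, dim)
t = torch.rand(batch)
s = t + (1 - t) * torch.rand(batch)  # terminal time s drawn from (t, 1]
u, du_ds = terminal_time_jvp(model, x_t, t, s)
print(u.shape, du_ds.shape)  # torch.Size([8, 64]) torch.Size([8, 64])
```

Since one JVP costs roughly one additional forward pass, the expensive part at scale is pushing forward-mode tangents through attention, which is what motivates a fused kernel supporting backward passes on JVPs.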
Primary Area: generative models
Submission Number: 15221