Discrete MeanFlow Training Curriculum
Keywords: Generative models, Diffusion Model, Consistency Model, Distillation, Training Curriculum, MeanFlow.
TL;DR: A training curriculum for fast convergence of MeanFlow models
Abstract: Flow-based image generative models exhibit stable training and produce high-quality samples when using multi-step sampling procedures.
One-step generative models can produce high-quality image samples but are often difficult to optimize, as they frequently exhibit unstable training dynamics.
MeanFlow models exhibit excellent few-step sampling performance and promising one-step sampling performance.
Notably, MeanFlow models that achieve this performance have required extremely large training budgets.
We significantly reduce the computation and data budget required to train MeanFlow models by identifying and exploiting a particular discretization of the MeanFlow objective that yields a consistency property, which we formulate into a ``Discrete MeanFlow'' (DMF) training curriculum.
Initialized from a pretrained flow model, the DMF curriculum reaches a one-step FID of 3.36 on CIFAR-10 in only 2000 epochs.
We anticipate that faster training curricula for MeanFlow models, specifically those fine-tuned from existing flow models, will drive efficient training methods for future one-step generative models.
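As a hedged illustration of the kind of consistency property the abstract alludes to (not spelled out in the abstract itself), the following sketch assumes the standard MeanFlow definition of the average velocity $u$ along the flow ODE trajectory; the intermediate time $s$ and the two-segment split are illustrative assumptions about the discretization, not the authors' stated formulation.

% Sketch only: assumes the standard MeanFlow average-velocity definition;
% z_s denotes the point on the same ODE trajectory at time s.
\begin{align}
  u(z_t, r, t) &= \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, d\tau
  && \text{(average velocity along the flow ODE)} \\
  (t - r)\, u(z_t, r, t) &= (s - r)\, u(z_s, r, s) + (t - s)\, u(z_t, s, t),
  \quad r \le s \le t
  && \text{(consistency across a split at an intermediate time } s\text{)}
\end{align}

Under this assumed definition, the second line follows from splitting the integration interval at $s$, which is the sense in which a discretization of the objective could yield a consistency relation between average velocities over adjacent sub-intervals.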
Submission Number: 108