Phase-aware Training Schedule Simplifies Learning in Flow-Based Generative Models

Published: 06 Mar 2025, Last Modified: 24 Apr 2025 · FPI-ICLR2025 Poster · CC BY 4.0
Keywords: diffusion models, phase transitions, flow-based generative model, high-dimensional Gaussian mixtures, denoising autoencoders, training schedules
TL;DR: We introduce a time-dilated training schedule for flow-based generative models that allows the learning of high-level features in high-dimensional settings by overcoming gradient vanishing and enabling phase-specific parameter learning.
Abstract: We analyze the training of a two-layer autoencoder used to parameterize a flow-based generative model for sampling from a high-dimensional Gaussian mixture. Previous work showed that, without an appropriate time schedule, the phase in which the relative probability of the modes is learned disappears as the dimension goes to infinity. We introduce a time dilation that solves this problem. This enables us to characterize the learned velocity field, finding a first phase where the probability of each mode is learned and a second phase where the variance of each mode is learned. We find that the autoencoder representing the velocity field learns to simplify by estimating only the parameters relevant to each phase. Turning to real data, we propose a method that, for a given feature, finds the intervals of time where training improves accuracy the most on that feature. Since practitioners typically sample training times uniformly, our method enables more efficient training. We provide preliminary experiments validating this approach.
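The abstract's final idea, replacing the usual uniform distribution over training times with one concentrated on the intervals most useful for a target feature, can be sketched as a simple biased time sampler. This is a minimal illustration, not the paper's method: the function name, the `weight` parameter, and the interval format are all assumptions made for the example.

```python
import numpy as np

def sample_training_times(n, intervals, weight=0.8, rng=None):
    """Sample flow/diffusion training times t in [0, 1], biased toward intervals.

    Standard practice draws t ~ Uniform(0, 1). Here, a fraction `weight` of
    the draws is instead placed uniformly inside the given time intervals
    (hypothetically, the intervals where training most improves accuracy on
    a chosen feature); the remainder stays uniform on [0, 1].
    """
    rng = np.random.default_rng(rng)
    t = rng.uniform(0.0, 1.0, size=n)          # baseline uniform draws
    biased = rng.uniform(0.0, 1.0, size=n) < weight
    idx = rng.integers(len(intervals), size=n)  # pick an interval per sample
    lo = np.array([a for a, _ in intervals])[idx]
    hi = np.array([b for _, b in intervals])[idx]
    inside = lo + (hi - lo) * rng.uniform(0.0, 1.0, size=n)
    return np.where(biased, inside, t)
```

With `weight=1.0` every training time lands inside the chosen intervals; with `weight=0.0` this reduces to the standard uniform schedule.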
Submission Number: 88