The Geometry of Time-Series Diffusion: Why Latent Space Diffusion Works for Generation and Imputation
Presentation Attendance: No, we cannot present in person
Keywords: Time-Series Diffusion, Latent Manifolds, Score Matching, Representation Learning
TL;DR: Latent-space diffusion improves time-series generation and imputation by reshaping sequences into smoother, more isotropic manifolds that yield better-conditioned score fields.
Abstract: Diffusion models for time series are often trained in observation space, where autocorrelation and heteroskedasticity induce anisotropic, curved data geometry that complicates score estimation. We argue that the effectiveness of latent diffusion stems from geometric alignment: temporal encoders reshape sequences into smoother, more Gaussian-like manifolds on which diffusion is better conditioned. We formalize this view by connecting encoder-induced isotropy to reduced score-field complexity. Empirically, we validate this hypothesis with a geometry diagnostic suite consisting of denoising stability under increasing noise, PCA structure, local isotropy, and finite-difference smoothness. Experiments on two large-scale electricity datasets, ECL and LD2011, show that latent-space diffusion with appropriate representation capacity consistently yields improved conditioning and smoother score behavior, providing a principled explanation of when and why latent diffusion is preferable for time-series modeling.
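To make the diagnostic suite concrete, the sketch below illustrates two of the listed diagnostics (PCA structure and an isotropy score) on batches of vectors. This is a minimal illustration of the general idea, not the authors' diagnostic code: the participation-ratio-based `isotropy_diagnostics` function and the synthetic anisotropic vs. isotropic batches are assumptions for demonstration.

```python
import numpy as np

def isotropy_diagnostics(z):
    """Geometry diagnostics for a batch of vectors z with shape (n, d):
    returns the descending PCA eigenvalue spectrum and an isotropy score
    in (0, 1], where 1 means a perfectly isotropic covariance.
    Illustrative sketch only, not the paper's actual diagnostic."""
    zc = z - z.mean(axis=0)                       # center the batch
    cov = zc.T @ zc / (len(z) - 1)                # sample covariance (d, d)
    eig = np.linalg.eigvalsh(cov)[::-1]           # eigenvalues, descending
    # Participation ratio: effective dimensionality of the spectrum,
    # normalized by the ambient dimension d to land in (0, 1].
    pr = eig.sum() ** 2 / (eig ** 2).sum()
    return eig, pr / z.shape[1]

rng = np.random.default_rng(0)
# Hypothetical "observation-space" batch with strongly anisotropic scales
# vs. a near-isotropic "latent-space" batch.
obs = rng.normal(size=(512, 8)) * np.array([5, 3, 1, 0.5, 0.2, 0.1, 0.05, 0.01])
lat = rng.normal(size=(512, 8))
_, iso_obs = isotropy_diagnostics(obs)
_, iso_lat = isotropy_diagnostics(lat)
```

Under the paper's hypothesis, a well-chosen encoder would move this isotropy score toward 1, flattening the eigenvalue spectrum that the anisotropic observation-space batch exhibits.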
Track: Research Track (max 4 pages)
Submission Number: 65