DiffPM: Diffusion-Based Generative Framework for Time Series Synthesis

ICLR 2026 Conference Submission 18532 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: time series generation, diffusion models, non-autoregressive, window-conditioned diffusion, trend–residual decomposition, parallel window synthesis, overlap-aware stitching, positional conditioning, low-data regime, multivariate time series, data augmentation, generative modeling, time series, generative AI, deep learning, machine learning, data synthesis, sequence modeling, neural networks, stochastic modeling
TL;DR: DiffPM factorizes time series into trend and residual components and stitches parallel window samples into full-length multivariate series with strong fidelity, temporal coherence, and fast inference.
Abstract: Generative models for time series often fail to reconcile local accuracy with global structure. Autoregressive models accumulate errors over long horizons, while standard diffusion approaches can degrade long-range dependencies, resulting in samples with phase drift and weakened correlations. We introduce DiffPM, a non-autoregressive diffusion framework that resolves this tension by factorizing the generation process. DiffPM explicitly separates time series into low-frequency trends and high-frequency residuals and trains two specialized, window-conditioned diffusion models. At inference, the models generate short, overlapping windows for each component in parallel. The individual segments are then reassembled into a full sequence via a position-aware stitching mechanism that enforces inter-window consistency. This modular, decompose-and-recombine architecture lets specialized models excel at local generation while guaranteeing global coherence in the final synthesis. Our extensive evaluations demonstrate that DiffPM outperforms existing methods while being considerably faster at inference. These gains are quantified by marked improvements in distributional-fidelity metrics such as Contextual FID and by enhanced temporal coherence on long-horizon benchmarks.
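
The decompose-and-recombine pipeline described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the centered moving-average trend extractor, the triangular cross-fade weights, and the names decompose and stitch are all assumptions introduced here for clarity, and the window-conditioned diffusion models that would actually produce the windows are omitted.

    import numpy as np

    # Split a (T, D) series into a low-frequency trend and a high-frequency
    # residual. A centered moving average stands in for the paper's trend
    # extractor -- an assumption made for illustration.
    def decompose(x, kernel=25):
        pad = kernel // 2
        xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
        trend = np.stack([xp[t:t + kernel].mean(axis=0) for t in range(len(x))])
        return trend, x - trend

    # Overlap-aware stitching: recombine n windows of shape (W, D), placed at
    # offsets 0, stride, 2*stride, ..., into one sequence by weighted
    # overlap-add. Triangular weights favor each window's center so that
    # overlapping regions cross-fade smoothly; DiffPM's exact position-aware
    # weighting may differ.
    def stitch(windows, stride):
        n, w, d = windows.shape
        total = (n - 1) * stride + w
        weights = np.minimum(np.arange(1, w + 1), np.arange(w, 0, -1)).astype(float)
        out = np.zeros((total, d))
        norm = np.zeros((total, 1))
        for i in range(n):
            s = i * stride
            out[s:s + w] += windows[i] * weights[:, None]
            norm[s:s + w] += weights[:, None]
        return out / norm

    # Hypothetical end-to-end use: two diffusion samplers (not shown) would
    # emit per-window trend and residual samples; stitching each stream and
    # summing yields the full-length series.
    # full = stitch(trend_windows, stride) + stitch(residual_windows, stride)

Because every output position is a normalized weighted sum of the windows that cover it, inter-window seams are averaged out rather than concatenated, which is one simple way to realize the inter-window consistency the abstract attributes to position-aware stitching.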
Supplementary Material: zip
Primary Area: generative models
Submission Number: 18532