A Diffusion Model for Regular Time Series Generation from Irregular Data with Completion and Masking

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Time Series Generation, Irregularly-sampled Time Series, Diffusion Models
TL;DR: A novel framework for generating regular time series from irregularly-sampled data using a Time Series Transformer and vision diffusion model with masking, achieving state-of-the-art performance and efficiency.
Abstract: Generating realistic time series data is critical for applications in healthcare, finance, and climate science. However, irregular sampling and missing values present significant challenges. While prior methods address these irregularities, they often yield suboptimal results and incur high computational costs. Recent advances in regular time series generation, such as the diffusion-based ImagenTime model, demonstrate strong, fast, and scalable generative capabilities by transforming time series into image representations, making them a promising solution. However, extending ImagenTime to irregular sequences using simple masking introduces "unnatural" neighborhoods, where missing values replaced by zeros disrupt the learning process. To overcome this, we propose a novel two-step framework: first, a Time Series Transformer completes irregular sequences, creating natural neighborhoods; second, a vision-based diffusion model with masking minimizes dependence on the completed values. This hybrid approach leverages the strengths of both completion and masking, enabling robust and efficient generation of realistic time series. Our method achieves state-of-the-art performance across benchmarks, delivering a 70% relative improvement in discriminative score and an 85% reduction in computational cost.
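The abstract's two-step pipeline (complete the irregular sequence, then hand a masked image representation to a diffusion model) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the linear interpolation stands in for the paper's Time Series Transformer, and the row-major reshape stands in for ImagenTime's series-to-image transform; the actual model components differ.

```python
import numpy as np

def complete_series(x):
    """Step 1 stand-in: fill missing values (NaNs) so neighborhoods are
    'natural' rather than zero-filled. The paper uses a Time Series
    Transformer; plain linear interpolation is used here for illustration."""
    x = x.copy()
    idx = np.arange(len(x))
    missing = np.isnan(x)
    x[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    return x

def to_image(x, width):
    """Step 2 stand-in: fold the 1-D series into a 2-D array that a
    vision diffusion model could consume. ImagenTime's actual transform
    may differ; row-major reshaping is used here for simplicity."""
    pad = (-len(x)) % width
    x = np.pad(x, (0, pad), mode="edge")
    return x.reshape(-1, width)

# Toy irregularly-observed series: ~30% of points missing.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6, 32))
missing = rng.random(32) < 0.3
observed = series.copy()
observed[missing] = np.nan

completed = complete_series(observed)          # step 1: completion
img = to_image(completed, width=8)             # step 2: image for diffusion
# A mask in image space lets the diffusion loss down-weight completed
# (imputed) pixels, reducing dependence on them, as the abstract describes.
obs_mask = to_image((~missing).astype(float), width=8)
```

The key idea the sketch mirrors is the division of labor: completion removes the "unnatural" zero-filled neighborhoods, while the mask keeps the generative model from trusting the imputed values too much.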
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 11502