A Non-isotropic Time Series Diffusion Model with Moving Average Transitions

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a novel time series diffusion model with moving-average transitions.
Abstract: Diffusion models, known for their generative ability, have recently been adapted to time series analysis. Most pioneering works rely on standard isotropic diffusion, treating every time step and the entire frequency spectrum identically. However, this may not be suitable for time series, whose low-frequency components are often the most informative. We empirically found that directly applying standard diffusion to time series can cause gradient contradiction during training, due to the rapid loss of low-frequency information in the diffusion process. To this end, we propose a novel time series diffusion model, MA-TSD, which uses the moving average, a natural low-pass filter, as the forward transition. Its backward process can be accelerated like DDIM and can further be viewed as time series super-resolution. Experiments on various datasets demonstrate MA-TSD's superior performance in time series forecasting and super-resolution tasks.
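To make the idea of a moving-average forward transition concrete, below is a minimal, illustrative sketch in Python. The window schedule, noise level, and centered-average kernel here are assumptions chosen for readability, not the exact MA-TSD transition kernel from the paper; the point is only to show how widening a moving-average filter step by step degrades a series gradually while preserving its low-frequency trend.

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average with edge padding; acts as a low-pass filter."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")[: len(x)]

def ma_forward_process(x0, num_steps=4, base_window=2, noise_std=0.05, seed=0):
    """Illustrative forward process (assumed schedule, not the paper's):
    each step smooths the series with a wider moving average and adds a
    small amount of Gaussian noise, so low-frequency structure decays
    gradually instead of being drowned out all at once."""
    rng = np.random.default_rng(seed)
    xt = x0.copy()
    trajectory = [xt]
    for t in range(1, num_steps + 1):
        window = base_window * t + 1  # widen the filter as t grows (hypothetical choice)
        xt = moving_average(xt, window) + noise_std * rng.standard_normal(len(xt))
        trajectory.append(xt)
    return trajectory

# Toy usage: a noisy sine wave; later steps keep the slow trend visible.
t = np.linspace(0, 4 * np.pi, 256)
x0 = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(256)
for i, x in enumerate(ma_forward_process(x0)):
    residual = x - moving_average(x, 9)
    print(f"step {i}: std of high-frequency residual = {np.std(residual):.3f}")
```

The printed residuals shrink across steps, mirroring the abstract's observation that a moving-average transition removes high-frequency detail first while the trend survives, which is what the backward (super-resolution-like) process then has to restore.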
Lay Summary: Diffusion models, a kind of generative AI, are known for producing realistic images and videos. We wondered whether such strong generative ability could also be used for time series analysis, such as generating possible scenarios of future stock prices. A classical diffusion model works by repeatedly denoising random noise until a clear image emerges. Training one therefore means teaching the computer how to denoise data at different noise levels. Carefully setting these noise levels, i.e., designing a noise schedule, is key to making diffusion models trainable and effective. When we applied the classical image noise schedule directly to time series data, training became unstable. We found that this happens because the image schedule polarizes the noise levels of time series: for example, the noisy series at the first 25% of levels remain informative, while the rest are almost pure noise. We therefore re-designed the diffusion framework so that the trends of a time series are extracted and serve as shared structural information across noise levels. We showed that our framework stabilizes training and thus produces higher-quality time series generation results.
Primary Area: Applications->Time Series
Keywords: Diffusion models, Time series forecasting, Time series super-resolution
Flagged For Ethics Review: true
Submission Number: 6732