TimeDiT: General-purpose Diffusion Transformers for Time Series Foundation Model

24 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Time Series; Foundation model; Diffusion model
TL;DR: We propose a general-purpose time series foundation model based on Diffusion Transformers.
Abstract: With recent advances in building foundation models for text and video data, such as Large Language Models (LLMs), there is a surge of interest in foundation modeling for time series. However, real-world time series pose unique challenges: channel sizes vary across domains, values may be missing, and signals are sampled at different intervals due to the multi-resolution nature of real-world data. These properties are fundamentally at odds with current de facto transformer models that rely on rigid architectural choices and predetermined parameter settings. Additionally, unidirectional, temporally autoregressive decoding typically learns a deterministic mapping and limits the incorporation of domain knowledge such as physical laws. To address these challenges, we introduce the Time Diffusion Transformer (TimeDiT), a general foundation model for time series that jointly leverages the transformer's inductive bias to capture temporal dependencies and diffusion processes to generate high-quality candidate samples. The proposed mask unit for task-agnostic pretraining and task-specific sampling enables direct processing of multivariate inputs even with missing values or multiple resolutions. Furthermore, we introduce a theoretically justified, finetuning-free model editing strategy that allows flexible integration of external knowledge during the sampling process. Extensive experiments on a variety of tasks, such as forecasting, imputation, and anomaly detection, highlight TimeDiT's adaptability as a foundation model, addressing diverse time series challenges and advancing analysis across fields.
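To make the mask-unit idea concrete, the sketch below shows one common way a single diffusion model plus an observation mask can serve both forecasting (mask the future horizon) and imputation (mask the missing entries) at sampling time: masked positions are denoised from noise while observed positions are clamped to the re-noised ground truth at every reverse step. This is a minimal illustration under assumed names, shapes, and a toy denoiser; it is not the authors' implementation or the paper's exact sampling procedure.

```python
# Minimal sketch of mask-conditioned diffusion sampling (illustrative only).
# All names, shapes, the noise schedule, and ToyDenoiser are hypothetical.
import torch
import torch.nn as nn

T = 100                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyDenoiser(nn.Module):
    """Stand-in for a transformer denoiser: predicts noise from (x_t, t)."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(channels + 1, 64), nn.ReLU(),
                                 nn.Linear(64, channels))

    def forward(self, x_t, t):
        t_feat = torch.full_like(x_t[..., :1], float(t) / T)  # crude timestep embedding
        return self.net(torch.cat([x_t, t_feat], dim=-1))

@torch.no_grad()
def masked_sample(model, x_obs, mask):
    """Reverse diffusion where mask==1 marks observed entries to keep fixed.
    Forecasting: mask the horizon; imputation: mask the missing entries."""
    x = torch.randn_like(x_obs)
    for t in reversed(range(T)):
        a, ab = alphas[t], alpha_bars[t]
        eps = model(x, t)
        mean = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # Re-noise the observed values to the current step and overwrite them,
        # so the generated (masked) part stays consistent with what is known.
        if t > 0:
            ab_prev = alpha_bars[t - 1]
            x_obs_t = (torch.sqrt(ab_prev) * x_obs
                       + torch.sqrt(1 - ab_prev) * torch.randn_like(x_obs))
        else:
            x_obs_t = x_obs
        x = mask * x_obs_t + (1 - mask) * x
    return x

# Usage: forecast the last 24 steps of a length-96, 7-channel series (toy data).
series = torch.randn(1, 96, 7)
mask = torch.ones_like(series)
mask[:, -24:, :] = 0                      # 0 = positions to be generated
model = ToyDenoiser(channels=7)
prediction = masked_sample(model, series * mask, mask)
```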
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3988