PreDiff: Leveraging Data Priors to Enhance Time Series Generation with Scarce Samples

ICLR 2026 Conference Submission 18008 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Time Series Generation, Diffusion Model, Data Scarcity
TL;DR: An approach that effectively leverages data priors to overcome quality degradation in time-series generation under data-scarce conditions.
Abstract: The fundamental motivation for time series generation is to address the pervasive challenge of data scarcity. However, we identify a critical limitation: existing time series generation models suffer substantial performance degradation when trained on limited data. To tackle this issue, we propose a novel framework that integrates data priors to enhance the robustness and generalization of time series generation under data-scarce conditions. Our framework is structured as a two-stage pipeline: pre-training and fine-tuning. In the pre-training stage, the model is trained on synthetic time series datasets to learn data priors that encode the fundamental statistical properties and temporal dynamics of time series data. During the fine-tuning stage, the model is then refined on a small-scale target dataset to adapt to the specific distribution of the target domain. Extensive experimental evaluations demonstrate that our framework mitigates the performance degradation caused by data scarcity, achieving state-of-the-art results in time series generation tasks. This work not only advances the field of time series modeling but also provides a scalable solution for real-world applications where data availability is often limited.
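To make the two-stage pipeline concrete, below is a minimal sketch assuming a standard DDPM-style epsilon-prediction objective. Everything here is illustrative, not the submission's actual method: the `Denoiser` backbone, the `synthetic_prior_batch` generator (random sinusoids with trend standing in for the paper's synthetic prior datasets), and all hyperparameters are hypothetical placeholders.

```python
# Hedged sketch of the pre-train / fine-tune pipeline from the abstract.
# Architecture, synthetic-data recipe, and hyperparameters are assumptions.
import torch
import torch.nn as nn

T = 100                                  # diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Toy noise predictor for univariate series of a fixed length;
    a stand-in for whatever backbone the paper actually uses."""
    def __init__(self, length=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(length + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, length),
        )

    def forward(self, x, t):
        # Condition on the normalized diffusion step.
        t_feat = (t.float() / T).unsqueeze(-1)
        return self.net(torch.cat([x, t_feat], dim=-1))

def diffusion_loss(model, x0):
    """Standard epsilon-prediction loss on noised series x_t."""
    t = torch.randint(0, T, (x0.size(0),))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(-1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    return ((model(xt, t) - eps) ** 2).mean()

def synthetic_prior_batch(n=256, length=64):
    """Hypothetical synthetic pre-training data: random sinusoids with
    trend and noise, meant to expose generic temporal structure."""
    t = torch.linspace(0, 1, length)
    freq = torch.rand(n, 1) * 8 + 1
    phase = torch.rand(n, 1) * 6.283
    trend = torch.randn(n, 1) * t
    return torch.sin(freq * 6.283 * t + phase) + trend + 0.1 * torch.randn(n, length)

def train(model, batches, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x0 in batches:
        opt.zero_grad()
        diffusion_loss(model, x0).backward()
        opt.step()

model = Denoiser()
# Stage 1: pre-train on abundant synthetic series to learn data priors.
train(model, (synthetic_prior_batch() for _ in range(200)), lr=1e-3)
# Stage 2: fine-tune on the scarce target dataset (placeholder tensor here),
# with a smaller learning rate so the learned prior is not overwritten.
target_data = torch.randn(32, 64)  # e.g. a few dozen real series
train(model, (target_data for _ in range(50)), lr=1e-4)
```

The key design choice the abstract implies is that fine-tuning should adapt, not erase, the pre-trained prior; the lower stage-2 learning rate above is one common (assumed) way to realize that.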
Primary Area: learning on time series and dynamical systems
Submission Number: 18008