FAT: Frequency-Aware Pretraining for Enhanced Time-Series Representation Learning

Published: 02 Aug 2025 · Last Modified: 30 Jul 2025 · KDD 2025 · CC BY 4.0
Abstract: Recent advances in time-series forecasting have highlighted the importance of frequency-domain modeling. However, deep learning models primarily operate in the time domain, which limits their ability to capture frequency-based patterns. Existing approaches introduce complex architectures tailored to task-specific frequency properties, yet they often generalize poorly and require extensive domain-specific adaptation. In this paper, we propose FAT, a novel pretraining framework that learns generalizable Frequency-Aware Time-series representations through self-supervised learning. The key idea of FAT is to pretrain the model to extract consistent, generalizable frequency patterns directly from time-domain signals and encode them into its representations, eliminating the need for architectural adaptations or additional modules at inference time. This is achieved through a frequency reformer, which enhances key frequency components learned from self-supervised signals and enforces similarity constraints between the original and frequency-reformed representations. Furthermore, recognizing that semantically equivalent time series can exhibit different frequency expressions, analogous to how the same phrase is pronounced differently by different speakers, FAT introduces a knowledge-guided frequency reformer that unifies the expression of frequency patterns sharing the same underlying semantics, and extends the similarity constraints to frequency-invariant augmented samples to improve the robustness of the learned representations. Experiments on 14 benchmark datasets across prediction and forecasting tasks show that FAT consistently achieves state-of-the-art performance while remaining robust across diverse backbone models, significantly outperforming existing pretraining methods.
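To make the core mechanism concrete, the following is a minimal toy sketch of the two ideas the abstract describes: a "frequency reformer" that enhances key frequency components of a time-domain signal, and a similarity constraint between the original and the frequency-reformed signals. This is an illustrative simplification, not the paper's method: FAT learns which components to enhance from self-supervised signals and applies the constraint to encoder representations, whereas here the top-k magnitude components are boosted by a fixed factor and the constraint is computed on the raw signals.

```python
import numpy as np

def frequency_reform(x, top_k=3, boost=2.0):
    """Toy frequency reformer (hypothetical simplification):
    amplify the top_k dominant frequency components of a 1-D signal."""
    spec = np.fft.rfft(x)                     # time domain -> frequency domain
    idx = np.argsort(np.abs(spec))[-top_k:]   # indices of strongest components
    spec[idx] *= boost                        # enhance key components
    return np.fft.irfft(spec, n=len(x))       # back to the time domain

def similarity_loss(z1, z2):
    """Cosine-similarity constraint between two representations
    (here applied to raw signals for illustration)."""
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8)
    return 1.0 - cos

# A noisy sinusoid: reforming boosts its dominant frequency, and the
# similarity loss penalizes divergence between original and reformed views.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)
x_reformed = frequency_reform(x)
print(similarity_loss(x, x_reformed))  # small value: views stay aligned
```

In the full framework this loss would be minimized jointly with the self-supervised pretraining objective, so the encoder is pushed to produce representations that are stable under frequency enhancement.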