Keywords: pre-trained model, foundation model, time series, classification
Abstract: Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. 
In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification.
To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. 
Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. 
Moreover, across models, we observe a positive correlation between forecasting accuracy and classification accuracy.
These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models.
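As a minimal illustration of the frozen-representation setup described above, the sketch below extracts pooled hidden states from a frozen encoder and fits a linear probe for classification. This is an assumed toy setup, not the paper's pipeline: `FrozenForecastEncoder` is a hypothetical stand-in for a real pre-trained forecasting backbone, the toy sine/square data is fabricated for the example, and the "mean"/"last" pooling options only illustrate the kind of representation extraction strategies one might compare; the paper's actual models and its two embedding augmentations are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for a pre-trained forecasting backbone; in practice
# this would be a real time series foundation model with loaded weights.
class FrozenForecastEncoder(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    @torch.no_grad()  # frozen: no gradients flow through the backbone
    def forward(self, x):                  # x: (batch, time, 1)
        return self.encoder(self.proj(x))  # (batch, time, d_model)

def extract(encoder, series, strategy="mean"):
    """Pool per-timestep hidden states into one embedding per series."""
    h = encoder(series)
    if strategy == "mean":   # average over the time axis
        return h.mean(dim=1)
    if strategy == "last":   # final hidden state only
        return h[:, -1]
    raise ValueError(f"unknown strategy: {strategy}")

# Toy two-class task: noisy sine waves vs. noisy square waves.
torch.manual_seed(0)
t = torch.linspace(0, 8 * torch.pi, 128)
X = torch.stack(
    [torch.sin(t) + 0.3 * torch.randn(128) for _ in range(100)]
    + [torch.sign(torch.sin(t)) + 0.3 * torch.randn(128) for _ in range(100)]
).unsqueeze(-1)
y = np.array([0] * 100 + [1] * 100)

enc = FrozenForecastEncoder().eval()
for strategy in ("mean", "last"):
    Z = extract(enc, X, strategy).numpy()
    Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(Ztr, ytr)
    print(f"{strategy:>4} pooling: test accuracy = {probe.score(Zte, yte):.3f}")
```

Keeping the encoder in eval mode under `torch.no_grad()` mirrors the frozen setting: only the logistic-regression probe is trained, so any classification signal must come from the pre-trained representations themselves.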
Submission Number: 5