On the Internal Semantics of Time-Series Foundation Models

Published: 23 Sept 2025, Last Modified: 09 Oct 2025 · BERT2S · CC BY 4.0
Keywords: Time-series foundation models, Representation learning, Interpretability
TL;DR: Probing TSFMs to uncover internal representations of time-series concepts
Abstract: Time-series foundation models (TSFMs) have recently emerged as a universal paradigm for learning across diverse temporal domains. Despite their empirical success, the internal mechanisms by which these models represent fundamental time-series concepts remain poorly understood. In this work, we undertake a systematic investigation of concept interpretability in TSFMs. Specifically, we examine: (i) which layers encode which concepts, (ii) whether concept parameters are linearly recoverable, (iii) how representations evolve in terms of concept disentanglement and abstraction across model depth, and (iv) how models process compositions of concepts, which serve as controlled settings for studying interaction and interference. We probe these questions using layer-wise analyses, linear recoverability tests, and representation similarity measures, providing a structured account of TSFM semantics. The resulting insights show that early layers mainly capture local, time-domain patterns (e.g., AR(1), level shifts, trends), while deeper layers encode dispersion and change-time signals, with spectral and warping factors remaining the hardest to recover linearly. In compositional settings, however, probe performance degrades, revealing interference between concepts. This highlights that while atomic concepts are reliably localized, composition remains a challenge, pointing to the need for composition-aware training and evaluation protocols to better align TSFMs with the structure of real-world time series.
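The layer-wise linear-recoverability tests mentioned in the abstract can be pictured as fitting simple linear readouts on frozen per-layer embeddings and checking how well a concept parameter is predicted at each depth. The sketch below illustrates this for a single atomic concept, the AR(1) coefficient; it is a minimal, hedged example only: the `encode_layers` function is a hypothetical placeholder standing in for a real TSFM's hidden states, and the actual models, pooling, probes, and metrics used in the paper may differ.

# Minimal sketch of a layer-wise linear-recoverability probe.
# `encode_layers` is a hypothetical stand-in for a real TSFM's per-layer
# pooled hidden states; it exists only to make the pipeline runnable.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score


def make_ar1_series(phi: float, length: int = 256, seed: int = 0) -> np.ndarray:
    """Simulate an AR(1) process x_t = phi * x_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(length)
    for t in range(1, length):
        x[t] = phi * x[t - 1] + rng.normal()
    return x


def encode_layers(series: np.ndarray) -> list[np.ndarray]:
    """Placeholder encoder: returns one pooled embedding per 'layer'.

    Each layer is a fixed random projection of crude lagged features,
    purely so the probing loop below runs end to end.
    """
    rng = np.random.default_rng(42)  # fixed seed = fixed "layer weights"
    lagged = np.stack([series[:-1], series[1:]], axis=1)  # (T-1, 2)
    pooled = lagged.mean(axis=0)                          # simple mean pooling
    return [rng.standard_normal((2, 64)).T @ pooled for _ in range(4)]


# Build a probing dataset: concept parameter phi -> layer-wise embeddings.
phis = np.random.default_rng(1).uniform(-0.9, 0.9, size=200)
embeddings = [encode_layers(make_ar1_series(p, seed=i)) for i, p in enumerate(phis)]

# Fit one ridge probe per layer and report held-out R^2 as the
# linear-recoverability score for the AR(1) coefficient at that depth.
n_layers = len(embeddings[0])
for layer in range(n_layers):
    X = np.stack([e[layer] for e in embeddings])
    X_tr, X_te, y_tr, y_te = train_test_split(X, phis, test_size=0.3, random_state=0)
    probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"layer {layer}: probe R^2 = {r2_score(y_te, probe.predict(X_te)):.3f}")

With a real TSFM in place of the placeholder encoder, the same loop yields a per-layer recoverability profile, which is the kind of evidence behind the abstract's claim that some concepts are localized early while others only become linearly accessible in deeper layers.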
Submission Number: 35