Finding the Zeitgeist in Time Series Foundation Models

Published: 01 Mar 2026, Last Modified: 01 Mar 2026 · ICLR 2026 TSALM Workshop Poster · CC BY 4.0
Keywords: TSFM, Interpretability, SAE
Abstract: Time series foundation models (TSFMs) achieve strong zero-shot and transfer performance across diverse forecasting tasks, yet their internal representations remain poorly understood. In language and vision models, sparse autoencoders (SAEs) have emerged as a powerful tool for mechanistic interpretability, revealing disentangled and often monosemantic features from high-dimensional residual streams. In this work, we explore whether similar structures can be uncovered in pretrained TSFMs. Our results demonstrate that SAE-based analysis provides a viable and scalable lens into the internal structure of TSFMs, uncovering sparse features that align with coherent temporal patterns. This work represents an initial step toward unsupervised mechanistic interpretability for TSFMs and highlights promising directions for future research.
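The abstract's core technique, fitting a sparse autoencoder to residual-stream activations from a pretrained model, can be sketched as below. This is a minimal illustration under stated assumptions: the dimensions, the random stand-in activations, and all variable names are hypothetical, and a real run would collect activations from a TSFM and train the parameters rather than use a random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d_model = residual-stream width, d_sae = dictionary size.
d_model, d_sae, n_tokens = 64, 256, 1000

# Stand-in for residual-stream activations; a real analysis would cache
# these from a forward pass of a pretrained TSFM.
acts = rng.normal(size=(n_tokens, d_model))

# SAE parameters (random init here; normally learned by gradient descent).
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features via ReLU, then decode."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # sparse feature activations
    x_hat = f @ W_dec + b_dec                # reconstruction of the input
    return f, x_hat

f, x_hat = sae_forward(acts)

# Standard SAE objective: reconstruction error plus an L1 sparsity penalty
# that pushes most features to zero, encouraging disentangled features.
l1_coeff = 1e-3
loss = np.mean((acts - x_hat) ** 2) + l1_coeff * np.abs(f).mean()
```

The ReLU encoder plus L1 penalty is the standard recipe from SAE work on language models; interpreting a feature then amounts to inspecting which input time-series patterns most strongly activate each column of `W_enc`.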
Track: Research Track (max 4 pages)
Submission Number: 109