Abstract: Foundation models have recently emerged as a promising approach for time series analysis, adapting transformer architectures originally designed for natural language processing to handle continuous temporal data. While these models demonstrate strong performance across various time series tasks, their handling of multivariate time series, particularly inter-channel dependencies, remains underexplored. In this paper, we present a comprehensive analysis of current foundation models for time series, including tokenization-based, patch-based, and shape-based approaches, focusing on their mechanisms and data representations for capturing relationships between channels. Our analysis shows that, despite their advanced architectures, these models largely process channels independently, which may prevent them from fully capturing cross-channel patterns. We examine this limitation across different model families and discuss its implications for multivariate time series analysis. Our empirical evaluation shows that foundation models perform well on simpler tasks but exhibit diminished effectiveness as channel dependencies increase, with specialized time series methods consistently outperforming them on complex datasets. These findings highlight the critical need for channel-aware architectures and more effective strategies for modeling inter-channel relationships in foundation models.