Keywords: time series classification, foundation models, vision transformers
TL;DR: We analyze the 2D modeling of time series using Transformers and demonstrate that frozen Vision Transformers, pretrained on large-scale image datasets, surpass time series foundation models in classification.
Abstract: Time series classification is a fundamental task in healthcare and industry, yet the development of time series foundation models (TSFMs) remains limited by the scarcity of publicly available time series datasets. In this work, we propose **Ti**me **Vi**sion **T**ransformer (**TiViT**), a framework that converts time series into images to leverage the representational power of frozen Vision Transformers (ViTs) pretrained on large-scale image datasets. First, we show that 2D patching of time series for ViTs can increase the number of label-relevant tokens and reduce the sample complexity. Second, we demonstrate that TiViT achieves state-of-the-art performance on time series classification benchmarks by using the hidden representations of large OpenCLIP models.
We explore the structure of TiViT representations and find that intermediate layers with high intrinsic dimension are the most effective for time series classification. Finally, we assess the alignment between TiViT and TSFM representations and identify a strong complementarity, with further performance gains achieved by combining their features.
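The core idea of folding a 1D time series into a 2D image so that a ViT's square patching yields more label-relevant tokens can be sketched as follows. This is a minimal illustration, not the paper's exact method: the helper names `series_to_image` and `patchify`, the row width, and the patch size are all hypothetical choices made here for demonstration.

```python
import numpy as np

def series_to_image(x, width, pad_value=0.0):
    # Hypothetical layout: fold the 1D series row-by-row into a 2D grid,
    # padding the tail so the grid is rectangular.
    n = len(x)
    height = int(np.ceil(n / width))
    padded = np.full(height * width, pad_value, dtype=float)
    padded[:n] = x
    return padded.reshape(height, width)

def patchify(img, p):
    # Split the image into non-overlapping p x p patches, mimicking
    # the tokenization step of a Vision Transformer.
    h, w = img.shape
    img = img[: h - h % p, : w - w % p]  # crop to a multiple of p
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p, p)

# Toy series of length 224 folded into a 14 x 16 image,
# then tokenized into 2 x 2 patches (56 tokens).
x = np.sin(np.linspace(0, 8 * np.pi, 224))
img = series_to_image(x, width=16)
tokens = patchify(img, p=2)
print(img.shape, tokens.shape)  # (14, 16) (56, 2, 2)
```

In an actual TiViT-style pipeline, the resulting image would be fed to a frozen pretrained ViT (e.g. an OpenCLIP model) and a lightweight classifier would be trained on its intermediate hidden representations.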
Submission Number: 18