Keywords: time series classification, foundation models, vision transformers
TL;DR: We analyze the 2D modeling of time series using Transformers and demonstrate that frozen Vision Transformers, pretrained on large-scale image datasets, surpass time series foundation models in classification and anomaly detection.
Abstract: Time series classification is a fundamental task in healthcare and industry, yet the development of time series foundation models (TSFMs) remains limited by the scarcity of publicly available time series datasets. In this work, we propose **Ti**me **Vi**sion **T**ransformer (**TiViT**), a framework that converts time series into images to leverage the representational power of frozen Vision Transformers (ViTs) pretrained on large-scale image datasets. TiViT achieves state-of-the-art performance on time series classification and anomaly detection benchmarks by utilizing the hidden representations of large OpenCLIP models.
We explore the structure of TiViT representations and find that intermediate layers with high intrinsic dimension are the most effective for time series classification. Furthermore, we assess the alignment between TiViT and TSFM representation spaces and identify a strong complementarity, with additional performance gains achieved by combining their features. Finally, we provide theoretical and qualitative insights into the benefits of 2D patching for time series modeling with ViTs. Our findings reveal a new direction for reusing vision representations in a non-visual domain.
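The core idea of converting a 1D time series into a 2D image and tokenizing it with ViT-style 2D patches can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact pipeline: the folding period, patch sizes, and function names are hypothetical choices for demonstration.

```python
import numpy as np

def series_to_image(x, period):
    # Fold a 1D series into a 2D array: each row holds one period,
    # so values at the same phase align vertically (assumed layout).
    n = (len(x) // period) * period
    return x[:n].reshape(-1, period)

def patchify_2d(img, ph, pw):
    # Split the 2D image into non-overlapping (ph x pw) patches and
    # flatten each one, mimicking the tokens a ViT would embed.
    H, W = img.shape
    img = img[: H - H % ph, : W - W % pw]
    patches = img.reshape(H // ph, ph, -1, pw).swapaxes(1, 2)
    return patches.reshape(-1, ph * pw)

# Toy periodic signal: 1000 samples of a sine wave.
x = np.sin(np.linspace(0, 20 * np.pi, 1000))
img = series_to_image(x, period=100)   # shape (10, 100)
tokens = patchify_2d(img, ph=5, pw=10) # 20 tokens, each of dim 50
```

Each 2D patch mixes values from several periods at nearby phases, which is one intuition for why 2D patching can help a frozen ViT model temporal structure. In practice the image would be rendered or resized to the ViT's expected input resolution before feature extraction.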
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 631