LETS-C: Leveraging Text Embedding for Time Series Classification

Published: 10 Oct 2024, Last Modified: 26 Nov 2024, NeurIPS 2024 TSALM Workshop, CC BY 4.0
Keywords: Time Series, Language Embeddings, Text Embeddings, Classification, Lightweight
TL;DR: This paper presents LETS-C, a lightweight model for time series classification that uses text embeddings and a simple CNN-MLP head, achieving state-of-the-art accuracy with significantly fewer parameters.
Abstract: Recent advancements in language modeling have shown promising results in time series analysis, with fine-tuning of pre-trained large language models (LLMs) achieving state-of-the-art (SOTA) performance on standard benchmarks. However, these models require millions of trainable parameters, a significant drawback given their size. We propose an alternative approach to leveraging the success of language modeling in the time series domain: instead of fine-tuning LLMs, we use a text embedding model to embed time series and then pair the embeddings with a simple classification head composed of convolutional neural networks and a multilayer perceptron. In extensive experiments on a well-established time series classification benchmark, we demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that leveraging text embedding models to encode time series data, combined with a simple yet effective classification head, is a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
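The sketch below illustrates the pipeline the abstract describes: serialize each time series to text, embed it with a frozen off-the-shelf text embedding model, and train only a small CNN+MLP head on the embeddings. The paper's abstract does not name the embedding model, serialization format, or layer sizes, so the choices here (sentence-transformers' "all-MiniLM-L6-v2", comma-separated value strings, and the specific layer widths) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the LETS-C idea. Assumptions (not specified in the abstract):
# - the text embedder is sentence-transformers' "all-MiniLM-L6-v2" (384-dim),
# - each series is serialized as a comma-separated string of values,
# - the head is a toy 1-D CNN followed by an MLP.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer


class CNNMLPHead(nn.Module):
    """Simple CNN + MLP classification head over a fixed-size text embedding."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # treat embedding as a 1-D signal
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(64),  # pool to a fixed length regardless of embed_dim
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 64, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, embed_dim) -> add a channel axis for Conv1d
        return self.mlp(self.conv(emb.unsqueeze(1)))


def embed_series(encoder: SentenceTransformer, series) -> torch.Tensor:
    """Serialize each series as comma-separated values, then text-embed it."""
    texts = [", ".join(f"{v:.3f}" for v in s) for s in series]
    return torch.tensor(encoder.encode(texts))


if __name__ == "__main__":
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # frozen; never fine-tuned
    series = [[0.1, 0.5, 0.9, 0.4], [1.2, 1.1, 0.8, 0.3]]  # toy inputs
    embeddings = embed_series(encoder, series)  # shape: (2, 384)
    head = CNNMLPHead(num_classes=3)  # only the head's parameters are trained
    logits = head(embeddings)
    print(logits.shape)  # torch.Size([2, 3])
```

Because the embedding model stays frozen, the trainable parameter count is just that of the small head, which is the source of the lightweight property the abstract claims relative to fine-tuning a full LLM.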
Submission Number: 3