Large language models are few-shot multivariate time series classifiers

Published: 2025 · Last Modified: 29 Jan 2026 · Data Min. Knowl. Discov. 2025 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) are widely applied in time series analysis. Yet their utility in few-shot classification, a scenario with limited training data, remains unexplored. We aim to leverage the pre-trained knowledge in LLMs to overcome the data scarcity problem in multivariate time series. To this end, we propose LLMFew, an LLM-enhanced framework, to investigate the feasibility and capacity of LLMs for few-shot multivariate time series classification (MTSC). We first introduce a Patch-wise Temporal Convolution Encoder (PTCEnc) to align time series data with the textual embedding input of LLMs. We then fine-tune the pre-trained LLM decoder with Low-Rank Adaptation (LoRA) to enable effective representation learning from time series data. Experimental results show that our model consistently outperforms state-of-the-art baselines by a large margin, achieving improvements of 125.2% and 50.2% in classification accuracy on the Handwriting and EthanolConcentration datasets, respectively. Our results also show that LLM-based methods achieve performance comparable to traditional models across various datasets in few-shot MTSC, paving the way for applying LLMs in practical scenarios where labeled data are limited. Our code is available at https://github.com/junekchen/llm-fewshot-mtsc.
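To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the overall idea: a patch-wise temporal convolution encoder maps multivariate time series patches into the LLM's token-embedding space, a LoRA-adapted projection stands in for the frozen pre-trained decoder, and a pooled classification head produces class logits. All class names, hyperparameters, the patching scheme, and the single stand-in decoder projection are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
# Hedged sketch of the LLMFew-style pipeline described in the abstract.
# Names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class PatchTemporalConvEncoder(nn.Module):
    """Split each series into fixed-length patches and encode every patch
    with a 1D convolution so it matches the LLM embedding dimension."""

    def __init__(self, n_channels: int, patch_len: int, d_model: int):
        super().__init__()
        # Convolution over time within each patch; one d_model vector per patch.
        self.conv = nn.Conv1d(n_channels, d_model,
                              kernel_size=patch_len, stride=patch_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, seq_len) -> (batch, n_patches, d_model)
        z = self.conv(x)                  # (batch, d_model, n_patches)
        return z.transpose(1, 2)


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False       # keep pre-trained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # update starts at zero
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class FewShotMTSClassifier(nn.Module):
    """Encoder -> LoRA-adapted (stand-in) decoder projection -> pooling -> classes."""

    def __init__(self, n_channels: int, patch_len: int, d_model: int,
                 n_classes: int):
        super().__init__()
        self.encoder = PatchTemporalConvEncoder(n_channels, patch_len, d_model)
        # Stand-in for a pre-trained LLM decoder projection; in practice the
        # frozen LLM's own attention/MLP projections would be wrapped instead.
        self.decoder_proj = LoRALinear(nn.Linear(d_model, d_model))
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.decoder_proj(self.encoder(x))   # (batch, n_patches, d_model)
        return self.head(h.mean(dim=1))          # pooled patch representation


if __name__ == "__main__":
    # Toy forward pass: 4 samples, 3 channels, 144 time steps, 5 classes.
    model = FewShotMTSClassifier(n_channels=3, patch_len=16, d_model=64,
                                 n_classes=5)
    logits = model(torch.randn(4, 3, 144))
    print(logits.shape)  # torch.Size([4, 5])
```

In a few-shot setting, only the convolutional encoder, the low-rank adapters, and the classification head would be trained, which keeps the number of trainable parameters small relative to the frozen LLM backbone.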