Probing Large Language Models for Zero-shot Time Series Understanding

ACL ARR 2025 July Submission936 Authors

29 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Recently, large language models (LLMs) have shown promising performance in time series forecasting, following two paradigms: (a) adapting LLMs for supervised forecasting, and (b) keeping LLMs unchanged for zero-shot forecasting. However, how do LLMs actually understand time series? In this work, we explore the time series understanding of LLMs in zero-shot forecasting scenarios, keeping their architecture and parameters unchanged. Starting from basic time series patterns, we investigate the forecasting ability of LLMs on elementary function series, as well as the impact of varying periods, amplitudes, and phases on forecasting sinusoidal series. To gain deeper insight, we then design a series of probing methods to analyze how LLMs represent time series internally. Finally, guided by these findings, we propose Frequency Decomposition (Freq-Decomp), a lightweight preprocessing method that enhances LLMs' zero-shot forecasting performance. Experiments on real-world datasets show that LLMs excel at identifying periodic patterns, that probing reveals how different layers of LLMs perceive time series information, and that Freq-Decomp yields consistent improvements over prior zero-shot baselines.
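The abstract does not specify how Freq-Decomp is implemented. One plausible reading of "frequency decomposition as lightweight preprocessing" is an FFT-based split of the input series into a dominant periodic component and a residual, each of which could then be serialized for the LLM separately. The sketch below illustrates that idea only; the function name freq_decomp and the hyperparameter k (number of retained frequency bins) are assumptions, not the paper's actual API.

```python
import numpy as np

def freq_decomp(series: np.ndarray, k: int = 3):
    """Split a 1-D series into a dominant-frequency component and a residual.

    Hypothetical sketch: keeps the k largest-magnitude non-DC frequency bins
    (plus the DC term), reconstructs the periodic part via inverse FFT, and
    returns the remainder as the residual.
    """
    spectrum = np.fft.rfft(series)
    # Rank non-DC bins by magnitude; +1 maps indices back into the full spectrum.
    order = np.argsort(np.abs(spectrum[1:]))[::-1] + 1
    keep = np.zeros_like(spectrum)
    keep[0] = spectrum[0]                  # retain the mean (DC component)
    keep[order[:k]] = spectrum[order[:k]]  # retain the k strongest periodicities
    periodic = np.fft.irfft(keep, n=len(series))
    residual = series - periodic
    return periodic, residual

# Example: a noisy daily-period sinusoid; the periodic part should recover it.
t = np.arange(256)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(256)
periodic, residual = freq_decomp(x, k=1)
```

Under this reading, the decomposition would align with the paper's finding that LLMs are strongest at periodic patterns: the model sees a cleaner periodic signal, while the harder-to-predict residual is handled separately.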
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: NLP Applications, probing, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 936