Enhancing LLM Reasoning for Time Series Classification by Tailored Thinking and Fused Decision

10 May 2025 (modified: 29 Oct 2025) · Submitted to NeurIPS 2025 · CC BY 4.0
Keywords: Time-series classification, Reasoning LLMs, Time-series Foundation Models
Abstract: The reasoning capabilities of large language models (LLMs) have significantly advanced their performance by enabling nuanced understanding of diverse tasks. Despite growing interest in applying LLMs to the time series domain, doing so has proven nontrivial, as evidenced by the limited efficacy of straightforwardly adapting text-domain reasoning strategies. Although recent work has shown promise on time series forecasting tasks, leveraging reasoning LLMs for time series classification (TSC) remains under-explored, despite its prevalence and significance in many real-world applications. In this paper, we introduce ReasonTSC, a novel framework designed to effectively leverage reasoning LLMs for time series classification through a multi-turn reasoning and fused decision-making strategy tailored to TSC. Rather than relying solely on LLMs' built-in reasoning, ReasonTSC first steers the model to think about the essential characteristics of time series data. Next, it integrates predictions and confidence scores from plug-in classifiers, e.g., domain-specific time series models, as in-context examples. Finally, ReasonTSC guides the LLM through a structured reasoning process: it evaluates the initial assessment, backtracks to consider alternative hypotheses, and compares their merits before arriving at a final classification. Extensive experiments and systematic ablation studies demonstrate that ReasonTSC consistently outperforms both standalone reasoning LLMs and plug-in models, and is even capable of identifying and correcting errors in plug-in models' false predictions.
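The three-stage procedure described in the abstract can be sketched as a prompt-construction pipeline. The code below is a minimal illustrative sketch, not the authors' implementation: the function name, prompt wording, and plug-in output format are all hypothetical assumptions about how the steering turn, the in-context plug-in evidence, and the structured evaluate/backtrack/compare instruction might be assembled.

```python
# Hypothetical sketch of ReasonTSC-style fused-decision prompting.
# All names and prompt templates here are illustrative assumptions,
# not the paper's actual implementation.

def build_fused_prompt(series, plugin_preds):
    """Assemble a three-turn prompt: (1) steer the LLM toward time-series
    characteristics, (2) inject plug-in classifier predictions and
    confidence scores as in-context evidence, (3) request structured
    evaluate/backtrack/compare reasoning before a final label."""
    # Turn 1: tailored thinking about the series itself.
    turn1 = (
        "Describe the trend, seasonality, and local patterns of this series: "
        + ", ".join(f"{x:.2f}" for x in series)
    )
    # Turn 2: fuse plug-in model outputs as in-context examples.
    evidence = "\n".join(
        f"- plug-in model predicts class {p['label']} "
        f"(confidence {p['confidence']:.2f})"
        for p in plugin_preds
    )
    turn2 = "Plug-in classifier outputs:\n" + evidence
    # Turn 3: structured reasoning toward a fused decision.
    turn3 = (
        "Evaluate the initial assessment, backtrack to consider alternative "
        "classes, compare their merits, then output a final label."
    )
    return [turn1, turn2, turn3]

prompt = build_fused_prompt(
    series=[0.1, 0.4, 0.9, 0.3],
    plugin_preds=[{"label": 2, "confidence": 0.81}],
)
```

Each turn would be sent to the reasoning LLM in sequence, so the final decision conditions on both the model's own characterization of the series and the plug-in classifiers' evidence.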
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 14001