TS-Reasoner: Aligning Time Series Foundation Models with LLM Reasoning

ACL ARR 2026 January Submission6977 Authors

06 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: time series
Abstract: Time series reasoning is crucial to decision-making across domains like finance, energy, and scientific discovery. While existing time series foundation models (TSFMs) excel at capturing numerical dynamics, they lack the high-level contextual reasoning inherent in Large Language Models (LLMs). Conversely, without expensive post-training, LLMs often struggle with the numerical understanding of time series data. Although it is intuitive to integrate the two types of models, developing effective training recipes that align the two modalities for reasoning tasks is still an open challenge. To this end, we propose TS-Reasoner that aligns the latent representations of TSFMs with the textual inputs of LLMs for downstream understanding/reasoning tasks. Specifically, we propose a simple yet effective method to curate diverse, synthetic pairs of time series and textual captions for alignment training. We then develop a two-stage training recipe that applies instruction finetuning after the alignment pretraining. Unlike existing works that train an LLM to take time series as inputs, we leverage a pretrained TSFM and freeze it during training. Extensive experiments on several benchmarks demonstrate that TS-Reasoner not only outperforms a wide range of prevailing LLMs, Vision Language Models (VLMs), and Time Series LLMs, but also achieves this with remarkable data efficiency, e.g., using less than half the training data.
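The abstract's core mechanism, a frozen pretrained TSFM whose latent representations are mapped into the LLM's input space by a small trainable module, can be illustrated with a minimal sketch. All names, dimensions, and the use of a single linear projector below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the alignment idea described in the abstract:
# a frozen time-series encoder produces latent patch embeddings, and
# only a small projector is trained to map them into the LLM's
# token-embedding space. Names and shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

TS_DIM, LLM_DIM, N_PATCHES = 64, 128, 8

def frozen_tsfm_encode(series: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained TSFM: one latent vector per patch.
    Its weights are fixed (frozen) throughout training."""
    patches = series.reshape(N_PATCHES, -1)              # (8, len/8)
    W_frozen = rng.standard_normal((patches.shape[1], TS_DIM))
    return patches @ W_frozen                            # (8, TS_DIM)

# The only trainable piece in stage-1 alignment: a linear projector
# from the TSFM latent space into the LLM embedding space.
W_proj = rng.standard_normal((TS_DIM, LLM_DIM)) * 0.01

def project_to_llm(latents: np.ndarray) -> np.ndarray:
    return latents @ W_proj                              # (8, LLM_DIM)

series = rng.standard_normal(256)
ts_tokens = project_to_llm(frozen_tsfm_encode(series))
# These soft tokens would be prepended to the caption's text embeddings
# for alignment pretraining, then reused during instruction finetuning.
print(ts_tokens.shape)  # (8, 128)
```

In this reading, gradient updates touch only `W_proj` (and later the LLM, during instruction finetuning), which matches the abstract's claim that the TSFM stays frozen throughout training.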
Paper Type: Long
Research Area: Financial Applications and Time Series
Research Area Keywords: Financial Applications and Time Series
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 6977