Keywords: Time-Series, Natural Language Processing, Multi-Modal
Abstract: Time-series analysis is critical in industries such as healthcare, finance, and energy, where understanding time-series trends alongside contextual information is essential for informed decision making. However, current time-series models are limited in their ability to perform reasoning that involves both time-series data and textual information. In this work, we address this gap by introducing Chat-TS, a large language model (LLM) designed specifically for reasoning over time-series and textual data. Unlike traditional time-series models, Chat-TS integrates time-series tokens into the LLM vocabulary, enhancing its reasoning ability over both text and time-series modalities without compromising its core natural language capabilities.
To support the development and validation of Chat-TS, we contribute three new datasets: the TS Instruct Training Dataset, which pairs diverse time-series data with relevant text instructions and responses for instruction tuning; the TS Instruct Question-and-Answer (QA) Benchmark, a set of nearly 4,000 multiple-choice questions designed to evaluate multi-modal reasoning; and the TS Instruct Qualitative Benchmark, which provides a smaller subset of QA, math, and decision-making questions for LLM evaluation. Our training strategy preserves the inherent reasoning capabilities of the LLM while augmenting it with time-series reasoning. Evaluation results show that Chat-TS achieves state-of-the-art performance in multi-modal reasoning tasks, maintaining strong natural language proficiency while advancing time-series reasoning. All models, datasets, and code will be made publicly available [link].
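To make the vocabulary-extension idea in the abstract concrete, here is a minimal sketch of adding discrete time-series tokens to an LLM vocabulary, assuming a Hugging Face-style tokenizer and causal LM; the token names, bin count, and base model are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of extending an LLM vocabulary with time-series tokens,
# assuming a Hugging Face-style tokenizer and model. The token names, bin
# count, and base model are illustrative, not taken from the paper.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical discrete time-series tokens, e.g. one per quantization bin,
# plus delimiters marking the boundaries of a time-series segment.
ts_tokens = [f"<ts_{i}>" for i in range(256)] + ["<ts_start>", "<ts_end>"]
tokenizer.add_tokens(ts_tokens)

# Grow the embedding matrix so the new tokens receive trainable embeddings;
# instruction tuning can then interleave text and time-series tokens.
model.resize_token_embeddings(len(tokenizer))
```

Under a scheme like this, numeric series would be quantized into the bins represented by the `<ts_i>` tokens before being interleaved with text during instruction tuning.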
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8161