Rationale-Grounded In-Context Learning for Time Series Reasoning with Multimodal Large Language Models
Keywords: Time series reasoning, Multimodal large language models, Rationale generation, In-context learning
Abstract: Existing multimodal large language models underperform on time series reasoning because they lack rationale priors that connect temporal observations to downstream outcomes, leading them to rely on superficial pattern matching rather than principled reasoning.
We therefore propose rationale-grounded in-context learning for time series reasoning, in which rationales serve as guiding reasoning units rather than post-hoc explanations, and develop the RationaleTS method.
Specifically, we first induce label-conditioned rationales, each comprising a reasoning path from observable evidence to a potential outcome.
We then design a hybrid retrieval mechanism that balances temporal patterns and semantic contexts to retrieve correlated rationale priors for in-context inference on new samples.
Extensive experiments on time series reasoning tasks across three domains demonstrate the effectiveness and efficiency of the proposed RationaleTS.
We will release our code for reproduction.
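The hybrid retrieval step described above could be sketched as a weighted combination of a temporal-similarity score on the raw series and a semantic-similarity score on text embeddings. This is a minimal illustration only: the scoring function, weighting scheme (`alpha`), and the `bank` structure are assumptions, not the paper's actual implementation.

```python
import numpy as np

def hybrid_retrieve(query_series, query_text_emb, bank, alpha=0.5, k=1):
    """Rank rationale-bank entries by mixing temporal and semantic similarity.

    Illustrative sketch; the paper's concrete scoring function is not specified.
    Each bank entry is assumed to hold a 'series', a 'text_emb', and a 'rationale'.
    """
    scores = []
    for entry in bank:
        # Temporal similarity: negative Euclidean distance between z-normalized series
        q = (query_series - query_series.mean()) / (query_series.std() + 1e-8)
        s = (entry["series"] - entry["series"].mean()) / (entry["series"].std() + 1e-8)
        temporal = -float(np.linalg.norm(q - s))
        # Semantic similarity: cosine between text embeddings of the contexts
        semantic = float(np.dot(query_text_emb, entry["text_emb"]) /
                         (np.linalg.norm(query_text_emb) *
                          np.linalg.norm(entry["text_emb"]) + 1e-8))
        scores.append(alpha * temporal + (1 - alpha) * semantic)
    order = np.argsort(scores)[::-1]  # highest combined score first
    return [bank[i]["rationale"] for i in order[:k]]
```

The retrieved rationales would then be placed in the prompt as in-context demonstrations before the new sample is presented to the model.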
Paper Type: Long
Research Area: Financial Applications and Time Series
Research Area Keywords: reasoning over time series, knowledge base construction, cross-modal information extraction, knowledge tracing/discovering/inducing
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 4411