TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning

Published: 23 Sept 2025, Last Modified: 09 Oct 2025 · BERT2S · CC BY 4.0
Keywords: Time Series Reasoning; Time-Series Multi-modal Large Language Models
Abstract: Time-series reasoning remains a significant challenge for multimodal large language models (MLLMs) due to dynamic temporal patterns and semantic ambiguities, and existing models often lack structured, human-aligned temporal understanding. In this work, we introduce TimeMaster, a novel reinforcement learning (RL)-based method that enables time-series MLLMs to perform structured, human-aligned reasoning over visualized temporal data. TimeMaster adopts a three-part output format (reasoning, classification, extension) and is optimized with a composite reward function in a two-stage pipeline: supervised fine-tuning (SFT) followed by RL. Evaluated on TimerBed, TimeMaster achieves state-of-the-art performance, outperforming classical models by 8.3% and GPT-4o baselines by 7.3%, while also delivering human-aligned reasoning and actionable insights. This work offers a promising step toward equipping LLMs with robust temporal reasoning capabilities, paving the way for more interpretable and intelligent time-series analysis. Code is available at [https://anonymous.4open.science/r/TimeMaster-6EC1](https://anonymous.4open.science/r/TimeMaster-6EC1).
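To make the abstract's "composite reward over a three-part output format" concrete, here is a minimal, hypothetical sketch: it parses a reasoning/classification/extension response and combines a format-adherence term with a classification-accuracy term. The section tags, the 0.3/0.7 weighting, and the exact-match scoring are illustrative assumptions, not the paper's actual reward design.

```python
import re

def composite_reward(output: str, true_label: str) -> float:
    """Illustrative composite reward for a three-part response.

    NOTE: tags, weights, and scoring are hypothetical; TimeMaster's
    actual reward function may differ.
    """
    sections = {}
    for tag in ("reasoning", "classification", "extension"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", output, re.DOTALL)
        if m:
            sections[tag] = m.group(1).strip()

    # Format term: fraction of the three required sections present.
    format_r = len(sections) / 3

    # Accuracy term: exact (case-insensitive) match on the predicted class.
    acc_r = 1.0 if sections.get("classification", "").lower() == true_label.lower() else 0.0

    # Weighted combination (weights are an assumption for illustration).
    return 0.3 * format_r + 0.7 * acc_r

response = (
    "<reasoning>Irregular R-R intervals suggest arrhythmia.</reasoning>"
    "<classification>AFib</classification>"
    "<extension>Recommend extended Holter monitoring.</extension>"
)
print(composite_reward(response, "AFib"))
```

In an RL loop (e.g., PPO or GRPO), this scalar would serve as the per-sample reward after the SFT-initialized policy generates each response.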
Submission Number: 13