Keywords: time-series benchmark, multimodal time series understanding, temporal reasoning, large language model
Abstract: Time series data are central to domains such as finance, healthcare, and cloud computing, yet existing benchmarks for evaluating large language models (LLMs) on temporal tasks remain scattered and unsystematic. To bridge this gap, we introduce MMTS-Bench, a comprehensive multimodal benchmark built upon a hierarchical taxonomy of time-series tasks spanning feature analysis, temporal reasoning, and cross-modal alignment. MMTS-Bench comprises 2,424 time-series question answering (TSQA) pairs across four subsets: Base, InWild, Match, and Align, generated through a progressive real-world QA framework and modular synthetic data construction. We conduct extensive evaluations of closed-source LLMs, open-source LLMs, and existing time-series-adapted large language models (TS-LLMs), revealing that: (1) TS-LLMs lag significantly behind general-purpose LLMs in cross-domain generalization, (2) LLMs are weaker on local tasks than on global tasks, and (3) chain-of-thought (CoT) reasoning and multimodal integration substantially improve performance. MMTS-Bench not only provides a rigorous evaluation framework but also offers clear directions for advancing LLMs toward robust, interpretable, and generalizable time-series reasoning.
Primary Area: datasets and benchmarks
Submission Number: 5150