Keywords: Time Series Agent, Large Language Models, Benchmark, Time Series Multi-step Analysis
Abstract: The rapid advancement of Large Language Models (LLMs) has sparked growing interest in their application to time series analysis tasks. However, their ability to perform complex reasoning over temporal data across real-world application domains remains significantly underexplored. A necessary first step toward closing this gap is to establish a rigorous benchmark for evaluation. In this work, we introduce the TSAIA Benchmark, a first attempt to evaluate LLMs as time series artificial intelligence assistants. To ensure both scientific rigor and practical relevance, we surveyed over 20 academic publications and identified 33 real-world task formulations. The benchmark encompasses a broad spectrum of challenges, ranging from constraint-aware forecasting to anomaly detection with threshold calibration, tasks that require compositional reasoning and multi-step time series analysis. The question generator is designed to be dynamic and extensible, supporting continuous expansion as new datasets or task types are introduced. Given the heterogeneous nature of the tasks, we adopt task-specific success criteria and tailored inference quality metrics to ensure meaningful evaluation for each task. We apply this benchmark to assess eight state-of-the-art LLMs under a unified evaluation protocol. Our analysis reveals limitations in current models' ability to assemble complex time series analysis workflows, underscoring the need for specialized methodologies to adapt them to domain-specific applications. Our benchmark and code are publicly available online.
Submission Number: 137