Efficient Model Selection for Time Series Forecasting via LLMs

ACL ARR 2026 January Submission9761 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Model Selection, Time Series Forecasting, Meta-learning, Large Language Models
Abstract: Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on a pre-constructed performance matrix, which is expensive and time-consuming to build and maintain. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for an explicit performance matrix by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments on over 320 diverse datasets with Llama, GPT, and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines, while significantly reducing computational overhead. These findings underscore the potential of LLMs in efficient model selection for time series forecasting.
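The abstract's core idea of replacing a pre-computed performance matrix with a direct LLM query over dataset characteristics can be sketched roughly as follows. The candidate model pool, prompt wording, and the stubbed `query_llm` function are illustrative assumptions, not the authors' actual protocol or prompts:

```python
# Hypothetical sketch of LLM-based model selection for time series forecasting.
# The candidate list, prompt format, and stubbed LLM call are assumptions for
# illustration only; the paper's real prompts and model pool may differ.

CANDIDATES = ["ARIMA", "ETS", "NBEATS", "PatchTST", "DLinear"]

def build_prompt(dataset_summary: dict) -> str:
    """Describe the dataset in text and ask the LLM to pick one candidate."""
    lines = [f"{k}: {v}" for k, v in dataset_summary.items()]
    return (
        "You are selecting a forecasting model for a time series dataset.\n"
        "Dataset characteristics:\n" + "\n".join(lines) + "\n"
        f"Choose exactly one model from: {', '.join(CANDIDATES)}.\n"
        "Answer with the model name only."
    )

def query_llm(prompt: str) -> str:
    """Stub standing in for a real API call (e.g. to GPT, Llama, or Gemini)."""
    # A real implementation would send `prompt` to the provider's chat API.
    return "PatchTST"

def select_model(dataset_summary: dict) -> str:
    """Query the LLM and fall back to a default if the reply is unrecognized."""
    answer = query_llm(build_prompt(dataset_summary)).strip()
    return answer if answer in CANDIDATES else "ETS"

choice = select_model(
    {"frequency": "hourly", "length": 8760, "seasonality": "daily"}
)
```

Note that no per-dataset training runs are needed at selection time, which is the source of the computational savings the abstract claims over meta-learning on a performance matrix.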
Paper Type: Long
Research Area: Financial Applications and Time Series
Research Area Keywords: time-series forecasting, other time series models and topics
Languages Studied: English
Submission Number: 9761