Presentation Attendance: No, we cannot present in person
Keywords: Time Series Forecasting, Foundation Models, Meta-Learning, Hyperparameter Optimization, Amortized Inference, Tuning
TL;DR: We eliminate per-series hyperparameter search in time series forecasting by amortizing configuration selection across series, achieving near-search accuracy at constant deployment cost.
Abstract: Foundation models for time series forecasting are highly sensitive to configuration parameters such as context length and patch size, which substantially influence predictive performance. These parameters are typically chosen via static defaults or per-series hyperparameter search with AutoML, the latter requiring repeated evaluations of a large model. We reinterpret configuration selection as an amortized learning problem: instead of optimizing configurations independently for each new series, we learn a lightweight learning-to-rank model that predicts high-performing configurations. Using the Moirai 1.1-R-small foundation model on the Uber TLC and Electricity benchmarks, Tune-as-Inference achieves accuracy within 3-5% of 20-trial Bayesian optimization while reducing per-series configuration time by roughly a factor of 20. These results suggest that configuration adaptation for time series foundation models is better treated as inference rather than iterative search.
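To make the amortized-selection idea concrete, the sketch below shows one possible realization under assumptions not specified in the abstract: cheap summary statistics as series features, a small candidate grid over context length and patch size, and a gradient-boosted pointwise ranker as the lightweight surrogate. The feature set, candidate values, and `eval_config` callback are illustrative placeholders, not the paper's actual design; at deployment the surrogate scores all candidates in one pass instead of running per-series search.

```python
# Hypothetical sketch of amortized configuration selection ("tune as inference").
# All feature choices, candidate values, and names here are illustrative assumptions.
from itertools import product

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Assumed candidate grid over (context length, patch size).
CANDIDATES = list(product([64, 128, 256, 512], [8, 16, 32]))


def series_features(y: np.ndarray) -> np.ndarray:
    """Cheap summary statistics of a series used as ranking features."""
    return np.array([len(y), y.mean(), y.std(), np.abs(np.diff(y)).mean()])


def make_training_rows(series_list, eval_config):
    """Offline phase: evaluate every candidate config on every training series
    (eval_config would run the forecasting model and return a validation error)
    to build the (features + config) -> error dataset for the ranker."""
    X, t = [], []
    for y in series_list:
        f = series_features(y)
        for cfg in CANDIDATES:
            X.append(np.concatenate([f, cfg]))
            t.append(eval_config(y, cfg))  # observed error for this (series, config)
    return np.array(X), np.array(t)


def fit_ranker(X, t):
    """Lightweight surrogate that predicts config error from features."""
    return GradientBoostingRegressor().fit(X, t)


def select_config(ranker, y):
    """Deployment phase: score all candidates with the surrogate and pick the
    best one, with no per-series search over the large model."""
    f = series_features(y)
    scores = ranker.predict(
        np.stack([np.concatenate([f, cfg]) for cfg in CANDIDATES])
    )
    return CANDIDATES[int(np.argmin(scores))]
```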
Track: Research Track (max 4 pages)
Submission Number: 61