Keywords: Forecasting, Time Series Foundation Models, Implicit Reasoning, Memorization
TL;DR: We assess the implicit reasoning capabilities of time series models in forecasting and find that some models show potential for reasoning beyond memorization in specific out-of-distribution scenarios.
Abstract: Recently, time series foundation models have shown promising zero-shot forecasting performance on time series from a wide range of domains. However, it remains unclear whether their success stems from a true understanding of temporal dynamics or simply from memorizing the training data. While implicit reasoning in language models has been studied, analogous evaluations of time series models remain largely unexplored. This work takes an initial step toward assessing the reasoning abilities of deep time series forecasting models. We find that certain linear, MLP-based, and patch-based Transformer models generalize effectively in systematically orchestrated out-of-distribution scenarios, suggesting underexplored reasoning capabilities beyond simple pattern memorization.
Submission Number: 39