Implicit Reasoning in Deep Time Series Forecasting

Published: 10 Oct 2024, Last Modified: 28 Oct 2024, Venue: Sys2-Reasoning Poster, License: CC BY 4.0
Keywords: Forecasting, Time Series Foundation Models, Implicit Reasoning, Memorization
TL;DR: We assess the implicit reasoning capabilities of deep time series forecasting models and find that some show potential for reasoning beyond memorization in specific out-of-distribution scenarios.
Abstract: Recently, time series foundation models have shown promising zero-shot forecasting performance on time series from a wide range of domains. However, it remains unclear whether their success stems from a genuine understanding of temporal dynamics or simply from memorizing the training data. While implicit reasoning has been studied in language models, analogous evaluations of time series models remain largely absent. This work takes an initial step toward assessing the reasoning abilities of deep time series forecasting models. We find that certain linear, MLP-based, and patch-based Transformer models generalize effectively in systematically orchestrated out-of-distribution scenarios, suggesting underexplored reasoning capabilities beyond simple pattern memorization.
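
To make the abstract's "systematically orchestrated out-of-distribution scenarios" concrete, the following Python sketch shows one plausible way such a split could be built. It is a hypothetical illustration, not the paper's actual protocol: make_series, the basis frequencies, the held-out pairs, and the context/horizon lengths are all assumptions. The idea is that every individual basis pattern appears during training, but certain combinations are withheld, so strong accuracy on the withheld combinations suggests generalization beyond memorized training windows.

    # A minimal sketch (assumptions noted above, not the authors' protocol) of an
    # out-of-distribution split for probing implicit reasoning in forecasting.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_series(freqs, amps, n=512):
        """Compose a series from sinusoidal basis components plus mild noise."""
        t = np.arange(n)
        signal = sum(a * np.sin(2 * np.pi * f * t / n) for f, a in zip(freqs, amps))
        return signal + 0.05 * rng.standard_normal(n)

    # Hypothetical combination grid: each series mixes two basis frequencies.
    basis = [2, 3, 5, 7]
    all_pairs = [(f1, f2) for i, f1 in enumerate(basis) for f2 in basis[i + 1:]]

    # OOD pairs are never seen together in training, although every individual
    # basis frequency does appear in some training combination.
    ood_pairs = [(2, 7), (3, 5)]
    train_pairs = [p for p in all_pairs if p not in ood_pairs]

    train_set = [make_series(p, amps=(1.0, 0.5)) for p in train_pairs]
    ood_set = [make_series(p, amps=(1.0, 0.5)) for p in ood_pairs]

    # Evaluation: split each OOD series into context and horizon; a candidate
    # model forecasts from the context and is scored against the held-out target.
    context_len, horizon = 448, 64
    for series in ood_set:
        context, target = series[:context_len], series[context_len:]
        # model.forecast(context, horizon) would go here for each model under test.
        print(f"OOD series ready: context {context.shape}, target {target.shape}")

Under this kind of setup, a model that only memorizes training windows has no training example matching a held-out combination, so the gap between in-distribution and OOD forecasting error serves as a rough proxy for reasoning beyond memorization.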
Submission Number: 15
