Keywords: Time Series, Reinforcement Learning, Retrieval-Augmented Generation
Abstract: Deep learning models for time series forecasting, typically optimized with Mean Squared Error (MSE), often exhibit spectral bias. This phenomenon arises because MSE prioritizes minimizing errors in high-energy, typically low-frequency components, leading to underfitting of crucial, lower-energy high-frequency dynamics and resulting in overly smooth predictions. To address this, we propose Self-adaptive Retrieval-augmented Reinforcement learning for time series Forecasting (SRRF), a novel plug-and-play training enhancement. SRRF internalizes high-frequency modeling capabilities into base models during training, requiring no additional inference cost or architectural changes to the base model. The framework first employs Retrieval-Augmented Generation (RAG) to provide contextual grounding via relevant historical exemplars. Building on this contextual guidance, a Reinforcement Learning (RL) agent then learns an adaptive policy to correct and enhance initial forecasts, optimized via a reward function that promotes both overall predictive accuracy and fidelity to high-frequency details. Comprehensive evaluations on diverse benchmarks demonstrate that models trained with the SRRF methodology substantially outperform their original counterparts and other state-of-the-art techniques, especially in accurately predicting volatile series and fine-grained temporal patterns. Qualitative and spectral analyses further confirm SRRF's effectiveness in mitigating spectral bias and enhancing high-frequency representation.
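The abstract does not specify the exact form of the retrieval step or the reward function; the following is a minimal Python sketch of the two ideas it describes, assuming Euclidean nearest-neighbor retrieval over historical windows and a reward that combines overall MSE with an rFFT-based high-frequency fidelity term. The names `retrieve_exemplars`, `frequency_aware_reward`, `cutoff_ratio`, and `beta` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def retrieve_exemplars(query, history, window, k=3):
    """Return the k historical windows most similar to `query`.

    Illustrative RAG step: Euclidean nearest neighbors over sliding
    windows of the history (the paper's actual retriever is unspecified).
    query:   1-D array of length `window` (the current lookback).
    history: 1-D array of past observations to search over.
    """
    starts = range(len(history) - window + 1)
    candidates = np.stack([history[s:s + window] for s in starts])
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

def frequency_aware_reward(pred, target, cutoff_ratio=0.5, beta=0.5):
    """Reward = -(overall MSE + beta * high-frequency spectral error).

    Assumed form: the abstract only states that the reward promotes
    both overall accuracy and high-frequency fidelity.
    """
    mse = np.mean((pred - target) ** 2)
    # Compare magnitude spectra above a cutoff bin so that low-energy,
    # high-frequency components contribute explicitly to the reward.
    pred_spec = np.abs(np.fft.rfft(pred))
    target_spec = np.abs(np.fft.rfft(target))
    cutoff = int(len(pred_spec) * cutoff_ratio)
    hf_err = np.mean((pred_spec[cutoff:] - target_spec[cutoff:]) ** 2)
    return -(mse + beta * hf_err)
```

Under this reading, the RL agent would condition its correction policy on the retrieved exemplars and be optimized against a reward of this shape, so the high-frequency signal shapes training without adding any inference-time machinery to the base model.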
Primary Area: learning on time series and dynamical systems
Submission Number: 12654