RAGPOT: LLM-based Retrieval-Augmented and Generative Prompt Optimization for Time-Series Forecasting
Keywords: LLM-based time-series forecasting, prompt optimization
Abstract: Recent advances in large language models (LLMs) have demonstrated strong generalization across diverse reasoning and prediction tasks. However, directly applying LLMs to time-series forecasting remains challenging due to their sensitivity to prompt design and their lack of domain-specific adaptation. In this work, we explore prompt optimization for time-series forecasting through a systematic framework that enables self-refining prompt improvement for LLMs. Our method iteratively improves prompt quality by first retrieving similar multivariate historical time-series segments and then automatically updating the prompt based on the LLM's feedback, enabling the model to identify more informative and better-structured instructions for temporal reasoning. We conduct extensive experiments on multiple real-world datasets spanning the energy, weather, and finance domains, and demonstrate that optimized prompts yield consistent and significant improvements over standard zero-shot and few-shot baselines. These results highlight the potential of prompt-based adaptation as an efficient alternative to parameter fine-tuning when applying LLMs to quantitative forecasting tasks.
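The abstract outlines a retrieve-then-refine loop; below is a minimal sketch of one way such a loop could look. The paper's actual retrieval metric, prompt templates, and feedback rule are not specified here, so everything in this block is an illustrative assumption: `llm` is a hypothetical callable mapping a prompt string to a completion string, retrieval is plain Euclidean distance over z-normalized windows, and the instruction-rewriting prompt is a guess at the feedback step.

```python
import numpy as np

def retrieve_similar_segments(history, query, k=3):
    """Return the k historical windows most similar to the query window.

    history: (num_windows, window_len) array of flattened past segments;
    query: (window_len,) array. Similarity is Euclidean distance on
    z-normalized windows -- an assumption, not the paper's stated metric.
    """
    def znorm(x):
        return (x - x.mean()) / (x.std() + 1e-8)
    dists = np.linalg.norm(
        np.apply_along_axis(znorm, 1, history) - znorm(query), axis=1)
    return history[np.argsort(dists)[:k]]

def build_prompt(instruction, exemplars, query):
    """Assemble a forecasting prompt from the current instruction,
    retrieved exemplar segments, and the target history."""
    lines = [instruction]
    for i, ex in enumerate(exemplars):
        lines.append(f"Example {i + 1}: {np.round(ex, 3).tolist()}")
    lines.append(f"History: {np.round(query, 3).tolist()}")
    lines.append("Forecast the next value as a single number.")
    return "\n".join(lines)

def optimize_prompt(llm, instruction, history, query, target, steps=5):
    """Iteratively refine the instruction using the LLM's own feedback.

    The critique-and-rewrite update below is a hypothetical stand-in for
    the paper's feedback rule.
    """
    best_instr, best_err = instruction, float("inf")
    current = instruction
    for _ in range(steps):
        # Retrieve exemplars and evaluate the current instruction.
        exemplars = retrieve_similar_segments(history, query)
        try:
            err = abs(float(llm(build_prompt(current, exemplars, query))) - target)
        except ValueError:
            err = float("inf")  # unparseable completion
        if err < best_err:
            best_instr, best_err = current, err
        # Ask the LLM to critique and rewrite the instruction.
        current = llm(
            f"The instruction below produced forecast error {err:.4f}.\n"
            f"Instruction: {current}\n"
            "Rewrite it to better guide numeric time-series forecasting."
        ).strip()
    return best_instr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = rng.normal(size=(50, 8))  # toy flattened multivariate windows
    query = rng.normal(size=8)
    stub_llm = lambda prompt: "0.0"     # stand-in for a real LLM client
    print(optimize_prompt(stub_llm, "You are a forecaster.",
                          history, query, target=0.1))
```

In this sketch the best-scoring instruction is kept while the LLM proposes a new candidate each round, a simple hill-climbing design; the paper may instead maintain a population of prompts or use richer validation signals.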
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: NLP Applications
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2456