Noise-Informed LLM for Zero-shot Time Series Forecasting with Uncertainty Quantification

11 Sept 2025 (modified: 20 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: LLM, time series forecast, Uncertainty Quantification
TL;DR: A model-agnostic Bayesian approximation framework that quantifies the uncertainty of frozen LLMs.
Abstract: Large language models (LLMs) exhibit strong zero-shot generalization, not only for complex reasoning but also for time series forecasting. Existing LLM-based forecasters, however, almost exclusively target deterministic accuracy—via elaborate prompt designs, tokenization schemes, or instruction tuning—while ignoring the predictive uncertainty that underlies both hallucination and over-confidence. In this work, we bridge this divide by introducing a novel, model-agnostic noise-informed Bayesian approximation (NBA) framework that quantifies the uncertainty of frozen LLMs. We first derive a Bayesian formulation that treats input noise as a stochastic latent variable; marginalizing this noise yields a predictive distribution whose variance is provably calibrated to the sum of epistemic and aleatoric uncertainty. As a result, NBA adds negligible overhead, preserves zero-shot accuracy, and avoids the computational cost of posterior inference over LLMs. Systematic experiments on 11 representative LLMs and simulated and real-world datasets show that NBA produces well-calibrated prediction intervals across varying temperature scalings, forecast horizons, model architectures, and prompting strategies. NBA establishes a strong, reproducible baseline for uncertainty quantification in LLMs and reveals actionable insights for reliable zero-shot time series forecasting. Code and data are available at \url{https://anonymous.4open.science/r/NBA-LLM}.
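The abstract's core mechanism—treating input noise as a latent variable and marginalizing it to obtain a predictive distribution—can be approximated by Monte Carlo sampling around a frozen black-box forecaster. The sketch below is an illustration of this general idea, not the paper's implementation; `nba_predict`, `toy_forecaster`, and all parameter names and noise choices are hypothetical assumptions.

```python
import numpy as np

def nba_predict(forecast_fn, series, n_samples=64, noise_scale=0.05,
                alpha=0.1, seed=0):
    """Monte Carlo sketch of noise marginalization: perturb the input
    series with Gaussian noise, query the frozen forecaster on each
    perturbed copy, and aggregate the draws into a mean forecast and
    an empirical (1 - alpha) prediction interval."""
    rng = np.random.default_rng(seed)
    scale = noise_scale * np.std(series)          # noise sized relative to the series
    draws = []
    for _ in range(n_samples):
        noisy = series + rng.normal(0.0, scale, size=len(series))
        draws.append(forecast_fn(noisy))          # frozen model: no gradients, no tuning
    draws = np.asarray(draws)                     # shape: (n_samples, horizon)
    mean = draws.mean(axis=0)
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2], axis=0)
    return mean, lo, hi

# Stand-in for a frozen LLM forecaster: persistence over a 3-step horizon.
toy_forecaster = lambda x: np.repeat(x[-1], 3)
series = np.sin(np.linspace(0.0, 6.0, 50))
mean, lo, hi = nba_predict(toy_forecaster, series)
```

Because the forecaster is only ever called in the forward direction, this style of approximation adds no training cost and leaves the underlying model untouched, matching the abstract's claim of negligible overhead.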
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 3938