Pitfalls in Evaluating Language Model Forecasters

ICLR 2026 Conference Submission 17854 Authors

19 Sept 2025 (modified: 08 Oct 2025) | ICLR 2026 Conference Submission | CC BY 4.0
Keywords: forecasting, evaluation, criticism, leakage, standards, LLMs, prediction, future, benchmarks
TL;DR: We identify conceptual issues and concrete temporal-leakage errors in existing LLM forecasting evaluations, and argue that these call current performance claims into question.
Abstract: Large language models (LLMs) have recently been applied to forecasting tasks, with some works claiming that these systems match or exceed human performance. In this paper, we argue that, as a community, we should be careful about such conclusions, because evaluating LLM forecasters presents unique challenges. We identify two broad categories of issues: (1) difficulty in trusting evaluation results due to many forms of temporal leakage, and (2) difficulty in extrapolating from evaluation performance to real-world forecasting. Through systematic analysis and concrete examples from prior work, we demonstrate how evaluation flaws cast doubt on current and future performance claims. We argue that more rigorous evaluation methodologies are needed to confidently assess the forecasting abilities of LLMs.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17854