Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Daily Oracle is a continuously updated benchmark that turns daily news into forecasting questions, revealing how LLM performance degrades over time.
Abstract: Many existing evaluation benchmarks for Large Language Models (LLMs) quickly become outdated due to the emergence of new models and training data. These benchmarks also fall short in assessing how LLM performance changes over time, as they consist of a static set of questions without a temporal dimension. To address these limitations, we propose using future event prediction as a continuous evaluation method to assess LLMs' temporal generalization and forecasting abilities. Our benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" event outcomes. Our findings reveal that as pre-training data becomes outdated, LLM performance degrades over time. While Retrieval Augmented Generation (RAG) has the potential to enhance prediction accuracy, the performance degradation pattern persists, highlighting the need for continuous model updates. Code and data are available at https://agenticlearning.ai/daily-oracle.
Lay Summary: AI language models stop learning after their last training update. They may answer past questions well, but do they stay sharp as the world keeps changing? To study this, we tracked how well AI models maintain their ability to predict current events over time. We built Daily Oracle, a benchmark that posts brand-new true/false and multiple-choice forecasting questions every day, based on current news articles. We tested popular models and saw their performance decline from 2020 to 2024—by about 22% on true/false questions and 11% on multiple-choice. Even when models were given recent news articles to help them answer, the decline continued. More surprisingly, this drop happened even in reading tasks where the exact answer was right there in the text. This shows that giving models new information on the fly isn't enough—they may need fresh training to keep up. Daily Oracle shows how quickly AI models can fall behind and offers a way to measure that drift. It helps evaluate how well models forecast future events and encourages research into strategies like continuous pretraining to help models keep pace with the world.
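To make the evaluation setup concrete, here is a minimal sketch of how one might represent Daily Oracle-style questions and measure accuracy over time. The schema and function names (`OracleQuestion`, `accuracy_by_month`) are illustrative assumptions, not the paper's actual data format or released code:

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class OracleQuestion:
    # Hypothetical schema for a dated forecasting question.
    qdate: date      # date the question was generated from the news
    kind: str        # "tf" (true/false) or "mc" (multiple-choice)
    question: str
    answer: str      # gold answer once the event outcome is known

def accuracy_by_month(questions, predictions):
    """Bucket accuracy by (year, month) to expose temporal drift
    in model performance as questions move past the training cutoff."""
    hits, totals = defaultdict(int), defaultdict(int)
    for q, pred in zip(questions, predictions):
        key = (q.qdate.year, q.qdate.month)
        totals[key] += 1
        hits[key] += int(pred == q.answer)
    return {k: hits[k] / totals[k] for k in sorted(totals)}
```

Plotting the returned per-month accuracies for questions dated before versus after a model's training cutoff would surface the degradation pattern the paper reports.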
Link To Code: https://agenticlearning.ai/daily-oracle/
Primary Area: Deep Learning->Large Language Models
Keywords: LLM, Forecasting, Continuous Evaluation, Temporal Generalization
Submission Number: 8409