Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle

Published: 10 Oct 2024, Last Modified: 19 Nov 2024 · AFM 2024 Oral · CC BY 4.0
Keywords: LLM, Forecasting, Continuous Evaluation, Temporal Generalization
Abstract: Existing evaluation benchmarks for Large Language Models (LLMs) quickly become outdated due to model updates and an evolving information landscape. Moreover, they often lack the ability to assess how model performance changes over time, as they consist of static questions without a temporal dimension. To address these limitations, we propose using future event prediction as a continuous evaluation method to assess LLMs' temporal generalization and forecasting abilities. Our benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on their pre-training data. Our findings reveal that as pre-training data becomes outdated, LLM performance degrades over time. While Retrieval-Augmented Generation (RAG) can enhance prediction accuracy, the degradation persists, highlighting the need for ongoing model updates.
Submission Number: 135
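
To make the continuous-evaluation idea described in the abstract concrete, below is a minimal sketch of how such a daily loop could be driven: for each day, QA pairs are built from that day's news and an LLM's answers are scored, so accuracy can be tracked relative to the model's pre-training cutoff. This is not the authors' released pipeline; the helper names (fetch_daily_news, generate_qa_pairs, query_llm) are hypothetical stand-ins.

```python
# Minimal sketch (assumed pipeline, not the Daily Oracle implementation) of a
# continuous evaluation loop: per-day QA generation from news + LLM scoring.
from datetime import date, timedelta
from typing import Callable


def evaluate_daily(
    start: date,
    end: date,
    fetch_daily_news: Callable[[date], list[str]],          # hypothetical: returns that day's articles
    generate_qa_pairs: Callable[[str], list[tuple[str, str]]],  # hypothetical: article -> (question, answer) pairs
    query_llm: Callable[[str], str],                          # hypothetical: question -> model's answer
) -> dict[date, float]:
    """Return per-day accuracy, so degradation after the model's cutoff becomes visible."""
    accuracy_by_day: dict[date, float] = {}
    day = start
    while day <= end:
        # Build the day's QA set from that day's news.
        qa_pairs = [qa for article in fetch_daily_news(day)
                    for qa in generate_qa_pairs(article)]
        if qa_pairs:
            # Exact-match scoring; a real benchmark would use a more robust metric.
            correct = sum(query_llm(q).strip().lower() == a.strip().lower()
                          for q, a in qa_pairs)
            accuracy_by_day[day] = correct / len(qa_pairs)
        day += timedelta(days=1)
    return accuracy_by_day
```

A RAG variant of the same loop would simply wrap `query_llm` so that retrieved news snippets are prepended to each question; under the abstract's findings, this raises accuracy but does not remove the downward trend over time.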