ChronoBias: A Benchmark for Evaluating Time-conditional Group Bias in the Time-sensitive Knowledge of Large Language Models

Published: 03 Nov 2025, Last Modified: 09 Feb 2026 · EMNLP 2025 Findings · CC BY 4.0
Abstract: In this paper, we propose $\texttt{ChronoBias}$, a novel benchmark for evaluating $\textit{time-conditional group bias}$ in the $\textit{time-sensitive}$ knowledge of large language models (LLMs). Our benchmark is constructed via a template-based semi-automated generation method, balancing the quality-quantity trade-off in existing benchmark curation approaches. For knowledge that changes over time, $\textit{time-conditional group bias}$ exhibits varying patterns across time intervals, evident in both the best- and worst-performing groups and in the bias metric itself. In addition to $\textit{parametric knowledge bias}$, which influences group bias across all time intervals, we identify $\textit{time-sensitivity bias}$ as an additional factor after a model's knowledge cutoff, accounting for much of the variation in $\textit{time-conditional group bias}$ over time. Since both biases are irreducible, retrieval-augmented generation (RAG) can be a promising approach, as it can address post-cutoff knowledge and better leverage pretraining knowledge that is underrepresented in the model parameters. While RAG improves both overall performance and group bias, we observe that the disparate patterns of $\textit{time-conditional group bias}$ still persist. Therefore, through extensive experiments with various model configurations, we illustrate how accurate and fair RAG-based LLMs should behave and provide actionable guidelines toward constructing such ideal models.
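The abstract describes measuring group bias separately within each time interval, comparing best- and worst-performing groups. A minimal sketch of such a time-conditional computation is shown below; the gap metric (best-group accuracy minus worst-group accuracy per interval), the group labels, and the toy data are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch: "time-conditional group bias" as the accuracy gap
# between the best- and worst-performing groups, computed per time interval.
# Groups ("A", "B"), intervals, and the gap metric are illustrative only.
from collections import defaultdict

def group_bias_by_interval(records):
    """records: iterable of (interval, group, correct) tuples.
    Returns {interval: (best_group, worst_group, accuracy_gap)}."""
    # interval -> group -> [num_correct, num_total]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for interval, group, correct in records:
        stats[interval][group][0] += int(correct)
        stats[interval][group][1] += 1
    bias = {}
    for interval, groups in stats.items():
        accs = {g: hits / total for g, (hits, total) in groups.items()}
        best = max(accs, key=accs.get)
        worst = min(accs, key=accs.get)
        bias[interval] = (best, worst, accs[best] - accs[worst])
    return bias

# Toy evaluation records: (time interval, demographic group, answer correct?)
records = [
    ("pre-cutoff", "A", True), ("pre-cutoff", "A", True),
    ("pre-cutoff", "B", True), ("pre-cutoff", "B", False),
    ("post-cutoff", "A", True), ("post-cutoff", "A", False),
    ("post-cutoff", "B", False), ("post-cutoff", "B", False),
]
print(group_bias_by_interval(records))
```

Separating the metric by interval is what makes bias patterns before and after a model's knowledge cutoff directly comparable.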