Keywords: Large Language Model, Evolving Knowledge Graph, Reasoning, Question Answering
Abstract: Large language models (LLMs) demonstrate impressive capabilities in natural language understanding, yet they remain limited when reasoning over knowledge that evolves over time. A common remedy is to augment LLMs with knowledge graphs (KGs), which provide structured access to factual information. However, most existing approaches rely on a static snapshot of the KG and fail to account for the temporal evolution and conflicting updates that naturally arise in real-world knowledge. To address these challenges, we present EvoReasoner, a temporal-aware multi-hop reasoning algorithm that integrates global-local entity grounding, multi-route decomposition, and time-sensitive scoring to support robust inference. Complementing this, we introduce EvoKG, a noise-resilient graph evolution module that continuously updates the KG from unstructured documents using confidence-aware contradiction handling and temporal trend tracking. We evaluate our framework on temporal QA benchmarks and a new end-to-end setting in which the KG is dynamically updated from raw text. Our method consistently surpasses prompting-only and static KG-augmented baselines, and notably enables an 8B-parameter model to match the accuracy of a 671B model trained seven months later. These findings underscore the necessity of unifying temporal reasoning with KG evolution to keep LLMs accurate and up-to-date. Code and data are released at: anonymous.4open.science/r/TREK-434C.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 12888