Keywords: Knowledge Graphs, Large Language Models, Retrieval-Augmented Generation, Retrieval
Abstract: While Large Language Models (LLMs) are powerful reasoners, their effectiveness on complex, knowledge-intensive tasks is fundamentally limited by their reliance on static, parametric knowledge, which often leads to factual hallucinations. Augmenting them with external Knowledge Graphs (KGs) provides the factual grounding needed to address this limitation. However, existing KG-based retrieval-augmented methods rely predominantly on triple-level retrieval, where the vast and noisy entity space degrades both accuracy and scalability. This paper introduces RCR (Relation-Centric Reasoning), a new paradigm that pivots away from the vast entity space to the more stable and semantically richer space of relations. RCR first retrieves a compact set of candidate relations, then employs an LLM to compose them into abstract reasoning paths, and finally materializes these paths into a concrete evidence subgraph via a proposed similarity-based substitution mechanism that ensures robustness. On the challenging WebQSP and CWQ multi-hop question answering benchmarks, RCR achieves state-of-the-art performance. By prioritizing relations as the backbone of reasoning, RCR delivers a more accurate, interpretable, and scalable solution for KG-augmented reasoning.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 4496