Keywords: Knowledge Graph, Retrieval-augmented Generation, Large Language Model
Abstract: Despite their strong abilities, large language models (LLMs) still suffer from hallucinations and reliance on outdated knowledge, raising concerns in knowledge-intensive tasks. Graph-based retrieval-augmented generation (GRAG) enriches LLMs with knowledge by retrieving graphs that provide relational evidence, but it faces two challenges: structure-coupled irrelevant knowledge introduced by neighbor expansion, and a structure-reasoning discrepancy between graph embeddings and LLM semantics. We propose Align-GRAG, an anchor-and-rationale-guided refinement framework that addresses these challenges. It prompts an LLM to extract anchors and rationale chains, which provide intermediate supervision for (1) node-level alignment, which identifies critical nodes and prunes noisy evidence, and (2) graph-level alignment, which bridges the graph and language semantic spaces via contrastive learning. Extensive experiments on commonsense reasoning, scene graph understanding, and knowledge graph reasoning demonstrate consistent gains over 18 strong baselines, validating the effectiveness of Align-GRAG for graph-grounded generation. The code is available at https://anonymous.4open.science/r/Align-GRAG-F3D8/.
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: knowledge graphs, retrieval-augmented generation
Contribution Types: NLP engineering experiment, Reproduction study
Languages Studied: English
Submission Number: 2221