Keywords: Knowledge Graph, Large Language Models, Retrieval-Augmented Generation
TL;DR: GNN-RAG is a novel graph neural retrieval method for KGQA that leverages GNNs to retrieve complex graph information.
Abstract: Retrieval-augmented generation (RAG) in Knowledge Graph Question Answering (KGQA) enriches the context of Large Language Models (LLMs) with KG information retrieved for the given question. However, KGs contain complex graph information, and existing KG retrieval methods struggle when questions require multi-hop information. To improve RAG for complex KGQA, we introduce the GNN-RAG framework, which leverages Graph Neural Networks (GNNs) for effective graph reasoning and retrieval. GNN-RAG consists of a graph neural learning phase, in which the GNN retriever learns to identify useful graph information for KGQA, e.g., when tackling complex questions. At inference time, the GNN scores answer candidates for the given question, and the shortest paths in the KG that connect question entities to answer candidates are retrieved to represent KG reasoning paths. These paths are verbalized and given as context to the downstream LLM for final KGQA; GNN-RAG can be seamlessly integrated with different LLMs for RAG. Experimental results show that GNN-RAG achieves state-of-the-art performance on two widely used KGQA benchmarks (WebQSP and CWQ), outperforming or matching GPT-4 with a tuned 7B LLM. In addition, GNN-RAG excels on multi-hop and multi-entity questions, outperforming competing approaches by 8.9–15.5 percentage points in answer F1. Furthermore, we show the effectiveness of GNN-RAG in retrieval augmentation, which further boosts KGQA performance.
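For illustration, below is a minimal sketch of the retrieval step described in the abstract (shortest paths from question entities to GNN-scored answer candidates, then verbalization). It assumes a hypothetical KG stored as a networkx graph with relation labels on edges; the names `kg`, `gnn_scores`, and `top_k` are illustrative placeholders, not the paper's implementation.

```python
# Hypothetical sketch, not the authors' code: retrieve and verbalize
# KG reasoning paths between question entities and GNN-scored candidates.
import networkx as nx

def retrieve_reasoning_paths(kg: nx.DiGraph, question_entities, gnn_scores, top_k=5):
    """Pick the top-k answer candidates by GNN score, then collect shortest
    KG paths from each question entity to each candidate and verbalize them
    as 'entity -> relation -> entity -> ...' strings for the LLM prompt."""
    candidates = sorted(gnn_scores, key=gnn_scores.get, reverse=True)[:top_k]
    verbalized = []
    for q in question_entities:
        for a in candidates:
            try:
                nodes = nx.shortest_path(kg, source=q, target=a)
            except nx.NetworkXNoPath:
                continue  # no connecting path for this (question, candidate) pair
            parts = [str(nodes[0])]
            for u, v in zip(nodes, nodes[1:]):
                # 'relation' is an assumed edge attribute on the toy KG
                parts += [kg[u][v].get("relation", "related_to"), str(v)]
            verbalized.append(" -> ".join(parts))
    return verbalized
```

In such a sketch, the returned verbalized paths would be concatenated into the downstream LLM's prompt as the retrieved context for answering the question.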
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10558