Keywords: Knowledge Graph Question Answering, Retrieval-Augmented Generation, Graph Attention Networks, Multi-hop Reasoning
Abstract: Large language models (LLMs) increasingly rely on external knowledge to mitigate hallucinations, yet retrieving precise multi-hop evidence for knowledge-augmented reasoning remains difficult. Existing Knowledge Graph (KG)-based Retrieval-Augmented Generation (RAG) systems insufficiently model the interaction between query semantics and relation types, resulting in imprecise subgraph retrieval and unstable reasoning. We propose Query-aware Subgraph Retrieval Augmented Generation (QSRAG), a retrieval framework built upon a Query-Relational Graph Attention Network (QR-GAT) that integrates query semantics and relation embeddings directly into the attention mechanism, enabling fine-grained triple scoring and scalable subgraph construction. This query–relation conditioning improves relevance estimation and suppresses noisy edges, producing faithful reasoning subgraphs. Experiments on WebQSP and CWQ establish new state-of-the-art results in both Triple Recall and Answer Recall, and significantly enhance LLM reasoning accuracy without fine-tuning. These findings underscore the effectiveness of modeling query–relation interactions for reliable knowledge-augmented reasoning.
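The abstract's core idea, injecting query and relation embeddings into a GAT-style attention score over triples, might be sketched as follows. This is an illustrative toy only: QR-GAT's exact scoring function is not given in the abstract, so the concatenation-based score, the projection `W`, and the attention vector `a` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def qr_gat_edge_attention(query, heads, relations, tails, W, a):
    """GAT-style attention over candidate triples (h, r, t), with the query
    and relation embeddings injected into the score (hypothetical sketch)."""
    logits = []
    for h, r, t in zip(heads, relations, tails):
        # Concatenate projected head, relation, tail, and query features,
        # then score with a shared attention vector, as in standard GAT.
        feat = np.concatenate([W @ h, W @ r, W @ t, W @ query])
        logits.append(leaky_relu(a @ feat))
    # Normalize so the weights rank triples by query-conditioned relevance.
    return softmax(np.array(logits))

# Toy example: three candidate triples around one entity.
query = rng.normal(size=d)
heads = rng.normal(size=(3, d))
rels = rng.normal(size=(3, d))
tails = rng.normal(size=(3, d))
W = rng.normal(size=(d, d)) * 0.1
a = rng.normal(size=4 * d)

alpha = qr_gat_edge_attention(query, heads, rels, tails, W, a)
print(alpha)  # attention weights over the three triples, summing to 1
```

In a retrieval setting, these per-triple weights could then be thresholded or top-k filtered to assemble the reasoning subgraph passed to the LLM.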
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: knowledge base QA, multihop QA, reasoning, graph-based methods, knowledge-augmented methods, retrieval-augmented generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 403