G-RAGent: Dynamic Reasoning on Hypergraphs for Retrieval-Augmented Language Models

Published: 19 Dec 2025 · Last Modified: 05 Jan 2026 · AAMAS 2026 Extended Abstract · CC BY 4.0
Keywords: Retrieval-Augmented Generation, Reasoning, Hypergraph
Abstract: Retrieval-Augmented Generation (RAG) with structured knowledge graphs improves factual grounding in large language models (LLMs) but remains limited by two key factors. First, conventional graph-based RAG relies on binary relations, which cannot represent the complex n-ary interactions among entities that are common in real-world knowledge. Second, static, query-agnostic retrieval introduces redundant information that interferes with the LLM's intrinsic reasoning. To overcome these challenges, we propose G-RAGent, a dynamic reasoning framework that unifies hypergraph-based knowledge representation with an adaptive retrieval agent. G-RAGent encodes multi-entity facts as hyperedges and employs a ReAct-style iterative reasoning process in which the LLM decomposes questions, predicts relevant semantic topics, selectively retrieves sub-hypergraphs, and halts retrieval when internal knowledge suffices. Experiments on multi-domain QA benchmarks demonstrate G-RAGent's effectiveness and superiority in retrieval and reasoning.
Area: Representation and Reasoning (RR)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 1548
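
The abstract above describes two mechanisms: hyperedges that connect arbitrary sets of entities, and a ReAct-style loop in which the model retrieves topic-specific sub-hypergraphs only when needed and stops once its own knowledge suffices. The following is a minimal, hypothetical Python sketch of those two ideas; the class and function names (`Hyperedge`, `retrieve_subgraph`, the `llm` callable, the `RETRIEVE:`/`ANSWER:` protocol) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch, not the authors' implementation:
# (1) hyperedges encode n-ary facts under a semantic topic;
# (2) a ReAct-style loop retrieves sub-hypergraphs on demand and
#     halts early when internal knowledge is judged sufficient.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Hyperedge:
    """An n-ary fact linking an arbitrary set of entities under one topic."""
    entities: frozenset[str]
    fact: str    # natural-language statement of the multi-entity relation
    topic: str   # coarse semantic topic used for selective retrieval


@dataclass
class Hypergraph:
    edges: list[Hyperedge] = field(default_factory=list)

    def retrieve_subgraph(self, topic: str) -> list[Hyperedge]:
        """Return only the hyperedges matching the predicted topic."""
        return [e for e in self.edges if e.topic == topic]


def answer(question: str, kg: Hypergraph, llm, max_steps: int = 4) -> str:
    """Iterative reason-then-retrieve loop with an early-stopping decision."""
    context: list[str] = []
    for _ in range(max_steps):
        # The model either names a topic to retrieve or answers directly.
        decision = llm(
            f"Question: {question}\n"
            f"Context so far: {context}\n"
            "Reply 'RETRIEVE: <topic>' if more facts are needed, "
            "otherwise 'ANSWER: <answer>'."
        )
        if decision.startswith("ANSWER:"):  # internal knowledge suffices; halt
            return decision.removeprefix("ANSWER:").strip()
        topic = decision.removeprefix("RETRIEVE:").strip()
        context.extend(e.fact for e in kg.retrieve_subgraph(topic))
    # Fall back to answering with whatever facts were gathered.
    return llm(f"Question: {question}\nFacts: {context}\nGive the final answer.")
```

The key contrast with static graph RAG is that retrieval here is query-driven and incremental: each hyperedge carries a whole n-ary fact rather than a binary triple, and only topic-relevant sub-hypergraphs ever reach the model's context.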