Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval
Keywords: Retrieval-Augmented Generation (RAG), Multi-Agent, Dual-Evolving, Heterogeneous Graph
TL;DR: We introduce Think-on-Graph 3.0 (ToG-3), which provides a unified, efficient, and adaptive solution for complex knowledge reasoning tasks (including both deep reasoning and broad reasoning tasks) via a Multi-Agent Dual-Evolving Context Retrieval Loop.
Abstract: Retrieval-Augmented Generation (RAG) and Graph-based RAG have become important paradigms for enhancing Large Language Models (LLMs) with external knowledge.
However, existing approaches face a fundamental trade-off. Graph-based methods depend on high-quality graph structures, yet obtaining such structures is practically constrained: manually constructed knowledge graphs are prohibitively expensive to scale, while graphs automatically extracted from corpora are limited by the capability of the underlying LLM extractors, especially when smaller, locally deployed models are used.
This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism to overcome these limitations.
Our core innovation is the dynamic construction and refinement of a Chunk-Triplets-Community heterogeneous graph index, which for the first time incorporates a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval.
This approach addresses a critical limitation of prior Graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query.
A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively engages in an iterative process of evidence retrieval, answer generation, sufficiency reflection, and, crucially, query and sub-graph evolution (see the illustrative sketch below). This dual-evolving multi-agent system allows ToG-3 to adaptively build a targeted graph index during reasoning, mitigating the inherent drawbacks of static, one-time graph construction and enabling deep, precise reasoning even with lightweight LLMs.
Extensive experiments demonstrate that ToG-3 outperforms competitive baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework.
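To make the loop described above concrete, the following is a minimal sketch of how a MACER-style iteration could be organized, assuming a simplified Chunk-Triplets-Community index and callable agent interfaces; all class, function, and parameter names here are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only: every name below is an assumption for exposition,
# not the authors' ToG-3 code.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HeteroGraphIndex:
    """Simplified Chunk-Triplets-Community heterogeneous graph index."""
    chunks: List[str] = field(default_factory=list)                      # raw text passages
    triplets: List[Tuple[str, str, str]] = field(default_factory=list)   # (head, relation, tail)
    communities: List[List[int]] = field(default_factory=list)           # groups of triplet indices

    def merge(self, other: "HeteroGraphIndex") -> None:
        """Fold newly constructed evidence into the evolving sub-graph."""
        self.chunks.extend(other.chunks)
        self.triplets.extend(other.triplets)
        self.communities.extend(other.communities)

def macer_loop(
    query: str,
    constructor: Callable[[str, HeteroGraphIndex], HeteroGraphIndex],
    retriever: Callable[[str, HeteroGraphIndex], list],
    responser: Callable[[str, list], str],
    reflector: Callable[[str, list, str], Tuple[bool, str]],
    max_iters: int = 5,
) -> str:
    """One possible reading of the dual-evolving retrieval loop: the
    Constructor grows a query-targeted sub-graph, the Retriever pulls
    evidence from it, the Responser drafts an answer, and the Reflector
    judges sufficiency and, if needed, evolves the query."""
    subgraph = HeteroGraphIndex()   # Evolving Sub-Graph, built on demand
    evolving_query = query          # Evolving Query, refined each round
    answer = ""
    for _ in range(max_iters):
        subgraph.merge(constructor(evolving_query, subgraph))   # expand the index toward the query
        evidence = retriever(evolving_query, subgraph)           # retrieve chunks / triplets / communities
        answer = responser(query, evidence)                      # draft an answer from current evidence
        sufficient, evolving_query = reflector(query, evidence, answer)
        if sufficient:                                           # stop once evidence is judged sufficient
            break
    return answer
```

The key design point, as stated in the abstract, is that the graph index is not frozen after a single construction pass: each iteration can both refine the query and extend the sub-graph, so the index converges toward the evidence the current question actually needs.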
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 4456