Enhancing Agentic Textual Graph Retrieval with Synthetic Stepwise Supervision

ACL ARR 2026 January Submission3025 Authors

04 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: agent, large language models
Abstract: Integrating textual graphs into Large Language Models (LLMs) is promising for complex graph-based QA. However, a key bottleneck is retrieving informative yet compact subgraphs that fit the LLM context. Existing retrievers often struggle, relying either on shallow embedding similarity or costly interactive policies that require excessive supervision. To address these challenges, we introduce Graph-S$^3$, an agentic textual graph reasoning framework featuring an LLM-based retriever trained with synthetic stepwise supervision. Rather than relying on final answer rewards—which often yield sparse and unstable signals—we optimize the retriever by evaluating each step against offline-extracted golden subgraphs. Our approach distills golden subgraphs via a specialized data synthesis pipeline to formulate dense rewards, facilitating a two-stage training scheme that effectively learns the interactive graph exploration policy. Based on extensive experiments on three common datasets in comparison with seven strong baselines, our approach achieves an average improvement of 8.1\% in accuracy and 9.7\% in $F_1$ score. The advantage is even higher in more complicated multi-hop reasoning tasks. Our code will be open-sourced.
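The dense stepwise supervision described above can be sketched as scoring each retrieval step by how much of the golden subgraph it newly covers, rather than waiting for a single final-answer reward. The following is a minimal illustrative sketch under that assumption; the function name, edge representation, and reward shape (coverage minus a miss penalty) are hypothetical and not the paper's actual implementation:

```python
# Hypothetical sketch of dense stepwise rewards against a golden subgraph.
# Each agent step retrieves a set of edges; its reward is the number of
# golden edges newly recovered, minus a small penalty for off-subgraph
# exploration. All names and the exact reward shape are assumptions.

def stepwise_rewards(step_edges, golden_edges, miss_penalty=0.1):
    """Score each retrieval step against the golden subgraph.

    step_edges:   list of edge sets, one per agent step
    golden_edges: set of edges in the offline-extracted golden subgraph
    """
    covered = set()
    rewards = []
    for edges in step_edges:
        new_hits = (edges & golden_edges) - covered   # golden edges seen for the first time
        misses = edges - golden_edges                 # edges outside the golden subgraph
        rewards.append(len(new_hits) - miss_penalty * len(misses))
        covered |= new_hits                           # credit each golden edge only once
    return rewards

golden = {("a", "b"), ("b", "c"), ("c", "d")}
steps = [{("a", "b"), ("a", "x")},   # one golden hit, one miss
         {("b", "c"), ("c", "d")}]   # two new golden hits
print(stepwise_rewards(steps, golden))  # → [0.9, 2.0]
```

Unlike a single final-answer reward, every step receives its own signal, so a policy-gradient or two-stage training scheme can assign credit to individual exploration decisions.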
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Language Modeling
Languages Studied: English
Submission Number: 3025