Keywords: Agentic Graph Learning, Large Language Models
Abstract: Large Language Models (LLMs) increasingly rely on agentic capabilities—iterative retrieval, tool use, and decision-making—to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)–driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 13.8\% in node classification and 24.3\% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. Our code is anonymously shared at \url{https://anonymous.4open.science/r/AgentGL-3672}.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Graph Machine Learning, Large Language Models
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 10750