Keywords: LLM, KGQA, RL, Symbol Search
TL;DR: We propose KGQA-Star, a reinforcement learning–based framework that enables LLMs to generate, execute, and refine symbolic retrieval plans over knowledge graphs, significantly improving reasoning accuracy in knowledge-intensive tasks.
Abstract: Large language models (LLMs) excel at natural language processing but struggle with knowledge-intensive tasks such as multi-hop reasoning and symbolic retrieval, where issues like outdated knowledge, hallucinations, and weak planning often arise. Knowledge Graph Question Answering (KGQA) offers a promising solution, but existing approaches face limitations in reasoning efficiency, graph structure utilization, and symbolic query generation. We propose KGQA-Star, a reinforcement learning–based framework that enhances LLM reasoning over knowledge graphs. KGQA-Star introduces a simplified KG retrieval plan and an execution system (KGSRS) that supports error feedback and reflective correction. To address the lack of RL methods in KGQA, we build a high-quality KG-CoT dataset via data distillation and apply curriculum learning for cold-start training. The framework employs a three-stage process (exploration, planning, and reflection), optimized with Reinforce++ and task-specific rewards. Experiments show that KGQA-Star significantly improves symbolic query quality and reasoning accuracy on complex KGQA tasks, offering a practical path to strengthening LLM performance in knowledge-intensive scenarios.
Primary Area: reinforcement learning
Submission Number: 23253