ConceptBot: Knowledge-Graph–Grounded Commonsense for Task Decomposition in LLM Robot Planning

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: robotic planning, task decomposition, large language models, knowledge graphs
TL;DR: We introduce ConceptBot, a hybrid framework that combines LLMs with commonsense knowledge graphs to produce safer and more reliable robotic task plans in unstructured environments.
Abstract: Robotic planning breaks down when commonsense reasoning is required to resolve linguistic ambiguity and to interpret objects correctly. To address this, we present ConceptBot, a modular planning framework that integrates large language models with knowledge graphs to produce feasible, risk-aware plans while jointly disambiguating instructions and grounding object semantics. ConceptBot comprises three components: (i) an Object Properties Extraction (OPE) module that augments scene understanding with semantic concepts from ConceptNet; (ii) a User Request Processing (URP) module that resolves ambiguities and structures free-form instructions; and (iii) a Planner that synthesizes context-aware, feasible pick-and-place policies. Evaluations in simulation and on real-world setups show consistent gains over prior LLM-based planners—for example, +56 percentage points on implicit tasks (87% vs. 31% for SayCan) and +61 points on risk-aware tasks (76% vs. 15%)—and an overall score of 80% on SafeAgentBench. These improvements translate to more reliable performance in unstructured environments without domain-specific training.
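The three-module pipeline described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names (`extract_object_properties`, `process_user_request`, `plan`), the hard-coded concept lookup standing in for ConceptNet queries, and the goal dictionary format are all hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the ConceptBot pipeline: OPE -> URP -> Planner.
# The tiny lookup table stands in for real ConceptNet queries, and the
# rule-based disambiguation stands in for an LLM call.

def extract_object_properties(objects):
    """OPE module: augment detected objects with commonsense concepts."""
    CONCEPT_LOOKUP = {  # illustrative stand-in for ConceptNet
        "knife": ["sharp", "dangerous"],
        "sponge": ["absorbent", "soft"],
    }
    return {obj: CONCEPT_LOOKUP.get(obj, []) for obj in objects}

def process_user_request(request, properties):
    """URP module: resolve an ambiguous free-form instruction into a
    structured goal, here via a simple risk-aware rule."""
    if "something safe" in request:
        # risk-aware disambiguation: skip objects tagged as dangerous
        candidates = [o for o, p in properties.items() if "dangerous" not in p]
    else:
        candidates = list(properties)
    return {"action": "pick_and_place", "object": candidates[0], "target": "table"}

def plan(goal):
    """Planner: expand the structured goal into primitive pick-and-place steps."""
    return [f"pick({goal['object']})", f"place({goal['object']}, {goal['target']})"]

props = extract_object_properties(["knife", "sponge"])
goal = process_user_request("hand me something safe", props)
steps = plan(goal)  # -> ["pick(sponge)", "place(sponge, table)"]
```

The design point the sketch mirrors is that grounding (OPE) and disambiguation (URP) happen before planning, so the Planner only ever sees a structured, risk-filtered goal.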
Primary Area: applications to robotics, autonomy, planning
Submission Number: 12165