Sketch-Plan-Generalize: Learning and Planning with Neuro-Symbolic Programmatic Representations for Inductive Spatial Concepts
Keywords: Concept Learning, Neuro-Symbolic Planning, Continual Learning, Learning from Demonstrations
TL;DR: Continual learning of inductive programs for spatial concepts from human demonstrations via a neuro-symbolic pipeline.
Track: Long Paper (up to 9 pages)
Abstract: Effective human-robot collaboration requires the ability to learn personalized concepts from a limited number of demonstrations, while exhibiting inductive generalization, hierarchical composition, and adaptability to novel constraints. Existing approaches that use the code-generation capabilities of pre-trained large (vision) language models, as well as purely neural models, generalize poorly to _a-priori_ unseen complex concepts. Neuro-symbolic methods offer a promising alternative by searching in program space, but struggle in large program spaces because they cannot effectively guide the search with demonstrations. Our key insight is to factor inductive concept learning as: (i) _Sketch:_ detecting a new concept and inferring its coarse signature; (ii) _Plan:_ performing an MCTS search over grounded action sequences, guided by human demonstrations; (iii) _Generalize:_ abstracting grounded plans into inductive programs. Our pipeline facilitates generalization and modular re-use, enabling continual concept learning. Our approach combines the code-generation ability of large language models (LLMs) with grounded neural representations, resulting in neuro-symbolic programs that show stronger inductive generalization on the task of constructing complex structures vis-à-vis LLM-only and purely neural approaches. Further, we demonstrate reasoning and planning capabilities with learned concepts for embodied instruction following.
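To make the _Plan_ step concrete, the following is a minimal, self-contained sketch of MCTS over grounded placement actions, rewarded by overlap with a demonstrated structure. It is an illustrative toy on an invented grid domain, not the paper's implementation; all names (`mcts_plan`, `demo`, `candidates`) are hypothetical.

```python
import math
import random

def mcts_plan(demo, candidates, iters=4000, seed=0):
    """Toy 'Plan' step: MCTS over grounded place(x, y) actions,
    scored by overlap with a human demonstration."""
    rng = random.Random(seed)
    goal = frozenset(demo)
    horizon = len(goal)
    N, Q, children = {}, {}, {}  # visit counts, total reward, expanded children

    def reward(state):
        return len(state & goal) / horizon

    def expand(state):
        children[state] = [state | {c} for c in candidates if c not in state]

    def rollout(state):
        # Complete the structure with random placements, then score it.
        s = set(state)
        while len(s) < horizon:
            s.add(rng.choice([c for c in candidates if c not in s]))
        return reward(frozenset(s))

    root = frozenset()
    expand(root)
    for _ in range(iters):
        path, s = [root], root
        # Selection: descend by UCB1 until an unexplored child or a leaf.
        while s in children and len(s) < horizon:
            fresh = [c for c in children[s] if c not in N]
            if fresh:
                s = rng.choice(fresh)
                path.append(s)
                break
            s = max(children[s],
                    key=lambda c: Q[c] / N[c]
                    + math.sqrt(2 * math.log(N[path[-1]]) / N[c]))
            path.append(s)
        if s not in children and len(s) < horizon:
            expand(s)
        r = rollout(s)
        for node in path:  # backpropagation
            N[node] = N.get(node, 0) + 1
            Q[node] = Q.get(node, 0.0) + r

    # Extract the most-visited action sequence as the grounded plan.
    plan, s = [], root
    while s in children:
        visited = [c for c in children[s] if c in N]
        if not visited:
            break
        nxt = max(visited, key=lambda c: N[c])
        plan.append(next(iter(nxt - s)))
        s = nxt
    return plan

demo = [(0, 0), (0, 1), (0, 2)]                      # a 3-block tower
grid = [(x, y) for x in range(3) for y in range(3)]  # candidate placements
plan = mcts_plan(demo, grid)
```

The returned grounded plan is what the _Generalize_ step would then abstract into an inductive program (e.g. a parameterized `tower(n)`), enabling re-use at unseen sizes.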
Format: We have read the camera-ready instructions, and our paper is formatted with the provided template.
De-Anonymization: This submission has been de-anonymized.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 22