Contrastive In-Context Learning with Active Memory for Task Planning

Published: 06 Sept 2025, Last Modified: 26 Sept 2025
Venue: CoRL 2025 Robot Data Workshop
License: CC BY 4.0
Keywords: Task planning, language model, contrastive in-context learning
Abstract: Large language models (LLMs) have shown great promise in robotic task planning through in-context learning with a few successful demonstrations that guide the model to generate feasible task plans. However, existing approaches typically overlook the learning signals in failures, relying solely on manually curated positive examples. Recent methods attempt to utilize failures by transforming them into feedback or improved plans through LLMs, but this requires additional steps to generate knowledge that can serve as positive examples for learning. Moreover, these methods often store examples indiscriminately in an external memory, leading to unbounded memory growth and inefficiencies in both retrieval and resource usage. In this work, we propose a contrastive in-context learning framework for task planning that utilizes both successful and failed demonstrations through a dual prompting strategy. It enables the model to learn by contrasting positive prompts, built from successful demonstrations to guide correct behaviors, against negative prompts, built from failed demonstrations to prevent recurring mistakes. To implement this strategy efficiently, we introduce an active memory with a limited budget that selectively incorporates useful demonstrations generated by an LLM during task planning. Experiments on diverse multi-object, long-horizon, and spatially constrained manipulation tasks show that our method improves the average task success rate by 10.7% while reducing memory size by up to 75.0% compared to existing LLM-based task planning approaches.
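The abstract's two core mechanisms, a dual (positive/negative) prompt and a budget-bounded memory, can be illustrated with a minimal sketch. All class and field names below (`Demo`, `ActiveMemory`, `utility`, the `[GOOD]`/`[BAD]` prompt tags, and utility-based eviction) are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Demo:
    """A stored task-planning demonstration (hypothetical schema)."""
    task: str
    plan: str
    success: bool
    utility: float  # assumed usefulness score used for eviction

class ActiveMemory:
    """Keeps successful and failed demonstrations under a fixed budget."""
    def __init__(self, budget: int):
        self.budget = budget
        self.demos: list[Demo] = []

    def add(self, demo: Demo) -> None:
        self.demos.append(demo)
        if len(self.demos) > self.budget:
            # Evict the least useful demonstration so memory stays bounded.
            self.demos.sort(key=lambda d: d.utility, reverse=True)
            self.demos = self.demos[: self.budget]

    def build_prompt(self, task: str) -> str:
        # Dual prompting: successes guide correct behavior,
        # failures warn the LLM away from recurring mistakes.
        pos = "\n".join(f"[GOOD] {d.task} -> {d.plan}"
                        for d in self.demos if d.success)
        neg = "\n".join(f"[BAD] {d.task} -> {d.plan}"
                        for d in self.demos if not d.success)
        return (f"Successful examples:\n{pos}\n\n"
                f"Failed examples (avoid these mistakes):\n{neg}\n\n"
                f"New task: {task}\nPlan:")
```

Under this sketch, the prompt assembled for a new task contrasts both example types, while `add` enforces the memory budget by discarding low-utility demonstrations.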
Supplementary Material: zip
Lightning Talk Video: mp4
Submission Number: 35