Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning

Published: 18 Jun 2025, Last Modified: 23 Jun 2025 · OOD Workshop @ RSS 2025 · CC BY 4.0
Keywords: Reinforcement Learning, Learning from Demonstrations, Robotics, Manipulation
TL;DR: We propose a framework for multi-stage manipulation tasks with sparse rewards and visual inputs; our framework combines learned dense rewards, model-based RL, a bi-phasic training scheme, and a small number of demonstrations.
Abstract: Long-horizon tasks in robotic manipulation present significant challenges in reinforcement learning (RL) due to the difficulty of designing dense reward functions and effectively exploring the expansive state-action space. However, despite a lack of dense rewards, these tasks often have a multi-stage structure, which can be leveraged to decompose the overall objective into manageable sub-goals. In this work, we propose DEMO³, a framework that exploits this structure for efficient learning from visual inputs. Specifically, our approach incorporates multi-stage dense reward learning, a bi-phasic training scheme, and world model learning into a carefully designed demonstration-augmented RL framework that strongly mitigates the challenge of exploration in long-horizon tasks. Our evaluations demonstrate that our method improves data-efficiency by an average of 40% and by 70% on particularly difficult tasks compared to state-of-the-art approaches. We validate this across 16 sparse-reward tasks spanning four domains, including challenging humanoid visual control tasks using as few as five demonstrations.
Submission Number: 11