Beyond the Teacher: Leveraging Mixed-Skill Demonstrations for Robust Imitation Learning

Published: 06 Sept 2025, Last Modified: 26 Sept 2025 · CoRL 2025 Robot Data Workshop · CC BY 4.0
Keywords: Imitation Learning, Naive Demonstrations, Demonstration Filtering, Expert-like Learning, DMP, LSTM
TL;DR: We enable robot learning from imperfect data by using the best example to correct the others and synthesize a clean, expert-like dataset.
Abstract: Achieving expert-like robotic task execution in dynamic environments typically requires extensive, high-quality expert demonstrations, a significant bottleneck for real-world deployment. We present a novel learning framework that overcomes this data dependency, enabling robots to perform complex periodic tasks with expert-like proficiency even when learning from naive demonstrations. Our two-stage pipeline first selects a representative demonstration based on user-defined task intention scores. This single best demonstration is then used to extract a canonical motion shape via Periodic Dynamic Movement Primitives (DMPs). Finally, a Long Short-Term Memory (LSTM) network refines the entire set of demonstrations, leveraging a multi-objective loss that combines the canonical shape with other task quality metrics. The proposed approach is demonstrated on a Franka Emika robot performing periodic tasks such as wiping and stirring.
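
The sketch below illustrates the two-stage pipeline described in the abstract, under stated assumptions: the intention score, the Fourier-based stand-in for a periodic DMP forcing term, the `RefinerLSTM` module, and the loss weights are all illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the two-stage pipeline (assumptions: fixed-length demos,
# smoothness as the task-intention score, Fourier fit as a periodic-DMP proxy).
import numpy as np
import torch
import torch.nn as nn


def intention_score(demo: np.ndarray) -> float:
    """Placeholder task-intention score: lower mean squared jerk -> higher score."""
    jerk = np.diff(demo, n=3, axis=0)
    return -float(np.mean(jerk ** 2))


def fit_periodic_shape(demo: np.ndarray, n_harmonics: int = 5) -> np.ndarray:
    """Extract a canonical periodic shape via a truncated Fourier fit,
    a simple stand-in for a periodic DMP forcing term."""
    T, _ = demo.shape
    phase = np.linspace(0.0, 2 * np.pi, T, endpoint=False)
    basis = [np.ones(T)]
    for k in range(1, n_harmonics + 1):
        basis += [np.sin(k * phase), np.cos(k * phase)]
    Phi = np.stack(basis, axis=1)                         # (T, 2K+1)
    weights, *_ = np.linalg.lstsq(Phi, demo, rcond=None)
    return Phi @ weights                                  # canonical shape, (T, D)


class RefinerLSTM(nn.Module):
    """LSTM that maps a raw demonstration to a refined trajectory."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):                                 # x: (B, T, D)
        h, _ = self.lstm(x)
        return self.head(h)


def refine(demos, epochs=200, w_shape=1.0, w_smooth=0.1):
    """Stage 1: pick the best demo; Stage 2: refine all demos toward its shape."""
    best = max(demos, key=intention_score)
    shape = torch.tensor(fit_periodic_shape(best), dtype=torch.float32)
    X = torch.tensor(np.stack(demos), dtype=torch.float32)        # (N, T, D)
    model = RefinerLSTM(X.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        out = model(X)
        loss_shape = ((out - shape) ** 2).mean()          # match canonical shape
        loss_smooth = (out[:, 1:] - out[:, :-1]).pow(2).mean()  # task-quality proxy
        loss = w_shape * loss_shape + w_smooth * loss_smooth     # multi-objective loss
        opt.zero_grad(); loss.backward(); opt.step()
    return model(X).detach().numpy()
```

In this sketch the "other task quality metrics" from the abstract are reduced to a single smoothness term for brevity; the paper's actual loss terms and DMP formulation may differ.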
Lightning Talk Video: mp4
Submission Number: 38