Recent advances in robot learning have enabled robots to perform a wide range of tasks. However, generalizing policies from training environments to deployment environments remains a major challenge, and improving these policies by collecting and annotating demonstrations in the target environment is both costly and time-consuming. To address this issue, we propose Embodied Scene Cloning, a novel visual-prompt-based framework that generates visually aligned trajectories from existing data by leveraging visual cues from the specific deployment environment, thereby minimizing the impact of environmental discrepancies on policy performance. Unlike traditional embodied augmentation methods that rely on text prompts, we "clone" source demonstrations into the target environment and edit them with visual prompts, effectively improving generalization in the specific embodied scene. Experimental results demonstrate that samples generated by Embodied Scene Cloning significantly enhance the generalization ability of policies in the target deployment environment, representing a meaningful advancement in embodied data augmentation.
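To make the cloning idea concrete, below is a minimal sketch of one way such augmentation could be realized: the task-relevant foreground of each source demonstration frame is composited onto an image of the target deployment scene, while the recorded actions are reused unchanged. This is an illustrative assumption only; the function names (`clone_frame`, `clone_trajectory`), the availability of foreground masks, and the mask-based compositing are all hypothetical and do not represent the paper's actual visual-prompt editing pipeline, which the abstract does not specify.

```python
from PIL import Image
import numpy as np

def clone_frame(source_frame: Image.Image,
                foreground_mask: Image.Image,
                target_background: Image.Image) -> Image.Image:
    """Hypothetical per-frame 'clone': paste the task-relevant foreground
    of a source demonstration frame onto the target deployment scene."""
    src = np.asarray(source_frame.convert("RGB"), dtype=np.uint8)
    # Resize the target-scene image to match the source observation.
    bg = np.asarray(target_background.convert("RGB")
                    .resize(source_frame.size), dtype=np.uint8)
    # Boolean mask marking the robot/object foreground to keep.
    mask = np.asarray(foreground_mask, dtype=bool)[..., None]
    return Image.fromarray(np.where(mask, src, bg))

def clone_trajectory(frames, masks, target_background):
    """Re-render every observation of a demonstration in the target scene.
    Actions are reused unchanged; only the visuals are cloned."""
    return [clone_frame(f, m, target_background)
            for f, m in zip(frames, masks)]
```

In a real system, the simple mask compositing here would presumably be replaced by a generative visual-prompt editor so that lighting, shadows, and occlusions match the target scene rather than producing a cut-and-paste artifact.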