Enhancing Probabilistic Imitation Learning with Robotic Perception for Self-Organising Robot Workstation
Keywords: Imitation Learning, Computer Vision, Kernelized Movement Primitives
TL;DR: An end-to-end pipeline combining probabilistic imitation learning with computer vision to learn and adapt tasks from human demonstrations, shown on a self-organizing workstation task.
Abstract: The scalability of robotic systems is constrained by traditional programming methods that require specialized expertise. Learning from human demonstrations offers an intuitive alternative, but generalizing learned behaviors to changing surroundings remains a key challenge. In this work, we address this challenge by integrating Kernelized Movement Primitives (KMP) with computer vision, enabling robots to adapt object-centric tasks learned from demonstrations to varying object configurations. YOLO-based object detection and 3D pose estimation allow the system to capture variations in object placement at run time and adapt the learned trajectories accordingly, yielding precise interaction with objects regardless of where they are placed. We developed a scalable framework for collecting demonstration data on the robot and used BlenderProc to automatically generate extensive synthetic image datasets for training the object detector. We demonstrate the approach on a self-organizing workstation task, in which a 7-DOF robot autonomously cleans up scattered objects.
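The adaptation step the abstract describes, bending a demonstrated trajectory through a pose reported by the vision system, can be illustrated with a minimal one-dimensional KMP-style sketch. This is not the paper's implementation: all numbers (time grid, demonstrated means, variances, the detected position 0.9, the regularizer `lam`) are hypothetical, and a real KMP operates on full Cartesian trajectories with full covariance matrices rather than scalar variances. The core mechanism is the same, though: predict via kernel ridge regression over the demonstration, and adapt by inserting the detected pose as a via-point with near-zero variance so the trajectory is forced through it.

```python
import math

def rbf(a, b, h=0.05):
    """Gaussian (RBF) kernel over normalized time."""
    return math.exp(-((a - b) ** 2) / h)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kmp_predict(t_query, times, means, variances, lam=0.1):
    """KMP-style prediction: mu(t*) = k* (K + lam * Sigma)^{-1} mu,
    where Sigma is diagonal in this 1-D sketch."""
    n = len(times)
    K = [[rbf(times[i], times[j]) + (lam * variances[i] if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(K, means)
    return sum(rbf(t_query, times[j]) * w[j] for j in range(n))

# Hypothetical demonstrated 1-D trajectory: (normalized time, mean, variance).
times = [0.0, 0.25, 0.5, 0.75, 1.0]
means = [0.0, 0.2, 0.5, 0.8, 1.0]
variances = [1.0] * 5

# Prediction at mid-trajectory before adaptation (close to the demo's 0.5).
baseline = kmp_predict(0.5, times, means, variances)

# Adaptation: suppose the vision system reports the object at position 0.9
# (a made-up value). Insert it as a via-point with near-zero variance so
# the adapted trajectory is pulled through the detected pose.
adapted = kmp_predict(0.5, times + [0.5], means + [0.9], variances + [1e-6])
```

The via-point's tiny variance makes its row of the regularized Gram matrix nearly an exact interpolation constraint, so `adapted` lands at the detected position while the rest of the trajectory keeps the demonstrated shape.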
Submission Number: 50