Keywords: Imitation learning, Sim2real, Representation Learning
Abstract: Learning visuomotor policies through imitation learning often suffers from perceptual challenges, where visual differences between training and evaluation environments degrade policy performance. Policies relying on state estimates such as 6D pose require task-specific tracking and are difficult to scale, while raw sensor-based policies may lack robustness to small visual disturbances. In this work, we leverage 2D keypoints (spatially consistent features in the image frame) as a state representation for robust policy learning, and apply it to both sim-to-real transfer and real-world imitation learning. However, the choice of which keypoints to use can vary across objects and tasks. We propose a novel method, ATK, to automatically select keypoints in a task-driven manner, such that the chosen keypoints are predictive of optimal behavior for the given task. Our approach optimizes for a minimal set of task-relevant keypoints that preserves policy performance and robustness. We distill expert data (either from an expert policy in simulation or a human expert) into a policy that operates on RGB images while tracking the selected keypoints. By leveraging pre-trained visual modules, our system effectively tracks keypoints and transfers policies to real-world evaluation scenarios, even under perceptual challenges such as transparent objects, fine-grained manipulation, and widely varying scene appearance. We validate our approach on a variety of robotic tasks, demonstrating that these minimal keypoint representations improve robustness to visual disturbances and environmental variations.
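To make the idea concrete, below is a minimal sketch of task-driven keypoint selection under assumptions not stated in the abstract: it supposes expert demonstrations are available as (candidate keypoints, expert action) pairs, and it uses a hypothetical sigmoid gate with an L1 sparsity penalty jointly trained with a behavior-cloning policy. The class and function names (`KeypointSelectorPolicy`, `training_step`), the MLP policy, and the sparsity weight are illustrative choices, not the paper's released implementation.

```python
# Minimal sketch: jointly learn a sparse gate over candidate 2D keypoints
# and a policy mapping the gated keypoints to actions (hypothetical design,
# not the authors' code).
import torch
import torch.nn as nn

class KeypointSelectorPolicy(nn.Module):
    def __init__(self, num_candidates: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # One learnable logit per candidate keypoint; sigmoid(logit) acts
        # as a soft "keep" probability for that keypoint.
        self.gate_logits = nn.Parameter(torch.zeros(num_candidates))
        self.policy = nn.Sequential(
            nn.Linear(num_candidates * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, keypoints: torch.Tensor):
        # keypoints: (batch, num_candidates, 2) pixel coordinates.
        gates = torch.sigmoid(self.gate_logits)       # (num_candidates,)
        masked = keypoints * gates.view(1, -1, 1)     # soft-mask unselected points
        action = self.policy(masked.flatten(start_dim=1))
        return action, gates

def training_step(model, optimizer, keypoints, expert_actions,
                  sparsity_weight: float = 1e-2):
    # One behavior-cloning step; the L1 penalty on the gates drives the
    # selection toward a minimal set of task-relevant keypoints.
    action, gates = model(keypoints)
    bc_loss = nn.functional.mse_loss(action, expert_actions)
    loss = bc_loss + sparsity_weight * gates.abs().sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = KeypointSelectorPolicy(num_candidates=32, action_dim=7)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    kp = torch.rand(64, 32, 2)      # dummy batch standing in for expert data
    act = torch.randn(64, 7)
    for _ in range(10):
        training_step(model, opt, kp, act)
    # Keypoints whose gate remains near 1 are the task-relevant ones.
    print((torch.sigmoid(model.gate_logits) > 0.5).nonzero().flatten())
```

In a sketch like this, the surviving gated keypoints would then be tracked on RGB images by a pre-trained visual module at deployment, so the policy consumes only the selected, spatially consistent features.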
Supplementary Material: zip
Spotlight: mp4
Submission Number: 450