Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation

Published: 25 Jun 2025, Last Modified: 25 Jun 2025, Venue: Dex-RSS-25, License: CC BY 4.0
Keywords: Imitation Learning, Robot Perception, Sensing & Vision
TL;DR: We present a method that unifies robot observations and actions with key points and enables learning generalizable robot policies exclusively from human videos.
Abstract: Building robotic agents capable of operating across diverse environments and object types remains a significant challenge, largely because of the extensive data collection it requires. This requirement is particularly restrictive in robotics, where each data point must be physically executed in the real world. Consequently, there is a critical need for alternative data sources and for frameworks that enable robots to learn from them. In this work, we present Point Policy, a new method for learning robot policies exclusively from offline human demonstration videos, without any teleoperation data. Point Policy leverages state-of-the-art vision models and policy architectures to translate human hand poses into robot poses while capturing object states through semantically meaningful key points. This yields a morphology-agnostic representation that facilitates effective policy learning. Through experiments on a diverse set of real-world tasks, we demonstrate that Point Policy significantly outperforms prior methods for policy learning from human videos, performing well not only within the training distribution but also generalizing to novel object instances and cluttered environments. Videos of the robot are best viewed at point-policy.github.io.
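To make the unified representation concrete, below is a minimal, hypothetical Python sketch of the interface the abstract describes: both observations (object key points) and actions (hand or end-effector key points) live in the same point space, so the same policy can consume points tracked from human videos at training time and points from the robot's cameras at deployment. All names here (KeyPointPolicy, act, the point counts) are illustrative assumptions, not the authors' actual API, and the placeholder dynamics stand in for a learned model.

```python
import numpy as np


class KeyPointPolicy:
    """Hypothetical sketch: maps key-point observations to key-point actions.

    Both inputs and outputs are sets of points, which is what makes the
    representation morphology-agnostic -- the same slots hold human hand
    key points during training and robot end-effector key points at test time.
    """

    def __init__(self, num_object_points: int, num_hand_points: int):
        self.num_object_points = num_object_points
        self.num_hand_points = num_hand_points

    def act(self, object_points: np.ndarray, hand_points: np.ndarray) -> np.ndarray:
        """Predict the next hand/end-effector key points.

        object_points: (num_object_points, 3) semantically meaningful points
                       tracked on the object.
        hand_points:   (num_hand_points, 3) current hand or robot points.
        Returns:       (num_hand_points, 3) predicted next points.
        """
        # Placeholder dynamics: drift the hand points toward the object
        # centroid. A learned policy (e.g. a network over point tracks)
        # would replace this line.
        target = object_points.mean(axis=0)
        return hand_points + 0.1 * (target - hand_points)


# Usage: the same call works whether the points came from a human video
# or from the robot's cameras, since both are just key points.
policy = KeyPointPolicy(num_object_points=8, num_hand_points=5)
obj = np.random.rand(8, 3)
hand = np.random.rand(5, 3)
next_hand = policy.act(obj, hand)
print(next_hand.shape)  # (5, 3)
```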
Submission Number: 6