Keywords: Dexterous In-Hand Manipulation, Reinforcement Learning
Abstract: In-hand manipulation of pen-like objects is a fundamental and important skill in our daily lives, as many tools such as hammers and screwdrivers are similarly shaped. However, current learning-based methods struggle with this task due to a lack of high-quality demonstrations and the significant gap between simulation and the real world. In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects. We use reinforcement learning to train a policy and generate a high-fidelity trajectory dataset in simulation. This dataset serves two purposes: 1) pre-training a sensorimotor policy in simulation; 2) conducting open-loop trajectory replay in the real world. We then fine-tune the sensorimotor policy using these real-world trajectories to adapt it to the real world. With fewer than 50 trajectories, our policy learns to rotate more than ten pen-like objects with different physical properties for multiple revolutions. We present a comprehensive analysis of our design choices and share the lessons learned during development. Videos are available at https://corl-2024-dexpen.github.io/.
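The abstract outlines a two-stage pipeline: pre-train a sensorimotor policy on RL-generated simulation trajectories, then fine-tune it on a small set of real-world trajectories collected via open-loop replay. The following is a minimal sketch of that pipeline under the assumption that both stages are simple supervised regression on observation-action pairs; all class, function, and variable names (e.g., SensorimotorPolicy, behavior_clone) are illustrative and not taken from the released code.

```python
# Minimal sketch (not the released implementation): pre-train a sensorimotor
# policy on simulation trajectories, then fine-tune it on a small set of
# real-world trajectories. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SensorimotorPolicy(nn.Module):
    """Maps sensory observations to joint-position targets."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def behavior_clone(policy, obs, actions, epochs: int, lr: float) -> None:
    """Supervised regression of the policy's actions onto dataset actions."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(obs), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    obs_dim, act_dim = 96, 16  # placeholder dimensions
    policy = SensorimotorPolicy(obs_dim, act_dim)

    # Stage 1: pre-train on a large RL-generated simulation dataset
    # (random tensors stand in for the actual trajectories).
    sim_obs, sim_act = torch.randn(10_000, obs_dim), torch.randn(10_000, act_dim)
    behavior_clone(policy, sim_obs, sim_act, epochs=50, lr=1e-3)

    # Stage 2: fine-tune on a small real-world dataset obtained by
    # open-loop replay of the simulated trajectories on the robot hand.
    real_obs, real_act = torch.randn(500, obs_dim), torch.randn(500, act_dim)
    behavior_clone(policy, real_obs, real_act, epochs=20, lr=3e-4)
```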
Supplementary Material: zip
Spotlight Video: mp4
Website: https://penspin.github.io/
Code: https://github.com/HaozhiQi/penspin
Publication Agreement: pdf
Student Paper: yes
Submission Number: 158