A Hierarchical Reinforcement Learning Approach to Control Legged Mobile Manipulators

Anonymous

07 Nov 2022 (modified: 05 May 2023), CoRL Agility Workshop 2022
Keywords: Reinforcement Learning, Quadruped Robots, Object Manipulation
TL;DR: Teaching a robot dog with a robot arm to play fetch with hierarchical reinforcement learning
Abstract: Recent years have seen a Cambrian explosion of robotic systems, yielding ever more capable and affordable platforms, with quadrupedal robots emerging as a commercially viable base for performing a wide variety of tasks across uneven terrain. Augmenting these with a robotic arm enables even more complex interactions. At the same time, a growing body of research applies deep reinforcement learning (DRL) to embodied agent navigation and object manipulation, promising a more sample-efficient, flexible, and robust approach to learning such policies than existing classical methods. Recent works have shown a functional approach to learning a joint base-and-arm policy with DRL but have not yet demonstrated how the result can be used in downstream tasks. In this work, we investigate learning an object manipulation and navigation policy for a quadrupedal robot with a mounted robotic arm; specifically, we address the problem of autonomously fetching stationary and moving objects (“playing fetch” with the robot dog). Our method consists of (a) a low-level policy that moves the base and arm and (b) a high-level policy that generates commands for the low-level policy. The low-level policy is learned jointly for the arm and the base, mapping directional commands to joint torques. The high-level policy is task-specific: it translates the ball position into directional commands for the low-level policy and handles acceleration, deceleration, and stability. We demonstrate that our high-level policy can outperform a tuned Proportional-Derivative (PD) controller.
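The two-level scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names, gains, and the stubbed low-level policy are all assumptions. It shows the PD baseline that the learned high-level policy is compared against, mapping the ball's position relative to the base to a directional command, which a low-level policy would then turn into joint torques.

```python
import numpy as np


class PDHighLevelController:
    """Baseline high-level policy (hypothetical sketch): a tuned PD law
    mapping the ball's position relative to the robot base to a
    directional velocity command for the low-level policy.
    Gains are illustrative, not the paper's values."""

    def __init__(self, kp=1.5, kd=0.4):
        self.kp = kp
        self.kd = kd
        self.prev_error = None

    def command(self, ball_pos_xy, base_pos_xy, dt=0.02):
        # Positional error between ball and base in the ground plane.
        error = np.asarray(ball_pos_xy, float) - np.asarray(base_pos_xy, float)
        # Finite-difference derivative term (zero on the first step).
        d_error = np.zeros(2) if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Directional command (vx, vy) handed to the low-level policy.
        return self.kp * error + self.kd * d_error


def hierarchical_step(high_level, low_level_policy, ball_pos, base_pos):
    """One control step: high-level policy emits a directional command;
    the low-level policy maps it to joint torques."""
    cmd = high_level.command(ball_pos, base_pos)
    return low_level_policy(cmd)


if __name__ == "__main__":
    # Stub standing in for the learned joint base-and-arm policy.
    def stub_low_level(cmd):
        return np.concatenate([cmd, np.zeros(10)])  # fake 12-DoF torque vector

    ctrl = PDHighLevelController()
    torques = hierarchical_step(ctrl, stub_low_level, [1.0, 0.0], [0.0, 0.0])
    print(torques[:2])  # the command component passed through the stub
```

In the paper's method the PD controller above is replaced by a learned, task-specific high-level policy, while the interface (directional commands in, joint torques out of the low-level policy) stays the same.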