Hierarchical World Models as Visual Whole-Body Humanoid Controllers

Published: 31 Oct 2024, Last Modified: 08 Nov 2024 · CoRL 2024 Workshop WCBM · CC BY 4.0
Keywords: reinforcement learning, world model, humanoid
TL;DR: We propose a hierarchical world model for visual whole-body control of humanoids, which produces highly performant policies as well as motions that are broadly preferred by humans.
Abstract: Whole-body control for humanoids is challenging due to the high-dimensional nature of the problem, coupled with the inherent instability of a bipedal morphology. Learning from visual observations further exacerbates this difficulty. In this work, we explore highly data-driven approaches to visual whole-body humanoid control based on reinforcement learning, without any simplifying assumptions, reward design, or skill primitives. Specifically, we propose a hierarchical world model in which a high-level agent generates commands based on visual observations for a low-level agent to execute, both of which are trained with rewards. Our approach produces highly performant control policies in 8 tasks with a simulated 56-DoF humanoid, while synthesizing motions that are broadly preferred by humans. Code and videos: https://rlpuppeteer.github.io
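The two-level control scheme described in the abstract — a high-level agent mapping visual observations to commands, and a low-level agent executing them on the 56-DoF humanoid — can be sketched as a simple control loop. This is a minimal illustration only: the class names (`HighLevelAgent`, `LowLevelAgent`), the linear-policy parameterization, and all dimensions except the 56-DoF action space are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class HighLevelAgent:
    """Maps a visual observation to an abstract command.
    A random linear policy stands in for the learned high-level agent."""
    def __init__(self, obs_dim, cmd_dim):
        self.W = rng.standard_normal((cmd_dim, obs_dim)) * 0.01

    def act(self, visual_obs):
        # tanh keeps the command vector bounded
        return np.tanh(self.W @ visual_obs)

class LowLevelAgent:
    """Executes the command given proprioceptive state, producing joint actions."""
    def __init__(self, state_dim, cmd_dim, action_dim):
        self.W = rng.standard_normal((action_dim, state_dim + cmd_dim)) * 0.01

    def act(self, proprio, command):
        return np.tanh(self.W @ np.concatenate([proprio, command]))

def control_step(high, low, visual_obs, proprio):
    """One tick of the hierarchy: vision -> command -> joint action."""
    command = high.act(visual_obs)
    action = low.act(proprio, command)
    return command, action

# Illustrative dimensions; only the 56-DoF action space comes from the paper.
high = HighLevelAgent(obs_dim=64, cmd_dim=8)
low = LowLevelAgent(state_dim=56, cmd_dim=8, action_dim=56)
cmd, act = control_step(high, low,
                        rng.standard_normal(64), rng.standard_normal(56))
```

In the actual method both levels are trained with rewards; here the weights are random and serve only to show the data flow between the two agents.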
Submission Number: 14