Keywords: robotics, humanoids, vision-language-action models, whole-body control
TL;DR: The first latent vision-language action model for humanoid whole-body control
Abstract: Vision–language–action (VLA) models have demonstrated strong semantic understanding and zero-shot generalization, yet most existing systems assume an accurate low-level controller with a hand-crafted action "vocabulary" such as end-effector pose or root velocity. This assumption confines prior work to quasi-static tasks and precludes the agile, whole-body behaviors required for humanoid whole-body control (WBC). To close this gap in the literature, we first introduce the first sim-to-real-ready, vision-language, closed-loop benchmark for humanoid WBC, comprising over 150 tasks across 10 categories. We then propose LeVERB: Latent Vision-Language-Encoded Robot Behavior, the first hierarchical latent instruction-following framework for humanoid vision-language WBC. At the top level, a vision–language policy learns a latent action vocabulary from synthetically rendered kinematic demonstrations; at the low level, a reinforcement-learned WBC policy consumes these latent verbs to generate dynamics-level commands. On our benchmark, LeVERB attains a 58.5% success rate, outperforming a naive hierarchical VLA implementation by 7.8 times, and can be deployed zero-shot on real hardware.
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 21713