Keywords: vision-language-action models, object-centric representations, robotic manipulation, imitation learning
TL;DR: We drastically reduce the number of visual tokens processed by VLAs to only those capturing the objects that matter to the agent, and find that our approach improves training efficiency while slightly outperforming its non-object-centric counterpart.
Abstract: Vision-Language-Action (VLA) models offer a pivotal approach to learning robotic manipulation at scale by repurposing large pre-trained Vision-Language Models (VLMs) to output robotic actions. However, adapting VLMs to robotic domains comes with an unnecessarily high computational cost, which we attribute to the tokenization scheme of visual inputs. In this work, we aim to enable efficient VLA training by proposing Oat-VLA, an Object-Agent-centric Tokenization for VLAs. Building on insights from object-centric representation learning, our method introduces an inductive bias towards scene objects and the agent's own visual information. As a result, we find that Oat-VLA can drastically reduce the number of visual tokens to just a handful without sacrificing performance. We reveal that Oat-VLA converges at least twice as fast as OpenVLA on the LIBERO suite and outperforms OpenVLA on diverse real-world pick-and-place tasks.
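The abstract does not specify how object and agent tokens are formed; below is a minimal illustrative sketch, not the paper's implementation, assuming mask-weighted pooling of ViT patch features with object masks from an unspecified external source (e.g., an off-the-shelf segmenter) and an agent mask covering the gripper.

```python
# Illustrative sketch only: collapse a full grid of ViT patch tokens into a few
# object-centric tokens plus one agent token. Mask source and pooling scheme
# are assumptions, not details taken from the paper.
import torch


def object_agent_tokens(patch_feats, obj_masks, agent_mask):
    """patch_feats: [N, D] ViT patch embeddings for an H x W patch grid.
    obj_masks:   [K, N] soft or binary masks, one per detected object.
    agent_mask:  [N]    mask covering the agent (e.g., the gripper).
    Returns [K + 1, D] visual tokens: one per object plus one agent token."""
    w = obj_masks / obj_masks.sum(dim=1, keepdim=True).clamp(min=1e-6)
    obj_tokens = w @ patch_feats                      # [K, D] mask-weighted pooling
    a = agent_mask / agent_mask.sum().clamp(min=1e-6)
    agent_token = (a @ patch_feats).unsqueeze(0)      # [1, D]
    return torch.cat([obj_tokens, agent_token], dim=0)


# Example: 256 patch tokens (16 x 16 grid) reduced to 5 object tokens + 1 agent token.
feats = torch.randn(256, 1024)
masks = (torch.rand(5, 256) > 0.9).float()
agent = (torch.rand(256) > 0.95).float()
print(object_agent_tokens(feats, masks, agent).shape)  # torch.Size([6, 1024])
```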
Supplementary Material: zip
Spotlight: zip
Submission Number: 1123