Keywords: embodied ai; robot learning; imitation learning; policy learning
TL;DR: Stop using just the last layer of your vision model for robotics; our work shows that leveraging the features from all layers significantly boosts performance.
Abstract: In robot learning, Vision Transformers (ViTs) are standard for visual perception, yet most methods discard valuable information by using only the final layer's features. We argue this provides an insufficient representation and propose the Vision Action Transformer (VAT), a novel architecture that extends the ViT backbone and unlocks its full feature hierarchy. VAT processes specialized action tokens together with visual features across all transformer layers, enabling a deep and progressive fusion of perception and action generation. On a suite of simulated manipulation tasks, VAT achieves a 98.15\% average success rate across four LIBERO benchmarks, establishing a new state of the art and outperforming prior methods such as OpenVLA-OFT. Our work presents not only a powerful model for imitation learning but also demonstrates the critical importance of leveraging the complete "representation trajectory" of vision models to advance robotic policy learning.
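The sketch below illustrates the idea the abstract describes, not the authors' actual implementation: learnable action tokens are appended to the visual token sequence and interact with image features in every transformer layer, rather than reading out only the final layer. All names and hyperparameters (VATSketch, num_action_tokens, action_dim, and so on) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VATSketch(nn.Module):
    """Minimal sketch: action tokens processed jointly with visual tokens at all depths."""

    def __init__(self, embed_dim=768, depth=12, num_heads=12,
                 num_patches=196, num_action_tokens=8, action_dim=7):
        super().__init__()
        # Stand-in patch projector (a real ViT would use a conv patch embedding).
        self.patch_embed = nn.Linear(3 * 16 * 16, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Learnable action tokens, one set shared across the batch.
        self.action_tokens = nn.Parameter(torch.zeros(1, num_action_tokens, embed_dim))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(embed_dim, num_heads,
                                       dim_feedforward=4 * embed_dim,
                                       batch_first=True, norm_first=True)
            for _ in range(depth)
        ])
        # Maps each action token to a low-level action vector.
        self.action_head = nn.Linear(embed_dim, action_dim)

    def forward(self, patches):
        # patches: (B, num_patches, 3*16*16) flattened image patches
        x = self.patch_embed(patches) + self.pos_embed
        a = self.action_tokens.expand(x.size(0), -1, -1)
        tokens = torch.cat([x, a], dim=1)
        # Joint processing through *all* layers: action tokens attend to visual
        # features at every depth instead of only the last layer's output.
        for layer in self.layers:
            tokens = layer(tokens)
        action_out = tokens[:, -a.size(1):]
        return self.action_head(action_out)  # (B, num_action_tokens, action_dim)
```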
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 12061