TwinVLA: Data-Efficient Bimanual Manipulation with Twin Single-Arm Vision-Language-Action Models

ICLR 2026 Conference Submission 16174 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: VLA, Bimanual manipulation, Imitation Learning
TL;DR: We introduce TwinVLA, a vision-language-action (VLA) model for bimanual manipulation that fuses pretrained single-arm VLA models. This design reduces reliance on scarce bimanual data while achieving comparable performance.
Abstract: Vision-language-action models (VLAs) trained on large-scale robotic datasets have demonstrated strong performance on manipulation tasks, including bimanual tasks. However, because most public datasets focus on single-arm demonstrations, adapting VLAs for bimanual tasks typically requires substantial additional bimanual data and fine-tuning. To address this challenge, we introduce TwinVLA, a modular framework that composes two copies of a pretrained single-arm VLA into a coordinated bimanual VLA. Unlike monolithic cross-embodiment models trained on mixtures of single-arm and bimanual data, TwinVLA improves both data efficiency and performance by fusing pretrained single-arm policies. Across diverse bimanual tasks in real-world and simulation settings, TwinVLA matches or exceeds previous approaches trained with larger data and compute budgets without requiring *any* bimanual pretraining. These results highlight modular policy composition as a scalable route to bimanual manipulation using existing public single-arm data.
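The abstract describes composing two copies of a pretrained single-arm VLA into one bimanual policy. The following is a minimal sketch of that idea, not the paper's actual implementation: `SingleArmVLA`, its call signature, and the simple per-arm concatenation of actions are assumptions for illustration, and the sketch omits whatever coordination mechanism the paper uses to make the two twins act jointly.

```python
import torch
import torch.nn as nn


class TwinVLASketch(nn.Module):
    """Hypothetical sketch: two twins initialized from the same pretrained
    single-arm VLA, each predicting actions for one arm."""

    def __init__(self, single_arm_policy_factory):
        super().__init__()
        # Both twins start from the same pretrained single-arm weights.
        self.left = single_arm_policy_factory()
        self.right = single_arm_policy_factory()

    def forward(self, image, instruction, proprio_left, proprio_right):
        # Each twin sees the shared image and instruction plus its own arm's state.
        action_left = self.left(image, instruction, proprio_left)
        action_right = self.right(image, instruction, proprio_right)
        # A bimanual action is formed by concatenating the two per-arm actions.
        return torch.cat([action_left, action_right], dim=-1)
```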
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 16174