Maestro: Orchestrating Robotics Modules with Vision-Language Models for Zero-Shot Generalist Robots

Published: 23 Sept 2025, Last Modified: 19 Nov 2025, SpaVLE Oral, CC BY 4.0
Keywords: Vision-Language Models, Generalist Robots
TL;DR: Maestro: a VLM coding agent that composes diverse robotics-related tools into programmatic policies. Its streamlined closed-loop interface and tool repertoire allow it to largely surpass today's VLA models on challenging zero-shot manipulation tasks.
Abstract: Today’s best-explored routes towards generalist robots center on collecting ever-larger “observations in, actions out” robotics datasets to train large end-to-end models, copying a recipe that has worked for vision-language models (VLMs). We pursue a road less traveled: building generalist policies directly around VLMs by augmenting their general capabilities with specific robot capabilities encapsulated in a carefully curated set of perception, planning, and control modules. In Maestro, a VLM coding agent dynamically composes these modules into a programmatic policy for the current task and scenario. Maestro's architecture benefits from a streamlined closed-loop interface without many manually imposed structural constraints, and a comprehensive and diverse tool repertoire. As a result, it largely surpasses today’s VLA models in zero-shot performance on challenging manipulation skills. Further, Maestro is easily extensible to incorporate new modules, easily editable to suit new embodiments such as a quadruped-mounted arm, and even adapts easily from minimal real-world experience through local code edits. See our project site maestro-robot.github.io/ for videos and supplementary material.
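To make the abstract's core idea concrete, here is a minimal sketch of what a "programmatic policy" composed from modular tools might look like. Everything below is an illustrative assumption: the tool names (`detect_object`, `plan_grasp`, `move_to`), their signatures, and the stub implementations are hypothetical and are not the paper's actual API; the point is only the pattern of a VLM-written program wiring perception, planning, and control into a closed loop.

```python
# Hypothetical sketch (not the paper's API): a programmatic policy that
# composes stub perception, planning, and control "tools" in a closed loop.

def detect_object(scene, name):
    # Stub perception module: return the object's (x, y, z) pose, or None.
    return scene.get(name)

def plan_grasp(pose):
    # Stub planning module: a pre-grasp pose offset above the object.
    x, y, z = pose
    return (x, y, z + 0.1)

def move_to(state, target):
    # Stub control module: "execute" the motion by updating the state.
    state["ee_pose"] = target
    return state

def pick_policy(scene, state, target_name, max_steps=5):
    """The kind of code a VLM coding agent might emit: ordinary control
    flow over tool calls, re-querying perception each iteration so the
    loop stays closed around live observations."""
    for _ in range(max_steps):
        pose = detect_object(scene, target_name)
        if pose is None:
            return state, False  # target not visible; report failure
        grasp = plan_grasp(pose)
        state = move_to(state, grasp)
        if state["ee_pose"] == grasp:  # success check closes the loop
            return state, True
    return state, False
```

Because the policy is plain code rather than monolithic network weights, the extensibility claims in the abstract follow naturally: adding a module means adding a function to the repertoire, and adapting from real-world experience means a local edit to a policy like the one above.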
Submission Type: Long Research Paper (< 9 Pages)
Submission Number: 90