OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis

Published: 03 Jun 2025 · Last Modified: 03 Jun 2025 · RSS MoMa 2025 · CC BY 4.0
Keywords: Embodied AI, Mobile Manipulation, Agentic Data Synthesis, Vision-Language Model, LLM Agent
TL;DR: We introduce an embodied VLM agent, fine-tuned through agentic data synthesis for open-world mobile manipulation, that unifies scene understanding, state tracking, and action generation to achieve state-of-the-art results.
Abstract: The rapid progress of navigation, manipulation, and vision models has made mobile manipulators capable of performing many specialized tasks. However, the open-world mobile manipulation (OWMM) task remains a challenge due to the need to generalize to open-ended instructions and environments, as well as the systemic complexity of integrating high-level decision making with low-level robot control based on both global scene understanding and the current agent state. To address this complexity, we propose a novel multi-modal agent architecture that maintains multi-view scene frames and agent states for decision making and controls the robot through function calling. A second challenge is hallucination caused by domain shift. To enhance agent performance, we further introduce an agentic data synthesis pipeline for the OWMM task that adapts the VLM to our task domain via instruction fine-tuning. We highlight our fine-tuned OWMM-VLM as the first dedicated foundation model for mobile manipulators, providing global scene understanding, robot state tracking, and multi-modal action generation in a unified model. Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance compared to other models. The project page is at https://owmm-vlm-project.github.io
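The abstract describes an agent loop in which a fine-tuned VLM consumes multi-view scene frames, the current observation, and a tracked agent state, then drives the robot through function calling. The sketch below illustrates that loop in general terms only; every name here (AgentState, TOOLS, run_episode, and the vlm/robot interfaces) is a hypothetical placeholder for illustration, not the authors' actual API.

```python
# Minimal sketch of an OWMM-style agent loop, assuming hypothetical interfaces.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Tracks the robot's current frame, held object, and action history."""
    frame_id: int = 0            # index of the scene frame the robot currently occupies
    holding: str | None = None   # name of the grasped object, if any
    history: list[dict] = field(default_factory=list)

# Tools the VLM may call; each maps to a low-level robot skill (names assumed).
TOOLS = {
    "navigate_to": "move the base toward a target in a chosen scene frame",
    "pick": "grasp the object at a target in the current egocentric view",
    "place": "place the held object at a target in the current egocentric view",
    "done": "declare the instruction completed",
}

def run_episode(vlm, robot, scene_frames, instruction, max_steps=20):
    """Decision loop: the VLM sees multi-view scene frames, the current
    observation, and the agent state, then emits a function call that is
    executed on the robot."""
    state = AgentState()
    for _ in range(max_steps):
        obs = robot.get_egocentric_view()
        # A unified model handles scene understanding, state tracking,
        # and action generation in one call (interface assumed).
        call = vlm.generate_function_call(
            instruction=instruction,
            scene_frames=scene_frames,   # global, multi-view context
            observation=obs,             # current agent-centric view
            state=state,                 # tracked robot state
            tools=TOOLS,
        )
        state.history.append(call)
        if call["name"] == "done":
            break
        robot.execute(call["name"], **call.get("arguments", {}))
    return state
```

This is only a schematic reading of the architecture summarized in the abstract; the paper's actual observation format, tool schema, and control stack may differ.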
Submission Number: 5