Keywords: mobile manipulation, robotics, trajectory generation
Abstract: Mobile manipulators must seamlessly coordinate their base, arm, and manipulated objects to perform tasks effectively, particularly within confined household environments. While learning-based methods hold promise for discovering robust, generalizable control policies, they critically depend on access to large volumes of high-quality, physically valid training data. Generating such coordinated, whole-body trajectories requires simultaneously satisfying complex robot-scene constraints, yet existing datasets remain limited by the computational complexity of producing physically valid motions across diverse robots, environments, and task configurations. Here we present AutoMoMa, a system that efficiently generates high-quality whole-body trajectories using Virtual Kinematic Chain (VKC) modeling and GPU-accelerated motion planning, producing 2.5k valid episodes per hour on a single consumer-grade GPU. Unlike prior approaches that rely on manual demonstrations or are tied to specific robot-scene pairs, AutoMoMa generalizes across diverse household layouts, interactive objects, robot morphologies, and manipulation tasks while ensuring physical feasibility and strict constraint satisfaction. This large-scale, diverse dataset establishes a foundation for advancing learning-based approaches to mobile manipulation in everyday environments. By making our code and dataset publicly available, we enable community-driven research toward autonomous robots that can reliably perform complex manipulation tasks in human-centered spaces. Website: https://automoma.pages.dev
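The abstract's mention of Virtual Kinematic Chain (VKC) modeling refers to the general idea of treating the mobile base and the arm as a single serial chain by prepending virtual joints for the base pose. The sketch below is a minimal, illustrative forward-kinematics example of that idea only; the joint names, link offsets, and toy 2-DoF arm are assumptions for illustration and are not taken from the paper or its codebase.

```python
import numpy as np


def planar_tf(x, y, yaw):
    """Homogeneous transform for a planar base pose (x, y, yaw) lifted to SE(3)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([
        [c, -s, 0.0, x],
        [s,  c, 0.0, y],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])


def rot_z_tf(theta, origin):
    """Transform of a revolute-Z joint whose frame is offset by `origin`."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = origin
    return T


def vkc_forward_kinematics(q):
    """Forward kinematics of a toy virtual kinematic chain.

    q = [base_x, base_y, base_yaw, arm_q1, arm_q2]: the mobile base is
    modeled as virtual prismatic/revolute joints prepended to a 2-DoF arm,
    so whole-body planning can treat the robot as one serial chain.
    Link offsets below are illustrative, not from any real robot URDF.
    """
    base_x, base_y, base_yaw, q1, q2 = q
    T = planar_tf(base_x, base_y, base_yaw)                   # virtual base joints
    T = T @ rot_z_tf(q1, origin=[0.2, 0.0, 0.5])              # shoulder
    T = T @ rot_z_tf(q2, origin=[0.4, 0.0, 0.0])              # elbow
    T = T @ planar_tf(0.3, 0.0, 0.0)                          # end-effector offset
    return T  # 4x4 end-effector pose in the world frame


if __name__ == "__main__":
    pose = vkc_forward_kinematics([1.0, 0.5, np.pi / 4, 0.3, -0.6])
    print(np.round(pose[:3, 3], 3))  # end-effector position
```

Under this view, a whole-body trajectory is just a path in the joint space of the augmented chain, which is what allows standard (and GPU-parallelizable) chain-based planners to coordinate base and arm motion jointly.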
Submission Number: 14