DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation

Published: 10 Nov 2024, Last Modified: 10 Nov 2024 · CoRL X-Embodiment Workshop 2024 Poster · License: CC BY 4.0
Keywords: Semantic Scene Understanding; Mobile Manipulation; Continual Learning
TL;DR: Open-vocabulary mobile manipulation (OVMM) in dynamically changing environments via a spatio-semantic memory that adapts to environment changes.
Abstract: Significant progress has been made in open-vocabulary mobile manipulation, where the goal is for a robot to perform tasks in any environment given a natural language description. However, most current systems assume a static environment, which limits their applicability in real-world scenarios where environments frequently change due to human intervention or the robot's own actions. In this work, we present DynaMem, a new approach to open-world mobile manipulation that uses a dynamic spatio-semantic memory to represent a robot's environment. DynaMem constructs a 3D data structure to maintain a dynamic memory of point clouds, and answers open-vocabulary object localization queries using multimodal LLMs or open-vocabulary features generated by state-of-the-art vision-language models. Powered by DynaMem, our robots can explore novel environments, search for objects not found in memory, and continuously update the memory as objects move, appear, or disappear in the scene. We run extensive experiments on Stretch SE3 robots in three real and nine offline scenes, and achieve an average pick-and-drop success rate of 70% on non-stationary objects, which is more than a 2× improvement over state-of-the-art static systems.
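To make the idea of a dynamic spatio-semantic memory concrete, below is a minimal sketch of one plausible realization: points carrying semantic feature vectors (e.g., produced by a vision-language model) are bucketed into voxels that can be added or cleared as objects appear, move, or disappear, and localization queries are answered by feature similarity. The `VoxelMemory` class and all its method names are hypothetical illustrations, not DynaMem's actual implementation or API.

```python
import numpy as np

class VoxelMemory:
    """Illustrative dynamic spatio-semantic voxel memory (not DynaMem's API).

    Each observed 3D point carries a semantic feature vector; points are
    bucketed into voxels, voxels can be inserted or removed as the scene
    changes, and open-vocabulary queries are answered by cosine similarity
    against a query feature (e.g., a text embedding).
    """

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.voxels = {}  # (i, j, k) voxel index -> running-mean feature

    def _key(self, point):
        # Discretize a 3D point into its voxel index.
        return tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))

    def add_points(self, points, features):
        """Insert newly observed points (objects appearing or moving in)."""
        for p, f in zip(points, features):
            k = self._key(p)
            old = self.voxels.get(k)
            # Blend with the previous feature if the voxel was seen before.
            self.voxels[k] = f if old is None else 0.5 * (old + f)

    def remove_points(self, points):
        """Clear voxels where objects have disappeared."""
        for p in points:
            self.voxels.pop(self._key(p), None)

    def query(self, query_feature):
        """Return the voxel center best matching an open-vocabulary query."""
        if not self.voxels:
            return None  # nothing in memory; caller should trigger exploration
        keys = list(self.voxels.keys())
        feats = np.stack([self.voxels[k] for k in keys])
        sims = feats @ query_feature / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(query_feature) + 1e-8
        )
        best = keys[int(np.argmax(sims))]
        # Return the metric-space center of the best-matching voxel.
        return (np.asarray(best) + 0.5) * self.voxel_size
```

In a sketch like this, the delete path (`remove_points`) is what distinguishes a dynamic memory from the static maps the abstract contrasts against: stale voxels are dropped when new observations contradict them, so queries always reflect the current scene.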
Previous Publication: No
Submission Number: 21