CoViS-Net: A Cooperative Visual Spatial Foundation Model for Multi-Robot Applications

Published: 05 Sept 2024, Last Modified: 08 Nov 2024 · CoRL 2024 · CC BY 4.0
Keywords: Multi-Robot Systems, Robot Perception, Foundation Models
TL;DR: We propose a decentralized, platform-agnostic visual spatial foundation model for multi-robot systems, enhancing spatial understanding through data-derived priors for real-world, online deployment.
Abstract: Autonomous robot operation in unstructured environments is often underpinned by spatial understanding through vision. Systems composed of multiple concurrently operating robots additionally require access to frequent, accurate, and reliable pose estimates. Classical vision-based methods for relative pose regression are often computationally expensive (precluding real-time applications) and typically lack data-derived priors for resolving ambiguities. In this work, we propose CoViS-Net, a cooperative, multi-robot visual spatial foundation model that learns spatial priors from data, enabling pose estimation as well as general spatial comprehension. Our model is fully decentralized, platform-agnostic, runs in real time on onboard compute, and does not require existing networking infrastructure. CoViS-Net provides relative pose estimates and a local bird's-eye-view (BEV) representation, even without camera overlap between robots, and can predict BEV representations of unseen regions. We demonstrate its use in a multi-robot formation control task across various real-world settings. We provide supplementary material online and will open-source our trained model in due course. https://sites.google.com/view/covis-net
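To make the interface concrete, below is a minimal sketch, assuming a PyTorch-style implementation, of the decentralized pattern the abstract describes: each robot encodes its own camera image onboard, broadcasts the compact embedding to peers, and regresses a relative pose from its own and a received embedding. All class names, module sizes, and the (x, y, yaw) output parameterization are illustrative assumptions, not the released CoViS-Net API.

```python
# Hypothetical sketch of the decentralized encode -> exchange -> regress
# pattern; not the released model. Module shapes are illustrative only.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Per-robot encoder: camera image -> compact embedding (runs onboard)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

class RelativePoseHead(nn.Module):
    """Regress a peer's relative pose (dx, dy, dyaw) in the ego frame
    from the robot's own embedding plus one received from the peer."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),  # assumed (dx, dy, dyaw) parameterization
        )

    def forward(self, own: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([own, peer], dim=-1))

encoder, pose_head = ImageEncoder(), RelativePoseHead()
img_a = torch.rand(1, 3, 224, 224)  # robot A's camera frame
img_b = torch.rand(1, 3, 224, 224)  # robot B's camera frame
emb_a = encoder(img_a)              # computed onboard robot A
emb_b = encoder(img_b)              # computed onboard robot B, then broadcast
rel_pose = pose_head(emb_a, emb_b)  # A's estimate of B's pose, shape (1, 3)
```

Exchanging compact embeddings rather than raw images is what keeps such a scheme decentralized and bandwidth-light, consistent with the abstract's claim that no existing networking infrastructure is required.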
Supplementary Material: zip
Video: https://www.youtube.com/watch?v=ovoj62CElmE
Website: https://proroklab.github.io/CoViS-Net/
Code: https://github.com/proroklab/CoViS-Net
Publication Agreement: pdf
Student Paper: yes
Spotlight Video: mp4
Submission Number: 176