Keywords: Deep SLAM; Multi-agent; Depth Estimation; 3DGS
Abstract: Visual Simultaneous Localization and Mapping (SLAM) reconstructs the metric structure of the physical world from sensor imagery, enabling precise robotic pose estimation. However, environmentally induced image degradation and varied image processing strategies significantly compromise localization accuracy. Intelligent SLAM systems address this challenge by autonomously perceiving dynamic perturbations, formulating adaptive processing strategies, and identifying and deploying the methods that best meet target localization accuracy. This paper introduces MAV-SLAM, a novel Multi-LLM-Agent-Orchestrated visual SLAM framework that proactively identifies and compensates for suboptimal image quality while autonomously selecting optimal depth estimation models. Specifically, we integrate a visual-language model that performs autonomous image restoration guided by image quality assessment, significantly enhancing SLAM localization performance. Furthermore, we implement a routing large language model for adaptive depth estimation, which in turn elevates the quality of 3D reconstruction via 3D Gaussian Splatting (3DGS). Rigorous evaluation across multiple benchmarks demonstrates that MAV-SLAM exhibits superior performance in both localization accuracy and 3DGS-based reconstruction fidelity, validating its effectiveness in real-world scenarios.
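The quality-aware routing idea in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: all function and model names are hypothetical, and the paper's router is an LLM agent rather than this hand-written heuristic.

```python
# Toy sketch of quality-aware depth-model routing (hypothetical names;
# MAV-SLAM's actual router is an LLM agent, not this simple heuristic).

def assess_quality(stats):
    # Crude image-quality proxy: sharper, less noisy frames score higher.
    return stats["sharpness"] - stats["noise"]

def route_depth_model(stats, threshold=0.5):
    # Degraded frames are routed to a robustness-oriented depth estimator;
    # clean frames use a faster lightweight model.
    if assess_quality(stats) < threshold:
        return "robust_depth_estimator"
    return "fast_depth_estimator"

print(route_depth_model({"sharpness": 0.9, "noise": 0.1}))  # fast_depth_estimator
print(route_depth_model({"sharpness": 0.4, "noise": 0.3}))  # robust_depth_estimator
```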
Primary Area: applications to robotics, autonomy, planning
Submission Number: 7039