TL;DR: We introduce view-tied 3D Gaussians, a novel representation for RGBD SLAM that improves scalability by tying Gaussians to depth pixels, reducing storage needs and enabling finer detail representation.
Abstract: Jointly estimating camera poses and mapping scenes from RGBD images is a fundamental task in simultaneous localization and mapping (SLAM). State-of-the-art methods employ 3D Gaussians to represent a scene and render these Gaussians through splatting for higher efficiency and better rendering quality. However, these methods cannot scale up to extremely large scenes, because their tracking and mapping strategies must keep optimizing all 3D Gaussians in limited GPU memory throughout training to maintain geometry and color consistency with previous RGBD observations. To resolve this issue, we propose novel tracking and mapping strategies that work with a novel 3D representation, dubbed view-tied 3D Gaussians, for RGBD SLAM systems. View-tied 3D Gaussians are simplified Gaussians tied to depth pixels, with no need to learn locations, rotations, or multi-dimensional variances. Tying Gaussians to views not only significantly saves storage but also allows us to employ many more Gaussians to represent local details within limited GPU memory. Moreover, our strategies remove the need to keep all Gaussians learnable throughout training, while improving rendering quality and tracking accuracy. We justify the effectiveness of these designs and report better performance than the latest methods on widely used benchmarks in terms of rendering quality, tracking accuracy, and scalability. Please see our project page for code and videos at https://machineperceptionlab.github.io/VTGaussian-SLAM-Project.
Lay Summary: Jointly estimating camera poses and mapping scenes from RGBD images is a fundamental task in simultaneous localization and mapping (SLAM). Recent state-of-the-art SLAM methods employ 3D Gaussians to represent a scene, but they face scalability challenges in large scenes. This is primarily because all Gaussians must be optimized in limited GPU memory throughout training to maintain geometry and color consistency with previous RGBD observations.
We introduce novel tracking and mapping strategies to work with a novel 3D representation, which we term view-tied 3D Gaussians, for SLAM systems. Unlike recent approaches that learn the position and shape of each 3D Gaussian, our method ties these Gaussians directly to depth pixels from the camera. This design greatly reduces memory usage and enables the use of many more Gaussians to capture fine scene details.
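To make "tying Gaussians to depth pixels" concrete, here is a minimal sketch of how view-tied Gaussians could be initialized from one RGBD frame: each depth pixel is back-projected to a fixed 3D center, so only appearance parameters would remain to be optimized. The function names, the back-projection details, and the choice of learnable parameters (colors and opacities) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Back-project a depth map into 3D points in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # (H*W, 4) homogeneous
    return (pts_cam @ c2w.T)[:, :3]                         # (H*W, 3) world points

def init_view_tied_gaussians(depth, K, c2w):
    """One simplified Gaussian per depth pixel: its center is fixed by the
    back-projected depth rather than learned; which parameters stay learnable
    (colors, opacities here) is an assumption for illustration."""
    centers = backproject_depth(depth, K, c2w)          # fixed, not optimized
    colors = np.zeros((centers.shape[0], 3))            # learnable (assumed)
    opacities = np.full((centers.shape[0], 1), 0.5)     # learnable (assumed)
    return {"centers": centers, "colors": colors, "opacities": opacities}
```

Because the centers come directly from the depth map and the per-view pose, no per-Gaussian location, rotation, or full covariance needs to be stored or optimized, which is what keeps the memory footprint small in this sketch.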
Our method significantly enhances rendering quality, tracking accuracy, and scalability. It achieves better performance than state-of-the-art systems on widely used benchmarks. Please see our project page for code and videos at https://machineperceptionlab.github.io/VTGaussian-SLAM-Project.
Link To Code: https://machineperceptionlab.github.io/VTGaussian-SLAM-Project/
Primary Area: Applications->Computer Vision
Keywords: 3D Gaussian Splatting, view-tied 3D Gaussians, RGBD-SLAM
Submission Number: 1011