LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting

Published: 05 Sept 2024, Last Modified: 08 Nov 2024 · CoRL 2024 · CC BY 4.0
Keywords: 3D perception, lidar, opacity grid, occupancy grid, neural rendering, self-supervised learning, mobile robot, autonomous driving
Abstract: Timely capturing the dense geometry of the surrounding scene from unlabeled LiDAR data is valuable but under-explored for mobile robotic applications. Its value lies in the huge amount of such unlabeled data, which enables self-supervised learning for various downstream tasks. Current dynamic 3D scene reconstruction approaches, however, rely heavily on data annotations to handle moving objects in the scene. In response, we present LiDARGrid, a 3D opacity grid representation instantly derived from LiDAR points, which captures the dense 3D scene and facilitates scene forecasting. Our method features a novel self-supervised neural volume densification procedure based on an autoencoder and differentiable volume rendering. Leveraging this representation, scene forecasting can be performed in a self-supervised manner. Our method is trained on the nuScenes autonomous driving dataset and evaluated by predicting future point clouds via scene forecasting. It notably outperforms state-of-the-art methods in point cloud forecasting across all performance metrics. Beyond scene forecasting, our experiments show that the representation also supports additional tasks such as moving-region detection and depth completion.
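To make the self-supervision signal described in the abstract concrete, the sketch below illustrates the general idea of differentiable volume rendering of a 3D opacity grid along LiDAR rays: opacities sampled along each ray are alpha-composited into an expected depth, which can be compared against the observed LiDAR range without any labels. This is a minimal PyTorch illustration, not the authors' implementation; the grid resolution, uniform ray sampling, sigmoid parameterization, L1 depth loss, and the helper name render_expected_depth are all illustrative assumptions.

# Minimal sketch (not the paper's code) of self-supervised training of a
# 3D opacity grid via differentiable volume rendering along LiDAR rays.
import torch
import torch.nn.functional as F


def render_expected_depth(opacity_grid, origins, directions, t_vals, grid_extent):
    """Alpha-composite opacities sampled along each ray into an expected depth.

    opacity_grid: (1, 1, D, H, W) opacities in [0, 1]
    origins, directions: (R, 3) ray origins and unit directions
    t_vals: (S,) sample distances along each ray, in metres
    grid_extent: scalar half-size of the cubic grid volume, in metres
    """
    # Sample points along rays: (R, S, 3), normalised to [-1, 1] for grid_sample.
    pts = origins[:, None, :] + t_vals[None, :, None] * directions[:, None, :]
    pts_norm = (pts / grid_extent).clamp(-1.0, 1.0)
    # grid_sample for 5D input expects coords of shape (N, D_out, H_out, W_out, 3).
    coords = pts_norm.view(1, origins.shape[0], t_vals.shape[0], 1, 3)
    alpha = F.grid_sample(opacity_grid, coords, align_corners=True)  # (1, 1, R, S, 1)
    alpha = alpha.view(origins.shape[0], t_vals.shape[0]).clamp(0.0, 1.0)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j); per-sample weight w_i = T_i * alpha_i.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1), dim=1
    )
    weights = trans * alpha                               # (R, S)
    return (weights * t_vals[None, :]).sum(dim=1)         # expected depth per ray


# Toy usage: fit a learnable grid to one sweep's observed ranges (synthetic here).
grid = torch.zeros(1, 1, 32, 32, 32, requires_grad=True)  # logits of opacity
origins = torch.zeros(100, 3)                              # sensor at the origin
directions = F.normalize(torch.randn(100, 3), dim=1)       # random ray directions
t_vals = torch.linspace(0.5, 40.0, 64)                     # sample distances (m)
gt_ranges = torch.full((100,), 10.0)                       # observed LiDAR ranges (m)

optim = torch.optim.Adam([grid], lr=1e-1)
for _ in range(50):
    depth = render_expected_depth(torch.sigmoid(grid), origins, directions, t_vals, 40.0)
    loss = F.l1_loss(depth, gt_ranges)                     # self-supervised depth loss
    optim.zero_grad()
    loss.backward()
    optim.step()

In the paper's setting, the grid would instead be produced by the autoencoder from LiDAR input and the rays taken from the actual sensor sweep; only the rendering-and-compare step is sketched here.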
Publication Agreement: pdf
Student Paper: no
Spotlight Video: mp4
Submission Number: 636