MaskedFusion360: Reconstruct LiDAR Data by Querying Camera Features

01 Mar 2023 (modified: 03 Nov 2024) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Sensor fusion, self-driving, self-supervised
TL;DR: Fuse LiDAR and camera features by learning to reconstruct masked LiDAR data
Abstract: In self-driving applications, LiDAR data provides accurate information about distances in 3D but lacks the semantic richness of camera data. Therefore, state-of-the-art methods for perception in urban scenes fuse data from both sensor types. In this work, we introduce a novel self-supervised method to fuse LiDAR and camera data for self-driving applications. We build upon masked autoencoders (MAEs) and train deep learning models to reconstruct masked LiDAR data from fused LiDAR and camera features. In contrast to related methods that use bird's-eye-view representations, we fuse features from dense spherical LiDAR projections and features from fish-eye camera crops with a similar field of view. This reduces the learned spatial transformations to moderate perspective transformations and avoids the need for additional modules to generate dense LiDAR representations. Code is available at: https://github.com/KIT-MRT/masked-fusion-360
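
The core mechanism described in the abstract, an MAE-style model that masks patches of the spherical LiDAR projection (range image) and reconstructs them by querying camera features from a crop with a similar field of view, can be sketched roughly as below. This is a minimal PyTorch sketch under assumed shapes and hyperparameters (patch size, masking ratio, token dimension); the class `MaskedLiDARCameraFusion` and its helpers are hypothetical names for illustration, not the authors' implementation, which is available in the linked repository.

```python
# Hypothetical sketch of the idea in the abstract: mask patches of a dense spherical
# LiDAR projection and reconstruct them by cross-attending to camera features.
# All module names, sizes, and the masking ratio are illustrative assumptions.
import torch
import torch.nn as nn


class MaskedLiDARCameraFusion(nn.Module):
    def __init__(self, patch=16, dim=256, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.lidar_embed = nn.Linear(patch * patch, dim)                # range-image patches -> tokens
        self.cam_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # camera crop -> feature tokens
        self.pos = nn.Parameter(torch.zeros(1, 1024, dim))              # learned positional embeddings
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch * patch)                       # predict raw patch values

    def patchify(self, x):
        B, _, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)                           # (B, 1, H/p, W/p, p, p)
        return x.reshape(B, -1, p * p)                                  # (B, N, p*p)

    def forward(self, lidar_range_img, cam_crop):
        patches = self.patchify(lidar_range_img)                        # ground-truth patches
        B, N, _ = patches.shape
        tokens = self.lidar_embed(patches) + self.pos[:, :N]

        # Random masking: encode only a subset of LiDAR tokens, hide the rest.
        n_keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N, device=tokens.device).argsort(dim=1)
        keep = perm[:, :n_keep]
        visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        enc = self.encoder(visible)

        # Re-insert mask tokens at the hidden positions, then query camera features.
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, enc.size(-1)), enc)
        cam_tokens = self.cam_embed(cam_crop).flatten(2).transpose(1, 2)
        fused, _ = self.cross_attn(full, cam_tokens, cam_tokens)        # LiDAR queries, camera keys/values
        pred = self.head(self.decoder(fused + self.pos[:, :N]))

        # Self-supervised loss: reconstruct only the masked patches.
        mask = torch.ones(B, N, device=tokens.device)
        mask.scatter_(1, keep, 0.0)
        loss = (((pred - patches) ** 2).mean(-1) * mask).sum() / mask.sum()
        return loss, pred


# Toy usage on random data (shapes are illustrative assumptions):
model = MaskedLiDARCameraFusion()
loss, _ = model(torch.randn(2, 1, 64, 512), torch.randn(2, 3, 256, 256))
```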