Multimodal Neural Surface Reconstruction: Recovering the Geometry and Appearance of 3D Scenes from Events and Grayscale Images

Published: 03 Nov 2023, Last Modified: 03 Nov 2023 · NeurIPS 2023 Deep Inverse Workshop Poster
Keywords: multimodal data integration, deep learning, neural surface reconstruction, disentangled learning
Abstract: Event cameras offer high frame rates, minimal motion blur, and excellent dynamic range. As a result, they excel at reconstructing the geometry of 3D scenes. However, their measurements contain no absolute intensity information, which makes accurately reconstructing the appearance of a 3D scene from events alone challenging. In this work, we develop a multimodal neural 3D scene reconstruction framework that simultaneously reconstructs scene geometry from events and scene appearance from grayscale images. Our framework, which is based on neural surface representations rather than the neural radiance fields used in previous work, reconstructs both the structure and appearance of 3D scenes more accurately than existing unimodal reconstruction methods.
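The core idea lends itself to a simple sketch: events constrain log-intensity changes between timestamps (and hence geometry), while grayscale frames anchor absolute intensity (and hence appearance). Below is a minimal, hypothetical PyTorch sketch of such a combined objective. The function names, contrast threshold C, and loss weight are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a multimodal event + grayscale objective.
# All constants and function names are assumptions for illustration only.

import torch

C = 0.2           # assumed event contrast threshold
LAMBDA_EVT = 1.0  # assumed weight on the event term
EPS = 1e-6        # numerical floor before taking logs

def event_loss(I_t0, I_t1, event_sum):
    """Events encode log-intensity change: sum_k(polarity_k) * C should
    approximate log I(t1) - log I(t0) at each pixel. Compare the rendered
    change against the polarity-weighted event count over [t0, t1]."""
    pred_change = torch.log(I_t1 + EPS) - torch.log(I_t0 + EPS)
    return torch.mean((pred_change - C * event_sum) ** 2)

def image_loss(I_pred, I_gt):
    """Grayscale frames supply the absolute intensity that events lack."""
    return torch.mean((I_pred - I_gt) ** 2)

def total_loss(I_t0, I_t1, event_sum, I_pred, I_gt):
    """Geometry is driven by the event term, appearance by the image term."""
    return image_loss(I_pred, I_gt) + LAMBDA_EVT * event_loss(I_t0, I_t1, event_sum)

if __name__ == "__main__":
    # Toy check with random tensors standing in for rendered pixel batches.
    I_t0, I_t1 = torch.rand(1024) + 0.1, torch.rand(1024) + 0.1
    event_sum = torch.randint(-3, 4, (1024,)).float()
    I_pred, I_gt = torch.rand(1024), torch.rand(1024)
    print(total_loss(I_t0, I_t1, event_sum, I_pred, I_gt).item())
```

In a full system, I_t0, I_t1, and I_pred would come from volume-rendering an SDF-based neural surface representation (e.g., NeuS-style), typically with an eikonal regularizer on the SDF; that machinery is omitted here for brevity.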
Submission Number: 4