G3R: Gradient Guided Generalizable Reconstruction

Published: 09 Sept 2024, Last Modified: 23 Sept 2024 · ECCV 2024 Wild3D · CC BY 4.0
Keywords: Generalizable Reconstruction, Neural Rendering, Learned Optimization, 3DGS, Large Reconstruction Models
TL;DR: We propose a generalizable reconstruction approach that can efficiently predict high-quality 3D scene representations for large scenes
Abstract: Large-scale 3D scene reconstruction is important for applications such as virtual reality and simulation. Existing neural rendering approaches (e.g., NeRF, 3DGS) achieve realistic reconstructions of large scenes, but they optimize per scene, which is expensive and slow, and they exhibit noticeable artifacts under large view changes due to overfitting. Generalizable approaches, or large reconstruction models, are fast but primarily work for small scenes or objects and often produce lower-quality renderings. In this work, we introduce G3R, a generalizable reconstruction approach that can efficiently predict high-quality 3D scene representations for large scenes. We propose to learn a reconstruction network that takes gradient feedback signals from differentiable rendering to iteratively update a 3D scene representation, combining the high photorealism of per-scene optimization with the data-driven priors of fast feed-forward prediction methods. Experiments on urban-driving and drone datasets show that G3R generalizes across diverse large scenes, accelerates reconstruction by at least 10x while achieving comparable or better realism than 3DGS, and is more robust to large view changes.
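The core idea described in the abstract — a network that consumes rendering gradients and iteratively refines a scene representation — can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' implementation: the "renderer" is a toy linear map with an analytic gradient, and `learned_update` (a fixed preconditioned step) stands in for the trained reconstruction network that would predict the update from the current state and its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "renderer": image = A @ scene (a real system would use
# differentiable rasterization of 3D Gaussians or volume rendering).
A = rng.standard_normal((8, 4))
scene_gt = rng.standard_normal(4)   # ground-truth scene representation
target = A @ scene_gt               # observed images

def render(scene):
    return A @ scene

def render_grad(scene):
    # Gradient of 0.5 * ||render(scene) - target||^2 w.r.t. the scene.
    return A.T @ (render(scene) - target)

def learned_update(scene, grad):
    # Stand-in for the reconstruction network: a preconditioned gradient
    # step. In G3R this mapping would be learned from data.
    P = np.linalg.inv(A.T @ A + 0.1 * np.eye(4))
    return scene - P @ grad

# Iterative refinement: feed gradient signals back into the update network.
scene = np.zeros(4)
for _ in range(20):
    scene = learned_update(scene, render_grad(scene))
```

The point of learning the update (rather than using plain gradient descent, as per-scene 3DGS optimization does) is that a trained network can take much larger, data-prior-informed steps, which is where the reported speedup over per-scene optimization comes from.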
Submission Number: 40