FFnsr: Fast and Fine Neural Surface Reconstruction

Published: 01 Jan 2024, Last Modified: 15 May 2025 · ICME 2024 · CC BY-SA 4.0
Abstract: Recent methods for neural surface representation and rendering can reconstruct surfaces but require lengthy training. The latest approaches employ hash encoding to expedite training, yet neglect reconstruction accuracy, yielding lower surface quality. To address this problem, we propose a fast neural surface reconstruction method called FFnsr, which incorporates two optimizations. First, FFnsr replaces the learnable parameter used in previous volume rendering methods with a predetermined linear growth function, allowing the network to concentrate on the rough shape in the early stages of training and refine the details in later stages. Second, FFnsr introduces a regularization scheme based on second-order derivatives along the gradient direction, which stabilizes network training and yields a flatter surface. Experimental results on the DTU dataset demonstrate that FFnsr produces high-quality and robust reconstructions while maintaining high training speed.
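The two optimizations described in the abstract can be illustrated with a minimal PyTorch sketch, assuming a NeuS-style SDF network. Here, `linear_inv_std` is a hypothetical scheduler standing in for the predetermined linear growth function that replaces the learnable inverse standard deviation, and `directional_curvature_loss` approximates a second-order regularizer, the directional derivative n^T H n taken along the normalized SDF gradient, computed via double backpropagation. Function names, hyperparameter values, and weighting are illustrative assumptions, not the paper's settings.

```python
import torch

def linear_inv_std(step, total_steps, s_start=20.0, s_end=1600.0):
    """Predetermined linear growth of the inverse standard deviation used in
    NeuS-style volume rendering, replacing the learnable parameter.
    (s_start/s_end are illustrative placeholders, not the paper's values.)"""
    t = min(max(step / total_steps, 0.0), 1.0)
    return s_start + (s_end - s_start) * t

def directional_curvature_loss(sdf_net, points, eps=1e-6):
    """Second-order derivative of the SDF along its own gradient direction,
    n^T H n, obtained with double backprop; its mean magnitude is penalized
    to encourage flatter, more stable surfaces."""
    points = points.requires_grad_(True)
    sdf = sdf_net(points)                                   # (N,) or (N, 1)
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    normal = grad / (grad.norm(dim=-1, keepdim=True) + eps)
    # Hessian-vector product H n: differentiate (grad . n) again w.r.t. points,
    # treating the normal as a constant direction (detached).
    hvp = torch.autograd.grad((grad * normal.detach()).sum(),
                              points, create_graph=True)[0]
    curvature = (hvp * normal.detach()).sum(dim=-1)         # n^T H n per point
    return curvature.abs().mean()
```

In a training loop, the scheduled value from `linear_inv_std` would feed the logistic density used for volume rendering, and the curvature term would be added to the color and Eikonal losses with a small weight; both choices here are assumptions made only to illustrate how the two components slot into a standard neural-surface pipeline.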