Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization
Abstract: Recently, learning neural implicit surfaces through volume rendering has become a popular approach to multi-view reconstruction and has made rapid progress. However, existing methods still face a key limitation: they represent implicit surfaces only through surface normals and lack a direct representation of depth information, so reconstruction is not constrained by depth-related geometric cues and objects with rich texture and color features are reconstructed poorly. To address these problems, we propose Depth-NeuS, a neural implicit surface learning method based on depth information optimization. To remedy the deficiencies in reconstructing objects with color and texture characteristics, and to better capture intricate surface details, we introduce a depth loss that explicitly constrains Signed Distance Field (SDF) regression, thereby optimizing the use of depth information. In addition, we combine a photometric loss and a geometric loss into the overall loss function as a geometric consistency loss, which imposes geometric constraints. Experiments show that Depth-NeuS outperforms existing methods across various scenes and resolves the shortcomings of current reconstruction methods based on implicit neural representations, particularly for objects with complex texture and color attributes. As a result, Depth-NeuS delivers high-quality surface reconstruction.
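As a rough sketch of the training objective described above (the symbols, decomposition, and weights here are our illustrative assumptions, since the abstract does not give the exact formulation), the overall loss can be summarized as

\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{color}} \;+\; \lambda_{d}\,\mathcal{L}_{\mathrm{depth}} \;+\; \lambda_{g}\,\mathcal{L}_{\mathrm{geo}},
\qquad
\mathcal{L}_{\mathrm{geo}} \;=\; \mathcal{L}_{\mathrm{photo}} \;+\; \mathcal{L}_{\mathrm{geom}},
\]

where \(\mathcal{L}_{\mathrm{depth}}\) penalizes the discrepancy between the depth rendered along each ray (accumulated via volume rendering of the SDF-induced density) and a reference depth, \(\mathcal{L}_{\mathrm{geo}}\) is the geometric consistency loss combining photometric and geometric terms, and \(\lambda_{d}\), \(\lambda_{g}\) are hypothetical balancing weights.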