Edge-Sampler: Efficient Importance Sampling for Neural Implicit Surfaces Reconstruction

21 Sept 2023 (modified: 11 Feb 2024) Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Neural implicit surfaces, sampling algorithm, computer vision
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Neural implicit surfaces have attracted much attention in the field of 3D reconstruction. Equipped with signed distance functions (SDFs), neural implicit surfaces significantly improve geometry reconstruction quality compared to neural radiance fields (NeRFs). However, training SDFs is more challenging and time-consuming than training NeRFs because it requires large sample counts to capture the thin edges of the implicit surface density functions. To date, error-bounded sampling has been the sole volume importance sampling technique dedicated to implicit SDFs; it theoretically bounds the errors of sample weights and thus prevents missing important thin surface edges, but at the cost of large sample counts. In this work, we introduce an efficient edge-sampler technique that reduces the required sample counts by up to 10x while preserving the theoretical error bound by reducing the Riemann integral bias. Specifically, the technique first uses a double-sampling strategy to detect the thin intervals around surface edges that contain all valid samples. It then fits the density functions within these intervals with bounded cumulative distribution function (CDF) errors and produces the final Riemann sum with sparse uniform samples. Extensive results on various scenes demonstrate the superiority of our sampling technique, including improved geometry reconstruction details, significantly reduced sample counts and training time, and generalizability to various implicit SDF frameworks.
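To make the two-pass idea sketched in the abstract concrete, the following is a minimal, hypothetical Python/NumPy sketch of a coarse-then-refined ray sampler of the kind the abstract describes: a sparse coarse pass locates the thin interval where the SDF crosses zero, and a second uniform pass places samples only inside that interval. It is not the authors' implementation; the function names, the padding heuristic, and the NeuS-style logistic density used as a stand-in weight are all assumptions.

```python
import numpy as np

def sdf_to_density(sdf, inv_s=64.0):
    """Illustrative SDF-to-density conversion (assumed, NeuS-style):
    the derivative of a logistic CDF of the SDF, which peaks near the
    zero level set and is near zero elsewhere (the 'thin edge')."""
    sig = 1.0 / (1.0 + np.exp(inv_s * sdf))
    return inv_s * sig * (1.0 - sig)

def edge_interval_sampling(sdf_fn, t_near, t_far, n_coarse=32, n_fine=16):
    """Hypothetical two-pass sampler: detect the surface-crossing interval
    with sparse coarse samples, then add sparse uniform samples inside it
    so the Riemann sum is evaluated where the weight is concentrated."""
    t_coarse = np.linspace(t_near, t_far, n_coarse)
    sdf_coarse = sdf_fn(t_coarse)

    # Detect the first sign change of the SDF along the ray (surface hit).
    sign_flip = np.where(np.sign(sdf_coarse[:-1]) != np.sign(sdf_coarse[1:]))[0]
    if len(sign_flip) == 0:
        return t_coarse  # no surface crossing: fall back to coarse samples

    i = sign_flip[0]
    # Pad the detected interval by one coarse step on each side.
    lo = t_coarse[max(i - 1, 0)]
    hi = t_coarse[min(i + 2, n_coarse - 1)]

    # Sparse uniform samples concentrated in the thin edge interval.
    t_fine = np.linspace(lo, hi, n_fine)
    return np.sort(np.concatenate([t_coarse, t_fine]))

# Toy usage: a ray along the axis hitting a sphere of radius 1 at distance 3.
sdf_fn = lambda t: np.abs(t - 3.0) - 1.0
samples = edge_interval_sampling(sdf_fn, t_near=0.0, t_far=6.0)
weights = sdf_to_density(sdf_fn(samples))
```

In this sketch the coarse pass plays the role of the paper's first sampling stage (interval detection) and the fine pass its sparse uniform Riemann-sum samples; the actual method additionally bounds the CDF fitting error inside the interval, which this toy example does not attempt.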
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3519