Abstract: Optimizing a continuous volumetric scene function from sparse input views is crucial for applications in modern industrial production and virtual reality, yet existing methods in this domain still exhibit significant shortcomings. This paper therefore proposes CL-NeRF, a method that uses a neural radiance field as the scene representation and employs an efficient, robust backend penalty loss to supervise model convergence. The approach achieves high-quality 3D reconstruction from images captured from surrounding views, surpassing existing methods that rely on explicit volumetric representations. In addition, CL-NeRF incorporates a lightweight tracking-and-mapping system that adapts to the point cloud underlying the neural radiance field; because this design is independent of scene size, it avoids sub-map capacity issues and is suitable for reconstructing larger scenes. Compared with previous models, CL-NeRF offers faster rendering and higher-quality optimization.