Keywords: Novel View Synthesis, Surface Reconstruction, Gaussian Splatting
TL;DR: Creat3r builds a robust geometric scaffold on the fly and uses novel "exploration" and "confidence" maps to guide view selection, achieving state-of-the-art results with significantly less data and computation.
Abstract: We introduce Creat3r, an active view selection framework for efficient, high-quality 3D reconstruction from a limited subset of image-pose pairs. Given an initial set of selected views, our method iteratively identifies the most informative candidate views to maximize reconstruction accuracy while respecting a fixed computational budget. The approach first builds an intermediate 3D point cloud from dense pixel correspondences and stereo triangulation, refining each point estimate with the Direct Linear Transform (DLT). To assess reconstruction reliability, we introduce a 3D confidence field that combines camera support and view consistency, yielding a quantitative measure of point quality. This confidence information is then propagated to all candidate views with an efficient Gaussian projection technique, producing 2D confidence and exploration maps for each potential viewpoint. From these maps we derive an exploration measure that scores candidates and selects the next best view. By balancing exploration, reconstruction accuracy, and computational efficiency, Creat3r is well suited to autonomous 3D scanning, robotic vision, and multi-view scene reconstruction. To demonstrate its effectiveness, we evaluate Creat3r against baselines that reconstruct the scene from the selected views using the standard 3D Gaussian Splatting (3DGS) representation. Experimental results show that our method excels at novel view synthesis and surface reconstruction, achieving significant improvements in SSIM and F1-score.
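For context on the DLT refinement step mentioned in the abstract: the Direct Linear Transform recovers a 3D point from its projections in two or more calibrated views by solving a small homogeneous linear system. The sketch below is an illustrative NumPy implementation of this standard formulation (the function name `triangulate_dlt` and the assumption of known 3x4 projection matrices are ours, not the authors' code), intended only to clarify the technique.

```python
# Illustrative sketch only: standard multi-view DLT triangulation of a single point.
# Assumes known 3x4 projection matrices P_i = K_i [R_i | t_i]; not the Creat3r implementation.
import numpy as np

def triangulate_dlt(proj_mats, pixels):
    """Triangulate one 3D point from >= 2 views via the Direct Linear Transform.

    proj_mats: list of 3x4 camera projection matrices
    pixels:    list of corresponding 2D observations (u_i, v_i)
    Returns the 3D point X minimizing the algebraic error ||A X||.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each observation contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)            # (2N, 4) design matrix
    _, _, vt = np.linalg.svd(A)   # least-squares solution = last right singular vector
    X_h = vt[-1]
    return X_h[:3] / X_h[3]       # de-homogenize
```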
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 11567