PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting for Novel View Synthesis

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A feed-forward model to estimate 3D Gaussians for novel view synthesis
Abstract: We consider the problem of novel view synthesis from unposed images in a single feed-forward pass. Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS, and extends it into a practical solution that relaxes common assumptions such as dense image views, accurate camera poses, and substantial image overlap. We achieve this by identifying and addressing unique challenges arising from the use of pixel-aligned 3DGS: misaligned 3D Gaussians across different views induce noisy or sparse gradients that destabilize training and hinder convergence, especially when the above assumptions are not met. To mitigate this, we employ pre-trained monocular depth estimation and visual correspondence models to achieve coarse alignment of 3D Gaussians. We then introduce lightweight, learnable modules that refine the depth and pose estimates from the coarse alignment, improving the quality of 3D reconstruction and novel view synthesis. Furthermore, the refined estimates are leveraged to compute geometry confidence scores, which assess the reliability of the 3D Gaussian centers and condition the prediction of Gaussian parameters accordingly. Extensive evaluations on large-scale real-world datasets demonstrate that PF3plat sets a new state of the art across all benchmarks, supported by comprehensive ablation studies validating our design choices. We will make the code and weights publicly available.
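To make the geometry-confidence idea concrete, here is a minimal NumPy sketch of one ingredient of such a pipeline: unprojecting a monocular depth map to 3D points and scoring each point by its reprojection consistency against correspondences in another view. This is an illustrative simplification, not the paper's implementation; the function names, the pinhole-camera model, and the Gaussian kernel for confidence are all assumptions made here for exposition.

```python
import numpy as np

def unproject(depth, K):
    """Lift a per-pixel depth map (H, W) to camera-space 3D points (H*W, 3)
    using a pinhole intrinsics matrix K."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.stack([x.ravel(), y.ravel(), depth.ravel()], axis=1)

def geometry_confidence(pts_a, R, t, K, matches_b, sigma=1.0):
    """Score each candidate Gaussian center (a 3D point from view A) by how
    well it reprojects onto its matched pixel in view B under the estimated
    relative pose (R, t): conf = exp(-||proj - match||^2 / (2 * sigma^2))."""
    cam_b = pts_a @ R.T + t             # transform points into view B's frame
    proj = cam_b @ K.T                  # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide -> pixel coords
    err2 = np.sum((proj - matches_b) ** 2, axis=1)
    return np.exp(-err2 / (2.0 * sigma ** 2))

# Toy demo with a synthetic flat depth map and an identity relative pose:
# reprojection is exact, so every point receives confidence 1.
K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.5], [0.0, 0.0, 1.0]])
depth = np.full((5, 4), 5.0)
pts = unproject(depth, K)
R, t = np.eye(3), np.zeros(3)
exact = (pts @ K.T)
matches = exact[:, :2] / exact[:, 2:3]
conf = geometry_confidence(pts, R, t, K, matches)
```

In a full system, such per-point scores could then downweight or gate unreliable Gaussian centers when predicting the remaining Gaussian parameters.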
Lay Summary: Reconstructing 3D scenes and creating realistic novel views from just a few images is challenging, especially when the images lack precise camera positions. Existing methods either require large numbers of densely captured images (100+) or accurate camera measurements, making them impractical for casual use. We introduce PF3plat, a new approach that quickly generates high-quality 3D views without precise camera positions, even from sparse images captured across wide baselines. Having identified the existing limitations of 3D Gaussian Splatting, our method leverages models pre-trained to estimate depth and image correspondences, then fine-tunes these predictions with lightweight adjustments. By further assessing the reliability of these predictions, we ensure stable, accurate 3D reconstruction. Extensive experiments confirm that PF3plat significantly outperforms previous techniques in speed, accuracy, and image quality across diverse indoor and outdoor settings. Our research brings high-quality 3D scene capture closer to everyone, even with just a few unposed photos from everyday cameras or smartphones.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/cvlab-kaist/PF3plat
Primary Area: Applications->Computer Vision
Keywords: Pose-free, novel view synthesis, 3D reconstruction
Submission Number: 7048