Uncertainty-Aware Diffusion-Guided Refinement of 3D Scenes

Published: 14 Sept 2025, Last Modified: 13 Oct 2025
Venue: ICCV 2025 Wild3D Workshop
License: CC BY 4.0
Keywords: 3D scene understanding
Abstract: Reconstructing a 3D scene from a single image is a fundamentally ill-posed, severely under-constrained problem. Consequently, when the scene is rendered from novel camera views, existing single-image-to-3D reconstruction methods produce incoherent and blurry results, a problem that is exacerbated when the unseen regions lie far from the input camera. In this work, we address these inherent limitations of existing single-image-to-3D scene feedforward networks. To compensate for the lack of information beyond the input image's view, we leverage a strong generative prior, in the form of a pretrained latent video diffusion model, for iterative refinement of a coarse scene represented by optimizable Gaussian parameters. To ensure that the style and texture of the generated images align with those of the input image, we apply on-the-fly Fourier-style transfer between the generated images and the input image. Additionally, we design a semantic uncertainty quantification module that computes per-pixel entropy and yields uncertainty maps used to guide the refinement from the most confident pixels while discarding the remaining highly uncertain ones. Extensive experiments on real-world scene datasets, including in-domain RealEstate-10K and out-of-domain KITTI-v2, show that our approach provides more realistic and higher-fidelity novel view synthesis than existing state-of-the-art methods.
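As a rough illustration of the semantic uncertainty quantification described in the abstract, the sketch below computes a normalized per-pixel Shannon-entropy map from semantic class logits and thresholds it into a confidence mask. The function names, the PyTorch interface, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def entropy_uncertainty_map(logits: torch.Tensor) -> torch.Tensor:
    """Normalized per-pixel Shannon entropy of semantic class probabilities.

    logits: (B, C, H, W) class scores predicted on a diffusion-generated view
            (hypothetical interface). Returns an uncertainty map in [0, 1] of
            shape (B, 1, H, W); higher values mean less confident pixels.
    """
    probs = F.softmax(logits, dim=1)                          # per-pixel class probabilities
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # Shannon entropy, (B, H, W)
    entropy = entropy / math.log(logits.shape[1])             # normalize by log(num_classes)
    return entropy.unsqueeze(1)

def confidence_mask(logits: torch.Tensor, tau: float = 0.3) -> torch.Tensor:
    """Binary mask keeping pixels whose normalized entropy is below tau
    (tau is an illustrative threshold, not a value reported in the paper)."""
    return (entropy_uncertainty_map(logits) < tau).float()
```

In a refinement loop, such a mask could weight the photometric loss so that only the confident pixels of each generated view supervise the optimizable Gaussian parameters. Likewise, the on-the-fly Fourier-style transfer between a generated view and the input image can be approximated by swapping the low-frequency amplitude spectrum of the generated image with that of the input, in the spirit of FDA-style amplitude swapping; the band-size parameter `beta` and the function interface below are again assumptions rather than the paper's exact procedure.

```python
import torch

def fourier_style_transfer(generated: torch.Tensor, reference: torch.Tensor,
                           beta: float = 0.05) -> torch.Tensor:
    """Replace the low-frequency FFT amplitude of `generated` with that of `reference`.

    Both tensors: (B, C, H, W), values in [0, 1]. `beta` controls the fraction of
    the spectrum (around the DC component) whose amplitude is swapped.
    """
    fft_gen = torch.fft.fft2(generated, dim=(-2, -1))
    fft_ref = torch.fft.fft2(reference, dim=(-2, -1))

    amp_gen, pha_gen = torch.abs(fft_gen), torch.angle(fft_gen)
    amp_ref = torch.abs(fft_ref)

    # Center the spectra so the low frequencies form a contiguous block.
    amp_gen = torch.fft.fftshift(amp_gen, dim=(-2, -1))
    amp_ref = torch.fft.fftshift(amp_ref, dim=(-2, -1))
    _, _, H, W = generated.shape
    h, w = int(H * beta), int(W * beta)
    cy, cx = H // 2, W // 2
    amp_gen[..., cy - h:cy + h, cx - w:cx + w] = amp_ref[..., cy - h:cy + h, cx - w:cx + w]
    amp_gen = torch.fft.ifftshift(amp_gen, dim=(-2, -1))

    # Recombine the swapped amplitude with the original phase and invert the FFT.
    stylized = torch.fft.ifft2(amp_gen * torch.exp(1j * pha_gen), dim=(-2, -1))
    return stylized.real.clamp(0, 1)
```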
Supplementary Material: pdf
Submission Number: 18