FlowR: Flowing from Sparse to Dense 3D Reconstructions

Published: 14 Sept 2025, Last Modified: 13 Oct 2025 · ICCV 2025 Wild3D · CC BY 4.0
Keywords: 3D Gaussian Splatting, Diffusion Models, Flow Matching
TL;DR: We present FlowR, which bridges the gap between sparse and dense 3D reconstruction. In contrast to prior work, we learn a direct mapping from incorrect renderings to ground-truth images, augmenting scene captures with consistent novel views.
Abstract: 3D Gaussian splatting enables high-quality novel view synthesis (NVS) at real-time frame rates, but its quality drops sharply as we depart from the training views. Dense captures are therefore needed to match the high-quality expectations of applications like Virtual Reality (VR), yet such captures are laborious and expensive to obtain. Existing works have explored 2D generative models to alleviate this requirement, either through distillation or by generating additional training views. These models typically rely on a noise-to-data generative process conditioned only on a handful of reference input views, leading to hallucinations, inconsistent generations, and subsequent reconstruction artifacts. Instead, we propose a multi-view flow matching model that learns a flow directly connecting novel view renderings from possibly sparse reconstructions to the renderings we expect from dense reconstructions. This lets us augment scene captures with consistent, generated views to improve reconstruction quality. Our model is trained on a novel dataset of 3.6M image pairs and can process up to 45 views at 540×960 resolution (91K tokens) on a single H100 GPU in one forward pass. Our pipeline consistently improves NVS in both sparse- and dense-view scenarios, yielding higher-quality reconstructions than prior works across multiple widely used NVS benchmarks.
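To make the data-to-data formulation concrete, below is a minimal sketch of how a flow matching objective between imperfect renderings and ground-truth images could look, assuming a linear interpolation path and a velocity-regression loss. `VelocityNet`, `flow_matching_loss`, the tiny convolutional architecture, and the batch layout are illustrative placeholders introduced here, not the paper's actual multi-view model or training code.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy stand-in for the velocity field v_theta(x_t, t);
    the paper's actual multi-view architecture is not shown here."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar time t to a (B, 1, H, W) map and concatenate.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

def flow_matching_loss(model: nn.Module, x_render: torch.Tensor,
                       x_gt: torch.Tensor) -> torch.Tensor:
    """Data-to-data flow matching: the flow starts at imperfect renderings
    from a sparse reconstruction (x_render) and ends at ground-truth images
    (x_gt), rather than starting from Gaussian noise."""
    b = x_render.shape[0]
    t = torch.rand(b, device=x_render.device)   # t ~ U[0, 1]
    t_ = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_) * x_render + t_ * x_gt     # linear path between the pair
    target = x_gt - x_render                    # constant velocity of that path
    pred = model(x_t, t)
    return ((pred - target) ** 2).mean()        # regress the velocity field

# Hypothetical usage on one batch of training pairs:
model = VelocityNet()
x_render = torch.rand(2, 3, 64, 64)   # renderings from a sparse reconstruction
x_gt = torch.rand(2, 3, 64, 64)       # ground-truth photos of the same views
loss = flow_matching_loss(model, x_render, x_gt)
loss.backward()
```

Because the path starts at an informative rendering rather than pure noise, the model only has to correct rendering artifacts instead of synthesizing content from scratch, which is what the abstract credits for more consistent generated views.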
Submission Number: 33