ViewFusion: Learning Composable Diffusion Models for Novel View Synthesis

Published: 30 May 2025, Last Modified: 30 May 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Deep learning is providing a wealth of new approaches to the problem of novel view synthesis, from Neural Radiance Field (NeRF) based approaches to end-to-end style architectures. Each approach offers specific strengths but also comes with limitations in its applicability. This work introduces ViewFusion, an end-to-end generative approach to novel view synthesis with unparalleled flexibility. ViewFusion simultaneously applies a diffusion denoising step to any number of input views of a scene, then combines the noise gradients obtained for each view using an (inferred) pixel-weighting mask, ensuring that, for each region of the target view, only the most informative input views are taken into account. Our approach resolves several limitations of previous approaches by (1) being trainable and generalizing across multiple scenes and object classes, (2) adaptively taking in a variable number of pose-free views at both train and test time, and (3) generating plausible views even in severely underdetermined conditions (thanks to its generative nature)---all while generating views of quality on par with, or better than, comparable methods. Limitations include not generating a 3D embedding of the scene, resulting in relatively slow inference, and the method only being tested on the relatively small Neural 3D Mesh Renderer dataset. Code is available.
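A minimal sketch of the per-view noise aggregation described in the abstract, assuming a PyTorch-style interface; `denoiser` and `weight_net` are hypothetical placeholders and do not correspond to the released code:

```python
import torch
import torch.nn.functional as F

def fuse_noise_predictions(denoiser, weight_net, x_t, input_views, t):
    """Combine per-view noise predictions with an inferred pixel-weighting mask.

    x_t:         (B, C, H, W) noisy target view at diffusion step t
    input_views: (B, N, C, H, W) variable number N of pose-free input views
    Returns:     (B, C, H, W) fused noise estimate for the denoising step
    """
    B, N = input_views.shape[:2]
    eps_per_view, logits_per_view = [], []
    for i in range(N):
        # Each input view independently conditions a denoising step.
        eps_i = denoiser(x_t, input_views[:, i], t)   # (B, C, H, W)
        w_i = weight_net(x_t, input_views[:, i], t)   # (B, 1, H, W)
        eps_per_view.append(eps_i)
        logits_per_view.append(w_i)

    eps = torch.stack(eps_per_view, dim=1)            # (B, N, C, H, W)
    logits = torch.stack(logits_per_view, dim=1)      # (B, N, 1, H, W)

    # Softmax over the view axis: for every target pixel, the most
    # informative input views receive the largest weights.
    weights = F.softmax(logits, dim=1)
    return (weights * eps).sum(dim=1)                 # (B, C, H, W)
```

Because the softmax is taken over the view axis per pixel, the same network handles any number of input views at train and test time, matching the flexibility claimed above.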
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/bronemos/view-fusion
Assigned Action Editor: ~Adam_W_Harley1
Submission Number: 4100