3D-free meets 3D priors: Novel View Synthesis from a Single Image with Pretrained Diffusion Guidance

Published: 27 Jan 2026, Last Modified: 27 Jan 2026 · AAAI 2026 AI4ES Poster · CC BY 4.0
Keywords: Computer Vision and Image Understanding, Scenario Generation
Abstract: Recent 3D novel view synthesis (NVS) methods often require extensive 3D data for training and typically generalize poorly beyond the training distribution. Moreover, they tend to be object-centric and struggle with complex, intricate scenes. Conversely, 3D-free methods can generate text-controlled views of complex, in-the-wild scenes using a pretrained Stable Diffusion model without large amounts of 3D training data, but they lack camera control. In this paper, we introduce a method that generates camera-controlled viewpoints from a single input image by combining the benefits of 3D-free and 3D-based approaches. Our method handles complex and diverse scenes without extensive training or additional 3D and multi-view data. It leverages widely available pretrained NVS models for weak guidance, integrates this knowledge into a 3D-free view-synthesis approach, and enriches the CLIP vision-language space with 3D camera-angle information. Experimental results demonstrate that our method outperforms existing models in both qualitative and quantitative evaluations, achieving high-fidelity, consistent novel view synthesis at the desired camera angles across a wide variety of scenes while preserving natural detail and image clarity. We also support our method with a comprehensive analysis of 2D image generation models and 3D space, providing a solid foundation and rationale for our solution. Furthermore, the proposed framework contributes to scenario generation and ecological visualization by enabling controllable multi-view synthesis of natural and urban environments from limited imagery. This capability can support climate impact simulations and environmental narrative synthesis, aligning with recent advances in generative AI and foundation models for scientific and ecological applications.
Submission Number: 15