MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion

Published: 21 Sept 2023, Last Modified: 25 Dec 2023 · NeurIPS 2023 spotlight
Keywords: multiview; image generation; generative model; diffusion models
TL;DR: This paper introduces MVDiffusion, a simple yet effective multi-view image generation method tailored for scenarios where pixel-to-pixel correspondences are available, such as perspective crops from a panorama or multi-view images given depth maps and poses.
Abstract: This paper introduces MVDiffusion, a simple yet effective method for generating consistent multi-view images from text prompts given pixel-to-pixel correspondences (e.g., perspective crops from a panorama or multi-view images given depth maps and poses). Unlike prior methods that rely on iterative image warping and inpainting, MVDiffusion generates all images simultaneously with global awareness, effectively addressing the prevalent error-accumulation issue. At its core, MVDiffusion processes perspective images in parallel with a pre-trained text-to-image diffusion model, while integrating novel correspondence-aware attention layers to facilitate cross-view interactions. For panorama generation, despite being trained on only 10K panoramas, MVDiffusion generates high-resolution photorealistic images for arbitrary text prompts and can extrapolate a single perspective image to a full 360-degree view. For multi-view depth-to-image generation, MVDiffusion demonstrates state-of-the-art performance for texturing a scene mesh. The project page is at https://mvdiffusion.github.io/.
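The core mechanism the abstract describes (parallel per-view denoising with attention restricted to known pixel-to-pixel correspondences) can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the class name `CorrespondenceAwareAttention`, the fixed number K of correspondences per token, and all shapes and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CorrespondenceAwareAttention(nn.Module):
    """Sketch: each latent token in a view attends only to the tokens at its
    corresponding pixel locations in the other views (correspondences are
    assumed to be precomputed from panorama geometry or depth/pose)."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, corr_index: torch.Tensor) -> torch.Tensor:
        # feats:      (V, N, D) latent tokens for V views, N tokens per view
        # corr_index: (V, N, K) flat indices (into the V*N tokens) of the K
        #             corresponding tokens in the other views
        V, N, D = feats.shape
        q = self.to_q(feats)                          # (V, N, D)
        k = self.to_k(feats).reshape(V * N, D)        # flatten tokens across views
        v = self.to_v(feats).reshape(V * N, D)
        k_sel = k[corr_index]                         # (V, N, K, D) gathered keys
        v_sel = v[corr_index]                         # (V, N, K, D) gathered values
        scores = torch.einsum('vnd,vnkd->vnk', q, k_sel) / D ** 0.5
        weights = scores.softmax(dim=-1)              # attention over the K matches
        out = torch.einsum('vnk,vnkd->vnd', weights, v_sel)
        return self.to_out(out)                       # residual connection left to the caller

# Hypothetical usage: 8 views, a 16x16 latent grid (256 tokens), 4 matches per token.
layer = CorrespondenceAwareAttention(dim=320)
feats = torch.randn(8, 256, 320)
corr_index = torch.randint(0, 8 * 256, (8, 256, 4))
out = layer(feats, corr_index)                        # (8, 256, 320)
```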
Supplementary Material: zip
Submission Number: 1349