Keywords: 3D Generation, Diffusion Models, Gaussian Splatting
TL;DR: 3D Scene Generation using Pretrained 2D Inpainting Diffusion Models
Abstract: We introduce RealmDreamer, a technique for generating forward-facing 3D scenes from text descriptions. Our method optimizes a 3D Gaussian Splatting representation to match complex text prompts using pretrained diffusion models. Our key insight is to leverage 2D inpainting diffusion models, conditioned on an initial scene estimate, to provide low-variance and high-fidelity estimates of unknown regions during 3D distillation. In conjunction, we enforce correct geometry via distillation from a depth diffusion model, conditioned on samples from the inpainting model. We find that the initialization of the optimization is crucial and provide a principled methodology for it. Notably, our technique requires no video or multi-view data and can synthesize diverse high-quality 3D scenes in different styles with complex layouts. Furthermore, the generality of our method enables 3D synthesis from a single image. As measured by a comprehensive user study, our method outperforms all existing approaches, with our results preferred by 88-95% of participants. We encourage viewing the supplemental website and video. Project page: https://realmdreamer.github.io/
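The abstract describes an optimization loop in which a scene representation is repeatedly rendered, unknown regions are filled by an inpainting model to form a low-variance target, and a depth model conditioned on that target supplies geometric supervision. The following is a minimal toy sketch of that loop, not the authors' implementation: the "renderer" is an identity map on an image-space parameter grid rather than Gaussian Splatting, and `inpaint_sample` and `depth_estimate` are hypothetical numpy stand-ins for the pretrained diffusion models.

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint_sample(render, mask):
    """Stand-in for the 2D inpainting diffusion model: fills the unknown
    (masked) pixels with a low-variance estimate. Here that estimate is
    simply the mean of the known region plus small noise."""
    filled = render.copy()
    known_mean = render[~mask].mean()
    filled[mask] = known_mean + 0.01 * rng.standard_normal(mask.sum())
    return filled

def depth_estimate(image):
    """Stand-in for the depth diffusion model conditioned on the
    inpainted sample. Here it returns a fixed linear depth ramp."""
    h, w = image.shape
    return np.tile(np.linspace(0.0, 1.0, w), (h, 1))

# Toy "scene": an image-space parameter grid standing in for 3D Gaussians,
# plus a per-pixel depth map distilled from the depth model.
params = 0.1 * rng.standard_normal((16, 16))
depth = 0.1 * rng.standard_normal((16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[:, 8:] = True  # right half is "unknown" from this viewpoint

lr = 0.5
for step in range(200):
    render = params                        # trivial identity "renderer"
    target = inpaint_sample(render, mask)  # inpainting guidance target
    d_target = depth_estimate(target)      # geometric distillation target
    # Gradient step on 0.5*||render - target||^2 and 0.5*||depth - d_target||^2
    params = params - lr * (render - target)
    depth = depth - lr * (depth - d_target)
```

After optimization, the masked half of `params` has converged near the inpainting model's low-variance estimate and `depth` matches the depth target, illustrating why low-variance guidance makes the distillation stable.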
Supplementary Material: zip
Submission Number: 312