Learning Pseudo 3D Guidance for View-consistent 3D Texturing with 2D Diffusion

18 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: 3D Texturing, Diffusion Model
Abstract: Text-driven 3D texturing requires generating high-fidelity textures that conform to a given geometry and description. Recently, the high-quality text-to-image generation ability of 2D diffusion models has significantly advanced this task by converting it into a texture optimization process guided by multi-view synthesized images. The generation of high-quality, multi-view-consistent images thus becomes the key issue. State-of-the-art methods introduce global consistency by treating novel-view image generation as image inpainting conditioned on the texture generated from previously seen views. However, due to the error accumulation of inpainting itself and the occlusion between object parts, these inpainting-based methods often fail to maintain long-range texture consistency, and the learned texture is of low quality. To address these issues, we present P3G, a text-driven 3D texturing approach based on learned Pseudo 3D Guidance. The key idea of P3G is to first learn a coarse but view-consistent texture that serves as semantics and layout guidance for high-quality, view-consistent multi-view image generation. To this end, we propose a novel method for learning the pseudo 3D guidance, and design an efficient framework for high-quality, multi-view-consistent image generation that incorporates the depth map, the learned high-level semantics and layout guidance, and the previously generated texture. Quantitative and qualitative evaluations on a variety of 3D shapes demonstrate the superiority of P3G in both consistency and quality.
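To make the two-stage pipeline described in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch. All names here (`render`, `render_depth`, `diffusion_generate`, `texture_pipeline`) are placeholder assumptions, not the authors' actual implementation: the stubs only illustrate the order of operations claimed in the abstract — first learn a coarse pseudo-3D-guidance texture jointly over views, then synthesize per-view images conditioned on depth, the learned guidance, and the texture generated so far, and finally fit the output texture to those images.

```python
# Hypothetical structural sketch of the two-stage P3G pipeline; every
# function is a placeholder stub, and the stage-1 objective uses a dummy
# target standing in for the paper's diffusion-guided loss.

import torch

def render(texture, mesh, view):
    # Placeholder differentiable renderer: mesh + texture -> RGB view.
    # The broadcast keeps gradients flowing back to `texture`.
    return texture.mean() + torch.zeros(1, 3, 512, 512)

def render_depth(mesh, view):
    # Placeholder rasterizer: mesh + view -> depth map.
    return torch.zeros(1, 1, 512, 512)

def diffusion_generate(depth, guidance_view, texture_view):
    # Placeholder for the 2D diffusion model conditioned on the depth map,
    # the pseudo 3D guidance, and the previously generated texture.
    return torch.rand(1, 3, 512, 512)

def texture_pipeline(mesh, views):
    # Stage 1: learn a coarse but view-consistent texture -- the pseudo 3D
    # guidance -- by optimizing it jointly over all views. (The constant
    # target 0.5 is a dummy stand-in for the real diffusion-based objective.)
    guidance_tex = torch.zeros(1, 3, 256, 256, requires_grad=True)
    opt = torch.optim.Adam([guidance_tex], lr=1e-2)
    for _ in range(100):
        loss = sum(((render(guidance_tex, mesh, v) - 0.5) ** 2).mean()
                   for v in views)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: per view, synthesize a high-quality image conditioned on
    # depth, the learned guidance, and the current texture, then fit the
    # final texture to the synthesized images.
    final_tex = torch.zeros(1, 3, 1024, 1024, requires_grad=True)
    opt = torch.optim.Adam([final_tex], lr=1e-2)
    for v in views:
        target = diffusion_generate(render_depth(mesh, v),
                                    render(guidance_tex.detach(), mesh, v),
                                    render(final_tex.detach(), mesh, v))
        for _ in range(50):
            loss = ((render(final_tex, mesh, v) - target) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return final_tex

# Usage with dummy inputs (the stubs ignore the mesh and view arguments):
tex = texture_pipeline(mesh=None, views=range(4))
```

Note how this structure reflects the abstract's claim: because stage 2 conditions on a texture that is already view-consistent, each per-view generation shares global semantics and layout rather than relying solely on inpainting from previously seen views.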
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1160