Abstract: 3D asset modeling is ubiquitous in applications such as animation and game development. However, a key challenge
lies in the labor-intensive task of 3D texturing, where creators
must repeatedly update textures to align with modified geometric shapes on the fly. This iterative workflow makes 3D texturing
significantly more cumbersome and less efficient than 2D image
painting. To address this, we introduce BlendFusion, an interactive
framework that leverages generative diffusion models to streamline
3D texturing. Unlike existing systems that generate textures from
scratch, BlendFusion integrates the procedural nature of texturing
by incorporating multi-view projection to guide the generation
process, enhancing stylistic alignment with the creator’s intent.
Experimental results demonstrate the robustness and consistency
of BlendFusion across both objective and subjective evaluations.