Keywords: Computational Photography, Image & Video Synthesis, 3D Texture, Continuous Learning
TL;DR: We introduce LVstyler, a generative framework that goes beyond conventional from-scratch (0-to-1) UV texture synthesis, enabling more varied style transfer for 3D meshes while preserving geometric fidelity.
Abstract: We introduce LVstyler, a generative framework that goes beyond conventional from-scratch (0-to-1) UV texture synthesis, enabling more varied style transfer for 3D meshes while preserving geometric fidelity. The core challenge is maintaining style consistency across complex 3D surfaces without introducing style-agnostic artifacts. Existing methods rely on a pre-trained, diffusion-based texture generation model to produce an initial texture map. Because such a model can generate only a limited range of styles for a given object, these methods handle only short, object-centric prompts rather than stylistic ones. We therefore integrate an optimization-based texture generation model in image space, extending it with two LoRA modules: one for shape consistency and one for adaptation to UV-map space. With this design, LVstyler produces varied, high-quality UV textures from detailed stylistic text prompts, significantly advancing the state of the art in 3D object texturing.
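The abstract describes attaching two LoRA extensions to a single pre-trained model. As a minimal illustrative sketch (not the authors' code; the adapter names, ranks, and scales are assumptions), the LoRA mechanism adds trainable low-rank updates to a frozen weight, and multiple adapters simply sum their updates:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 8, 8
W0 = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

def lora_delta(d_out, d_in, rank, scale, rng):
    """Low-rank update scale * B @ A; A, B would be the trainable parameters."""
    A = rng.standard_normal((rank, d_in)) * 0.01
    B = np.zeros((d_out, rank))  # B initialized to zero: no change before training
    return scale * (B @ A)

# Two hypothetical adapters mirroring the abstract's description:
delta_shape = lora_delta(d_out, d_in, rank=4, scale=1.0, rng=rng)  # shape consistency
delta_uv    = lora_delta(d_out, d_in, rank=4, scale=1.0, rng=rng)  # UV-space adaptation

W_eff = W0 + delta_shape + delta_uv
# With B zero-initialized, the adapted weight starts identical to the base weight,
# so the pretrained behavior is preserved before fine-tuning.
assert np.allclose(W_eff, W0)
```

Each adapter trains only the small `A`/`B` factors (rank × dim parameters) while `W0` stays frozen, which is what makes fine-tuning one model for two roles (shape and UV adaptation) cheap.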
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 22780