Can Shape-Infused Joint Embeddings Improve Image-Conditioned 3D Diffusion?

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Recent advancements in deep generative models, particularly with the application of CLIP (Contrastive Language–Image Pre-training) to Denoising Diffusion Probabilistic Models (DDPMs), have demonstrated remarkable effectiveness in text-to-image generation. The well-structured embedding space of CLIP has also been extended to image-to-shape generation with DDPMs, yielding notable results. Despite these successes, some fundamental questions arise: Does CLIP ensure the best results in shape generation from images? Can we leverage conditioning to bring explicit 3D knowledge into the generative process and obtain better quality? This study introduces CISP (Contrastive Image-Shape Pre-training), designed to enhance 3D shape synthesis guided by 2D images. CISP aims to enrich the CLIP framework by aligning 2D images with 3D shapes in a shared embedding space, specifically capturing 3D characteristics potentially overlooked by CLIP’s text-image focus. Our comprehensive analysis assesses CISP’s guidance performance against CLIP-guided models, focusing on generation quality, diversity, and coherence of the produced shapes with the conditioning image. We find that, while matching CLIP in generation quality and diversity, CISP substantially improves coherence with input images, underscoring the value of incorporating 3D knowledge into generative models. These findings suggest a promising direction for advancing the synthesis of 3D visual content by integrating multimodal systems with 3D representations.
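The contrastive image-shape alignment the abstract describes is, at its core, a CLIP-style symmetric InfoNCE objective applied to paired image and shape embeddings. Below is a minimal NumPy sketch of that objective under stated assumptions: the function name, embedding shapes, and the temperature value are illustrative choices, not the paper's actual implementation, and real training would use learned image/shape encoders and a framework such as PyTorch.

```python
import numpy as np

def cisp_contrastive_loss(img_emb: np.ndarray, shape_emb: np.ndarray,
                          temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired image/shape embeddings.

    img_emb, shape_emb: (batch, dim) arrays where row i of each is a
    matched image-shape pair (the positive); all other rows are negatives.
    Hypothetical sketch of CLIP-style alignment, not the paper's code.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    shp = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by the temperature
    logits = img @ shp.T / temperature
    n = logits.shape[0]

    # Numerically stable log-softmax along rows (image -> shape)
    # and columns (shape -> image)
    row_ls = logits - logits.max(axis=1, keepdims=True)
    row_ls -= np.log(np.exp(row_ls).sum(axis=1, keepdims=True))
    col_ls = logits - logits.max(axis=0, keepdims=True)
    col_ls -= np.log(np.exp(col_ls).sum(axis=0, keepdims=True))

    # Cross-entropy with the diagonal (matched pairs) as targets,
    # averaged over both directions
    idx = np.arange(n)
    loss_i2s = -row_ls[idx, idx].mean()
    loss_s2i = -col_ls[idx, idx].mean()
    return float((loss_i2s + loss_s2i) / 2)
```

Minimizing this loss pulls each image embedding toward its paired shape embedding and pushes it away from the other shapes in the batch, producing the shared embedding space that then conditions the diffusion model.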