ConTiCoM-3D: A Continuous-Time Consistency Model for 3D Point Cloud Generation

Published: 05 Nov 2025, Last Modified: 30 Jan 2026
Venue: 3DV 2026 Poster
License: CC BY 4.0
Keywords: 3D Continuous Consistency Models; Generation; Point Clouds
Abstract: Fast and accurate 3D shape generation from point clouds is essential for real-world applications such as robotics, AR/VR, and digital content creation. We present \textbf{ConTiCoM-3D}, a continuous-time consistency model that generates 3D shapes directly in point space, without relying on discretized diffusion steps, pre-trained teacher models, or latent-space encodings. Our approach combines a TrigFlow-inspired continuous noise schedule with a Chamfer Distance-based geometric loss, enabling stable training on high-dimensional point sets while avoiding costly Jacobian-vector products. This allows efficient one- to two-step inference with high geometric fidelity. Unlike previous methods that require iterative denoising or latent decoders, ConTiCoM-3D operates entirely in continuous time with a time-conditioned neural network, achieving fast generation. Extensive experiments on the ShapeNet benchmark show that our method matches or surpasses leading diffusion and latent consistency models in both quality and efficiency, establishing ConTiCoM-3D as a practical solution for scalable 3D shape generation.
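The Chamfer Distance mentioned in the abstract as the geometric training loss can be sketched as follows. This is an illustrative NumPy implementation of the standard symmetric Chamfer Distance between two point sets; the paper's exact formulation, weighting, and integration into the consistency objective may differ.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point clouds.

    p: (N, 3) array, q: (M, 3) array.
    Returns the mean squared nearest-neighbor distance from p to q
    plus the mean squared nearest-neighbor distance from q to p.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Nearest neighbor in q for each point in p, and vice versa.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Unlike pointwise losses, this loss is permutation-invariant over the points of each cloud, which matches the unordered nature of point sets.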
Supplementary Material: pdf
Submission Number: 401