Diff-PCG: Diffusion Point Cloud Generation Conditioned on Continuous Normalizing Flow

Published: 01 Jan 2025 · Last Modified: 21 Jul 2025 · Vis. Comput. 2025 · CC BY-SA 4.0
Abstract: With the continuous advancement of computer technology and graphics capabilities, the generation of 3D point clouds holds great promise across various fields. However, previous methods in this area still face significant challenges, such as complex training setups and limited fidelity of the generated 3D content. Taking inspiration from the denoising diffusion probabilistic model, we propose Diff-PCG, a diffusion point cloud generation framework conditioned on a continuous normalizing flow for 3D generation. Our approach combines the forward diffusion and reverse denoising processes to produce high-quality 3D point clouds. Moreover, we include a trainable continuous normalizing flow that controls the underlying structure of the point cloud, enhancing the representational power of the encoded information. Extensive experiments validate the efficacy of our approach in generating high-quality 3D point clouds.
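To make the described pipeline concrete, the sketch below illustrates a standard DDPM-style forward noising step and noise-prediction objective for a point cloud conditioned on a shape latent, in the spirit of the abstract. It is not the authors' implementation: the schedule, network architecture, dimensions, and all names (PointDenoiser, q_sample, latent_dim, etc.) are illustrative assumptions, and the latent z is a placeholder for what the paper models with a trainable continuous normalizing flow.

```python
# Minimal sketch (assumed, not the authors' code): DDPM-style forward noising
# and the noise-prediction training objective for a point cloud x of shape
# (B, N, 3), conditioned on a shape latent z. In Diff-PCG the latent prior is
# modeled by a continuous normalizing flow; here z is a placeholder tensor.
import torch
import torch.nn as nn

T = 100                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule (common choice)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    """Forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alpha_bars[t].view(-1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

class PointDenoiser(nn.Module):
    """Toy per-point noise predictor conditioned on latent z and timestep t."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_t, z, t):
        B, N, _ = x_t.shape
        t_emb = (t.float() / T).view(B, 1, 1).expand(B, N, 1)   # scalar time embedding
        z_exp = z.unsqueeze(1).expand(B, N, z.shape[-1])         # broadcast latent to points
        return self.net(torch.cat([x_t, z_exp, t_emb], dim=-1))

# One training step: predict the injected noise (standard DDPM objective).
model = PointDenoiser()
x0 = torch.randn(4, 2048, 3)              # batch of point clouds
z = torch.randn(4, 128)                   # shape latent (CNF-modeled in the paper)
t = torch.randint(0, T, (4,))
noise = torch.randn_like(x0)
loss = nn.functional.mse_loss(model(q_sample(x0, t, noise), z, t), noise)
loss.backward()
```

At sampling time, the reverse process would start from Gaussian noise and iteratively denoise with the trained predictor, with the latent z drawn through the flow prior rather than sampled directly as above.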