Abstract: A generative model for high-fidelity point clouds is of great importance in synthesizing 3D environments for applications such as autonomous driving and robotics. Despite the recent success of deep generative models for 2D images, it is non-trivial to generate point clouds without a comprehensive understanding of both local and global geometric structures. In this paper, we devise a new 3D point cloud generation framework using a divide-and-conquer approach, where the whole generation process is divided into a set of patch-wise generation tasks. Specifically, all patch generators are based on learnable priors, which aim to capture the information of geometry primitives. We introduce point- and patch-wise transformers to enable interactions between points and patches. Therefore, the proposed divide-and-conquer approach contributes to a new understanding of point cloud generation from the perspective of the geometry constitution of 3D shapes. Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation, where it clearly outperforms recent state-of-the-art methods in high-fidelity point cloud generation.
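The divide-and-conquer pipeline the abstract describes could be sketched roughly as follows. All names, shapes, and the toy single-head attention below are illustrative assumptions for exposition, not the authors' actual implementation: a set of learnable per-patch priors is mixed by a patch-wise transformer (global structure), each patch generator decodes its feature into a local point set, points are refined by a point-wise transformer within each patch, and the patches are concatenated into the full cloud.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PATCHES, PTS_PER_PATCH, DIM = 8, 64, 32

# Hypothetical learnable priors: one latent vector per patch, meant to
# capture a geometry primitive (in training these would be optimized).
patch_priors = rng.normal(size=(N_PATCHES, DIM))

def attention(q, k, v):
    """Plain scaled dot-product attention (single head, no projections)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Patch-wise transformer step: priors exchange global shape information.
patch_feats = patch_priors + attention(patch_priors, patch_priors, patch_priors)

# Each patch generator decodes its feature into a local set of 3D points
# (a single linear decoder stands in for the per-patch generator here).
W_dec = rng.normal(size=(DIM, PTS_PER_PATCH * 3)) * 0.1
patches = np.tanh(patch_feats @ W_dec).reshape(N_PATCHES, PTS_PER_PATCH, 3)

# Point-wise transformer step within each patch, then concatenate (conquer).
refined = [pts + 0.1 * attention(pts, pts, pts) for pts in patches]
cloud = np.concatenate(refined, axis=0)  # shape: (N_PATCHES * PTS_PER_PATCH, 3)
print(cloud.shape)
```

The sketch only conveys the data flow (priors → patch interaction → per-patch decoding → point interaction → concatenation); the paper's transformers, losses, and training procedure are more involved.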
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: (1) included comparisons with the most recent works in Table 1;
(2) included an experiment with qualitative and quantitative comparisons of different patches in our Appendix;
(3) included an ablation experiment on the patch-based generation in our Appendix;
(4) corrected some typos.
Assigned Action Editor: ~Jiajun_Wu1
Submission Number: 311