Constraint-Aware Feature Learning for Parametric Point Cloud

Published: 15 Jul 2025, Last Modified: 22 Jul 2025 · https://openreview.net/forum?id=wMKVouX9JS · CC BY 4.0
Abstract: Parametric point clouds are sampled from CAD shapes and are widely used in industrial manufacturing. Most existing CAD-specific networks focus on geometric features, such as primitive parameters, while overlooking the constraints inherent in CAD shapes, which limits their ability to discriminate CAD shapes that are similar in appearance but divergent in constraints. To address this issue, we analyzed the effect of constraints and proposed a deep learning-friendly representation of them: point-wise Main Axis Direction (MAD), Adjacency (ADJ), and Primitive Type (PMT). We then developed the Constraint Feature Learning Network (CstNet) to extract and leverage these constraints. CstNet consists of two stages. Stage 1 extracts constraints from B-Rep data or point clouds by exploiting the locality of constraints, enabling it to generalize to unseen datasets after pre-training. Stage 2 leverages coordinates and constraints to enhance the comprehension of CAD shapes, employing attention layers to adaptively adjust the weights on MAD, ADJ, and PMT, which facilitates the effective utilization of constraints. Extensive experiments and ablation studies demonstrate the effectiveness and robustness of our design. To the best of our knowledge, CstNet is the first constraint-aware deep learning method tailored for parametric point cloud analysis.
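To illustrate the idea of attention-based weighting over the three constraint representations, below is a minimal PyTorch sketch. It is not the paper's actual architecture: the module name, feature dimensions, encoders, and the form of the attention are all illustrative assumptions; only the per-point inputs (coordinates, MAD, ADJ, PMT) and the adaptive weighting of the three constraint branches follow the abstract.

```python
import torch
import torch.nn as nn

class ConstraintFusion(nn.Module):
    """Illustrative attention-weighted fusion of per-point constraint features
    (MAD, ADJ, PMT) with coordinate features. All dimensions and layer choices
    are assumptions, not the architecture described in the paper."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # Per-branch encoders for the three constraint representations.
        self.enc_mad = nn.Linear(3, dim)   # main axis direction (unit vector)
        self.enc_adj = nn.Linear(1, dim)   # adjacency value per point
        self.enc_pmt = nn.Linear(4, dim)   # one-hot primitive type (assumed 4 classes)
        self.enc_xyz = nn.Linear(3, dim)   # raw coordinates
        # Attention scores over the three constraint branches, conditioned on coordinates.
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, xyz, mad, adj, pmt):
        # xyz: (B, N, 3), mad: (B, N, 3), adj: (B, N, 1), pmt: (B, N, 4)
        f_xyz = self.enc_xyz(xyz)
        branches = torch.stack(
            [self.enc_mad(mad), self.enc_adj(adj), self.enc_pmt(pmt)], dim=2
        )                                                       # (B, N, 3, dim)
        weights = torch.softmax(self.attn(f_xyz), dim=-1)       # (B, N, 3)
        fused = (weights.unsqueeze(-1) * branches).sum(dim=2)   # (B, N, dim)
        return f_xyz + fused  # per-point feature enriched with constraint cues
```

A usage example would pass per-point tensors of shape (B, N, 3), (B, N, 3), (B, N, 1), and (B, N, 4) and feed the returned per-point features to a downstream classification or segmentation head.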