DepthEnhanced PointNet++ (DEP-Net): An Optimization of PointNet++ for Incomplete Point Clouds Using Projected Depth Maps

XJTU 2024 CSUC Submission 8 Authors

31 Mar 2024 (modified: 03 Apr 2024) · XJTU 2024 CSUC Submission · CC BY 4.0
Keywords: DEP-Net, 3D Affordance analysis, incomplete point cloud, AffordanceNet, PointNet++, multi-view depth image, semantic segmentation, Depth Enhanced
Abstract: Since the introduction of PointNet, much research has shifted toward processing point cloud data directly with deep learning techniques. However, these studies predominantly concentrate on complete point clouds for 3D affordance analysis, and most existing models suffer a decline in performance when given incomplete point cloud inputs. Results from 3D AffordanceNet indicate that, owing to the geometric information lost in partial point clouds relative to complete ones, the performance of classic networks such as PointNet++, DGCNN, and U-Net drops by 2.3%, 4.2%, and 4.4%, respectively, compared with their performance on complete point clouds. Inspired by point cloud completion networks, we first design a self-view fusion network that uses multi-view depth image information to observe the incomplete shape and generate a compact global shape. The complete view features acquired through this completion network are then fed into PointNet++ to perform the downstream semantic segmentation task.
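The first stage of the pipeline relies on rendering an incomplete point cloud into depth images from several viewpoints. As a rough illustration of that projection step only (a hypothetical NumPy sketch, not the paper's fusion network; the function name, view count, and orthographic camera model are all assumptions), multi-view depth maps can be produced with a simple z-buffer:

```python
import numpy as np

def depth_maps_from_points(points, num_views=4, res=32):
    """Render orthographic depth maps of a point cloud from several
    azimuth viewpoints (illustrative sketch, not the paper's method).

    points: (N, 3) array of 3D coordinates.
    Returns an array of shape (num_views, res, res) where each pixel
    holds the nearest depth along the view axis, or 0 if no point hits it.
    """
    # Center and scale into the unit ball so pixel coordinates are defined.
    pts = points - points.mean(axis=0)
    pts = pts / (np.linalg.norm(pts, axis=1).max() + 1e-8)

    maps = np.zeros((num_views, res, res), dtype=np.float32)
    for v in range(num_views):
        theta = 2.0 * np.pi * v / num_views
        # Rotate around the y (up) axis to obtain the v-th viewpoint.
        rot = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(theta), 0.0, np.cos(theta)]])
        p = pts @ rot.T
        # Orthographic projection onto the x-y plane; z becomes depth.
        u = np.clip(((p[:, 0] + 1.0) * 0.5 * (res - 1)).astype(int), 0, res - 1)
        w = np.clip(((p[:, 1] + 1.0) * 0.5 * (res - 1)).astype(int), 0, res - 1)
        depth = p[:, 2] + 1.0  # shift so all depths lie in [0, 2]
        # Z-buffer: keep the nearest point per pixel.
        buf = np.full((res, res), np.inf, dtype=np.float32)
        np.minimum.at(buf, (w, u), depth)
        buf[np.isinf(buf)] = 0.0
        maps[v] = buf
    return maps
```

In the paper, features extracted from such views are fused into a compact global shape representation before being passed to PointNet++; the rendering above only shows where the depth-image inputs come from.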
Submission Number: 8