Abstract: Deep neural networks have recently made remarkable progress in 3D point cloud analysis. However, current shape descriptors are inadequate for thoroughly capturing point cloud information. To address this problem, a feature representation learning method, named Dual-Neighborhood Deep Fusion Network (DNDFN), is proposed to serve as an improved point cloud encoder for point cloud analysis. Specifically, the traditional local neighborhood ignores long-distance dependencies, and DNDFN overcomes this limitation with an adaptive key-neighborhood replenishment mechanism. Furthermore, because the transmission of information between points depends on the unique latent relationship between them, a convolution that captures this relationship is proposed. Extensive experiments on existing benchmarks, especially non-idealized datasets, verify the effectiveness of DNDFN, which achieves state-of-the-art performance.
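The abstract does not give implementation details, but the contrast it draws between a purely local neighborhood and a long-distance "key" neighborhood can be sketched as follows. This is a minimal illustration, not the paper's method: the function names, the neighborhood sizes, and the use of feature-space distance to select long-range key neighbors are all assumptions made here for demonstration.

```python
import numpy as np

def knn_indices(dist, k):
    # Indices of the k nearest neighbors per row, skipping column 0 (self).
    return np.argsort(dist, axis=1)[:, 1:k + 1]

def dual_neighborhood(points, features, k_local=4, k_key=2):
    """Return local (coordinate-space) and key (feature-space) neighbor
    indices for every point. Feature-space neighbors may be far apart in
    3D space, supplying the long-distance dependency that a plain local
    kNN neighborhood misses. (Illustrative sketch only.)"""
    # Pairwise squared distances in coordinate space.
    d_xyz = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # Pairwise squared distances in feature space.
    d_feat = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    return knn_indices(d_xyz, k_local), knn_indices(d_feat, k_key)

# Toy point cloud: 8 points with random coordinates and 16-D features.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
feats = rng.normal(size=(8, 16))
local_idx, key_idx = dual_neighborhood(pts, feats)
print(local_idx.shape, key_idx.shape)  # (8, 4) (8, 2)
```

In a full encoder, the two neighbor sets would be fused per point before the relation-aware convolution; here they are simply returned side by side to make the dual-neighborhood idea concrete.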