Abstract: Point cloud processing methods exploit local point features and global context through aggregation, which does
not explicitly model the internal correlations between local and global features. To address this problem, we propose full point
encoding, which is applicable to both convolution and transformer architectures. Specifically, we propose Full Point Convolution
(FuPConv) and Full Point Transformer (FPTransformer) architectures. The key idea is to adaptively learn the weights from
local and global geometric connections, where the connections are established through local and global correlation functions,
respectively. FuPConv and FPTransformer simultaneously model the local and global geometric relationships as well as their
internal correlations, demonstrating strong generalization ability and high performance. FuPConv is incorporated into classical
hierarchical network architectures to achieve local and global shape-aware learning. In FPTransformer, we introduce full point
position encoding in self-attention, which hierarchically encodes each point's position in the global and local receptive fields.
We also propose a shape-aware downsampling block that takes into account both the local shape and the global context.
Experimental comparisons with existing methods on benchmark datasets show the efficacy of FuPConv and FPTransformer
for semantic segmentation, object detection, classification, and normal estimation tasks. In particular, we achieve state-of-the-art
semantic segmentation results of 76.8% mIoU on S3DIS 6-fold and 73.1% on S3DIS Area 5. Our code is available at
https://github.com/hnuhyuwa/FullPointTransformer.
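Conceptually, the full point position encoding described above can be read as combining a local (relative) geometric term with a global (absolute) geometric term inside point self-attention. The PyTorch-style sketch below is purely illustrative and is not the authors' implementation; the module structure and names (e.g. local_mlp, global_mlp, the k-nearest-neighbor grouping) are assumptions made for demonstration only.

```python
# Minimal, illustrative sketch (not the paper's code): vector self-attention over a
# point's k nearest neighbors, with a position encoding that mixes local (relative)
# and global (absolute) coordinates before weighting the neighbor features.
import torch
import torch.nn as nn

class FullPointAttentionSketch(nn.Module):
    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.to_qkv = nn.Linear(dim, dim * 3)
        # Hypothetical encoders for relative (local) and absolute (global) positions.
        self.local_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.global_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) point features.
        B, N, _ = xyz.shape
        q, k, v = self.to_qkv(feats).chunk(3, dim=-1)                     # each (B, N, C)
        # k-nearest-neighbor indices from pairwise Euclidean distances.
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices  # (B, N, k)
        gather = lambda t: torch.gather(
            t.unsqueeze(1).expand(B, N, N, t.shape[-1]), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, t.shape[-1]))
        k_n, v_n, xyz_n = gather(k), gather(v), gather(xyz)               # (B, N, k, *)
        rel_pos = self.local_mlp(xyz.unsqueeze(2) - xyz_n)                # local geometric term
        abs_pos = self.global_mlp(xyz).unsqueeze(2)                       # global geometric term
        pos = rel_pos + abs_pos                                           # combined position encoding
        attn = torch.softmax(self.attn_mlp(q.unsqueeze(2) - k_n + pos), dim=2)
        return (attn * (v_n + pos)).sum(dim=2)                            # (B, N, C)
```

The sketch only conveys the general idea of letting attention weights depend jointly on local neighborhood offsets and each point's absolute position; the actual hierarchical encoding, correlation functions, and shape-aware downsampling are detailed in the paper and repository linked above.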