Automated Mobile Attention KPConv Networks via A Wide & Deep Predictor


Sep 29, 2021 (edited Nov 22, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: 3D Point Cloud Classification and Segmentation, Neural Architecture Search
  • Abstract: Kernel Point Convolution (KPConv) achieves cutting-edge performance on 3D point cloud tasks. Unfortunately, the large size of KPConv networks limits their usage in mobile scenarios. In addition, we observe that KPConv ignores relationships among kernel points and treats each kernel point equally when formulating neighbor-kernel correlation via Euclidean distance, which weakens its representation power. To mitigate these issues, we propose a module named Mobile Attention Kernel Point Convolution (MAKPConv) that improves both the efficiency and the quality of KPConv. MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of each kernel point to each neighbor point via Neighbor-Kernel attention to improve representation power. Furthermore, we capitalize on the Inverted Residual Bottleneck (IRB) to craft a design space and employ a predictor-based Neural Architecture Search (NAS) approach to automate the design of efficient 3D networks based on MAKPConv. To fully exploit this immense design space with an accurate predictor, we identify the importance of carrying out feature engineering on searchable features to improve neural architecture representations, and propose a Wide & Deep Predictor that unifies dense and sparse neural architecture representations for lower error in performance prediction. Experimental evaluations show that our NAS-crafted MAKPConv network uses 96% fewer parameters on 3D point cloud classification and segmentation benchmarks while achieving better performance. Compared with the state-of-the-art NAS-crafted model SPVNAS, our NAS-crafted MAKPConv network achieves ~1% better mIoU with 83% fewer parameters and 52% fewer Multiply-Accumulates.
  • One-sentence Summary: We introduce the MAKPConv module to enable high-performing and efficient learning on 3D point clouds, and automate the design of MAKPConv networks using predictor-based NAS with enhanced neural architecture representations via a Wide & Deep Predictor.
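The core MAKPConv idea described in the abstract (depthwise kernel weights plus Neighbor-Kernel attention on top of KPConv's distance-based correlation) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `makpconv_sketch`, the toy linear attention projection `attn_w`, and the softmax normalization are assumptions made for exposition; the paper's exact attention formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def makpconv_sketch(neighbors, feats, kernel_pts, dw_weights, attn_w, sigma=1.0):
    """Hedged sketch of a depthwise KPConv with Neighbor-Kernel attention.

    neighbors  : (N, 3) neighbor coordinates relative to the center point
    feats      : (N, C) neighbor features
    kernel_pts : (K, 3) kernel point positions
    dw_weights : (K, C) depthwise (per-channel) kernel weights
    attn_w     : (3, K) toy projection producing per-neighbor attention logits
    """
    # Standard KPConv linear correlation: h_jm = max(0, 1 - d(x_j, k_m) / sigma),
    # which treats every kernel point equally.
    d = np.linalg.norm(neighbors[:, None, :] - kernel_pts[None, :, :], axis=-1)
    h = np.maximum(0.0, 1.0 - d / sigma)                      # (N, K)

    # Neighbor-Kernel attention: re-calibrate each kernel point's contribution
    # per neighbor instead of weighting by distance alone (toy formulation).
    a = softmax(neighbors @ attn_w, axis=-1)                  # (N, K)
    corr = h * a                                              # (N, K)

    # Depthwise aggregation: each kernel point contributes a per-channel scale
    # rather than a full C x C' matrix, which is where the parameter savings
    # of the depthwise kernel come from.
    out = (corr[:, :, None] * dw_weights[None, :, :] * feats[:, None, :]).sum(axis=(0, 1))
    return out  # (C,) aggregated feature for one center point
```

A full layer would apply this per center point over its neighborhood and typically follow with a pointwise (1x1) convolution, mirroring how depthwise-separable convolutions and the IRB design space factorize computation in 2D mobile networks.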