Abstract: This study focuses on point cloud upsampling, a crucial task in 3-D data processing that is hindered by current 3-D sensor limitations. Point clouds from RGB-D cameras and light detection and ranging (LiDAR) scanners are often sparse, noisy, and irregular, which challenges traditional processing methods that rely on prior knowledge and hinders detail preservation. Despite deep learning's transformative impact, issues such as hole overfitting and insufficient local-global feature fusion persist. To address these, we introduce the bilevel fusion point cloud upsampling (BiPU) network. It features a parallel extractor for simultaneous local and global feature extraction and a consistency-based feature alignment module that employs cross-attention for enhanced multiscale feature transfer. BiPU also incorporates 4-D encoding for rotational invariance and depthwise separable convolutions to reduce complexity and parameter count. Tested across multiple datasets, BiPU excels at maintaining hole contours while reducing computational cost, marking a notable advancement in point cloud processing.
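The abstract credits depthwise separable convolutions with reducing complexity and parameter count. As a minimal sketch of why that holds (illustrative channel and kernel sizes only, not values taken from the BiPU paper), the parameter counts of a standard convolution and its depthwise separable counterpart can be compared directly:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k kernel per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 64 -> 128 channels with 3x3 kernels.
std = conv_params(64, 128, 3)                  # 73728 parameters
sep = depthwise_separable_params(64, 128, 3)   # 576 + 8192 = 8768 parameters
print(std, sep, round(std / sep, 1))           # roughly an 8.4x reduction
```

The reduction factor grows with the number of output channels, which is why the technique is attractive in feature-heavy networks like point cloud upsamplers.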
External IDs: dblp:journals/tii/ZhuZCZ24