Abstract: With the rapid advancement of 3-D acquisition technology, 3-D change detection has attracted considerable attention in recent years. Existing deep learning-based point cloud change detection methods usually adopt a common encoder-decoder structure to learn pointwise features. However, these feature learning backbones are not specifically designed for the change detection task and ignore local structure discrepancies during feature learning. To address these issues, this article proposes a multiscale difference-aware network (Ms-DANet) for 3-D point cloud change detection. First, we propose a difference-guided multiscale feature learning (DG-MsFL) module that enhances the feature differences between bi-temporal point clouds at multiple scales during feature encoding and uses these differences to guide the network to focus on local structures with large discrepancies. Next, we introduce a multiscale difference feature fusion (Ms-DFF) module that fuses the multiscale feature differences to learn more discriminative features during feature decoding. Finally, we treat point cloud change detection as a semantic classification problem and propose a multiscale loss (Ms-Loss) function to facilitate network training. We conduct experiments on the real-world street-level point cloud change detection dataset SLPCCD and the simulated airborne urban point cloud change detection dataset URB3DCD. The experimental results show that Ms-DANet achieves significant improvements on both the real-world and simulated datasets, demonstrating its effectiveness and robustness across various sensors and data modalities.
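To make the difference-guided idea concrete, the sketch below shows one plausible way a per-scale block could use bi-temporal feature differences as an attention signal. The abstract does not specify the internals of the DG-MsFL module, so the gating design, the assumption of pointwise-aligned features, and the class name DifferenceGuidedBlock are illustrative only, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DifferenceGuidedBlock(nn.Module):
    """Hypothetical sketch of difference-guided feature enhancement at one scale.

    Assumption: bi-temporal pointwise features are already aligned across the
    two epochs (e.g., via nearest-neighbour matching during encoding); the
    sigmoid gate on the feature difference is an illustrative choice.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Map the per-point feature difference to a [0, 1] attention weight.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor):
        # feat_t1, feat_t2: (N, C) pointwise features at the same scale.
        diff = feat_t2 - feat_t1
        attn = self.gate(diff)
        # Emphasise points whose local structure changed between epochs,
        # and return the raw difference for later multiscale fusion.
        return feat_t1 + attn * feat_t1, feat_t2 + attn * feat_t2, diff
```

In such a design, the returned per-scale differences could then be collected across scales and fused during decoding, which is the role the abstract assigns to the Ms-DFF module.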