Abstract: Semantic segmentation is a fundamental task in understanding urban mobile laser scanning (MLS) point clouds.
Recently, deep learning-based methods have become prominent for semantic segmentation of MLS point clouds,
and many recent works have achieved state-of-the-art performance on open benchmarks. However, objects differ
across scenes, for example, in building heights and in the forms that the same road-side objects take, so the
existing open benchmarks (the source scenes) often differ significantly from the actual application datasets
(the target scenes). Consequently, semantic segmentation networks trained on source scenes underperform
when applied to target scenes. In this paper, we propose a
novel method to perform unsupervised scene adaptation for semantic segmentation of urban MLS point clouds.
First, we demonstrate the scene transfer phenomenon in urban MLS point clouds. Then, we propose a new pointwise
attentive transformation module (PW-ATM) that adaptively performs data alignment. Next, a maximum
classifier discrepancy-based (MCD-based) adversarial learning framework is adopted to further achieve feature
alignment. Finally, an end-to-end alignment deep network architecture is designed for the unsupervised scene
adaptation semantic segmentation of urban MLS point clouds. To experimentally evaluate the performance
of our proposed approach, two large-scale labeled source scenes and two different target scenes were used
for training, and four actual application scenes were used for testing. The experimental results
indicate that our approach effectively achieves scene adaptation for semantic segmentation of urban MLS
point clouds.
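The maximum classifier discrepancy (MCD) framework mentioned above trains two classifier heads on a shared feature extractor and uses their disagreement on target data as an adversarial signal. A minimal sketch of that discrepancy term is shown below, assuming per-point features and an L1 distance between the heads' class probabilities; the module and variable names are illustrative, not the paper's actual implementation.

```python
# Hedged sketch of the MCD discrepancy term for pointwise classification.
# Assumptions: per-point feature vectors, two classifier heads; all names
# (TwoHeadClassifier, discrepancy) are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadClassifier(nn.Module):
    """Shared feature extractor with two classifier heads, as in MCD."""

    def __init__(self, in_dim=16, num_classes=4):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.head1 = nn.Linear(32, num_classes)
        self.head2 = nn.Linear(32, num_classes)

    def forward(self, x):
        f = self.feat(x)
        return self.head1(f), self.head2(f)


def discrepancy(logits1, logits2):
    """Mean L1 distance between the two heads' class probabilities."""
    return (F.softmax(logits1, dim=-1) - F.softmax(logits2, dim=-1)).abs().mean()


# Toy usage: 100 unlabeled target points with 16-dim features.
torch.manual_seed(0)
model = TwoHeadClassifier()
target_points = torch.randn(100, 16)
l1, l2 = model(target_points)
d = discrepancy(l1, l2)
# In MCD training, one step maximizes d w.r.t. the two heads, and the next
# step minimizes d w.r.t. the feature extractor, aligning target features.
```

In this scheme the discrepancy plays the role of the domain discriminator in conventional adversarial adaptation: the heads are pushed apart on target points while the extractor learns features on which they agree.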