Keywords: collaborative learning, semi-supervised training, LiDAR semantic segmentation
Abstract: Annotating large-scale LiDAR point clouds for 3D semantic segmentation is costly and time-consuming, motivating the use of semi-supervised learning (SemiSL). Standard SemiSL methods typically rely on a single LiDAR representation in a two-stage framework, where consistency between identical models is enforced under input perturbations. However, these approaches treat pseudo-labels from a single network as fully reliable, which reinforces architectural biases and propagates errors during distillation, ultimately limiting student performance. Recent dual-representation methods alleviate this issue but remain constrained by the two-stage design. We introduce CoLLiS, a novel framework that leverages Collaborative Learning for LiDAR Semi-supervised segmentation. Unlike prior paradigms, CoLLiS trains multiple representations collaboratively in a single stage by treating them as coequal students. Cross-representation distillation is adaptively balanced by monitoring inter-student disparities, mitigating confirmation bias and improving robustness. Extensive experiments on three public benchmarks show that CoLLiS consistently enhances the performance of all participating models and achieves superior results compared with state-of-the-art LiDAR SemiSL methods. The code will be released upon acceptance.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 13894