Multi-Level Collaborative Learning for Multi-Target Domain Adaptive Semantic Segmentation

Published: 01 Jan 2024, Last Modified: 28 Jan 2025. IEEE Trans. Circuits Syst. Video Technol. 2024. License: CC BY-SA 4.0.
Abstract: In autonomous driving, it is crucial to train a single segmentation model that generalizes well across diverse target environments. Due to the lack of pixel-level annotations and the large domain discrepancy between domain pairs, it is difficult to achieve strong performance for multi-target domain adaptive semantic segmentation. To this end, we propose a novel Multi-level Collaborative Learning (MCL) framework that consists of two core components, namely Multi-level Self-Training (MST) and Hierarchical Knowledge Distillation (HKD). Specifically, MST focuses on individual, collaborative, and ensemble learning, whilst HKD aims to exploit the model’s ensemble capability. These designs enable the proposed MCL to fully exploit the multiple target data to train more powerful teachers and yield more accurate domain alignment. In addition, we integrate style transfer, self-training, and knowledge distillation into an end-to-end training scheme, making the proposed MCL more practical in applications. Empirically, we conduct extensive experiments on multi-target benchmarks. The encouraging results demonstrate the effectiveness of our method, which achieves state-of-the-art performance. Code is available at https://github.com/feifei-cv/MCL.
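The abstract does not give the exact formulation of HKD, but a common way to distill an ensemble of target-specific teachers into a single student is to match the student's temperature-softened predictions to the averaged teacher distribution. The sketch below is a hedged illustration under that assumption (function names, the averaging scheme, and the temperature value are ours, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_distill_loss(teacher_logits_list, student_logits, T=2.0):
    """Cross-entropy between the averaged teacher ensemble and the student.

    teacher_logits_list: list of (N, C) logit arrays, one per target-specific
    teacher (a simplifying assumption about how HKD forms its ensemble).
    """
    teacher_prob = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    student_log_prob = np.log(softmax(student_logits, T) + 1e-12)
    # Scale by T^2, as is standard in distillation, to keep gradients comparable.
    return float(-(teacher_prob * student_log_prob).sum(axis=-1).mean() * T * T)
```

A student whose predictions agree with the teacher ensemble incurs a lower loss than one that commits to a class the ensemble considers unlikely; in the actual method this loss would be applied per pixel over segmentation maps.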