FDC: Feature Dropout Consistency for unsupervised domain adaptation semantic segmentation

Published: 2025, Last Modified: 14 Nov 2025, Neural Networks 2025, CC BY-SA 4.0
Abstract: In Unsupervised Domain Adaptation Semantic Segmentation (UDASS), self-training has become one of the most effective approaches to date, but the absence of target labels leaves models prone to overfitting. Consistency techniques address this by perturbing the target-domain inputs and enforcing consistent pixel-wise predictions before and after the perturbation, on the premise that a well-performing model should predict consistently for the target data under perturbation. In this paper, we introduce the Feature Dropout Consistency (FDC) module, a novel approach that perturbs at the feature level: random feature dropout is applied between the encoder and decoder of the student network, and a feature dropout consistency loss then minimizes the discrepancy between the predictions of the perturbed student and its teacher counterpart. FDC integrates seamlessly with existing self-training methods and, when combined with input perturbation, covers a broader perturbation spectrum. Through rigorous experimentation in standard UDA settings (training with labeled synthetic and unlabeled real data), FDC consistently outperforms baseline models, establishing new benchmarks on GTAV → Cityscapes and SYNTHIA → Cityscapes.
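To make the mechanism concrete, the following is a minimal PyTorch sketch of the FDC idea, assuming a mean-teacher (EMA) setup. The toy TinySegNet architecture, the dropout rate p, and the soft cross-entropy form of the consistency loss are illustrative assumptions, not details taken from the paper.

# Minimal sketch of Feature Dropout Consistency (FDC), assuming a
# PyTorch encoder-decoder segmentation network and a mean-teacher setup.
# TinySegNet, fdc_loss, and p=0.5 are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder-decoder stand-in for a real segmentation backbone."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(64, num_classes, 1)

    def forward(self, x, feature_dropout=0.0):
        feats = self.encoder(x)
        if feature_dropout > 0:
            # FDC perturbation: randomly drop feature channels between
            # encoder and decoder (the rate is an assumed hyperparameter).
            feats = F.dropout2d(feats, p=feature_dropout, training=True)
        return self.decoder(feats)

def fdc_loss(student, teacher, target_images, p=0.5):
    """Consistency between the dropout-perturbed student and the teacher."""
    with torch.no_grad():
        # Teacher predicts on the unperturbed target batch.
        teacher_probs = teacher(target_images).softmax(dim=1)
    # Student predicts with feature dropout enabled.
    student_logits = student(target_images, feature_dropout=p)
    # Soft cross-entropy against the teacher's distribution (assumed form).
    return -(teacher_probs * student_logits.log_softmax(dim=1)).sum(1).mean()

# Usage: the teacher is an EMA copy of the student, never updated by gradients.
student = TinySegNet()
teacher = copy.deepcopy(student).requires_grad_(False)
images = torch.randn(2, 3, 64, 64)  # unlabeled target-domain batch
loss = fdc_loss(student, teacher, images)
loss.backward()

In a full pipeline, this term would be added to the usual self-training objective, and the teacher weights would be updated as an exponential moving average of the student after each optimizer step.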