Foggy-Aware Teacher: An Unsupervised Domain Adaptive Learning Framework for Object Detection in Foggy Scenes

Published: 2025 · Last Modified: 10 Nov 2025 · IEEE Robotics Autom. Lett. 2025 · CC BY-SA 4.0
Abstract: Unsupervised domain adaptation (UDA) is an effective scheme for improving the performance of an object detector in foggy scenes by adapting labeled normal images (source domain) to unlabeled foggy images (target domain). Existing methods leverage the Teacher-Student mutual learning framework, i.e., the unbiased mean teacher, to produce pseudo-labels of the foggy images for self-training. This general-purpose design cannot fully exploit the characteristics of foggy images, resulting in low-quality pseudo-labels that degrade the mutual learning between the teacher and student networks. To address this problem, we propose a foggy-aware teacher (FAT) framework that improves the quality of pseudo-labels by exploiting the fog characteristics from two aspects: 1) we design a lightweight foggy-aware module (FAM), inserted before the teacher network, which estimates the fog information, i.e., the global atmospheric light, and produces source-like (defogged) images to mitigate the domain shift; 2) we propose a foggy-information-guided image fusion (FIF) scheme, inserted before the student network, which produces interpolated images by applying the fog characteristics to the normal images. The student is trained with the high-quality pseudo-labels produced by the teacher and the ground-truth labels supplied by the normal images. The entire mutual learning framework thus forms a virtuous self-training cycle that improves performance on foggy images. Extensive experiments demonstrate that our FAT framework effectively improves detector performance under foggy conditions and achieves better results than state-of-the-art methods on the benchmark datasets Foggy Cityscapes, RTTS, and VOC_foggy.
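
The abstract describes a Teacher-Student mutual learning loop in which a FAM defogs the target images before the teacher generates pseudo-labels, and a FIF scheme transfers the estimated fog characteristics onto the labeled normal images before the student is trained. The sketch below illustrates one way such a training step could be organized; it is only a minimal illustration assuming the atmospheric scattering model I = J·t + A·(1 − t), and all names (`fam`, `defog`, `fuse_fog`, `supervised_loss`, the loss weighting) are hypothetical placeholders rather than the paper's actual implementation.

```python
import torch


def defog(foggy_img, atmos_light, transmission):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover a source-like (defogged) image J (illustrative only)."""
    t = transmission.clamp(min=0.1)  # avoid division by near-zero transmission
    return (foggy_img - atmos_light * (1.0 - t)) / t


def fuse_fog(clean_img, atmos_light, transmission, alpha=0.5):
    """Apply the estimated fog characteristics to a normal (clean) image,
    producing an interpolated, partially fogged image for the student."""
    t = alpha + (1.0 - alpha) * transmission  # soften the fog strength
    return clean_img * t + atmos_light * (1.0 - t)


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track an exponential moving average of the student."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp, alpha=1.0 - momentum)


def training_step(student, teacher, fam, source_batch, target_batch, optimizer):
    src_imgs, src_labels = source_batch   # labeled normal images
    tgt_imgs = target_batch                # unlabeled foggy images

    # FAM (before the teacher): estimate fog information and defog the
    # foggy images so the teacher sees source-like inputs.
    atmos_light, transmission = fam(tgt_imgs)
    with torch.no_grad():
        pseudo_labels = teacher(defog(tgt_imgs, atmos_light, transmission))

    # FIF (before the student): interpolate fog onto the normal images.
    fused_imgs = fuse_fog(src_imgs, atmos_light, transmission)

    # Student learns from ground-truth labels (normal and fused images)
    # and from the teacher's pseudo-labels on the foggy images.
    loss = (student.supervised_loss(src_imgs, src_labels)
            + student.supervised_loss(fused_imgs, src_labels)
            + student.supervised_loss(tgt_imgs, pseudo_labels))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # close the mutual-learning cycle
    return loss.item()
```

In this reading, the quality of the pseudo-labels improves because the teacher operates on defogged, source-like images, while the student is exposed to fog-like inputs with reliable labels, which is what allows the self-training cycle described in the abstract to reinforce itself.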