TTSDA-YOLO: A Two Training Stage Domain Adaptation Framework for Object Detection in Adverse Weather

Published: 2025 · Last Modified: 08 Mar 2025 · IEEE Trans. Instrum. Meas. 2025 · CC BY-SA 4.0
Abstract: Object detection plays a crucial role in fields such as autonomous driving, security surveillance, and unmanned aerial vehicles (UAVs). However, detector performance can degrade drastically under adverse weather conditions such as fog, rain, and snow, because detectors are usually trained on images taken in clear weather but tested on images captured in adverse weather. This creates a domain shift between the two sets of images. In this article, we propose a robust detection framework based on YOLOv7, called two training stage domain adaptation you only look once (TTSDA-YOLO), which performs well in both normal and adverse weather conditions. We design a new training strategy that fully exploits auxiliary domains to transfer knowledge from the source domain to the target domain. This strategy consists of two stages. In the first training stage, we address the disparity in feature distributions between normal-weather and adverse-weather images: a multiscale image-level domain adaptation (IDA) module gradually adapts the normal-weather domain to the adverse-weather domain. In the second training stage, we make full use of the auxiliary domain by feeding it to the network as a training set, and we design a backbone regularization module (BRM) to prevent new domain shifts from arising during training. Extensive experiments with TTSDA-YOLO on benchmark datasets show that our approach significantly improves detection performance in adverse weather conditions.
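The abstract does not specify how the multiscale image-level domain adaptation (IDA) module is implemented. Below is a minimal, hypothetical sketch of the standard building block such modules typically use: a gradient reversal layer (GRL) feeding a small per-location domain discriminator, which would be attached to backbone feature maps at several scales. All class and parameter names here are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients
    on the backward pass, so the backbone learns domain-invariant
    features while the discriminator learns to tell domains apart."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Negate the gradient flowing back into the backbone.
        return -ctx.lambd * grad_output, None

class ImageLevelDomainDiscriminator(nn.Module):
    """Hypothetical IDA-style head: predicts a per-location domain
    logit (clear vs. adverse weather) from one backbone feature map.
    One instance per feature scale would realize a multiscale scheme."""
    def __init__(self, in_channels: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # 1 logit: "adverse" domain
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        feat = GradientReversal.apply(feat, self.lambd)
        return self.net(feat)
```

In a DANN-style setup, the discriminator's binary cross-entropy loss is simply added to the detection loss; the GRL's sign flip is what turns that shared objective into an adversarial one for the backbone.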