DDNet: Density and depth-aware network for object detection in foggy scenes

Published: 2023 · Last Modified: 27 Jan 2026 · IJCNN 2023 · License: CC BY-SA 4.0
Abstract: Fog causes serious degradation in image quality, which in turn degrades object detection performance. This degradation has two main causes: (i) degraded images make object localization difficult, and (ii) it is hard to extract features that remain robust across varying fog densities. To address these two problems, in this paper we propose a simple yet efficient network named the density and depth-aware network (DDNet), which consists of a density-aware attention network (DAANet) and a depth-aware non-local contextual network (DNCNet). The DNCNet captures long-range dependencies guided by depth information to improve object localization, while the DAANet employs an attention mechanism guided by predicted fog densities to keep features robust under different fog densities. Experiments on the FoggyDriving dataset show that our approach achieves state-of-the-art performance.
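The abstract only names the two mechanisms without giving their exact formulations, so the following is a minimal, hypothetical sketch of the two ideas in generic terms: a channel-attention gate conditioned on a predicted fog-density scalar (the DAANet idea), and a non-local aggregation whose pairwise affinities are damped by depth difference (the DNCNet idea). All function names, the one-layer sigmoid gate, and the additive depth penalty are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def density_guided_channel_attention(feat, fog_density):
    """Reweight channels of feat (C, H, W) by a gate conditioned on
    global context and a scalar fog density in [0, 1].
    The 1-layer sigmoid gate is a placeholder for a learned MLP."""
    pooled = feat.mean(axis=(1, 2))                       # (C,) global average pool
    gate = 1.0 / (1.0 + np.exp(-(pooled + fog_density)))  # sigmoid gate, (C,)
    return feat * gate[:, None, None]

def depth_guided_nonlocal(feat, depth):
    """Non-local aggregation over spatial positions, with pairwise
    affinities damped by depth difference (similar depths attend more).
    The additive depth penalty is an illustrative assumption."""
    C, H, W = feat.shape
    N = H * W
    x = feat.reshape(C, N)                    # (C, N) flattened features
    d = depth.reshape(N)                      # (N,) per-pixel depth
    sim = x.T @ x                             # (N, N) feature similarity
    depth_gap = np.abs(d[:, None] - d[None, :])
    logits = sim - depth_gap                  # damp distant-depth pairs
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)   # row-softmax attention
    out = x @ attn.T                          # aggregate context, (C, N)
    return feat + out.reshape(C, H, W)        # residual connection
```

In this sketch both modules preserve the feature-map shape, so they can be dropped into a detection backbone between existing stages; a real implementation would replace the hand-rolled gate and affinity with learned projections.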