IHENet: an illumination-invariant hierarchical feature enhancement network for low-light object detection
Abstract: Extracting discriminative visual cues for downstream tasks under low-light conditions remains a significant challenge. We propose the illumination-invariant hierarchical feature enhancement network (IHENet), which integrates with existing detectors to form an elegant framework. To improve the model's adaptability to complex lighting conditions, we introduce an illumination-robust feature extractor, which extends physical models into a learnable form to generate illumination-invariant features. Furthermore, we propose a hierarchical feature enhancement network, which adjusts global illumination and local details by separately modulating low- and high-frequency components, respectively, effectively addressing image degradation while preserving detection-relevant information. Finally, to balance image processing with object detection, we adopt an end-to-end joint training strategy that uses only a standard detection loss, simplifying the training process while ensuring that optimization aligns with the downstream task, thus improving detection accuracy. Extensive experimental evaluations validate the superiority of IHENet in low-light object detection. Specifically, it achieves improvements of 1.8% and 2.0% in mAP@0.5 on the ExDark and DARK FACE datasets, respectively, while significantly reducing the parameter count.
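The abstract names two ideas: a learnable, physics-inspired illumination-invariant feature extractor, and a hierarchical enhancer that modulates low-frequency (global illumination) and high-frequency (local detail) components separately. Below is a minimal PyTorch sketch of these two ideas; the module names, layer choices, and the Retinex-style log-domain decomposition are assumptions for illustration and are not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IlluminationRobustExtractor(nn.Module):
    """Hypothetical Retinex-inspired, learnable log-domain decomposition (illustration only)."""

    def __init__(self, channels=16):
        super().__init__()
        # Assumed learnable smoothing network that estimates the illumination map.
        self.illum_est = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        log_img = torch.log(x.clamp(min=1e-4))   # work in the log domain
        log_illum = self.illum_est(log_img)      # learned illumination estimate
        return log_img - log_illum               # illumination-invariant reflectance term


class HierarchicalFrequencyEnhancer(nn.Module):
    """Split features into low/high-frequency parts and modulate each branch separately."""

    def __init__(self, channels=3):
        super().__init__()
        self.low_mod = nn.Conv2d(channels, channels, 1)                # global illumination adjustment
        self.high_mod = nn.Conv2d(channels, channels, 3, padding=1)    # local detail enhancement

    def forward(self, x):
        low = F.avg_pool2d(x, 4)                                       # coarse, low-frequency content
        low = F.interpolate(low, size=x.shape[-2:], mode="bilinear", align_corners=False)
        high = x - low                                                 # residual high-frequency details
        return self.low_mod(low) + self.high_mod(high)


if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)                                   # dummy low-light image in [0, 1]
    feats = IlluminationRobustExtractor()(img)
    enhanced = HierarchicalFrequencyEnhancer()(feats)
    print(enhanced.shape)                                              # torch.Size([1, 3, 256, 256])
```

In the joint training strategy described above, such enhancement modules would be placed in front of an off-the-shelf detector and optimized end-to-end with only the detector's standard loss, so no separate enhancement supervision is needed.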
External IDs: dblp:journals/mms/LiPXLDX25