Fine-tuning feature interaction for unsupervised domain adaptive low-light object detection

Maomao Xiong, Qunshu Zhang, Dagang Li, Wenmin Wang, Zaigui Zhang, Kai Zhang, Cong Liu, Da Chen, Jinglin Zhang

Published: 01 Dec 2025, Last Modified: 07 Nov 2025 · Neurocomputing · CC BY-SA 4.0
Abstract: Object detection in low-light conditions is a challenging task, as detectors trained on well-lit datasets often suffer significant performance degradation under poor lighting. This issue arises from the absence of labeled low-light images and the difficulty of transferring knowledge directly from well-lit domains. To address this challenge, we propose a novel Fine-tuning Feature Interaction Network (FFINet) for unsupervised domain adaptation (UDA) in low-light object detection. Our approach leverages Global-Local Augmentation (GLA), which employs Retinex and fractional-order differential masks to better represent crucial features in low-light environments, and a federated learning-based Fine-tuning Feature Interaction (FFI) strategy to align feature representations between day and low-light scenes. To further reduce the domain discrepancy, we introduce an effective Causal Attention Alignment (CAA) module that enhances feature interaction by exploring causal relationships between MobileSAM and ResNet50. These innovations enable efficient feature transfer and adaptation without requiring labeled low-light data. Extensive experiments on benchmark datasets, including BDD100K, SHIFT, and DARK FACE, demonstrate that FFINet consistently outperforms previous UDA methods for low-light object detection.
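The GLA module above builds its augmentation masks on the Retinex assumption that an image factors into reflectance and illumination. As a rough illustration only (the paper's exact estimator is not specified here), a single-scale Retinex mask can be sketched by estimating illumination with a local mean filter and taking the log-ratio; the box filter and its kernel size are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def box_blur(img, k=15):
    """Crude illumination estimate: k x k mean filter via an integral image.
    (Assumption: a smooth local average stands in for the illumination map.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = img.shape
    # Sum over each k x k window, then normalize to a mean.
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def retinex_mask(img, eps=1e-6):
    """Single-scale Retinex: reflectance = log(image) - log(illumination)."""
    illum = box_blur(img)
    return np.log(img + eps) - np.log(illum + eps)
```

On a uniformly lit region the log-ratio is near zero, so the mask responds mainly to local structure rather than global brightness, which is what makes Retinex-style decomposition attractive for low-light augmentation.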