Watching it in Dark: A Target-Aware Representation Learning Framework for High-Level Vision Tasks in Low Illumination

Published: 01 Jan 2024, Last Modified: 05 Mar 2025 | ECCV 2024 | CC BY-SA 4.0
Abstract: Low illumination significantly degrades the performance of learning-based models trained under well-lit conditions. Current methods mitigate this issue through either image-level enhancement or feature-level adaptation, but they often focus solely on the image itself, ignoring how the task-relevant target varies with illumination. In this paper, we propose a target-aware representation learning framework designed to improve high-level task performance in low-illumination environments. We achieve bi-directional domain alignment in both image appearance and semantic features to bridge data across different illumination conditions. To concentrate more effectively on the target, we design a target-highlighting strategy that incorporates a saliency mechanism and a Temporal Gaussian Mixture Model to emphasize the location and movement of task-relevant targets. We further design a mask-token-based representation learning scheme to learn more robust target-aware features. Our framework yields compact and effective feature representations for high-level vision tasks in low-light settings. Extensive experiments on the CODaN, ExDark, and ARID datasets validate the effectiveness of our approach across a variety of image- and video-based tasks, including classification, detection, and action recognition. Our code is available at https://github.com/ZhangYh994/WiiD.
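The abstract does not specify the form of the Temporal Gaussian Mixture Model used in the target-highlighting strategy. As a rough illustration of the general idea behind such temporal models (not the paper's actual method), the sketch below maintains a single per-pixel running Gaussian over time and flags pixels that deviate from it as candidate moving targets; all parameter names and thresholds here are assumptions for illustration only.

```python
import numpy as np

def temporal_gaussian_foreground(frames, alpha=0.05, k=2.5):
    """Illustrative per-pixel running-Gaussian temporal model (NOT the
    paper's Temporal Gaussian Mixture Model; a simplified sketch).

    frames: iterable of 2D grayscale arrays from a video clip.
    alpha:  update rate of the running mean/variance.
    k:      deviation threshold, in standard deviations.
    Returns a list of boolean masks marking pixels whose intensity
    deviates from the temporal model (candidate moving targets).
    """
    mean = None
    var = None
    masks = []
    for f in frames:
        f = f.astype(np.float64)
        if mean is None:
            # Initialize the model from the first frame.
            mean = f.copy()
            var = np.full_like(f, 1.0)
            masks.append(np.zeros(f.shape, dtype=bool))
            continue
        # Flag pixels that deviate strongly from the running Gaussian.
        dist = np.abs(f - mean)
        mask = dist > k * np.sqrt(var)
        masks.append(mask)
        # Update the model only where the pixel still fits the background.
        mean = np.where(mask, mean, (1 - alpha) * mean + alpha * f)
        var = np.where(mask, var, (1 - alpha) * var + alpha * (f - mean) ** 2)
    return masks
```

In a full system, such temporal masks could be combined with a spatial saliency map to weight features toward task-relevant regions, which is the role the abstract ascribes to its highlighting strategy.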