Abstract: Event cameras are bio-inspired vision sensors with a high dynamic range (140 dB for event cameras vs. 60 dB for traditional cameras) and can be used to tackle image degradation under extremely low-illumination scenarios, a problem that remains under-explored. In this article, we propose a joint framework that combines the underexposed frames and event streams captured by an event camera to reconstruct clear images with detailed textures under near-dark conditions. A residual fusion module is proposed to reduce the domain gap between event streams and frames by using the residuals of both modalities. A multi-level reconstruction loss based on the variability of the contrast distribution is proposed to reduce the perceptual errors of the output image. In addition, we construct the first real-world low-illumination image enhancement dataset (captured mainly under 2 lux illumination), named LIE, containing event streams and frames collected in indoor and outdoor low-light scenes together with ground-truth clear images. Experimental results on our LIE dataset demonstrate that the proposed method achieves significant improvements over existing methods.