An unsupervised low-light video enhancement network based on inter-frame consistency

Shuyuan Wen, Wenchao Li

Published: 01 Jan 2024 · Last Modified: 19 Mar 2025 · Signal Image Video Process. 2024 · CC BY-SA 4.0
Abstract: Video captured by camera sensors easily suffers severe quality degradation under dim lighting. Low-light video enhancement (LLVE) can effectively address this problem and has received considerable attention and seen remarkable progress in recent years. However, most previous methods are trained on paired static single images or videos, resulting in flickering and artifacts that degrade the overall quality after enhancement. To address these issues, we propose an unsupervised multi-scale LLVE method that constrains inter-frame consistency in video without ground-truth labels. Specifically, we design a novel shared-weight mechanism that feeds video frames into the network one by one, together with an inter-frame consistency loss function, to establish spatial-temporal relationships. To exploit principled physical constraints, the proposed network is based on Retinex theory and multiple unsupervised training losses. In addition, we design a denoising loss function and an attention mechanism to suppress noise and improve enhancement quality. Experimental results demonstrate that our method achieves remarkable performance in both image quality and inter-frame consistency, and overcomes flicker and artifact problems, which verifies its feasibility and effectiveness.
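The abstract does not give the exact form of the inter-frame consistency loss. One common way to penalize temporal flicker, shown here purely as a hedged sketch (the function name and the L1-on-temporal-gradients formulation are assumptions, not the paper's stated definition), is to require that the frame-to-frame change of the enhanced video match the frame-to-frame change of the low-light input:

```python
import numpy as np

def interframe_consistency_loss(enhanced, low_light):
    """Hypothetical inter-frame consistency loss: L1 penalty on the
    mismatch between temporal gradients of the enhanced sequence and
    the original low-light sequence. Inputs are arrays of shape
    (T, H, W, C); this is a sketch, not the paper's actual loss."""
    # Temporal differences between consecutive frames, shape (T-1, H, W, C)
    d_enh = np.diff(np.asarray(enhanced, dtype=np.float64), axis=0)
    d_low = np.diff(np.asarray(low_light, dtype=np.float64), axis=0)
    # Mean absolute deviation: zero when enhancement preserves
    # the input's temporal dynamics, large when frames flicker.
    return float(np.mean(np.abs(d_enh - d_low)))
```

Under this formulation, a global per-video gain leaves the loss small, while a brightness jump affecting a single frame (flicker) is penalized directly, which is the behavior the paper's flicker-suppression claim suggests.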