Abstract: Existing low-light image enhancement approaches based on pixel-wise reconstruction losses are ill-suited to capturing the complex distribution of well-exposed images, resulting in residual noise, insufficient illuminance, and artifacts. Additionally, the mapping between weakly illuminated and normally exposed images is one-to-many, making low-light image enhancement a highly ill-posed problem. In this work, we model this one-to-many relationship with an attention- and frequency-driven normalizing flow network trained by minimizing the negative log-likelihood. The proposed model comprises two parts: a dual-attention-oriented frequency encoder network and an invertible network that takes low-light images as conditions and maps the complex distribution of well-lit images to a simpler Gaussian distribution. The model not only exploits the spatial information inherent in the image to improve contrast but also extracts frequency information to preserve intricate details. As a result, the distribution of well-exposed images is characterized more faithfully, and the overall enhancement mechanism becomes analogous to training with a loss function that encodes the manifold structure of natural images. Detailed experimental analysis on a variety of challenging low-light images demonstrates the effectiveness of the model and shows its superiority over the state of the art in terms of enhancement quality and efficiency.
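The negative log-likelihood objective mentioned above can be made concrete with a minimal sketch. The snippet below is not the paper's network; it illustrates, for a single conditional affine transform standing in for the invertible network, how the change-of-variables formula yields the NLL that is minimized during training (in the actual model, the scale and shift would be predicted from the low-light condition by the encoder).

```python
import numpy as np

def flow_nll(x, scale, shift):
    """Negative log-likelihood of x under the invertible affine map
    z = scale * x + shift with a standard Gaussian prior on z.

    Illustrative stand-in for a conditional normalizing flow:
    `scale` and `shift` play the role of parameters conditioned
    on the low-light input."""
    z = scale * x + shift                       # forward pass to latent space
    log_det = np.sum(np.log(np.abs(scale)))     # log |det dz/dx| of the map
    # log N(z; 0, I) = -0.5 * ||z||^2 - 0.5 * D * log(2*pi)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2.0 * np.pi)
    # Change of variables: log p(x) = log p(z) + log |det dz/dx|
    return -(log_pz + log_det)                  # quantity minimized in training
```

Because the transform is invertible, sampling a well-exposed estimate amounts to drawing z from the Gaussian and applying the inverse map, which is how a one-to-many output distribution is obtained at test time.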