Low-bit Quantization for Seeing in the Dark

27 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: low-light image enhancement, network quantization
Abstract: Raw sensor data has several properties that make it well suited to enhancing images captured under extremely low-light conditions. Recently, many deep-learning methods for raw-based low-light image enhancement (LLIE) have demonstrated excellent performance. However, their high computational and storage demands restrict deployment on resource-limited devices. In this work, we propose a novel low-bit quantization method for raw-based LLIE models to improve their efficiency. Directly adopting existing quantizers for LLIE networks, however, leads to an obvious performance drop for two main reasons. i) The U-Net model commonly employed in LLIE has difficulty identifying a suitable quantization range because the encoder and decoder features follow different distributions. ii) Low-bit quantized LLIE networks struggle to restore clear details in low-light images because their features have limited representational capacity. We address these issues by introducing the Distribution-Separative Asymmetric Quantizer (DSAQ), a low-bit quantization method designed specifically for the U-Net architectures used in LLIE. To determine quantization intervals accurately, DSAQ quantizes the encoder and decoder feature distributions separately before they are concatenated by the skip connection. We also make the quantizer asymmetric, with trainable scale and offset parameters, to accommodate the skewed activation ranges produced by non-linear functions. To further enhance performance, we propose a uniform feature distillation technique that allows the low-bit student model to effectively assimilate knowledge from the full-precision teacher model, bridging the gap in representation capability. Extensive experiments show that our approach not only greatly reduces the memory and computational requirements of raw-based LLIE models but also achieves promising performance: our 4-bit quantized model obtains results comparable or superior to its full-precision counterpart.
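To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch assuming a standard straight-through estimator for the rounding step. All names (AsymmetricQuantizer, DistributionSeparativeSkip, feature_distillation_loss) and the exact loss form are illustrative assumptions, not the authors' actual DSAQ implementation, which is not public.

```python
# Hypothetical sketch, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AsymmetricQuantizer(nn.Module):
    """Uniform asymmetric quantizer with trainable scale and offset.

    Forward: x_hat = round(clamp((x - offset) / scale, 0, 2^b - 1)) * scale + offset,
    with a straight-through estimator so gradients reach x, scale, and offset.
    """

    def __init__(self, bits: int = 4, init_scale: float = 0.1):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.offset = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.scale.clamp_min(1e-8)  # keep the scale strictly positive
        q = ((x - self.offset) / s).clamp(0, self.levels)
        q = q + (q.round() - q).detach()  # straight-through rounding
        return q * s + self.offset


class DistributionSeparativeSkip(nn.Module):
    """Quantizes encoder and decoder features with *separate* quantizers
    before the U-Net skip concatenation, so each branch gets an interval
    matched to its own distribution instead of one shared range."""

    def __init__(self, bits: int = 4):
        super().__init__()
        self.enc_quant = AsymmetricQuantizer(bits)
        self.dec_quant = AsymmetricQuantizer(bits)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.enc_quant(enc_feat), self.dec_quant(dec_feat)], dim=1)


def feature_distillation_loss(student_feats, teacher_feats):
    """One plausible reading of 'uniform feature distillation': an L2 loss
    between channel-normalized student and (detached) teacher feature maps,
    averaged uniformly over the matched stages."""
    losses = [
        F.mse_loss(F.normalize(s, dim=1), F.normalize(t.detach(), dim=1))
        for s, t in zip(student_feats, teacher_feats)
    ]
    return torch.stack(losses).mean()


# Example: a 4-bit quantized skip connection on random feature maps.
skip = DistributionSeparativeSkip(bits=4)
enc, dec = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
fused = skip(enc, dec)  # shape: (1, 64, 64, 64)
```

In this sketch, separating the distributions costs only two extra learnable scalars per skip connection, which suggests how such a design can stay cheap at 4 bits.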
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9097