Abstract: Low-light image enhancement is an important task in computer vision, made challenging by sensor limitations that introduce noise, low contrast, and color distortion. These degradations are compounded by the computational cost of modeling spatial dependencies under such conditions. We present a transformer-based framework that improves efficiency by replacing conventional convolutions with depthwise separable convolutions. In addition, a redesigned feed-forward network reduces computational overhead while maintaining high performance. Experimental results show that the method achieves competitive accuracy, offering a practical and effective solution for enhancing images captured in low-light environments.
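The efficiency gain from depthwise separable convolutions can be illustrated by a parameter-count comparison. The sketch below is not the paper's architecture; it is a minimal illustration, assuming square k x k kernels, of why factoring a standard convolution into a depthwise step (one filter per input channel) plus a pointwise 1x1 step (channel mixing) reduces cost.

```python
def conv_params(c_in, c_out, k):
    """Parameters in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Parameters in a depthwise separable convolution (biases ignored)."""
    depthwise = c_in * k * k   # one k x k filter applied per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

# Illustrative setting: 64 input and output channels, 3x3 kernel.
standard = conv_params(64, 64, 3)                 # 36864
separable = depthwise_separable_params(64, 64, 3) # 576 + 4096 = 4672
print(standard, separable, round(standard / separable, 1))
```

For this configuration the separable factorization uses roughly 7.9x fewer parameters, which is the kind of saving that makes it attractive inside attention blocks where spatial mixing is applied repeatedly.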