Abstract: Learning to adapt to lighting is a crucial step toward good visual perception and robust downstream vision tasks. Current research often addresses individual light-related challenges, such as high dynamic range imaging and exposure correction, in isolation. However, we identify fundamental properties shared across these tasks: i) different color channels have different light properties, and ii) these channel differences manifest differently in the spatial and frequency domains. Leveraging these insights, we introduce the channel-aware Learning Adaptive Lighting Network (LALNet), a multi-task framework designed to handle multiple light-related tasks efficiently. Specifically, LALNet extracts color-separated features that highlight the unique light properties of each color channel, and integrates them with traditional color-mixed features via Light Guided Attention (LGA). LGA uses the color-separated features to guide the color-mixed features toward channel differences while ensuring visual consistency across all channels. Additionally, LALNet employs dual-domain channel modulation to generate the color-separated features, and a mixed channel modulation and light state space module to produce the color-mixed features. Extensive experiments on four representative light-related tasks demonstrate that LALNet consistently outperforms state-of-the-art methods on standard benchmarks while requiring fewer computational resources. We provide an anonymous online demo at [LALNet](https://xxxxxx2025.github.io/LALNet/).
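The abstract does not give LGA's exact formulation, so as an illustration only, here is a minimal NumPy sketch of the general idea it describes: a gate derived from the color-separated branch modulates the color-mixed features so that locations with strong per-channel light cues are emphasized. The function name, gating choice, and shapes are all hypothetical, not the paper's implementation.

```python
import numpy as np

def light_guided_attention(color_separated, color_mixed):
    """Hypothetical sketch of LGA-style guidance (not the paper's code).

    color_separated: per-channel light features, shape (C, H, W)
    color_mixed:     shared cross-channel features, shape (C, H, W)
    Returns guided features of the same shape.
    """
    # Sigmoid gate from the color-separated branch: values in (0, 1),
    # larger where a channel carries stronger light-specific evidence.
    gate = 1.0 / (1.0 + np.exp(-color_separated))
    # Residual-style guidance: keep the mixed features and add a
    # channel-difference-weighted copy of them.
    return color_mixed + color_mixed * gate

rng = np.random.default_rng(0)
x = rng.random((3, 4, 4))    # color-mixed features (non-negative here)
sep = rng.random((3, 4, 4))  # color-separated features
out = light_guided_attention(sep, x)
```

Because the gate is positive and the example inputs are non-negative, the guided output never falls below the original mixed features, mirroring the idea of amplifying, rather than suppressing, channel-specific light cues.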
Lay Summary: Adapting to different lighting conditions is essential to how both humans and computers see the world. Recent research often focuses on solving single lighting-related problems, like brightening dark photos or handling very bright and very dark areas in the same image, but we notice that these challenges share common traits. Specifically, different colors (like red, green, and blue) react to light in different ways, and these differences show up both in an image's fine details and in its overall look. Building on these insights, we introduce the channel-aware Learning Adaptive Lighting Network (LALNet), a system designed to handle a variety of lighting problems all at once. LALNet learns to treat each color channel separately, capturing what is unique about how each one responds to light, and then combines this information using a process we call Light Guided Attention. This approach ensures the final image looks natural and balanced, with all colors working together smoothly. Our experiments show that LALNet outperforms leading methods on several important lighting tasks while using fewer computing resources. You can try it for yourself in our online demo: [LALNet](https://xxxxxx2025.github.io/LALNet/).
Primary Area: Applications->Computer Vision
Keywords: Exposure Correction, Retouching, Image Enhancement
Submission Number: 2769