Abstract: Current learning-based color constancy methods typically learn camera-specific illuminant mappings and consequently generalize poorly to images captured by different cameras. In this paper, we present CISS, a Camera-Independent learning method based on Scene Semantics. Inspired by the camera-independent property of gray-based methods, CISS does not train a model to estimate the camera-specific illuminant directly, as most learning methods do. Instead, the model's output is transformed into camera-independent scene statistics related to gray-based assumptions, so that the estimate is unaffected by camera variations; the illuminant is then computed indirectly from these statistics. To estimate the scene statistics accurately, CISS takes illuminant-invariant scene-semantic features as model input and estimates the statistics for each image from its scene semantics via exemplar-based learning. Experiments on several public datasets show that CISS outperforms existing methods for multi-camera color constancy and generalizes well to unseen cameras without fine-tuning on additional images.
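The gray-based assumption the abstract refers to can be illustrated with the classical gray-world method, whose illuminant estimate depends only on scene statistics rather than on a learned camera-specific mapping. The sketch below is a minimal NumPy implementation of standard gray-world estimation and diagonal (von Kries-style) correction; it illustrates the family of methods that inspired CISS, not the CISS algorithm itself, and the function names are illustrative.

```python
import numpy as np

def gray_world_illuminant(image):
    """Estimate the illuminant color under the gray-world assumption:
    the average scene reflectance is achromatic, so the per-channel
    mean of the image is proportional to the illuminant color.

    image: H x W x 3 array of linear RGB values.
    Returns a unit-norm RGB illuminant estimate.
    """
    illuminant = image.reshape(-1, 3).mean(axis=0)
    return illuminant / np.linalg.norm(illuminant)

def correct_image(image, illuminant):
    """Apply a diagonal (von Kries-style) correction that divides each
    channel by the estimated illuminant, scaled so that a neutral
    (equal-energy) illuminant leaves the image unchanged."""
    corrected = image / (illuminant * np.sqrt(3.0))
    return np.clip(corrected, 0.0, 1.0)
```

Because the estimate is a pure image statistic, it requires no training data and is, by construction, independent of any particular camera's illuminant distribution, which is the property CISS seeks to retain while adding learned scene semantics.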