Keywords: dynamic activation, image quality assessment, image segmentation, knowledge distillation
Abstract: Degraded image understanding remains a significant challenge in computer vision. To mitigate the domain shift between high-quality and low-quality image distributions, we propose an adaptation approach based on activation functions rather than adjusting convolutional parameters. First, inspired by physiological findings in the human visual system, we introduce Quality-adaptive Activation (QuAC), a novel concept that automatically adjusts neuron activations based on input image quality to enhance essential semantic representations. Second, we implement Quality-adaptive meta-ACON (Q-ACON), which incorporates hyperparameters learned from image quality assessment functions. Q-ACON is efficient, flexible, and plug-and-play. Extensive experiments demonstrate that it consistently improves the performance of various networks—including convolutional neural networks, transformers, and diffusion models—against challenging degradations across multiple vision tasks, such as semantic segmentation, object detection, image classification, and image restoration. Furthermore, QuAC integrates effectively with existing techniques like knowledge distillation and image restoration, and can be extended to other activation functions. The code will be released after peer review.
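The abstract does not spell out the exact Q-ACON formulation, so the following is only an illustrative sketch: it assumes an ACON-C-style activation, f(x) = (p1 - p2) · x · sigmoid(β (p1 - p2) x) + p2 · x, in which the switching factor β is predicted from a per-image quality score rather than (or in addition to) channel statistics as in meta-ACON. The `quality_to_beta` head and the scalar `quality` input are hypothetical placeholders for the paper's IQA-derived hyperparameters.

```python
import torch
import torch.nn as nn


class QACONSketch(nn.Module):
    """Minimal sketch of a quality-adaptive, ACON-style activation.

    Assumption: ACON-C form with beta driven by a per-image quality score;
    this is NOT the paper's exact Q-ACON definition.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Learnable per-channel parameters, as in ACON-C.
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        # Hypothetical mapping from a scalar quality score to per-channel beta.
        self.quality_to_beta = nn.Linear(1, channels)

    def forward(self, x: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
        # quality: (N, 1) score, e.g. from a no-reference IQA model.
        beta = self.quality_to_beta(quality).sigmoid()   # (N, C)
        beta = beta.view(x.size(0), -1, 1, 1)            # broadcast over H, W
        dp = (self.p1 - self.p2) * x
        return dp * torch.sigmoid(beta * dp) + self.p2 * x


# Usage: activate feature maps of a (possibly degraded) image given its quality score.
feats = torch.randn(2, 64, 32, 32)
quality = torch.rand(2, 1)          # placeholder IQA scores in [0, 1]
act = QACONSketch(channels=64)
out = act(feats, quality)           # same shape as feats
```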
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15963