Balancing Easy and Hard Distortions: A Multi-Rate Knowledge Distillation Strategy for Blind Image Quality Assessment
Abstract: Image Quality Assessment (IQA) has become essential in computer vision and image processing due to the prevalence of digital images in modern applications. Our study shows that current IQA models learn different image distortions with uneven ease: trained with a uniform learning rate, they often yield suboptimal results on certain challenging distortions, degrading overall evaluation accuracy. To address this, we present an online knowledge distillation strategy named Multi-Rate Knowledge Distillation (MRKD), which trains two models with different learning rates, a teacher with a high learning rate mentoring a student with a lower one. The student assimilates diverse features from the teacher, while self-distillation regularization enhances its generalization and helps it escape local optima. Extensive experiments on the TID2013, KADID-10k, and LIVEC datasets validate the efficacy of MRKD, demonstrating improved performance on challenging distortion types.
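Since the abstract describes the strategy only at a high level, the following minimal PyTorch sketch illustrates the general two-rate online-distillation setup; the toy network, MSE-based distillation term, and weighting factor `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of multi-rate online distillation: a teacher trained with a high
# learning rate guides a student trained with a lower one.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Placeholder quality-prediction head; a real BIQA model would use a
    # CNN or transformer backbone (assumption, not the paper's architecture).
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(3 * 32 * 32, 64),
                         nn.ReLU(),
                         nn.Linear(64, 1))

teacher, student = make_model(), make_model()
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)  # high learning rate
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)  # low learning rate

def train_step(images, mos):
    """One online-distillation step; `mos` are ground-truth quality scores."""
    # Teacher: plain regression against the subjective quality scores.
    t_pred = teacher(images)
    t_loss = F.mse_loss(t_pred, mos)
    opt_t.zero_grad()
    t_loss.backward()
    opt_t.step()

    # Student: regression plus a distillation term pulling its predictions
    # toward the teacher's (detached) outputs; alpha is an assumed weighting.
    alpha = 0.5
    s_pred = student(images)
    s_loss = F.mse_loss(s_pred, mos) + alpha * F.mse_loss(s_pred, t_pred.detach())
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    return t_loss.item(), s_loss.item()

# Example usage with random data (batch of 8 RGB 32x32 crops).
images = torch.randn(8, 3, 32, 32)
mos = torch.rand(8, 1)
print(train_step(images, mos))
```

The intuition, per the abstract, is that the fast-moving teacher explores the loss landscape more aggressively, while the slow-moving student distills its features, which may help on distortions where a single uniform learning rate underperforms.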