Abstract: Deep convolutional neural networks (CNNs) have become a promising approach to no-reference image quality assessment (NR-IQA). This paper improves CNNs for NR-IQA in two respects. First, motivated by the deep connection between complex-valued transforms and human visual perception, we introduce complex-valued convolutions and phase-aware activations beyond traditional real-valued CNNs, which improves the accuracy of NR-IQA without noticeable additional computational cost. Second, since visual quality perception is content-aware, we include a dynamic filtering module that better extracts content-aware features, predicting features from both local content and global semantics. Together, these improvements yield a complex-valued, content-aware neural NR-IQA model with good generalization. Extensive experiments on both synthetically and authentically distorted data demonstrate the state-of-the-art performance of the proposed approach.
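The abstract does not specify the exact layer designs, but the two named ingredients have standard formulations. A minimal NumPy sketch of one possibility: a complex-valued convolution realized as four real convolutions, followed by the modReLU phase-aware activation (which thresholds the magnitude while preserving the phase). The function names, kernel size, and choice of modReLU are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D cross-correlation for real-valued arrays."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def complex_conv2d(x, w):
    """Complex convolution via four real convolutions:
    (Wr + iWi) * (xr + ixi) = (Wr xr - Wi xi) + i (Wr xi + Wi xr)."""
    rr = conv2d(x.real, w.real)
    ii = conv2d(x.imag, w.imag)
    ri = conv2d(x.real, w.imag)
    ir = conv2d(x.imag, w.real)
    return (rr - ii) + 1j * (ri + ir)

def mod_relu(z, bias=0.0):
    """modReLU: a phase-aware activation that rectifies the magnitude
    of each complex response while leaving its phase unchanged."""
    mag = np.abs(z)
    scale = np.maximum(mag + bias, 0.0) / (mag + 1e-8)
    return z * scale

# Example: one complex feature map through one complex filter.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
y = mod_relu(complex_conv2d(x, w), bias=-0.5)
print(y.shape)  # (6, 6)
```

Note that the complex convolution costs four real convolutions instead of one, which is consistent with the claim of only modest additional computation per layer.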