Abstract: Blind image quality assessment (BIQA) remains a challenging problem because no reference image is available. In this paper, we propose a novel dual-branch vision transformer for BIQA that simultaneously considers local distortions and global semantic information. A backbone network first extracts features at two scales, and each scale's feature map is fed into one of the two transformer encoder branches as a local feature embedding, so that scale-variant local distortions are captured. By adopting content-aware embeddings, each transformer branch models the global distortion context as well as the local distortions. Finally, the outputs of the two branches are fused through multiple feed-forward blocks to predict the image quality score. Experimental results demonstrate that the proposed method outperforms conventional BIQA methods on six public datasets.
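The architecture described above maps naturally to a small PyTorch sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the backbone is omitted, the 1x1 projection plus learned class token stands in for the paper's content-aware embedding, and all dimensions (channel counts, grid sizes, embedding width, depth) are hypothetical values chosen to make the example runnable.

```python
import torch
import torch.nn as nn


class Branch(nn.Module):
    """One transformer encoder branch operating on one feature scale."""

    def __init__(self, in_ch, num_tokens, dim=256, depth=2, heads=4):
        super().__init__()
        # Project backbone feature maps to token embeddings (hypothetical
        # stand-in for the paper's content-aware embedding scheme).
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=1)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_tokens + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, depth)

    def forward(self, f):                            # f: (B, C, H, W)
        t = self.proj(f).flatten(2).transpose(1, 2)  # (B, H*W, dim) local tokens
        cls = self.cls.expand(t.size(0), -1, -1)
        x = self.enc(torch.cat([cls, t], dim=1) + self.pos)
        return x[:, 0]                               # class token summarizes branch


class DualBranchBIQA(nn.Module):
    """Dual-branch transformer head fusing two backbone feature scales."""

    def __init__(self, dims=(512, 1024), grids=(28, 14), dim=256):
        super().__init__()
        self.branches = nn.ModuleList(
            Branch(c, g * g, dim) for c, g in zip(dims, grids)
        )
        # Feed-forward blocks combining both branches into one quality score.
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, 1)
        )

    def forward(self, feats):                        # two backbone feature maps
        z = torch.cat([b(f) for b, f in zip(self.branches, feats)], dim=1)
        return self.head(z).squeeze(-1)              # predicted quality score


# Usage with dummy dual-scale features (e.g. from two ResNet stages).
model = DualBranchBIQA()
f1 = torch.randn(2, 512, 28, 28)   # finer scale
f2 = torch.randn(2, 1024, 14, 14)  # coarser scale
print(model([f1, f2]).shape)       # torch.Size([2])
```

Each branch attends over all tokens of its scale, so the class token aggregates global distortion context while the tokens themselves carry local distortion cues; concatenating the two class tokens lets the feed-forward head weigh both scales when regressing the score.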