Abstract: The visual masking effect reveals that human visual perception is influenced by both content and distortion information. Existing projection-based methods lose depth information and intrinsic topological structure, while, due to limited computational memory, existing point-based methods tend to process small patches that contain little content information. In this paper, we propose a novel point-based no-reference quality assessment method, namely the cellular aggregation network (CANet), which effectively extracts quality-aware features from large patches in a divide-and-conquer manner. Specifically, a cellular sampling module divides large patches into smaller cells, which avoids the memory explosion problem. A cellular aggregation module is proposed to obtain richer content information from the small cells, and a global aggregation module is proposed to extract global sketch information. Furthermore, a long-term fusion module is introduced to capture long-term dependencies, which better perceives content-aware semantic features. Experimental results on benchmark databases demonstrate that CANet achieves competitive performance.
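The abstract does not specify the architecture in detail; the following is a minimal PyTorch-style sketch of the divide-and-conquer idea only (split a large patch into cells, encode each cell, aggregate globally, regress a quality score). The module names, the random chunk-based cell partitioning, the transformer layer standing in for long-term fusion, and all feature dimensions are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the divide-and-conquer pipeline described in the abstract.
# All names, the cell-partitioning scheme, and dimensions are assumptions.
import torch
import torch.nn as nn


def split_into_cells(patch: torch.Tensor, num_cells: int) -> torch.Tensor:
    """Split a large patch (N, 3) into cells (num_cells, N // num_cells, 3).

    A real cellular sampling module would likely use a spatial partition;
    here the points are simply shuffled and chunked for illustration.
    """
    n = patch.shape[0] - patch.shape[0] % num_cells
    patch = patch[torch.randperm(patch.shape[0])[:n]]
    return patch.view(num_cells, -1, 3)


class CellEncoder(nn.Module):
    """Per-cell feature extractor with weights shared across cells."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, cells: torch.Tensor) -> torch.Tensor:
        # cells: (C, M, 3) -> one descriptor per cell (C, dim) via max pooling
        return self.mlp(cells).max(dim=1).values


class QualityHead(nn.Module):
    """Fuses cell descriptors and regresses a single quality score."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # A transformer encoder layer stands in for the long-term fusion module
        # (the abstract only states that long-term dependencies are captured).
        self.fusion = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.regressor = nn.Linear(dim, 1)

    def forward(self, cell_feats: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(cell_feats.unsqueeze(0))    # (1, C, dim)
        global_feat = fused.mean(dim=1)                 # global aggregation
        return self.regressor(global_feat).squeeze(-1)  # predicted quality


if __name__ == "__main__":
    patch = torch.rand(4096, 3)                  # one large patch of xyz points
    cells = split_into_cells(patch, num_cells=16)
    encoder, head = CellEncoder(), QualityHead()
    score = head(encoder(cells))
    print(score.shape)                           # torch.Size([1])
```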