Screen Content-Aware Video Coding Through Non-Local Model Embedded With Intra-Inter In-Loop Filtering

Published: 01 Jan 2025, Last Modified: 11 Apr 2025 | IEEE Trans. Circuits Syst. Video Technol. 2025 | License: CC BY-SA 4.0
Abstract: Many studies have focused on using convolutional neural networks (CNNs) to enhance in-loop filter performance in video coding. However, existing methods primarily target natural sequences rather than the specific needs of screen content sequences, which have attracted increasing attention due to the growing demand for remote desktops and online meetings. This paper applies machine intelligence to screen content coding, presenting a novel loop filter tailored to screen content coding (SCC), referred to as video coding-SCC (VC-SCC). The filter employs a multiscale feature extraction structure and introduces two novel non-local models to address the distortions of different frame types across various coding configurations. Specifically, to account for the text and graphic textures typical of screen content, three types of prior maps (screen content maps, coding configuration maps, and traditional filtering maps) are designed as auxiliary inputs to the model, promoting distortion-pattern learning under different configurations. The two non-local models strengthen the model's ability to capture global features in intra- and inter-frames while keeping computational complexity low. Finally, VC-SCC runs in parallel with the standard in-loop filter, and the better output is selected for each patch. Experimental results demonstrate significant performance improvements, with average BD-rate savings of 9.93%, 11.05%, and 10.73% under the all-intra (AI), low-delay (LD), and random-access (RA) configurations, respectively, outperforming other state-of-the-art approaches.
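
The abstract only names the ingredients of the method, so the following is a minimal sketch, assuming a PyTorch-style implementation, of how those pieces could fit together: prior maps concatenated with the reconstructed frame as auxiliary input channels, a low-complexity non-local block that downsamples its key/value path, and a per-patch choice between the CNN output and the standard in-loop filter. All names here (`LightweightNonLocalBlock`, `VCSCCFilterSketch`, `select_per_patch`) and the distortion-based selection criterion are hypothetical; the paper's actual architecture and rate-distortion decision are not given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightNonLocalBlock(nn.Module):
    """Illustrative non-local (self-attention) block: the key/value path is
    spatially pooled so the attention map stays small. The paper's actual
    block designs are not specified in the abstract."""
    def __init__(self, channels, reduction=2, pool=4):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, 1)
        self.key = nn.Conv2d(channels, inner, 1)
        self.value = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)
        self.pool = nn.MaxPool2d(pool)  # shrink the K/V grid -> fewer attention pairs

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)              # (B, HW, C')
        k = self.pool(self.key(x)).flatten(2)                     # (B, C', hw)
        v = self.pool(self.value(x)).flatten(2).transpose(1, 2)   # (B, hw, C')
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)   # (B, HW, hw)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)       # back to a feature map
        return x + self.out(y)  # residual connection

class VCSCCFilterSketch(nn.Module):
    """Toy in-loop filter: reconstructed luma plus stacked prior maps in,
    filtered frame out (predicted as a residual on the reconstruction)."""
    def __init__(self, prior_channels=3, feat=64):
        super().__init__()
        self.head = nn.Conv2d(1 + prior_channels, feat, 3, padding=1)
        self.nonlocal_block = LightweightNonLocalBlock(feat)
        self.tail = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, recon, priors):
        feat = F.relu(self.head(torch.cat([recon, priors], dim=1)))
        return recon + self.tail(self.nonlocal_block(feat))

def select_per_patch(standard, cnn, original, patch=64):
    """Patch-wise choice between the standard in-loop filter output and the
    CNN output, keeping whichever is closer to the original. This MSE test
    is a stand-in for the encoder's actual selection, which would be made
    (and signaled) at the encoder side."""
    out = standard.clone()
    for y in range(0, standard.shape[2], patch):
        for x in range(0, standard.shape[3], patch):
            s = (slice(None), slice(None), slice(y, y + patch), slice(x, x + patch))
            if F.mse_loss(cnn[s], original[s]) < F.mse_loss(standard[s], original[s]):
                out[s] = cnn[s]
    return out

if __name__ == "__main__":
    recon = torch.rand(1, 1, 128, 128)   # reconstructed luma patch
    priors = torch.rand(1, 3, 128, 128)  # stacked prior maps (hypothetical layout)
    filtered = VCSCCFilterSketch()(recon, priors)
    print(filtered.shape)  # torch.Size([1, 1, 128, 128])
```

Pooling the key/value grid shrinks the attention map from O((HW)^2) to O(HW x hw), which is one common way to keep non-local modeling tractable at frame resolution and is consistent with the abstract's stated goal of low computational complexity.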