Abstract: Previous works have shown that reducing parameter overhead and computation for transformer-based single image super-resolution (SISR) models (e.g., SwinIR) usually leads to a drop in performance. In this paper, we present GRFormer, an efficient and lightweight method that not only reduces parameter overhead and computation but also greatly improves performance. The core of GRFormer is Grouped Residual Self-Attention (GRSA), which targets two fundamental components of self-attention. First, it introduces a novel grouped residual layer (GRL) to replace the QKV linear layer in self-attention, reducing parameter overhead and computation while minimizing the accompanying performance loss. Second, it integrates a compact Exponential-Space Relative Position Bias (ES-RPB) as a substitute for the original relative position bias, improving the representation of positional information while further reducing the parameter count.
Extensive experimental results demonstrate that GRFormer outperforms state-of-the-art transformer-based methods on ×2, ×3, and ×4 SISR tasks, surpassing the SOTA by up to 0.23 dB in PSNR when trained on the DIV2K dataset, while reducing the number of parameters and MACs in the self-attention module alone by about 60% and 49%, respectively. We hope that our simple and effective method, which can be easily applied to SR models based on window-division self-attention, can serve as a useful tool for further research in image super-resolution. The code is available at https://github.com/sisrformer/GRFormer.
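To make the GRL idea concrete, below is a minimal, hypothetical PyTorch sketch of a grouped residual projection that could stand in for the dense QKV linear layer of window self-attention. The group count, the per-group linear layers, and the residual form shown here are illustrative assumptions, not the paper's exact design; ES-RPB is not sketched.

```python
import torch
import torch.nn as nn


class GroupedResidualQKV(nn.Module):
    """Illustrative sketch: grouped linear projections plus a residual
    shortcut, replacing a single dense dim -> 3*dim QKV layer.
    Splitting channels into `groups` smaller linears cuts the QKV
    parameter count roughly by a factor of `groups`."""

    def __init__(self, dim: int, groups: int = 2):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.proj = nn.ModuleList(
            nn.Linear(dim // groups, 3 * dim // groups) for _ in range(groups)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, tokens, dim)
        chunks = x.chunk(self.groups, dim=-1)
        out = [p(c) for p, c in zip(self.proj, chunks)]  # (B, N, 3*dim/groups) each
        q, k, v = zip(*(o.chunk(3, dim=-1) for o in out))
        # Residual shortcut from the input features (assumed form),
        # intended to offset the capacity lost by grouping.
        q = torch.cat(q, dim=-1) + x
        k = torch.cat(k, dim=-1) + x
        v = torch.cat(v, dim=-1) + x
        return q, k, v
```

With `groups=2`, the projection uses roughly half the parameters of a dense QKV layer of the same width; the resulting q, k, v tensors can then feed a standard windowed attention computation.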
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Our work on developing a lightweight super-resolution model is highly relevant to the MM Vision and Language track, as it addresses a pivotal challenge in multimedia/multimodal processing: maintaining efficiency without sacrificing quality. In multimedia environments where vision and language converge, the balance of processing speed and model compactness is crucial. Our model significantly enhances this balance by reducing the parameter count, thereby facilitating faster processing and reducing memory demands. This is particularly beneficial for real-time applications and devices with limited computational power.
Supplementary Material: zip
Submission Number: 4634