SSIR: Spatial shuffle multi-head self-attention for Single Image Super-Resolution

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · Pattern Recognition 2024 · License: CC BY-SA 4.0
Abstract: Highlights
- Using attribution analysis, we find that some transformer-based SR methods can only exploit information from a limited spatial range during reconstruction.
- To address this, we introduce Spatial Shuffle Multi-Head Self-Attention (SS-MSA) for efficient global pixel dependency modeling, together with a local perceptual unit that enhances local feature information.
- Our method surpasses existing approaches in reconstruction accuracy and visual quality across five benchmarks, while reducing parameters by 40%, GPU memory by 30%, and inference time by 30% compared to transformer-based methods.
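The abstract does not specify how the spatial shuffle is realized, so the following is only a minimal sketch of the general idea: permute spatial positions before self-attention so that each attention group mixes pixels sampled from across the whole image, then invert the permutation afterward. The strided/interleaved shuffle, the class name `SpatialShuffleMSA`, and all hyperparameters (window stride `shuffle`, head count) are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of spatial-shuffle multi-head self-attention.
# Assumption: the "shuffle" is a strided permutation of pixel positions
# (analogous to channel shuffle in ShuffleNet, applied spatially), so each
# attention group spans a subsampled grid covering the full image.
import torch
import torch.nn as nn

class SpatialShuffleMSA(nn.Module):
    def __init__(self, dim, num_heads=4, shuffle=2):
        super().__init__()
        self.shuffle = shuffle  # stride of the spatial permutation (hypothetical)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H and W must be divisible by shuffle
        B, H, W, C = x.shape
        s = self.shuffle
        # Spatial shuffle: group pixels that lie s positions apart, so every
        # group's tokens are spread over the entire H x W grid.
        x = x.view(B, H // s, s, W // s, s, C)
        x = x.permute(0, 2, 4, 1, 3, 5).reshape(B * s * s, (H // s) * (W // s), C)
        # Self-attention within each shuffled group models long-range
        # (global) pixel dependencies at reduced token count per group.
        x, _ = self.attn(x, x, x)
        # Inverse shuffle: restore the original pixel layout.
        x = x.reshape(B, s, s, H // s, W // s, C)
        x = x.permute(0, 3, 1, 4, 2, 5).reshape(B, H, W, C)
        return x

# Example usage:
# x = torch.randn(1, 48, 48, 64)
# y = SpatialShuffleMSA(dim=64)(x)   # y.shape == (1, 48, 48, 64)
```

Because each group attends over only (H/s) x (W/s) tokens rather than the full H x W grid, this kind of shuffled attention can cut memory and compute relative to dense global attention, which is consistent with the efficiency gains the abstract reports.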
