HOCA: Higher-Order Channel Attention for Single Image Super-Resolution

ICASSP 2021 (modified: 16 Nov 2022)
Abstract: Convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SR). Recent works (e.g., RCAN and SAN) have obtained remarkable performance using channel attention based on first- or second-order feature statistics. However, these methods neglect the rich feature statistics beyond second order, limiting the representation ability of CNNs. To address this issue, we propose a higher-order channel attention (HOCA) module to enhance the representation ability of CNNs. In our HOCA module, to capture different types of semantic information, we first compute k-order feature statistics and then apply channel attention to learn feature interdependencies. Considering the diversity of input contents, we design a gate mechanism to adaptively select a specific k-order channel attention. Moreover, our HOCA module is plug-and-play and can be easily inserted into existing state-of-the-art CNN-based SR methods. Extensive experiments on public benchmarks show that our HOCA module consistently improves the performance of various CNN-based SR methods.
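To make the idea concrete, the pipeline the abstract describes (k-order channel statistics, per-order channel attention, and a gate that adaptively mixes the orders) can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: the function name `hoca_attention`, the use of central moments as the k-order statistics, and the plain sigmoid/softmax in place of learned excitation MLPs and a learned gate are all assumptions.

```python
import numpy as np

def hoca_attention(x, orders=(1, 2, 3), gate_logits=None):
    """Illustrative sketch of gated k-order channel attention (an
    assumption, not the paper's exact formulation).

    x : feature map of shape (C, H, W)
    orders : which statistic orders k to compute per channel
    gate_logits : optional per-order gate scores (learned in a real module)
    """
    c = x.shape[0]
    flat = x.reshape(c, -1)
    mean = flat.mean(axis=1)                       # first-order statistic

    stats = []
    for k in orders:
        if k == 1:
            stats.append(mean)
        else:
            # k-th central moment per channel as the k-order statistic
            stats.append(((flat - mean[:, None]) ** k).mean(axis=1))

    # per-order "excitation": sigmoid in place of a learned bottleneck MLP
    atts = [1.0 / (1.0 + np.exp(-s)) for s in stats]

    # gate mechanism: softmax over orders adaptively weights each
    # k-order channel attention (uniform here when no logits are given)
    logits = np.zeros(len(orders)) if gate_logits is None else np.asarray(gate_logits, float)
    w = np.exp(logits - logits.max())
    w /= w.sum()

    att = sum(wi * ai for wi, ai in zip(w, atts))  # mixed attention, shape (C,)
    return x * att[:, None, None]                  # rescale channels
```

Because the module only rescales channels, its output has the same shape as its input, which is what lets it act as a plug-and-play block inside an existing SR network.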