GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
Abstract: Modern single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) have achieved impressive performance but require huge computational costs. The problem of feature redundancy has been well studied in visual recognition tasks, but rarely discussed in SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use the shift operation to generate the redundant features (i.e., ghost features). Compared with depth-wise convolution, which is time-consuming on GPU-like devices, the shift operation brings a real inference acceleration for CNNs on common hardware. We analyze the benefits of the shift operation in SISR and make the shift orientation learnable via the Gumbel-Softmax trick. Besides, a clustering procedure on pre-trained models is explored to identify the intrinsic filters that generate the corresponding intrinsic features. The ghost features are generated by moving these intrinsic features along a certain orientation. Finally, the complete output features are constructed by concatenating the intrinsic and ghost features together. Extensive experiments on several benchmark models and datasets demonstrate that both non-compact and lightweight SISR CNN models embedded with the proposed method achieve performance comparable to the baseline models with a large reduction in parameters, FLOPs and GPU inference latency. For example, we reduce the parameters and FLOPs of the x2 EDSR model by 46% and its GPU inference latency by 42% with almost lossless performance. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/GhostSR.
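The core mechanism described above (shift intrinsic features along an orientation to obtain ghost features, then concatenate) can be sketched as follows. This is a minimal illustrative example in NumPy, not the paper's actual implementation; the function names, the wrap-around shift via `np.roll`, and the fixed orientation (learned via Gumbel-Softmax in the paper) are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of ghost-feature generation by spatial shift.
# Orientation is fixed here; the paper learns it with Gumbel-Softmax.
ORIENTATIONS = {
    "up":    (-1, 0),
    "down":  (1, 0),
    "left":  (0, -1),
    "right": (0, 1),
}

def ghost_features(intrinsic, orientation="right", offset=1):
    """Shift intrinsic feature maps (C, H, W) along one orientation.

    np.roll wraps around the border; a zero-padded shift would be an
    equally valid choice for this sketch.
    """
    dy, dx = ORIENTATIONS[orientation]
    return np.roll(intrinsic, shift=(dy * offset, dx * offset), axis=(1, 2))

def ghost_layer(intrinsic, orientation="right"):
    """Concatenate intrinsic and ghost features along the channel axis,
    doubling the channel count at negligible extra compute."""
    ghost = ghost_features(intrinsic, orientation)
    return np.concatenate([intrinsic, ghost], axis=0)

x = np.random.randn(8, 16, 16)  # 8 intrinsic feature maps of size 16x16
y = ghost_layer(x)
print(y.shape)  # (16, 16, 16): intrinsic + ghost channels
```

The efficiency gain comes from replacing half of the convolution filters with a parameter-free shift, which is cheap on common hardware.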
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Fixed some typos and grammatical errors; A.4, A.5
Assigned Action Editor: ~Wei_Liu3
Submission Number: 370