Abstract: Convolutional neural networks (CNNs) have recently achieved remarkable success in lightweight image super-resolution (LISR), whose goal is to restore accurate details with limited model capacity. However, we observe two problems in current micro-architectures: a lack of consistent ability to learn high-frequency components, and a large-residual problem that harms the stability of residual learning. To address these issues, we propose two strategies, a global-guided attention strategy (GGAS) and a channel-wise scaling strategy (CWSS), which significantly improve the performance of state-of-the-art models with negligible overhead.
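The abstract does not specify how CWSS is implemented, but a common way to stabilize residual learning against a large residual branch is to multiply that branch by a learnable per-channel scale before the skip addition. The sketch below is a minimal NumPy illustration of that general idea; the function name, shapes, and initialization are assumptions, not the authors' actual design.

```python
import numpy as np

def channel_scaled_residual(x, residual, scale):
    """Hypothetical channel-wise scaling of a residual branch.

    x, residual : arrays of shape (C, H, W) -- skip input and residual output
    scale       : array of shape (C,)       -- learnable per-channel factors,
                                               typically initialized small so
                                               the residual branch starts weak
    """
    # Broadcast the per-channel scale over the spatial dimensions,
    # then add the skip connection.
    return x + scale[:, None, None] * residual

# Toy usage: two channels, scaled by 0.5 and 2.0 respectively.
x = np.ones((2, 3, 3))
residual = np.ones((2, 3, 3))
scale = np.array([0.5, 2.0])
out = channel_scaled_residual(x, residual, scale)
```

Initializing `scale` near zero would make each block start close to an identity mapping, which is one standard way to keep deep residual stacks stable early in training.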