TARN: a lightweight two-branch adaptive residual network for image super-resolution

Published: 01 Jan 2024 · Last Modified: 27 Jan 2025 · Int. J. Mach. Learn. Cybern. 2024 · CC BY-SA 4.0
Abstract: Currently, single-image super-resolution (SISR) methods based on convolutional neural networks have achieved remarkable results. However, most methods improve reconstruction performance by increasing network depth and complexity, which raises the network's computation and storage costs. To address this problem, this paper proposes a new lightweight two-branch adaptive residual network (TARN) for SISR reconstruction. To use residual features effectively, a two-branch adaptive residual block (TARB) based on a lattice-style linear-combination structure is designed. In TARB, an attention residual block (ARB) combines residual learning with attention mechanisms, enabling interactive learning between the two branches and retaining the feature information most useful for SISR reconstruction. To learn hierarchical features at different depths, multiple TARBs are cascaded to form the backbone of TARN. Furthermore, the features extracted by the TARBs are aggregated into a feature bank, and a distillation fusion block (DFB) performs feature compression and distillation by recalibrating channel feature responses and adaptively assigning weights. Experimental results on multiple datasets show that the proposed TARN achieves better subjective performance and quantitative results than most state-of-the-art lightweight networks. Specifically, on the Urban100 dataset TARN achieves higher PSNR values (32.20, 28.19, and 26.15) and SSIM values (0.9289, 0.8529, and 0.7874) than all compared methods for ×2, ×3, and ×4 super-resolution, respectively.
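The abstract describes a lattice-style linear combination of two branches gated by channel attention. The paper's actual layers are not given here, so the following is only a minimal NumPy sketch of the general idea: each branch's features are re-weighted by attention coefficients derived from the other branch and added residually. The parameter-free sigmoid gate stands in for whatever learned attention layers the ARB actually uses; all function names (`channel_attention`, `lattice_combine`) are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Squeeze-style channel attention: global average pooling per channel,
    squashed to (0, 1) gating weights. The paper's ARB presumably learns this
    mapping; a parameter-free sigmoid stands in for the learned layers here."""
    gap = x.mean(axis=(1, 2))   # (C,) global average pool over H, W
    return sigmoid(gap)         # (C,) per-channel gating weights

def lattice_combine(xa, xb):
    """Lattice-style linear combination of two branch outputs: each branch is
    enriched residually by the other branch, gated by channel attention."""
    wa = channel_attention(xa)[:, None, None]  # broadcast weights over H, W
    wb = channel_attention(xb)[:, None, None]
    pa = xa + wb * xb   # branch A plus gated contribution from branch B
    pb = xb + wa * xa   # branch B plus gated contribution from branch A
    return pa, pb

# Toy feature maps: 4 channels, 8x8 spatial resolution
rng = np.random.default_rng(0)
xa = rng.standard_normal((4, 8, 8))
xb = rng.standard_normal((4, 8, 8))
pa, pb = lattice_combine(xa, xb)
```

In a full model, several such blocks would be cascaded and their outputs collected into the feature bank that the DFB then compresses and fuses.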