HADT: Image super-resolution restoration using Hybrid Attention-Dense Connected Transformer Networks

Published: 01 Jan 2025 · Last Modified: 06 Oct 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Image super-resolution (SR) plays a vital role in vision tasks, and Transformer-based methods now outperform conventional convolutional neural networks. Existing work usually relies on residual connections to improve performance, but this type of connection transfers only limited information within a block. In addition, existing work typically restricts self-attention computation to a single window to improve feature extraction, so Transformer-based networks can only exploit feature information within a limited spatial range. To address these challenges, this paper proposes a novel Hybrid Attention-Dense Connected Transformer Network (HADT) to better exploit potential feature information. HADT is built by stacking Attentional Transformer Blocks (ATBs), each of which contains an Effective Dense Transformer Block (EDTB) and a Hybrid Attention Block (HAB). The EDTB combines dense connectivity with Swin Transformer layers to enhance feature transfer and strengthen the model's representational capacity, while the HAB enables cross-window information interaction and joint feature modeling for better visual quality. Experiments show that our method is effective on SR tasks with upscaling factors of 2, 3, and 4. For example, on the Urban100 dataset with an upscaling factor of 4, our method achieves a PSNR 0.15 dB higher than the previous method and reconstructs more detailed textures.
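To make the block composition described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of how an ATB built from an EDTB (dense connectivity across transformer layers) followed by an HAB might be wired together. All layer internals, names, and hyperparameters here are assumptions for illustration; the paper's actual EDTB/HAB designs (e.g., its specific window and cross-window attention) are not reproduced.

```python
import torch
import torch.nn as nn

class DenseTransformerLayer(nn.Module):
    """One transformer layer whose input is the channel-wise concatenation
    of the block input and all preceding layer outputs (dense connectivity).
    Global multi-head attention stands in for the paper's Swin-style windowed
    attention purely for brevity."""
    def __init__(self, in_dim, out_dim, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)  # fuse densely concatenated features
        self.norm = nn.LayerNorm(out_dim)
        self.attn = nn.MultiheadAttention(out_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(out_dim, out_dim * 2), nn.GELU(),
                                 nn.Linear(out_dim * 2, out_dim))

    def forward(self, x):                        # x: (B, N, in_dim)
        x = self.proj(x)
        h = self.norm(x)
        a, _ = self.attn(h, h, h)
        x = x + a
        return x + self.mlp(self.norm(x))

class EDTB(nn.Module):
    """Effective Dense Transformer Block (sketch): each layer sees the
    concatenation of the block input and all earlier layer outputs."""
    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseTransformerLayer(dim * (i + 1), dim) for i in range(num_layers)])

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=-1)))
        return feats[-1] + x                     # residual over the whole block

class HAB(nn.Module):
    """Hybrid Attention Block (placeholder): a single global attention layer
    stands in for the paper's cross-window information interaction."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        a, _ = self.attn(h, h, h)
        return x + a

class ATB(nn.Module):
    """Attentional Transformer Block = EDTB followed by HAB, stacked to form HADT."""
    def __init__(self, dim):
        super().__init__()
        self.edtb, self.hab = EDTB(dim), HAB(dim)

    def forward(self, x):
        return self.hab(self.edtb(x))

if __name__ == "__main__":
    tokens = torch.randn(1, 64 * 64, 60)         # (B, N, C): flattened 64x64 feature map
    print(ATB(60)(tokens).shape)                 # torch.Size([1, 4096, 60])
```

The sketch only illustrates the stated composition (dense connections inside EDTB, attention-based interaction in HAB, ATBs as the stacking unit); window partitioning, shifted windows, and the SR reconstruction head are omitted.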