Image denoising using channel attention residual enhanced Swin Transformer

Published: 01 Jan 2024, Last Modified: 07 May 2024 · Multim. Tools Appl. 2024 · CC BY-SA 4.0
Abstract: Transformers have achieved remarkable results in high-level vision tasks, but their application to low-level computer vision tasks such as image denoising remains largely unexplored. In this paper, we propose a novel channel attention residual enhanced Swin Transformer denoising network (CARSTDn), an efficient and effective Transformer-based architecture. CARSTDn consists of three modules: shallow feature extraction, deep feature extraction, and image reconstruction. The deep feature extraction module is the core of CARSTDn and is built from channel attention residual Swin Transformer blocks (CARSTB). Our benchmarking results demonstrate that CARSTDn outperforms existing state-of-the-art methods. We hope that our work will inspire further research into Transformer-based architectures for image denoising.
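The abstract only names the three modules and the CARSTB block, so the following PyTorch sketch is a non-authoritative illustration of that layout rather than the authors' implementation: a shallow convolutional feature extractor, a stack of assumed CARSTB blocks (a plain ViT-style attention block stands in for the windowed/shifted Swin attention, combined with a squeeze-and-excitation-style channel attention and a residual connection), and a convolutional reconstruction head. All internals (dimensions, depths, and the exact form of the channel attention) are assumptions.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SimpleTransformerBlock(nn.Module):
    """Plain ViT-style block standing in for a windowed/shifted Swin block."""
    def __init__(self, dim: int, num_heads: int = 4, mlp_ratio: float = 2.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):  # x: (B, N, C) token sequence
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x


class CARSTB(nn.Module):
    """Assumed CARSTB layout: transformer layers, then a conv plus channel
    attention, wrapped in a residual connection."""
    def __init__(self, dim: int, depth: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(SimpleTransformerBlock(dim) for _ in range(depth))
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)
        self.ca = ChannelAttention(dim)

    def forward(self, x):  # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)          # to (B, H*W, C) tokens
        for blk in self.blocks:
            t = blk(t)
        y = t.transpose(1, 2).reshape(b, c, h, w)
        return x + self.ca(self.conv(y))          # channel-attended residual


class CARSTDn(nn.Module):
    """Three-module layout from the abstract: shallow feature extraction,
    deep feature extraction (stack of CARSTBs), and image reconstruction."""
    def __init__(self, in_ch: int = 3, dim: int = 64, num_blocks: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.deep = nn.Sequential(*[CARSTB(dim) for _ in range(num_blocks)])
        self.reconstruct = nn.Conv2d(dim, in_ch, 3, padding=1)

    def forward(self, x):
        f = self.shallow(x)
        f = f + self.deep(f)                      # global residual over deep features
        return self.reconstruct(f)                # predicted denoised image


if __name__ == "__main__":
    noisy = torch.randn(1, 3, 64, 64)
    print(CARSTDn()(noisy).shape)                 # torch.Size([1, 3, 64, 64])
```

The sketch preserves spatial resolution throughout, which matches the common design of Swin-based restoration networks where reconstruction is a lightweight convolution over the residually aggregated deep features; hyperparameters here are placeholders, not values from the paper.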