Abstract: Deep learning based image deraining has been widely researched. However, rain streaks are hard to distinguish from similar background textures without contextual knowledge. In this paper, a novel Context-Aware Transformer (CAT) is proposed for single image deraining, in which both local and global context within the input rainy image are exploited for better background reconstruction. The proposed CAT perceives a comprehensive context view through an efficient self-attention mechanism and dilated convolutions in the Context-Aware Transformer Block (CATB). The Rain-Aware Feature Selection (RAFS) module adaptively generates feature blending coefficients to filter out rain-streak components and preserve the clear background in the hierarchical features of CAT. Meanwhile, a High-Frequency Preserved Loss (HFPL) provides additional supervision during training and promotes the preservation of clearer structures and sharper details. Experiments on synthetic and real-world benchmarks demonstrate outstanding performance over state-of-the-art methods and pleasing visual results in various scenes.
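The abstract does not specify the exact form of the High-Frequency Preserved Loss; below is a minimal PyTorch sketch of one plausible formulation, assuming HFPL penalizes the L1 distance between high-frequency components of the derained output and the clean ground truth, extracted here with a fixed Laplacian high-pass kernel (the kernel choice and weighting are illustrative assumptions, not the paper's definition).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighFrequencyPreservedLoss(nn.Module):
    """Hypothetical HFPL-style term: compare high-frequency content of the
    derained image and the ground truth using a fixed Laplacian filter."""
    def __init__(self):
        super().__init__()
        kernel = torch.tensor([[0., -1., 0.],
                               [-1., 4., -1.],
                               [0., -1., 0.]]).view(1, 1, 3, 3)
        self.register_buffer("kernel", kernel)

    def high_pass(self, x):
        # Apply the same high-pass filter to each channel independently.
        c = x.shape[1]
        k = self.kernel.expand(c, 1, 3, 3)
        return F.conv2d(x, k, padding=1, groups=c)

    def forward(self, derained, clean):
        return F.l1_loss(self.high_pass(derained), self.high_pass(clean))

# Usage sketch: combine with a standard reconstruction loss during training,
# where lambda_hf is an assumed balancing weight.
# hfpl = HighFrequencyPreservedLoss()
# loss = F.l1_loss(derained, clean) + lambda_hf * hfpl(derained, clean)
```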
External IDs: dblp:conf/icassp/ChenLHDL024