Abstract: The infrared image can distinguish the targets from the background based on radiation differences, providing more significant target visibility under dense haze.
Fusion of visible haze images with infrared prior representations can generate high-quality fused images for high-level tasks.
Consequently, we propose IPRDehazeNet, a novel dual-modal fusion network that fully exploits infrared prior representations for dehazing.
Specifically, we design a Multi-modal Feature Extraction Network (MMFE) to extract deep multi-scale features from both modalities.
Meanwhile, we introduce a Multi-scale Feature Extraction module (MSFE) that integrates an Efficient Dual Attention Block (EDAB) to efficiently capture richer spatial and edge information.
Additionally, we propose a new feature fusion strategy that computes fusion weights via an adaptive multi-head self-attention mechanism.
With these components, IPRDehazeNet achieves superior dehazing results through dual-modal fusion.
Experimental results indicate that IPRDehazeNet outperforms several state-of-the-art methods.
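To illustrate the general idea behind attention-derived fusion weights, the sketch below fuses a visible and an infrared feature vector using scalar weights obtained from a multi-head self-attention over the two modality tokens. This is a minimal NumPy illustration with hypothetical names (`attention_fusion`, randomly initialized projections), not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(f_vis, f_ir, num_heads=4, seed=0):
    """Fuse visible and infrared feature vectors at one spatial location.
    Fusion weights come from multi-head self-attention over the two
    modality tokens (projections are randomly initialized here; in a
    trained network they would be learned)."""
    C = f_vis.shape[0]
    assert C % num_heads == 0
    d = C // num_heads
    rng = np.random.default_rng(seed)
    Wq, Wk = rng.standard_normal((2, C, C)) / np.sqrt(C)
    tokens = np.stack([f_vis, f_ir])                  # (2, C): one token per modality
    Q = (tokens @ Wq).reshape(2, num_heads, d)
    K = (tokens @ Wk).reshape(2, num_heads, d)
    # per-head attention between the two modality tokens: (heads, 2, 2)
    scores = np.einsum('thd,shd->hts', Q, K) / np.sqrt(d)
    A = softmax(scores, axis=-1)
    # average attention each modality receives -> scalar weights summing to 1
    w = A.mean(axis=(0, 1))                           # (2,)
    return w[0] * f_vis + w[1] * f_ir, w
```

In a full network these weights would typically be spatially varying maps rather than scalars, but the principle is the same: the attention distribution over modality tokens determines how much each modality contributes to the fused feature.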