Abstract: In multi-focus image fusion, targets often vary in size, and a network with poor multi-scale feature extraction will inevitably omit source-image information. Motivated by this, we propose a network that uses a double multi-scale feature pyramid to extract multi-scale features. We design an effective channel compression excitation module and a channel-spatial attention module, which together form a semantic segmentation mechanism. This mechanism efficiently extracts multi-scale feature maps, preserving the global information of the source images while suppressing redundant information. We further introduce a joint loss function and apply post-processing to generate smooth decision maps and fused images. The proposed SFPN is compared with seven existing MFIF methods on six objective quantitative metrics and in subjective visual quality, and achieves superior performance.