Abstract: As a powerful and continuously sought-after medical assistance technique, multimodal medical image fusion integrates complementary information from single-modal medical images into one fused image. However, existing deep learning-based methods often feed the source images into a single network without accounting for information across different channels and scales, which can lose important information. To address this problem, we propose a multimodal medical image fusion method based on a multichannel aggregated network. The network iterates over different residual densely connected blocks to efficiently extract image features at three scales, and separately extracts the spatial-domain, channel, and fine-grained feature information of the source images at each scale. In addition, we introduce multispectral channel attention to address the limitations of global average pooling in the vanilla channel attention mechanism. Extensive fusion experiments demonstrate that the proposed method surpasses representative state-of-the-art methods in both subjective and objective evaluation. The code for this work is available at https://github.com/JasonWong30/MCAFusion.
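The authors' released code is at the GitHub link above. To illustrate the multispectral channel attention idea, here is a minimal PyTorch sketch assuming an FcaNet-style design, where each channel group is pooled with a different 2D DCT basis instead of global average pooling (which corresponds to the zero-frequency DCT term alone). The module name, frequency set, and reduction ratio are hypothetical, not taken from the paper:

```python
# Minimal sketch of multispectral channel attention (not the authors'
# implementation). Channels are split into groups; each group is pooled
# with a different 2D DCT-II basis, so higher-frequency statistics also
# drive the attention weights, unlike plain global average pooling.
import math
import torch
import torch.nn as nn


def dct_filter(u, v, h, w):
    """Return the (u, v) 2D DCT-II basis of size h x w (unnormalized)."""
    basis = torch.zeros(h, w)
    for i in range(h):
        for j in range(w):
            basis[i, j] = (math.cos(math.pi * u * (i + 0.5) / h) *
                           math.cos(math.pi * v * (j + 0.5) / w))
    return basis


class MultiSpectralChannelAttention(nn.Module):
    # freqs and reduction are illustrative defaults, not the paper's values.
    def __init__(self, channels, h, w,
                 freqs=((0, 0), (0, 1), (1, 0), (1, 1)), reduction=16):
        super().__init__()
        assert channels % len(freqs) == 0
        self.group = channels // len(freqs)
        weight = torch.stack([dct_filter(u, v, h, w) for u, v in freqs])
        self.register_buffer("dct_weight", weight)  # (n_freqs, h, w)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        xg = x.view(b, -1, self.group, h, w)  # (b, n_freqs, group, h, w)
        # Frequency-specific pooling replaces global average pooling.
        pooled = (xg * self.dct_weight[None, :, None]).sum(dim=(-1, -2))
        scale = self.fc(pooled.view(b, c)).view(b, c, 1, 1)
        return x * scale


# Usage on a 64-channel, 32x32 feature map:
# attn = MultiSpectralChannelAttention(64, 32, 32)
# y = attn(torch.randn(2, 64, 32, 32))
```

Note that setting `freqs=((0, 0),)` reduces this module to ordinary squeeze-and-excitation channel attention, since the zero-frequency DCT basis is constant and its pooling is proportional to the global average.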