Abstract: Accurate brain tumor image segmentation can significantly assist doctors in clinical diagnosis and treatment. The complementary information provided by multimodal magnetic resonance imaging (MRI) offers a more comprehensive characterization of brain tumors for lesion segmentation. However, in complex clinical scenarios modalities may be missing or corrupted, which degrades segmentation performance; brain tumor segmentation with missing modalities is therefore a challenging task. In recent years, leveraging interaction mechanisms to learn shared feature representations across multiple modalities has become a dominant approach, especially for improving segmentation accuracy under incomplete modalities. However, most existing methods fail to capture complete multimodal information because modality-specific details are lost during the interaction process. To overcome this challenge, we introduce a novel network, the Multimodal Invariant Feature Prompt Network (MIFPN), which incorporates modality prompts into multimodal interaction to acquire modality information more comprehensively. Specifically, MIFPN introduces modality-invariant feature prompts during cross-modal interaction, guiding the model to recover missing-modality information and facilitating its integration. To fully exploit modal information, we learn both modality-invariant and modality-specific representations and fuse them during prompt learning, yielding a more complete understanding of each modality. Considering that different modalities contribute unequally to the segmentation result, we design modality-aware masks and a modality selection strategy to merge the shallow features of the encoder. Through missing-modality prompt learning and modality-aware fusion, MIFPN effectively mitigates the degradation caused by missing modalities in brain tumor segmentation. Extensive experiments on the BraTS2020 and BraTS2018 datasets show that our method outperforms state-of-the-art methods in most scenarios across the 15 missing-modality configurations. The code is available at https://github.com/diaoyq121/MIFPN.
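To make the core idea concrete, the following is a minimal illustrative sketch of how learnable modality-invariant prompts can stand in for absent modalities before a modality-aware fusion step. All names, tensor shapes, and the gating design here are assumptions for illustration, not the authors' exact architecture; consult the released code at the repository above for the actual implementation.

```python
import torch
import torch.nn as nn

class InvariantPromptFusion(nn.Module):
    """Hypothetical sketch: learnable modality-invariant prompts replace
    the features of absent modalities, and a modality-aware gate weights
    each modality's contribution before fusion."""

    def __init__(self, num_modalities: int = 4, channels: int = 32):
        super().__init__()
        # One learnable prompt vector per modality, broadcast over space.
        self.prompts = nn.Parameter(torch.randn(num_modalities, channels) * 0.02)
        # Modality-aware scoring of pooled shallow features.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(),
            nn.Linear(channels, 1),
        )

    def forward(self, feats: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # feats:   (B, M, C, D, H, W) per-modality encoder features
        # present: (B, M) binary mask, 1 = modality available, 0 = missing
        B, M, C, D, H, W = feats.shape
        prompt = self.prompts.view(1, M, C, 1, 1, 1)
        mask = present.view(B, M, 1, 1, 1, 1).float()
        # Substitute learned prompts for the features of missing modalities.
        mixed = mask * feats + (1.0 - mask) * prompt
        # Score each modality from its globally pooled feature and fuse
        # with softmax-normalized, modality-aware weights.
        pooled = mixed.mean(dim=(3, 4, 5))            # (B, M, C)
        weights = torch.softmax(self.gate(pooled).squeeze(-1), dim=1)  # (B, M)
        fused = (weights.view(B, M, 1, 1, 1, 1) * mixed).sum(dim=1)
        return fused                                   # (B, C, D, H, W)

# Usage: 4 MRI modalities (T1, T1ce, T2, FLAIR); T1ce missing in sample 0.
feats = torch.randn(2, 4, 32, 8, 16, 16)
present = torch.tensor([[1, 0, 1, 1], [1, 1, 1, 1]])
fused = InvariantPromptFusion()(feats, present)
print(fused.shape)  # torch.Size([2, 32, 8, 16, 16])
```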