Query Re-Training for Modality-Gnostic Incomplete Multi-modal Brain Tumor Segmentation

Published: 01 Jan 2023, Last Modified: 05 Apr 2025 · MTSAIL/LEAF/AI4Treat/MMMI/REMIA @ MICCAI 2023 · CC BY-SA 4.0
Abstract: Magnetic Resonance Imaging (MRI) is crucial for brain tumor segmentation, yet specific modalities are frequently missing in clinical practice, which limits prediction performance. Existing methods typically train in multiple stages and use a separate encoder for each modality, so hybrid modules must be manually designed to fuse multi-modal features, and interaction across modalities is limited. To address this problem, we propose a transformer-based end-to-end model with a single auto-encoder that supports interactive computation under any missing-modality condition. Because it is challenging for a single model to handle multiple missing-modality states, we introduce learnable modality-combination queries that help the transformer decoder adapt to incomplete multi-modal segmentation. Furthermore, to mitigate the suboptimal optimization of Transformers on small datasets, we adopt a re-training mechanism that facilitates convergence to a better local minimum. Extensive experiments on the BraTS2018 and BraTS2020 datasets demonstrate that, on average, our method outperforms current state-of-the-art methods for incomplete multi-modal brain tumor segmentation.
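The abstract describes learnable modality-combination queries that condition a transformer decoder on the set of available MRI modalities. The sketch below is an illustrative PyTorch reconstruction of that idea only, not the authors' implementation: the query count, embedding size, decoder depth, and the index mapping from availability mask to query set are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one learnable query set per modality combination,
# selected by the availability pattern and decoded against shared encoder
# features. Sizes and names are assumptions, not the paper's settings.

NUM_MODALITIES = 4                           # T1, T1ce, T2, FLAIR
NUM_COMBINATIONS = 2 ** NUM_MODALITIES - 1   # every non-empty modality subset
NUM_QUERIES = 16                             # queries per combination (assumed)
EMBED_DIM = 256                              # token dimension (assumed)

class ModalityCombinationQueries(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable queries, one set for each of the 15 modality combinations.
        self.queries = nn.Parameter(
            torch.randn(NUM_COMBINATIONS, NUM_QUERIES, EMBED_DIM) * 0.02
        )
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=EMBED_DIM, nhead=8, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=4)

    def forward(self, encoder_tokens, modality_mask):
        # encoder_tokens: (B, N, EMBED_DIM) tokens from the shared auto-encoder
        # modality_mask:  (B, NUM_MODALITIES) binary availability per modality
        # Map each non-empty availability pattern to a combination index 0..14.
        weights = 2 ** torch.arange(NUM_MODALITIES, device=modality_mask.device)
        combo_idx = (modality_mask.long() * weights).sum(dim=1) - 1
        queries = self.queries[combo_idx]        # (B, NUM_QUERIES, EMBED_DIM)
        # Queries cross-attend to encoder features; the output would feed a
        # segmentation head in a full model.
        return self.decoder(tgt=queries, memory=encoder_tokens)

# Usage (assumed shapes): feats of shape (1, 512, 256) from a shared encoder,
# mask [[1, 0, 1, 1]] meaning T1ce is missing.
# out = ModalityCombinationQueries()(feats, torch.tensor([[1, 0, 1, 1]]))
```

The re-training mechanism mentioned in the abstract is not specified in enough detail here to sketch faithfully, so it is omitted from the example.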
