Learning Dynamic Convolutions for Multi-modal 3D MRI Brain Tumor Segmentation

2020 (modified: 10 Nov 2022), BrainLes@MICCAI (2) 2020
Abstract: Accurate automated brain tumor segmentation from 3D Magnetic Resonance Images (MRI) frees doctors from tedious annotation work and supports monitoring and prompt treatment of the disease. Many recent Deep Convolutional Neural Networks (DCNNs) achieve tremendous success in medical image analysis, especially tumor segmentation, but they usually use static networks that do not account for the inherent diversity of multi-modal inputs. In this paper, we introduce a dynamic convolutional module into brain tumor segmentation that learns input-adaptive parameters for specific multi-modal images. To the best of our knowledge, this is the first work to adopt dynamic convolutional networks for brain tumor segmentation with 3D MRI data. In addition, we employ multiple branches to learn low-level features from multi-modal inputs in an end-to-end fashion. We further exploit boundary information and propose a boundary-aware module that encourages our model to pay more attention to important pixels. Experimental results on the testing dataset and on a cross-validation split of the BraTS 2020 Challenge training dataset demonstrate that our proposed framework obtains competitive Dice scores compared with state-of-the-art approaches.
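The core idea of a dynamic (input-adaptive) convolution, as commonly formulated, is to keep K candidate kernels and mix them per input using attention weights computed from a pooled summary of the feature map, so the effective kernel changes with the input. A minimal NumPy sketch of a 1x1 dynamic convolution is below; all names (`dynamic_conv1x1`, `attn_w`, the kernel shapes) are illustrative assumptions, not the paper's actual 3D implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv1x1(x, kernels, attn_w):
    """Input-adaptive 1x1 convolution (illustrative sketch).

    x       : (C_in, H, W) input feature map
    kernels : (K, C_out, C_in) K candidate 1x1 kernels
    attn_w  : (K, C_in) maps the pooled input summary to K attention logits
    """
    # Global average pooling summarizes the input content
    pooled = x.mean(axis=(1, 2))              # (C_in,)
    # Softmax attention over the K candidate kernels
    pi = softmax(attn_w @ pooled)             # (K,), sums to 1
    # Aggregate candidates into one input-conditioned kernel
    W = np.tensordot(pi, kernels, axes=1)     # (C_out, C_in)
    # Apply as a 1x1 convolution: contract over the channel dimension
    return np.einsum('oc,chw->ohw', W, x)     # (C_out, H, W)
```

Because the attention weights depend on the pooled input, two different modality mixes can yield different effective kernels from the same set of learned candidates, which is the property the abstract attributes to the dynamic module.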