Abstract: Semantic segmentation is an important task in medical imaging. Typically, encoder-decoder architectures such as the U-Net, in various variants, are used to approach this task. Normalization methods, such as Batch or Instance Normalization, are applied throughout these architectures to adapt to data-specific noise. However, it has barely been investigated which normalization method is most suitable for a given dataset and whether combining several of them benefits overall performance. In this work, we show that evolutionary algorithms can fully automatically select the best set of normalization methods, outperforming every competitive single-normalization baseline. We provide insights into the selection of normalization methods and how it compares across imaging modalities and datasets. Overall, we propose that normalization be managed carefully during the development of modern semantic segmentation models, as it has a significant impact on medical image analysis tasks and contributes to a more efficient analysis of medical data. Our code is openly available at https://github.com/neuluna/evoNMS.
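The evolutionary selection of normalization methods described above can be illustrated with a toy sketch. Note that the search space (`NORMS`, `NUM_LAYERS`), the mutation scheme, and the placeholder fitness function below are all illustrative assumptions, not the authors' actual design; in the real method, fitness would be the validation segmentation performance of a U-Net trained with the chosen per-layer normalization assignment.

```python
import random

# Illustrative search space: one normalization choice per network stage.
# (Assumed for this sketch; the real evoNMS search space is defined in its repo.)
NORMS = ["batch", "instance", "group", "none"]
NUM_LAYERS = 5

def fitness(genome):
    # Placeholder fitness: in practice this would be the validation Dice
    # score of a U-Net trained with this normalization assignment. Here we
    # simply reward matching a hypothetical "preferred" pattern.
    target = ["instance"] * NUM_LAYERS
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.2):
    # Randomly re-draw each gene (normalization choice) with some probability.
    return [random.choice(NORMS) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, generations=30, seed=0):
    # Simple elitist evolutionary loop: keep the best half, refill the
    # population with mutated copies of random survivors.
    random.seed(seed)
    pop = [[random.choice(NORMS) for _ in range(NUM_LAYERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best)  # one normalization choice per layer, e.g. mostly "instance"
```

Because the survivors are carried over unchanged, the best fitness in the population never decreases across generations; replacing the placeholder fitness with an actual training-and-validation run yields the kind of automatic per-layer selection the abstract describes.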