Abstract: While deep learning models can solve the task of multi-organ segmentation, the scarcity of fully annotated multi-organ datasets poses a significant obstacle during training. 3D volume annotation of such datasets is expensive and time-consuming, and the sets of labeled structures vary greatly across datasets. To this end, we propose a solution that leverages multiple partially annotated datasets within a single segmentation model using disentangled learning. Dataset-specific encoder and decoder networks are trained, while a joint decoder network gathers the encoders' features to generate a complete segmentation mask. We evaluated our method using two simulated partially annotated datasets: one including the liver, lungs and kidneys, the other bones and bladder. Our method is trained to segment all five organs, achieving a Dice score of 0.78 and an IoU of 0.67. Notably, this performance is close to that of a model trained on the fully annotated dataset, which scores 0.80 in Dice and 0.70 in IoU.
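To make the described architecture concrete, the following is a minimal sketch, assuming a PyTorch implementation: two dataset-specific encoders whose features are gathered by a joint decoder that predicts all organ classes. The module names, channel counts, network depth, and fusion by concatenation are illustrative assumptions, not the authors' exact design (the dataset-specific decoders used during training are omitted here).

```python
# Minimal sketch (not the authors' exact architecture): dataset-specific
# encoders feed a joint decoder that predicts all organ classes at once.
# Channel counts and fusion by concatenation are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Dataset-specific encoder producing a downsampled feature map."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.stem = conv_block(in_ch, feat)
        self.down = nn.Sequential(nn.MaxPool3d(2), conv_block(feat, feat * 2))

    def forward(self, x):
        return self.down(self.stem(x))

class JointDecoder(nn.Module):
    """Joint decoder that fuses all encoder features into a full mask."""
    def __init__(self, feat=16, n_classes=6):  # 5 organs + background
        super().__init__()
        self.fuse = conv_block(2 * feat * 2, feat * 2)  # concat of 2 encoders
        self.up = nn.ConvTranspose3d(feat * 2, feat, kernel_size=2, stride=2)
        self.head = nn.Conv3d(feat, n_classes, kernel_size=1)

    def forward(self, feats):
        x = self.fuse(torch.cat(feats, dim=1))  # gather encoder features
        return self.head(self.up(x))            # complete multi-organ logits

class MultiDatasetSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_a = Encoder()  # e.g. liver/lungs/kidneys dataset
        self.enc_b = Encoder()  # e.g. bones/bladder dataset
        self.joint = JointDecoder()

    def forward(self, x):
        return self.joint([self.enc_a(x), self.enc_b(x)])

# Usage: a single CT volume in, logits for all five organs out.
# volume = torch.randn(1, 1, 32, 64, 64)    # (batch, channel, D, H, W)
# logits = MultiDatasetSegmenter()(volume)  # -> (1, 6, 32, 64, 64)
```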