Abstract: Building large medical imaging datasets for image segmentation is challenging because structures of interest must be manually outlined. In this work, we explore the use of stereology to cut annotation costs. We train a segmentation model using a coarse point-counting grid as the sole annotation and quantify the impact of this weak supervision on segmentation performance. Results show that dense masks are not a strict requirement for training segmentation models to satisfactory performance. Since deciding whether a small set of grid points overlaps a structure of interest is inherently faster than tracing a dense outline, this method makes it possible to scale volume annotation up to large datasets.
Keywords: stereology, weak supervision, segmentation, 3D U-Net, convolutional neural networks, deep learning
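The abstract does not detail the training pipeline, but the core idea (supervise only at coarse grid points and ignore all other pixels in the loss) can be illustrated with a minimal NumPy sketch. All names, the grid spacing, and the use of an ignore label are illustrative assumptions, not taken from the paper:

```python
import numpy as np

IGNORE = 255  # assumed sentinel for "not annotated"; not from the paper

def grid_point_labels(dense_mask, spacing, ignore=IGNORE):
    """Simulate stereological annotation: keep labels only at a coarse
    point-counting grid, marking every other pixel as unannotated."""
    sparse = np.full(dense_mask.shape, ignore, dtype=np.uint8)
    sparse[::spacing, ::spacing] = dense_mask[::spacing, ::spacing]
    return sparse

def masked_bce(prob, sparse_labels, ignore=IGNORE, eps=1e-7):
    """Binary cross-entropy averaged only over annotated grid points."""
    annotated = sparse_labels != ignore
    y = sparse_labels[annotated].astype(np.float64)
    p = np.clip(prob[annotated], eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean())

# Toy example: a 64x64 mask annotated with an 8-pixel grid uses
# 64 labeled points instead of 4096 densely labeled pixels.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1
sparse = grid_point_labels(mask, spacing=8)
loss = masked_bce(np.full((64, 64), 0.5), sparse)
```

In an actual 3D U-Net training loop the same masking idea is typically expressed through the loss function's ignore mechanism (e.g. an `ignore_index`-style option), so gradients flow only from the grid points.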