Leveraging Model-Generated Annotations for Nuclei Segmentation in Computational Pathology

Published: 09 Oct 2025, Last Modified: 31 Oct 2025 · NeurIPS 2025 Workshop Imageomics · License: CC BY 4.0
Submission Track: Short papers presenting ongoing research or work submitted to other venues (up to 5 pages, excluding references)
Keywords: Nuclei Segmentation, Medical Imaging, Computational Pathology, Instance Segmentation, Weakly Supervised Learning
TL;DR: We incorporate a style-aware layer to leverage model-generated annotations for the nuclei segmentation task
Abstract: Detecting and segmenting nuclei from hematoxylin and eosin (H\&E) stained images is important for many downstream applications, ranging from disease diagnosis in clinical settings to biomarker development in preclinical settings. Many open-source models have been developed for cell segmentation on publicly available datasets, but they may not generalize well across different tissue and disease conditions, and fine-tuning these models can be costly owing to the labour-intensive nature of annotating H\&E images. To address this, we propose a novel training framework that leverages annotations derived from multiple pre-existing segmentation models, treating them as imperfect "annotators". Our approach mitigates the risk of overfitting to the inherent biases of these source models by incorporating learnable embedding vectors that explicitly represent the distinct annotation "style" of each model. This allows our model to learn robust, generalizable features despite the limited availability of ground-truth annotations. We show that this approach yields superior segmentation performance compared to naively training on the aggregated outputs of pre-trained models.
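The abstract describes learnable per-annotator "style" embeddings that condition the segmentation model during training. The paper does not specify the mechanism, but one common way to realize such conditioning is FiLM-style feature modulation. The sketch below is an illustrative assumption, not the authors' implementation; all class names, dimensions, and the choice of FiLM conditioning are hypothetical.

```python
import torch
import torch.nn as nn

class StyleConditionedHead(nn.Module):
    """Hypothetical sketch of a style-aware segmentation head.

    Each source segmentation model ("annotator") gets a learnable style
    embedding. During training, the embedding of the annotator that produced
    a given pseudo-label modulates the backbone features (FiLM-style scale
    and shift), so annotator-specific biases are absorbed by the embedding
    rather than the shared features. Sizes and names are assumptions.
    """

    def __init__(self, in_channels: int, num_annotators: int, style_dim: int = 16):
        super().__init__()
        self.style = nn.Embedding(num_annotators, style_dim)
        # Map the style vector to per-channel scale and shift parameters.
        self.film = nn.Linear(style_dim, 2 * in_channels)
        # 1x1 conv producing binary nucleus-mask logits.
        self.head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, annotator_id: torch.Tensor) -> torch.Tensor:
        s = self.style(annotator_id)                  # (B, style_dim)
        scale, shift = self.film(s).chunk(2, dim=-1)  # (B, C) each
        feats = feats * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.head(feats)                       # (B, 1, H, W) logits

# Smoke test with random features standing in for a backbone's output.
head = StyleConditionedHead(in_channels=8, num_annotators=3)
feats = torch.randn(2, 8, 32, 32)
ids = torch.tensor([0, 2])
logits = head(feats, ids)
```

At inference time one could feed a neutral style (e.g. the embedding of the most trusted annotator, or an averaged embedding) so that predictions are not tied to any single source model's biases; that choice is again an assumption about how such a framework might be used.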
Submission Number: 22