CrossCT: CNN and Transformer cross-teaching for multimodal image cell segmentation

14 Nov 2022 (modified: 10 Mar 2023). Submitted to NeurIPS CellSeg 2022.
Keywords: cross-teaching, CNN, Transformers, cell segmentation, NeurIPS challenge, semi-supervised learning
TL;DR: An image segmentation method that implements cross-teaching between a CNN and a Transformer to exploit both labeled and unlabeled images
Abstract: Segmenting microscopy images is a crucial step in the quantitative analysis of biological imaging data. Classical tools for biological image segmentation must be tuned to the cell type and imaging conditions to produce decent results. Another limitation is the lack of high-quality labeled data for training alternative methods such as Deep Learning, since manual labeling is costly and time-consuming. The NeurIPS challenge Weakly Supervised Cell Segmentation in Multi-modality High-Resolution Microscopy Images was organized to address this problem. The aim of the challenge was to develop a versatile method that copes with high image variability, few labeled images, many unlabeled images, and no human interaction. We developed CrossCT, a framework based on cross-teaching between a CNN and a Transformer. The main idea behind this work was to improve on the organizers' baseline methods and to use both labeled and unlabeled data. Experiments show that our method outperforms the baseline methods, which are based on a supervised learning approach. We achieved an F1 score of 0.5988 for the Transformer and 0.5626 for the CNN while respecting the time limits imposed for inference.
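
As a rough illustration of the cross-teaching idea summarized in the abstract, the sketch below shows one possible training step in PyTorch: on labeled images both networks minimize an ordinary supervised loss, while on unlabeled images each network is trained against hard pseudo-labels produced by the other. The function, variable names, choice of cross-entropy losses, and the weighting factor `lambda_u` are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of one cross-teaching training step, assuming two segmentation
# networks (e.g., a CNN such as U-Net and a Transformer-based model) that both
# map images of shape (B, C, H, W) to per-pixel class logits (B, K, H, W).
import torch
import torch.nn.functional as F

def cross_teaching_step(cnn, transformer, x_labeled, y_labeled, x_unlabeled, lambda_u=0.5):
    # Supervised part: both models learn from the few labeled images.
    logits_cnn_l = cnn(x_labeled)
    logits_trf_l = transformer(x_labeled)
    sup_loss = (F.cross_entropy(logits_cnn_l, y_labeled)
                + F.cross_entropy(logits_trf_l, y_labeled))

    # Cross-teaching part: each model predicts on the unlabeled images.
    logits_cnn_u = cnn(x_unlabeled)
    logits_trf_u = transformer(x_unlabeled)

    # Hard pseudo-labels, detached so no gradient flows back to the "teacher"
    # through its own pseudo-labels.
    pseudo_from_cnn = logits_cnn_u.argmax(dim=1).detach()
    pseudo_from_trf = logits_trf_u.argmax(dim=1).detach()

    # The CNN is supervised by the Transformer's pseudo-labels and vice versa.
    cross_loss = (F.cross_entropy(logits_cnn_u, pseudo_from_trf)
                  + F.cross_entropy(logits_trf_u, pseudo_from_cnn))

    # lambda_u balances the unsupervised cross-teaching term (assumed value).
    return sup_loss + lambda_u * cross_loss
```

In a typical training loop, this loss would be computed on a mixed batch of labeled and unlabeled images and back-propagated through both networks, so that the unlabeled data contribute a consistency-style signal without any manual annotation.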