Abstract: Data scarcity and the annotation burden limit the quantitation of cell microscopy images. Data acquisition, preparation, and annotation are costly and time-consuming. Moreover, cell annotation is an error-prone task that requires personnel with specialized knowledge. Generative artificial intelligence offers an alternative that alleviates these limitations by learning an unknown data distribution and generating realistic images from it. Still, extra effort is needed, since data annotation remains a task independent of the generative process. In this work, we assess whether generative models learn meaningful features related to instance segmentation, and we evaluate their potential to produce realistic annotated images. We present a single-channel grayscale segmentation mask pipeline that differentiates overlapping objects while minimizing the number of labels. Additionally, we propose a modified version of the established StyleGAN2 generator that synthesizes images and segmentation masks simultaneously without add