Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning

Published: 28 Feb 2019, Last Modified: 05 May 2023 · MIDL 2019 Poster
Abstract: We propose a novel variational autoencoder (VAE) framework for learning representations of cell images in image-based profiling, a domain important for new therapeutic discovery. Previously, generative adversarial network (GAN)-based approaches were proposed to enable biologists to visualize the structural variations in cells that drive differences between populations. However, while the generated images were realistic, these approaches did not provide direct reconstructions from representations, and their performance in downstream analysis was poor. We address these limitations by adding an adversarially driven similarity constraint to the standard VAE framework, together with a progressive training procedure that allows higher-quality reconstructions than standard VAEs. The proposed models improve classification accuracy by 22% (to 90%) over the best reported GAN model, making them competitive with other models that have higher-quality representations but lack the ability to synthesize images. This provides researchers with a new tool to match cellular fingerprints effectively and to gain better insight into the cellular structure variations that drive differences between populations of cells.
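The abstract does not spell out how the adversarially driven similarity constraint is combined with the standard VAE objective. The sketch below is one plausible reading, assuming a VAE-GAN-style feature-matching term in which an auxiliary discriminator supplies the similarity signal; all module names (`Encoder`, `Decoder`, `Discriminator`), layer sizes, and loss weights are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumed, not the authors' implementation): a VAE whose
# reconstruction objective is augmented with an adversarially driven
# similarity term computed in the feature space of a discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):            # x: (B, 1, 64, 64) single-channel cell crops
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

class Discriminator(nn.Module):
    """Returns a real/fake score plus an intermediate feature map used
    for the similarity (feature-matching) constraint."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16, 1))

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

def vae_similarity_loss(x, enc, dec, disc, beta=1.0, gamma=1.0):
    """Standard VAE loss plus an adversarially driven similarity term."""
    mu, logvar = enc(x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)          # reparameterisation trick
    x_hat = dec(z)

    recon = F.mse_loss(x_hat, x)                  # pixel-wise reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    _, f_real = disc(x)                           # discriminator features of real cells
    _, f_fake = disc(x_hat)                       # ... and of their reconstructions
    similarity = F.mse_loss(f_fake, f_real)       # similarity constraint in feature space
    return recon + beta * kl + gamma * similarity
```

In this reading, the discriminator is trained adversarially against the decoder as in a GAN, while the generator-side similarity term keeps reconstructions tied to their inputs so that the latent representations remain usable for downstream profiling; the progressive training procedure mentioned in the abstract (growing resolution over training) is orthogonal and not shown here.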