Generalizing Microscopy Image Labeling via Layer-Matching Adversarial Domain Adaptation

ICML 2024 Workshop ML4LMS

Published: 17 Jun 2024, Last Modified: 18 Jul 2024 · ML4LMS Poster · CC BY 4.0
Keywords: Machine Learning, Deep Learning, Image-to-Image Translation, adversarial domain adaptation, Domain Adaptation, Adversarial Learning, Generative Adversarial Networks (GANs), UNet, Latent Space Matching, Model Generalization, Internal Representation Matching, Layer-Specific Adaptation, ADDA, Layer Matching, Synthetic Image Generation, Computational Biology, MSCs, Microscopy, Protein Marker Detection
Abstract: Image-to-image translation models are valuable tools that convert light microscopy images into in silico labeled biological immunofluorescence images, enabling non-invasive and label-free measurement of protein expression levels in live cells. Despite their potential, these models have not gained significant traction in the life sciences due to their low transferability and lack of robustness to common variations in microscopy images. Additionally, re-training a model for each new microscope setting is infeasible due to the high cost of data acquisition. In this work, we explore domain adaptation techniques that make image-to-image translation models more robust to common distribution shifts in microscopy images. Specifically, we propose Layer-Matching Adversarial Domain Adaptation (LM-ADDA), a general framework that leverages the information-rich latent spaces within the translation model to perform unsupervised domain adaptation. Through experiments on multiple domain shifts, we demonstrate that LM-ADDA enhances the robustness of image-to-image translation models without requiring additional paired or labeled data.
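The core idea named in the abstract, adversarially matching the distributions of an intermediate layer's activations between source and target domains, can be illustrated with a toy sketch. The code below is an assumption-laden simplification, not the paper's implementation: real layer activations are replaced by synthetic feature vectors, and the target-side encoder update is replaced by a direct gradient step on the target features. A logistic-regression discriminator learns to tell the domains apart, while the adversarial step moves target features toward the source distribution.

```python
import numpy as np

# Toy sketch of layer-matching adversarial adaptation (ADDA-style).
# All names, shapes, and learning rates here are illustrative assumptions.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 16
src = rng.normal(0.0, 1.0, size=(256, d))   # "source" layer activations
tgt = rng.normal(1.5, 1.0, size=(256, d))   # "target" activations after a domain shift

init_gap = np.linalg.norm(src.mean(0) - tgt.mean(0))

w = np.zeros(d)                              # discriminator weights
b = 0.0
lr_d, lr_f = 0.1, 0.1

for _ in range(500):
    # Discriminator step: logistic regression labeling source=1, target=0.
    p_src = sigmoid(src @ w + b)
    p_tgt = sigmoid(tgt @ w + b)
    grad_w = src.T @ (p_src - 1.0) / len(src) + tgt.T @ p_tgt / len(tgt)
    grad_b = np.mean(p_src - 1.0) + np.mean(p_tgt)
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # Adversarial step: nudge target features to look "source-like".
    # In LM-ADDA this gradient would instead update the target encoder.
    p_tgt = sigmoid(tgt @ w + b)
    tgt += lr_f * (1.0 - p_tgt)[:, None] * w[None, :]

final_gap = np.linalg.norm(src.mean(0) - tgt.mean(0))
```

After the adversarial loop, the gap between the source and target feature means shrinks, which is the sense in which the matched layer becomes domain-invariant; in the actual framework this matching is applied to the information-rich latent spaces inside the UNet-style translation model rather than to raw feature vectors.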
Poster: pdf
Submission Number: 139