Learning to Represent Whole Slide Images by Selecting Cell Graphs of Patches

Apr 06, 2021 (edited Jun 30, 2021), MIDL 2021 Conference Short Submission
  • Keywords: self-supervised learning, cell graphs, graph neural networks
  • Abstract: Advances in multiplex biomarker imaging systems have enabled the study of complex spatial biology within the tumor microenvironment. However, high-resolution multiplexed images are often available only for a subset of regions of interest (RoIs), clinical data are not easily accessible, and the datasets are generally too small for the off-the-shelf deep learning methods commonly used in histopathology. In this paper, we focus on small, unlabeled datasets and aim to learn representations for whole slides. We choose a patient-identification pretext task that leads our new model to select RoIs with discriminative properties and to infer patient-specific features that can later be reused for any downstream task via transfer learning. Experimental results on synthetic data, generated by taking the tumor microenvironment into account, indicate that the proposed method is a promising step towards computer-aided analysis of unlabeled datasets of high-resolution images.
  • Paper Type: methodological development
  • Primary Subject Area: Unsupervised Learning and Representation Learning
  • Secondary Subject Area: Application: Histopathology
  • Paper Status: original work, not submitted yet
  • Source Code Url: https://github.com/yinanzhangepfl/multigraph-classification
  • Data Set Url: Synthetic dataset
  • Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
  • Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
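The abstract describes building cell graphs over patches (RoIs) and aggregating them into a patch-level representation. The sketch below is a minimal, illustrative version of that idea, not the authors' model: cells within a fixed radius are connected, node features are smoothed by parameter-free mean-neighbor message passing (a stand-in for a learned GNN), and the result is mean-pooled into one embedding per patch. The radius, feature dimension, and number of rounds are arbitrary assumptions for the example.

```python
import numpy as np

def build_cell_graph(coords, radius=30.0):
    """Connect cells whose centroids lie within `radius` (pixels).
    Returns a boolean adjacency matrix without self-loops."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (d < radius) & ~np.eye(len(coords), dtype=bool)

def message_pass(feats, adj):
    """One round of mean-neighbor aggregation: each cell's features are
    averaged with the mean of its neighbors (no learned weights here)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    return 0.5 * (feats + (adj @ feats) / deg)

def patch_embedding(coords, feats, radius=30.0, rounds=2):
    """Embed one patch/RoI: message passing over its cell graph,
    then mean pooling over cells."""
    adj = build_cell_graph(coords, radius)
    for _ in range(rounds):
        feats = message_pass(feats, adj)
    return feats.mean(axis=0)

# Toy patch: 50 cells with 2-D centroids and 8-d marker features.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 200, size=(50, 2))
feats = rng.normal(size=(50, 8))
emb = patch_embedding(coords, feats)  # one 8-d vector per patch
```

In the paper's setting, such per-patch embeddings would then be scored so that only the most discriminative RoIs contribute to the slide-level representation used for the patient-identification task; here the learned components are deliberately omitted.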