Masked Autoencoders Pre-training in Multiple Instance Learning for Whole Slide Image Classification

22 Apr 2022, 14:43 (edited 04 Jun 2022) · MIDL 2022 Short Papers
  • Keywords: histopathology, self-supervised learning, multiple instance learning
  • Abstract: End-to-end learning with whole-slide digital pathology images is challenging due to their size, which is on the order of gigapixels. In this paper, we propose a novel weakly-supervised learning strategy that combines masked autoencoders (MAE) with multiple instance learning (MIL). We use the output tokens of a self-supervised, pre-trained MAE as instances and design a token selection module to reduce the impact of global average pooling. We evaluate our framework on whole-slide image classification on the Camelyon16 dataset, showing improved performance compared to the state-of-the-art CLAM algorithm.
  • Registration: I acknowledge that acceptance of this work at MIDL requires at least one of the authors to register and present the work during the conference.
  • Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
  • Paper Type: novel methodological ideas without extensive validation
  • Primary Subject Area: Application: Histopathology
  • Secondary Subject Area: Detection and Diagnosis
  • Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
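The abstract describes replacing global average pooling over MAE output tokens with a token selection module, but this short-paper page gives no implementation details. The sketch below illustrates one plausible reading, assuming a simple learned scoring vector and top-k selection; the function name, the scoring rule, and the value of k are all assumptions for illustration, not the authors' actual module.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_token_pooling(tokens, w_score, k):
    """Score each MAE output token, keep the top-k, and average only those.

    tokens  : (N, D) instance embeddings, one per patch token from a
              frozen, self-supervised pre-trained MAE encoder
    w_score : (D,)   scoring weights (learned in practice; random here)
    k       : number of tokens to keep

    Returns a (D,) bag embedding built from the selected instances only,
    so uninformative tokens do not dilute the slide-level representation
    the way plain global average pooling would.
    """
    scores = tokens @ w_score            # (N,) relevance score per token
    keep = np.argsort(scores)[-k:]       # indices of the k highest-scoring tokens
    return tokens[keep].mean(axis=0)     # (D,) bag embedding

# Toy bag: 100 tokens of dimension 16, standing in for MAE outputs.
tokens = rng.normal(size=(100, 16))
w = rng.normal(size=16)

bag_topk = top_k_token_pooling(tokens, w, k=8)
bag_gap = tokens.mean(axis=0)            # baseline: global average pooling
print(bag_topk.shape, bag_gap.shape)     # both (16,)
```

In a full MIL pipeline, the resulting bag embedding would feed a slide-level classifier, and `w_score` would be trained jointly with it under the weak slide-level label.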