Abstract: Graph Neural Networks (GNNs) hold great promise for solving many challenges in digital pathology by leveraging the rich relationships between cells and tissues in histology images. However, the shortage of annotated data in digital pathology presents a significant challenge for training GNNs. To address this, self-supervision can be used to enable models to learn from data by capturing rich structures and relationships without requiring annotations. Inspired by pathologists, who examine multiple views of a histology slide under a microscope for exhaustive analysis, we propose a novel methodology for graph representation learning using self-supervision. Our methodology leverages multiple graph views constructed from a given histology image to capture diverse information. We maximize mutual information across node and graph representations of the different graph views, resulting in a comprehensive graph representation. We showcase the efficacy of our methodology on the BRACS dataset, where our algorithm generates representations superior to those of other self-supervised graph representation learning algorithms and approaches the performance of pathologists and supervised learning algorithms. The code and pre-trained weights are available on GitHub at https://github.com/Vishwesh4/Multiview-GRL
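The following is a minimal sketch, not the authors' released code, of the cross-view mutual-information objective the abstract describes: node embeddings from one graph view are scored against the graph-level summary of another view, in the style of Deep Graph Infomax. The encoder, bilinear discriminator, and two-view setup below are illustrative assumptions; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of a multi-view node-graph mutual-information loss.
# All class and variable names here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """Single graph-convolution layer operating on a dense adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        adj = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        return F.relu(self.linear(adj @ x))


class MultiViewMI(nn.Module):
    """Cross-view node-graph mutual-information estimator (DGI-style)."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = GCNLayer(in_dim, hid_dim)
        self.discriminator = nn.Bilinear(hid_dim, hid_dim, 1)

    def score(self, nodes: torch.Tensor, summary: torch.Tensor) -> torch.Tensor:
        # Bilinear score between each node embedding and a graph summary.
        return self.discriminator(nodes, summary.expand_as(nodes)).squeeze(-1)

    def forward(self, x1, adj1, x2, adj2):
        h1 = self.encoder(x1, adj1)              # node embeddings, view 1
        h2 = self.encoder(x2, adj2)              # node embeddings, view 2
        s1 = torch.sigmoid(h1.mean(dim=0, keepdim=True))  # graph summary, view 1
        s2 = torch.sigmoid(h2.mean(dim=0, keepdim=True))  # graph summary, view 2

        # Negatives: shuffle node features to corrupt each view.
        h1_neg = self.encoder(x1[torch.randperm(x1.size(0))], adj1)
        h2_neg = self.encoder(x2[torch.randperm(x2.size(0))], adj2)

        # Maximize MI between nodes of one view and the summary of the other.
        pos = torch.cat([self.score(h1, s2), self.score(h2, s1)])
        neg = torch.cat([self.score(h1_neg, s2), self.score(h2_neg, s1)])
        labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
        return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)


# Toy usage: two random graph views standing in for views of the same image.
if __name__ == "__main__":
    x1, adj1 = torch.randn(30, 16), (torch.rand(30, 30) > 0.8).float()
    x2, adj2 = torch.randn(50, 16), (torch.rand(50, 50) > 0.8).float()
    model = MultiViewMI(in_dim=16, hid_dim=32)
    loss = model(x1, adj1, x2, adj2)
    loss.backward()
    print(f"cross-view MI loss: {loss.item():.4f}")
```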