Keywords: Single-cell analysis, multimodality integration, graph neural networks
TL;DR: Graph representation learning for single-cell multimodal data integration
Abstract: Recent advances in multimodal single-cell technologies have enabled the simultaneous acquisition of multi-omics data from the same cell, providing deeper insights into cellular states and dynamics. However, it is challenging to learn joint representations from multimodal data, to model the relationships between modalities, and, more importantly, to incorporate the vast amount of single-modality datasets into downstream analyses. To address these challenges and facilitate multimodal single-cell data analysis, three key tasks have been introduced: modality prediction, modality matching, and joint embedding. In this work, we present scMoGNN, a general Graph Neural Network (GNN) framework that tackles all three tasks, and show that it achieves superior results on all three tasks compared with state-of-the-art and conventional approaches. Our method is an official winner in the overall ranking of the modality prediction task of the NeurIPS 2021 Competition.
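To make the modality prediction task concrete, the sketch below shows one way a GNN over a cell-feature bipartite graph could regress one modality (e.g., protein abundance) from another (e.g., gene expression). This is not the authors' scMoGNN implementation; the class name `BipartiteGNN`, the layer sizes, the row-normalized message passing, and the MSE loss are all illustrative assumptions in plain PyTorch.

```python
# Hypothetical minimal sketch: cells and genes form a bipartite graph, a few
# rounds of message passing build cell embeddings, and a linear head predicts
# the target modality. Not the scMoGNN architecture; an assumed simplification.
import torch
import torch.nn as nn

class BipartiteGNN(nn.Module):
    def __init__(self, n_genes, n_proteins, hidden=64, n_layers=2):
        super().__init__()
        self.gene_emb = nn.Parameter(torch.randn(n_genes, hidden) * 0.01)  # learnable gene nodes
        self.cell_proj = nn.Linear(n_genes, hidden)                        # initial cell embedding
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden, hidden) for _ in range(n_layers)]
        )
        self.head = nn.Linear(hidden, n_proteins)                          # target-modality head

    def forward(self, x):
        # x: (n_cells, n_genes) expression matrix, reused as weighted
        # cell->gene edges for message passing (a simplifying assumption).
        adj = x / (x.sum(dim=1, keepdim=True) + 1e-8)  # row-normalized edge weights
        h_cell = torch.relu(self.cell_proj(x))
        h_gene = self.gene_emb
        for layer in self.layers:
            msg = adj @ h_gene                          # aggregate gene messages per cell
            h_cell = torch.relu(layer(torch.cat([h_cell, msg], dim=1)))
        return self.head(h_cell)                        # predicted protein profile

# Toy usage with random tensors standing in for paired multimodal measurements.
model = BipartiteGNN(n_genes=2000, n_proteins=100)
rna = torch.rand(32, 2000)
protein_target = torch.rand(32, 100)
loss = nn.functional.mse_loss(model(rna), protein_target)
loss.backward()
```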
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2203.01884/code) (via CatalyzeX)