scCLIP: Multi-modal Single-cell Contrastive Learning Integration Pre-training

Published: 28 Oct 2023, Last Modified: 23 Nov 2023 · NeurIPS 2023 AI4Science Poster
Keywords: Multi-modal Integration, Single Cell, Transformer
Abstract: Recent advances in multi-modal single-cell sequencing technologies enable the simultaneous profiling of chromatin accessibility and transcriptome in individual cells. Integrative analysis of multi-modal single-cell data offers a more comprehensive understanding of the regulatory mechanisms linking chromatin status and gene expression that drive cellular processes and diseases. To learn features that align peaks and genes within the same embedding space and enable seamless zero-shot transfer to new data, we introduce scCLIP (single-cell Contrastive Learning Integration Pretraining), a generalized multi-modal transformer model trained with contrastive learning. We show that this model outperforms competing methods; beyond this, scCLIP learns transferable features across modalities and generalizes to unseen datasets, offering great potential to bridge the vast number of unpaired unimodal datasets, both existing and yet to be generated. Specifically, we propose the first large-scale transformer model designed for single-cell ATAC-seq data, patching peaks across the genome and representing each patch as a token. This approach effectively addresses the scalability challenges posed by scATAC-seq data with up to one million dimensions. Code is available at: https://github.com/jsxlei/scCLIP.
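To make the two key ideas concrete, below is a minimal, illustrative PyTorch sketch (not the authors' implementation; see the linked repository for scCLIP itself). It shows (a) ViT-style patching of a high-dimensional peak vector into genomic patches that become transformer tokens, and (b) a symmetric CLIP-style contrastive loss that aligns ATAC and RNA embeddings of the same cell. All module names, dimensions, and the exact patching scheme here are hypothetical assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Group an ~1M-dimensional peak vector into fixed-size genomic patches,
    then project each patch to a token embedding (ViT-style patching).
    Illustrative only; the real model's patching may differ."""
    def __init__(self, num_peaks: int, patch_size: int, dim: int):
        super().__init__()
        self.patch_size = patch_size
        # Pad the peak axis so it divides evenly into patches.
        self.num_patches = (num_peaks + patch_size - 1) // patch_size
        self.pad = self.num_patches * patch_size - num_peaks
        self.proj = nn.Linear(patch_size, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_peaks) binary/continuous accessibility values
        x = F.pad(x, (0, self.pad))
        x = x.view(x.size(0), self.num_patches, self.patch_size)
        return self.proj(x)  # (batch, num_patches, dim) token embeddings

def clip_loss(atac_emb: torch.Tensor, rna_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE objective: the paired ATAC/RNA profiles of the same
    cell are positives; all other cells in the batch serve as negatives."""
    atac_emb = F.normalize(atac_emb, dim=-1)
    rna_emb = F.normalize(rna_emb, dim=-1)
    logits = atac_emb @ rna_emb.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

In this sketch, each modality would have its own transformer encoder producing a per-cell embedding (e.g., from a CLS token over the patch tokens), and the contrastive loss pulls paired ATAC/RNA embeddings together in the shared space, which is what enables zero-shot transfer to new datasets.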
Submission Track: Original Research
Submission Number: 139