GRACE: Towards Realistic Multimodal Single-Cell Data Matching

26 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Single-cell data analysis, Multimodal learning, AI for science
TL;DR: We study the new problem of realistic multimodal single-cell data matching and propose GRACE, a novel approach for it.
Abstract: Single-cell multi-omics technologies (e.g., scRNA-seq and scATAC-seq) have provided increasingly comprehensive insights into cellular conditions and activities in recent years. However, multimodal representation learning for omics data remains challenging due to heterogeneous relationships across modalities and label scarcity in practice. In this work, we propose a novel approach named Geometric Relation Exploration with Cross-modal Supervision (GRACE) for realistic multimodal single-cell matching. In particular, we map data from both modalities into a shared embedding space by maximizing the log-likelihood of ZINB distributions. To reduce the semantic gap between modalities, we construct a geometric graph from mutual nearest neighbors that encodes cross-modal relations between samples for distribution alignment. Furthermore, to exploit pairwise information as fully as possible, we explore high-order relations in the geometric graph, which are incorporated into a meta-learning paradigm for robust optimization. In addition, to further mitigate label scarcity, we introduce a non-parametric scheme that generates label vectors for unlabeled data, enabling cross-modal supervision across modalities. Extensive experiments on several benchmark datasets validate the superiority of GRACE over various baselines. Overall, compared to the second-best method, GRACE achieves average performance improvements of 6.71% and 14.17% on the R2A and A2R tasks, respectively.
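For reference, the zero-inflated negative binomial (ZINB) likelihood whose log-likelihood the abstract says is maximized typically takes the following standard form (standard mean-dispersion parameterization with dropout probability pi, mean mu, and dispersion theta; the paper's exact notation and conditioning may differ):

\[
\mathrm{NB}(x \mid \mu, \theta)
  = \frac{\Gamma(x + \theta)}{\Gamma(\theta)\, x!}
    \left(\frac{\theta}{\theta + \mu}\right)^{\theta}
    \left(\frac{\mu}{\theta + \mu}\right)^{x},
\qquad
\mathrm{ZINB}(x \mid \pi, \mu, \theta)
  = \pi\, \delta_{0}(x) + (1 - \pi)\, \mathrm{NB}(x \mid \mu, \theta),
\]
so the embedding objective would maximize \(\sum_{x} \log \mathrm{ZINB}(x \mid \pi, \mu, \theta)\) over the observed counts.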
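As an illustration of the mutual-nearest-neighbor construction mentioned in the abstract, the sketch below builds candidate cross-modal edges between two embedding matrices. This is a minimal sketch under our own assumptions, not the paper's implementation; the names build_mnn_pairs, rna_emb, atac_emb, and the choice of k are illustrative only.

# Minimal sketch of cross-modal mutual nearest neighbors (MNN) between two
# embedding matrices rna_emb (n_rna x d) and atac_emb (n_atac x d).
# Illustrative only; not the paper's code.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_mnn_pairs(rna_emb: np.ndarray, atac_emb: np.ndarray, k: int = 10):
    """Return (i, j) index pairs that are mutual k-nearest neighbors across modalities."""
    # k nearest ATAC cells for every RNA cell
    nn_atac = NearestNeighbors(n_neighbors=k).fit(atac_emb)
    _, rna_to_atac = nn_atac.kneighbors(rna_emb)
    # k nearest RNA cells for every ATAC cell
    nn_rna = NearestNeighbors(n_neighbors=k).fit(rna_emb)
    _, atac_to_rna = nn_rna.kneighbors(atac_emb)

    atac_neighbor_sets = [set(row) for row in atac_to_rna]
    pairs = []
    for i, neighbors in enumerate(rna_to_atac):
        for j in neighbors:
            if i in atac_neighbor_sets[j]:  # mutual: i chooses j and j chooses i
                pairs.append((i, int(j)))
    return pairs  # candidate edges of a cross-modal geometric graph

For example, pairs = build_mnn_pairs(rna_emb, atac_emb, k=10) would yield anchor edges of the kind used for distribution alignment between modalities.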
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 5881
