Multiscale Graph Representations for Cross-Modal Biological Data Integration

Authors: ICLR 2025 Workshop LMRL Submission 22 Authors

06 Feb 2025 (modified: 18 Apr 2025) · Submitted to ICLR 2025 Workshop LMRL · License: CC BY 4.0
Track: Tiny Paper Track
Keywords: Multiscale Graph Representations, Cross-Modal Biological Data Integration, Graph Attention Networks, Multimodal Graph Autoencoders, Hierarchical Graph Structures, Biological Systems, Disease Classification, Cell-Type Annotation
TL;DR: We propose a multiscale graph representation learning framework that unifies biological modalities by embedding them into a multiscale latent space, preserving cross-scale interactions and improving interpretability.
Abstract: We present a novel multiscale graph representation of biological systems that captures the complexity of cellular and molecular organization while remaining interpretable and generalizable across modalities. Our framework combines graph attention networks with multimodal graph autoencoders to learn shared embeddings across biological scales while enforcing cross-modal alignment. We demonstrate its effectiveness on downstream tasks such as disease classification and cell-type annotation, where it outperforms single-modality methods. Our results further highlight the importance of cross-modal alignment in biological data integration and demonstrate the scalability of the approach on large-scale datasets.
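To make the described architecture concrete, below is a minimal sketch of a graph-attention-based multimodal graph autoencoder with a cross-modal alignment term, assuming PyTorch and PyTorch Geometric. All module and function names (ModalityEncoder, MultimodalGraphAutoencoder, alignment_loss, the inner-product reconstruction, and the toy anchor pairs) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: GAT encoders per modality, shared latent space, inner-product
# reconstruction, and an MSE alignment term on matched cross-modal anchors.
# Assumes PyTorch + PyTorch Geometric; names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class ModalityEncoder(nn.Module):
    """Two-layer graph attention encoder for one modality/scale."""

    def __init__(self, in_dim, hidden_dim, latent_dim, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads, concat=True)
        self.gat2 = GATConv(hidden_dim * heads, latent_dim, heads=1, concat=False)

    def forward(self, x, edge_index):
        h = F.elu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)  # node embeddings in the shared latent space


class MultimodalGraphAutoencoder(nn.Module):
    """Encodes each modality's graph into a shared latent space."""

    def __init__(self, in_dims, hidden_dim=64, latent_dim=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, hidden_dim, latent_dim) for d in in_dims]
        )

    def forward(self, graphs):
        # graphs: list of (node_features, edge_index) tuples, one per modality
        return [enc(x, ei) for enc, (x, ei) in zip(self.encoders, graphs)]

    @staticmethod
    def recon_loss(z, edge_index):
        # Inner-product decoder: observed edges should receive high scores.
        src, dst = edge_index
        logits = (z[src] * z[dst]).sum(dim=-1)
        return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    @staticmethod
    def alignment_loss(z_a, z_b, pairs):
        # Pull embeddings of matched cross-modal anchors together.
        return F.mse_loss(z_a[pairs[0]], z_b[pairs[1]])


if __name__ == "__main__":
    # Toy example: two modalities with different feature dimensions.
    x1, e1 = torch.randn(100, 50), torch.randint(0, 100, (2, 400))
    x2, e2 = torch.randn(80, 20), torch.randint(0, 80, (2, 300))
    pairs = torch.stack([torch.arange(40), torch.arange(40)])  # matched anchor indices

    model = MultimodalGraphAutoencoder(in_dims=[50, 20])
    z1, z2 = model([(x1, e1), (x2, e2)])
    loss = (model.recon_loss(z1, e1) + model.recon_loss(z2, e2)
            + 0.5 * model.alignment_loss(z1, z2, pairs))
    loss.backward()
```

In this sketch, per-modality reconstruction preserves within-scale structure while the alignment term couples the latent spaces; a weighting hyperparameter (here 0.5) trades off the two objectives.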
Submission Number: 22
