From Base Pairs to Functions: Rich RNA Representations via Multimodal Language Modeling

Published: 06 Oct 2025, Last Modified: 06 Oct 2025 · NeurIPS 2025 2nd Workshop FM4LS Poster · CC BY 4.0
Keywords: multimodal RNA language model, RNA language model, RNA reactivity, RNA classification, RNA degradation, mean ribosome loading
TL;DR: RABONA is a multimodal RNA language model trained on ncRNA sequences and structures, providing rich representations for downstream tasks.
Abstract: RNA foundation models have recently emerged as powerful tools for learning from large sequence databases, yet their embeddings often fall short in simple probing setups, necessitating additional finetuning. Most existing models are pretrained solely on sequences, assuming that structural information will emerge implicitly. We introduce RABONA, a multimodal RNA language model jointly pretrained on sequence-structure pairs using modality-specific masking, designed for both generative and understanding tasks. It produces embeddings that form clearer family-specific clusters and shows stronger attention alignment with RNA base pairs compared to other RNA language models. In this paper, we focus on RABONA's predictive capabilities and show that it consistently outperforms larger baselines across diverse downstream tasks in both finetuning and linear probing setups, demonstrating that incorporating structure during pretraining yields richer RNA embeddings and enables more efficient foundation models.
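The abstract states that RABONA is jointly pretrained on sequence-structure pairs with modality-specific masking but does not give implementation details. The sketch below is a minimal, hypothetical illustration of that idea only: it assumes dot-bracket structure annotations, independent masking rates per modality, and placeholder mask tokens (`[MASK_SEQ]`, `[MASK_STRUCT]`), none of which are specified in the paper page.

```python
import random

# Hypothetical mask tokens; RABONA's actual tokenization is not described here.
SEQ_MASK, STRUCT_MASK = "[MASK_SEQ]", "[MASK_STRUCT]"

def modality_specific_masking(sequence, structure, p_seq=0.15, p_struct=0.15, seed=None):
    """Mask the sequence and structure tracks independently.

    sequence:  nucleotides, e.g. "GGGAAACCC"
    structure: dot-bracket base pairs, e.g. "(((...)))"
    Returns masked token lists plus the positions chosen in each modality,
    which a masked-language-modeling objective would try to reconstruct.
    """
    rng = random.Random(seed)
    seq_tokens = list(sequence)
    struct_tokens = list(structure)

    # Sample masked positions separately for each modality.
    seq_positions = [i for i in range(len(seq_tokens)) if rng.random() < p_seq]
    struct_positions = [i for i in range(len(struct_tokens)) if rng.random() < p_struct]

    for i in seq_positions:
        seq_tokens[i] = SEQ_MASK
    for i in struct_positions:
        struct_tokens[i] = STRUCT_MASK

    return seq_tokens, struct_tokens, seq_positions, struct_positions

if __name__ == "__main__":
    seq, struct, seq_pos, struct_pos = modality_specific_masking(
        "GGGAAACCC", "(((...)))", seed=0
    )
    print(seq, struct, seq_pos, struct_pos)
```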
Submission Number: 11