Enhancing DNA Foundation Models to Address Masking Inefficiencies

Published: 05 Mar 2025, Last Modified: 16 Apr 2025
Venue: ICLR 2025 AI4NA Poster
License: CC BY 4.0
Track: long paper (up to 6 pages)
Keywords: DNA language models, Masked autoencoder, Masked language modelling, DNA barcode, Transformers
Abstract: Masked language modelling (MLM) has been widely adopted as a pretraining objective in genomic sequence modelling. While pretrained models can successfully serve as encoders for various downstream tasks, the distribution shift between pretraining and inference degrades performance: the pretraining task maps [MASK] tokens to predictions, yet [MASK] tokens are absent during downstream applications. As a result, the encoder does not prioritize its representations of non-[MASK] tokens and expends parameters and compute on work that is relevant only to the MLM objective and irrelevant at deployment time. In this work, we propose a modified encoder-decoder architecture based on the masked autoencoder framework, designed to address this inefficiency within a BERT-based transformer. We empirically show that the resulting mismatch is particularly detrimental in genomic pipelines, where models are often used for feature extraction without fine-tuning. We evaluate our approach on the BIOSCAN-5M dataset, comprising over 2 million unique DNA barcodes, and achieve substantial performance gains in both closed-world and open-world classification tasks compared with causal models and bidirectional architectures pretrained with MLM objectives.
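To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of MAE-style pretraining setup the abstract describes: the encoder processes only the visible (unmasked) tokens, and a small decoder reinserts learned mask embeddings to reconstruct the masked positions. All class, parameter, and variable names (e.g. MaskedAutoencoderForDNA) are illustrative assumptions, not taken from the paper's implementation.

```python
# Minimal sketch of an MAE-style encoder-decoder for DNA tokens (illustrative only).
import torch
import torch.nn as nn


class MaskedAutoencoderForDNA(nn.Module):
    def __init__(self, vocab_size=8, d_model=256, n_enc_layers=6, n_dec_layers=2,
                 n_heads=8, max_len=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_enc_layers)
        dec_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, n_dec_layers)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, mask):
        # tokens: (B, L) integer nucleotide/k-mer ids; mask: (B, L) bool, True = masked.
        x = self.token_emb(tokens) + self.pos_emb[:, : tokens.size(1)]
        B, L, D = x.shape

        # Encoder sees only the visible tokens -- no [MASK] placeholders, so its
        # capacity is spent entirely on real sequence content.
        visible = x[~mask].view(B, -1, D)  # assumes an equal mask count per row
        enc = self.encoder(visible)

        # Decoder input: scatter encoded visible tokens back into place and fill
        # masked slots with a learned mask embedding plus positional information.
        full = self.mask_token.expand(B, L, D) + self.pos_emb[:, :L]
        full[~mask] = enc.reshape(-1, D)
        logits = self.head(self.decoder(full))  # (B, L, vocab_size)

        # Reconstruction loss only on masked positions, as in MLM/MAE.
        return nn.functional.cross_entropy(logits[mask], tokens[mask])


# Illustrative usage: mask the same positions in every sequence to satisfy the
# equal-mask-count assumption above.
model = MaskedAutoencoderForDNA()
tokens = torch.randint(0, 8, (4, 128))
mask = torch.zeros(4, 128, dtype=torch.bool)
mask[:, torch.randperm(128)[:32]] = True
loss = model(tokens, mask)
```

In a setup of this kind, the encoder never receives [MASK] placeholders, so its inputs at pretraining time match its inputs at deployment time, which is the mismatch the abstract identifies; the reconstruction machinery lives entirely in the lightweight decoder and can be discarded when the encoder is used for feature extraction.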
Submission Number: 35