Exploring Adversarial Robustness in Classification tasks using DNA Language Models

ACL ARR 2025 February Submission4349 Authors

15 Feb 2025 (modified: 09 May 2025), CC BY 4.0
Abstract: DNA language models, such as GROVER, DNABERT2, and the Nucleotide Transformer, operate on DNA sequences that inherently contain sequencing errors, mutations, and laboratory-induced noise, all of which may significantly impact model performance. Despite the importance of this issue, the robustness of DNA language models remains largely underexplored. In this paper, we comprehensively investigate their robustness in DNA classification by applying adversarial attack strategies at three levels: the character level (nucleotide substitutions), the word level (codon modifications), and the sentence level (back-translation-based transformations), in order to systematically analyze model vulnerabilities. Our results demonstrate that DNA language models are highly susceptible to adversarial attacks, which cause significant performance degradation. Furthermore, we explore adversarial training as a defense mechanism and find that it enhances both robustness and classification accuracy. This study highlights the limitations of current DNA language models and underscores the necessity of robustness evaluation in bioinformatics.
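As a rough illustration of the character-level attack the abstract describes, the sketch below randomly substitutes nucleotides in a sequence at a given rate. This is a minimal, hypothetical example of such a perturbation, not the paper's actual attack procedure; the function name, rate, and seed are assumptions for illustration only.

```python
import random

def nucleotide_substitution(seq: str, rate: float = 0.05, seed: int = 0) -> str:
    """Character-level perturbation: replace each nucleotide with a
    different random base (A/C/G/T) with probability `rate`.
    Illustrative sketch only; not the paper's attack implementation."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    bases = "ACGT"
    out = []
    for ch in seq:
        if ch in bases and rng.random() < rate:
            # pick a substitute base different from the original
            out.append(rng.choice([b for b in bases if b != ch]))
        else:
            out.append(ch)
    return "".join(out)

# Perturb a toy sequence at a 20% substitution rate
perturbed = nucleotide_substitution("ATGCGTACGTTAGC", rate=0.2)
```

The word-level (codon) variant would operate analogously on non-overlapping 3-mers instead of single bases.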
Paper Type: Short
Research Area: Special Theme (conference specific)
Research Area Keywords: Robustness, Generalization, Interpretability and Analysis of Models for NLP, Machine Learning for NLP, Language Modeling
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: DNA Sequences (Nucleotide Data)
Submission Number: 4349