The Mechanistic Invariance Test: Genomic Language Models Fail To Learn Positional Regulatory Logic
Keywords: Promoter design, Regulatory sequence generation, Synthetic biology applications, Design validation benchmarks, Generative model evaluation, Sequence-function mapping, Gene therapy design, Biophysical constraints, Design generalization, Functional element placement, Biomolecular grammar, Controllable generation, Design reliability, Experimental ground truth, Position-aware sequence design
TL;DR: Critical for biomolecular design: generative gLMs fail to learn the positional regulatory logic essential for promoter engineering—a 100-parameter biophysical model outperforms billion-parameter gLMs, urging hybrid approaches for reliable sequence design.
Abstract: Genomic language models (gLMs) have transformed computational biology, achieving state-of-the-art performance in variant effect prediction, gene expression modeling, and regulatory element discovery. Yet a fundamental question threatens the foundation of this success: do these models learn the mechanistic principles governing gene regulation, or do they merely exploit statistical shortcuts? We introduce the Mechanistic Invariance Test (MIT), a rigorous 650-sequence benchmark across 8 classes with scrambled controls that enables clean discrimination between compositional sensitivity and genuine positional understanding. We evaluate five gLMs spanning all major architectural paradigms (autoregressive, masked, and bidirectional state-space models) and uncover a universal failure mode. Through systematic mechanistic probing via AT titration, positional ablation, spacing perturbation, and strand orientation tests, we demonstrate that apparent compositional sensitivity is driven entirely by AT content correlation (r=0.78–0.96 across architectures), not positional regulatory logic. The failures are striking: Evo2-1B and Caduceus score regulatory elements at incorrect positions higher than correct positions, inverting biological reality. All models are strand-blind. Compositional effects dominate positional effects by 46-fold. Perhaps most revealing, a simple 100-parameter position-aware PWM achieves perfect performance (CSS=1.00, SCR=0.98), exposing that billion-parameter gLMs fail not from insufficient capacity but from fundamentally misaligned inductive biases. Larger models show stronger compositional bias, demonstrating that scale amplifies rather than corrects this limitation. These findings reveal that current gLMs capture surface statistics while missing the positional grammar essential for gene regulation, demanding architectural innovation before deployment in synthetic biology, gene therapy, and clinical variant interpretation.
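The contrast the abstract draws between a position-aware PWM and composition-only scoring can be sketched as follows. This is a minimal illustrative toy, not the paper's actual 100-parameter model: the motif, offsets, and sequences are hypothetical, but the mechanism is the same — the PWM is applied only at its expected offset, so a correctly placed element scores high while the identically composed sequence with the element elsewhere does not, even though an AT-content baseline cannot tell the two apart.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (len, 4) one-hot array."""
    idx = np.array([BASES.index(b) for b in seq])
    out = np.zeros((len(seq), 4))
    out[np.arange(len(seq)), idx] = 1.0
    return out

def position_aware_score(seq, pwm, offset):
    """Score seq with a (w, 4) log-odds PWM anchored at a fixed offset."""
    w = pwm.shape[0]
    window = one_hot(seq[offset:offset + w])
    return float((window * pwm).sum())

def at_content(seq):
    """Composition-only baseline: fraction of A/T bases."""
    return sum(b in "AT" for b in seq) / len(seq)

# Toy PWM strongly preferring "TATA" (log-odds against a uniform background).
pwm = np.log(np.array([
    [0.05, 0.05, 0.05, 0.85],  # position 1: T
    [0.85, 0.05, 0.05, 0.05],  # position 2: A
    [0.05, 0.05, 0.05, 0.85],  # position 3: T
    [0.85, 0.05, 0.05, 0.05],  # position 4: A
]) / 0.25)

correct = "GGCCTATAGGCC"  # motif at the "correct" offset 4
shifted = "TATAGGCCGGCC"  # same base composition, motif misplaced

print(position_aware_score(correct, pwm, offset=4))  # high (motif present)
print(position_aware_score(shifted, pwm, offset=4))  # low (motif absent there)
print(at_content(correct), at_content(shifted))      # identical AT fractions
```

A model driven by composition alone behaves like `at_content` and assigns both sequences the same score; the position-anchored scorer separates them, which is the behavior the MIT benchmark is designed to detect.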
Submission Number: 21