Track: Main track
Keywords: ai, icl, llm, genomic model, evo2, qwen3
TL;DR: We show that large-scale genomic models (Evo2) can perform in-context learning much like LLMs, without being explicitly trained to do so.
Abstract: In-context learning (ICL) -- the capacity of a model to infer and apply abstract patterns from examples provided within its input -- has been extensively studied in large language models trained for next-token prediction on human text.
Indeed, prior work often attributes this emergent behavior to distinctive statistical properties of *human* language. This raises a fundamental question: can ICL arise *organically* in other sequence domains purely through large-scale predictive training?
To explore this, we turn to genomic sequences, an alternative symbolic domain rich in statistical structure.
Specifically, we study the Evo2 genomic model, trained predominantly on next-nucleotide (A/T/C/G) prediction at a scale comparable to mid-sized LLMs. We develop a controlled experimental framework of symbolic reasoning tasks instantiated in both linguistic and genomic forms, enabling a direct comparison of ICL between genomic and linguistic models.
Our results show that genomic models, like their linguistic counterparts, exhibit log-linear gains in pattern induction as the number of in-context demonstrations increases.
To the best of our knowledge, this is the first evidence of organically emergent ICL in a genomic sequence model, supporting the hypothesis that ICL arises as a consequence of large-scale predictive modeling over rich data.
These findings extend emergent meta-learning beyond language, pointing toward a unified, modality-agnostic view of in-context learning.
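For concreteness, here is a minimal, hypothetical sketch of the kind of few-shot evaluation harness the abstract describes: a symbolic rule instantiated over the nucleotide alphabet, with accuracy measured as a function of the number of in-context demonstrations k. All names (make_demo, build_prompt, model_logprob) are illustrative assumptions, not the paper's actual code; in particular, model_logprob is a toy stand-in for a real Evo2 scoring call.

import random

ALPHABET = "ATCG"

def make_demo(rng, length=6):
    # One demonstration of a simple symbolic rule (here: string reversal),
    # rendered over the nucleotide alphabet. The paper's actual task set
    # may differ; reversal is just an illustrative pattern.
    src = "".join(rng.choice(ALPHABET) for _ in range(length))
    return src, src[::-1]

def build_prompt(demos, query):
    # Concatenate k input>output demonstrations followed by the query input.
    return "|".join(f"{x}>{y}" for x, y in demos) + f"|{query}>"

def model_logprob(prompt, continuation, rng):
    # Toy stand-in for the model's scoring interface (an assumption; a real
    # run would call Evo2's log-likelihood here). It favors the rule-consistent
    # continuation with added noise so the harness is runnable end to end.
    query = prompt.rsplit("|", 1)[-1].rstrip(">")
    return (1.0 if continuation == query[::-1] else 0.0) + rng.gauss(0.0, 0.7)

def accuracy_at_k(k, n_trials=200, seed=0):
    # Fraction of trials where the correct output outscores random distractors.
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        demos = [make_demo(rng) for _ in range(k)]
        query, target = make_demo(rng)
        prompt = build_prompt(demos, query)
        candidates = [target] + [
            "".join(rng.choice(ALPHABET) for _ in target) for _ in range(3)
        ]
        best = max(candidates, key=lambda c: model_logprob(prompt, c, rng))
        correct += best == target
    return correct / n_trials

if __name__ == "__main__":
    # With a real model in place of the toy scorer, a log-linear trend appears
    # as a roughly constant accuracy gain per doubling of k.
    for k in (1, 2, 4, 8, 16):
        print(k, accuracy_at_k(k))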
AI Policy Confirmation: I confirm that this submission clearly discloses the role of AI systems and human contributors and complies with the ICLR 2026 Policies on Large Language Model Usage and the ICLR Code of Ethics.
Submission Number: 35