(How) Can Transformers Predict Pseudo-Random Numbers?

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We show that Transformers can learn to predict sequences from linear congruential generators (LCGs). We identify the underlying algorithm employed by trained models, which involves estimating the modulus in-context and finding its prime factorization.
Abstract: Transformers excel at discovering patterns in sequential data, yet their fundamental limitations and learning mechanisms remain crucial topics of investigation. In this paper, we study the ability of Transformers to learn pseudo-random number sequences from linear congruential generators (LCGs), defined by the recurrence relation $x_{t+1} = a x_t + c \;\mathrm{mod}\; m$. We find that with sufficient architectural capacity and training data variety, Transformers can perform in-context prediction of LCG sequences with unseen moduli ($m$) and parameters ($a, c$). By analyzing the embedding layers and attention patterns, we uncover how Transformers develop algorithmic structures to learn these sequences in two scenarios of increasing complexity. First, we investigate how Transformers learn LCG sequences with unseen ($a, c$) but a fixed modulus, and demonstrate successful learning up to $m = 2^{32}$. We find that models learn to factorize $m$ and utilize digit-wise number representations to make sequential predictions. In the second, more challenging scenario, we show that Transformers can generalize to unseen moduli up to $m_{\text{test}} = 2^{16}$. In this case, the model employs a two-step strategy: first estimating the unknown modulus from the context, then utilizing prime factorizations to generate predictions. For this task, we observe a sharp transition in the accuracy at a critical depth $d = 3$. We also find that the number of in-context sequence elements needed to reach high accuracy scales sublinearly with the modulus.
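To make the setup concrete, below is a minimal Python sketch (not taken from the paper or its linked repository) that generates an LCG sequence from the recurrence above and recovers the modulus from observed terms using the classical gcd-of-differences trick from number theory. The parameter values are illustrative, and the recovery routine is a textbook algorithm for comparison; the paper reports that trained Transformers arrive at a different, learned in-context strategy.

```python
# Illustrative sketch, not the paper's code. Generates terms of
# x_{t+1} = (a * x_t + c) mod m and recovers m from a prefix of the sequence.
from math import gcd


def lcg_sequence(a: int, c: int, m: int, x0: int, n: int) -> list[int]:
    """Generate n terms of the LCG recurrence x_{t+1} = (a * x_t + c) mod m."""
    xs = [x0 % m]
    for _ in range(n - 1):
        xs.append((a * xs[-1] + c) % m)
    return xs


def estimate_modulus(xs: list[int]) -> int:
    """Classical estimate of m from observed LCG terms.

    Successive differences d_t = x_{t+1} - x_t satisfy d_{t+1} = a * d_t (mod m),
    so each d_{t+2} * d_t - d_{t+1}^2 is a multiple of m; the gcd of several such
    values is typically m itself. Requires Python >= 3.9 for multi-argument gcd.
    """
    d = [y - x for x, y in zip(xs, xs[1:])]
    multiples = [d[t + 2] * d[t] - d[t + 1] ** 2 for t in range(len(d) - 2)]
    return gcd(*multiples)


if __name__ == "__main__":
    # Arbitrary example parameters; m = 2^16 + 1 matches the paper's test scale.
    xs = lcg_sequence(a=75, c=74, m=2**16 + 1, x0=12345, n=12)
    print(xs)
    print(estimate_modulus(xs))  # usually 65537, given enough in-context terms
```

The gcd-based recovery needs only a handful of terms, which gives a rough human baseline for the paper's observation that the number of in-context elements required for high accuracy scales sublinearly with the modulus.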
Lay Summary: We study whether a particular class of AI models, called Transformers, can learn to predict sequences of seemingly random numbers produced by pseudo-random number generators (PRNGs), which follow hidden mathematical rules. We find that when the models have sufficient capacity and are shown enough example sequences, they can successfully learn to predict new, unseen PRNG sequences by figuring out the underlying rules. The models develop their own strategies for this task, which involve breaking the numbers into smaller prime factors and using them to simplify the sequences. Our research shows how modern AI systems can discover and apply complex mathematical rules without being explicitly programmed to do so, helping us understand both their capabilities and limitations.
Link To Code: https://github.com/dayal-kalra/transformer-prng.git
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Interpretability, In-context learning, Grokking, Transformer, Pseudo-random number generators
Submission Number: 14764