Neural Networks as Universal Finite-State Machines: A Constructive ReLU Simulation Framework for NFAs
Abstract: We present a formal and constructive framework establishing the equivalence between nondeterministic finite automata (NFAs) and standard feedforward ReLU neural networks. By encoding automaton states as binary vectors and transitions as sparse linear layers, we show that ReLU activations precisely simulate nondeterministic branching, subset construction, and $\epsilon$-closures. Our core theoretical results prove that a three-layer ReLU network of width $\mathcal{O}(n)$ can exactly recognize any regular language accepted by an $n$-state NFA, without recurrence, memory, or approximation. Furthermore, we show that gradient descent on structure-preserving networks maintains symbolic semantics and acceptance behavior. Extensive experiments across multiple validation tasks (parallel path tracking, symbolic subset construction, $\epsilon$-closure convergence, acceptance classification, structural training invariants, and functional equivalence) achieve perfect or near-perfect empirical alignment with ground-truth automata. This work provides the first provably complete symbolic simulation of NFAs within standard deep learning architectures, uniting automata theory with neural computation through ReLU dynamics.
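For concreteness, the following NumPy sketch illustrates the construction the abstract describes; it is an illustrative reconstruction under stated assumptions, not the authors' implementation. NFA states are encoded as a binary vector, each symbol's transitions as a 0/1 matrix, and a ReLU clamp ($\min(x, 1) = 1 - \mathrm{ReLU}(1 - x)$ for $x \ge 0$) realizes the subset-construction union over nondeterministic branches; iterating the same step on an $\epsilon$-transition matrix converges to the $\epsilon$-closure fixpoint. The example automaton (a 3-state NFA for strings ending in "ab") and all helper names are assumptions chosen for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_step(state, T):
    # Linear layer: pre[j] counts currently active states with a transition to j.
    pre = T.T @ state
    # ReLU clamp to {0, 1}: for x >= 0, min(x, 1) = 1 - relu(1 - x).
    # This computes the union of all nondeterministic successor sets.
    return 1.0 - relu(1.0 - pre)

def eps_closure(state, T_eps):
    # Hypothetical epsilon-closure: iterate the self-loop-augmented epsilon
    # step n times, which reaches the closure fixpoint for an n-state NFA.
    n = len(state)
    M = np.minimum(T_eps + np.eye(n), 1.0)
    for _ in range(n):
        state = relu_step(state, M)
    return state

# Illustrative 3-state NFA over {a, b} accepting strings ending in "ab":
# delta(0, a) = {0, 1}, delta(0, b) = {0}, delta(1, b) = {2}.
T = {
    "a": np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 0, 0]], dtype=float),
    "b": np.array([[1, 0, 0],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=float),
}
start = np.array([1.0, 0.0, 0.0])   # start state 0
accept = np.array([0.0, 0.0, 1.0])  # accepting state 2

def accepts(word):
    s = start
    for c in word:
        s = relu_step(s, T[c])
    return bool(accept @ s >= 1.0)  # accept iff an accepting state is active

print(accepts("aab"))  # True
print(accepts("aba"))  # False
```

On this example the network tracks all nondeterministic branches in parallel: reading "aab" drives the state vector through $\{0, 1\}$, $\{0, 1\}$, and $\{0, 2\}$, so accepting state 2 is active and the word is accepted.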
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Surbhi_Goel1
Submission Number: 4988