Neural Networks as Universal Finite-State Machines: A Constructive Feedforward Simulation Framework for NFAs

TMLR Paper4988 Authors

29 May 2025 (modified: 18 Sept 2025) · Rejected by TMLR · CC BY 4.0
Abstract: We present a formal and constructive simulation framework for nondeterministic finite automata (NFAs) using standard feedforward neural networks. Unlike prior approaches that rely on recurrent architectures or post hoc extraction methods, our formulation symbolically encodes automaton states as binary vectors, transitions as sparse matrix transformations, and nondeterministic branching (including $\varepsilon$-closures) as compositions of shared thresholded updates. We prove that every regular language can be recognized exactly by a depth-unrolled feedforward network whose shared parameters are independent of input length. Our construction yields not only formal equivalence between NFAs and neural networks, but also practical trainability: we demonstrate that these networks can learn NFA acceptance behavior through gradient descent on standard supervised data. Extensive experiments validate all theoretical results, achieving perfect or near-perfect agreement on acceptance, state propagation, and closure dynamics. This work establishes a new bridge between symbolic automata theory and modern neural architectures, showing that feedforward networks can perform precise, interpretable, and trainable symbolic computation.
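The encoding the abstract describes can be sketched in a few lines. Below is a minimal illustration (not the authors' construction) using a hypothetical 3-state NFA over {a, b}: the active state set is a binary vector, each input symbol has a 0/1 transition matrix, and thresholding the linear update at 1 recovers the Boolean union over nondeterministic branches. The automaton, matrices, and function names are all assumptions for illustration.

```python
import numpy as np

# Hypothetical 3-state NFA over {a, b}: q0 --a--> {q0, q1}, q1 --b--> {q2}.
# T[s][i, j] = 1 iff state j is in delta(q_i, s).
T = {
    "a": np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 0, 0]]),
    "b": np.array([[0, 0, 0],
                   [0, 0, 1],
                   [0, 0, 0]]),
}
accepting = np.array([0, 0, 1])  # q2 is the only accepting state

def step(state: np.ndarray, symbol: str) -> np.ndarray:
    """One shared feedforward layer: a linear map followed by a threshold.

    state @ T[symbol] counts how many active states reach each target;
    thresholding at 1 turns the count back into a binary state set,
    realizing nondeterministic branching as a union.
    """
    return (state @ T[symbol] >= 1).astype(int)

def accepts(word: str) -> bool:
    state = np.array([1, 0, 0])      # start in q0
    for symbol in word:              # depth-unrolled: one layer per symbol,
        state = step(state, symbol)  # with parameters shared across depth
    return bool(state @ accepting)   # accept iff any active state is accepting

print(accepts("ab"))   # True:  q0 -a-> {q0, q1} -b-> {q2}
print(accepts("ba"))   # False: q0 has no b-transition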
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Surbhi_Goel1
Submission Number: 4988