Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory
By: Sahil Rajesh Dhayalkar
Potential Business Impact:
Neural networks can be built to follow exact, step-by-step rules, like a simple machine.
We present a complete theoretical and empirical framework establishing feedforward neural networks as universal finite-state machines (N-FSMs). Our results prove that finite-depth ReLU and threshold networks can exactly simulate deterministic finite automata (DFAs) by unrolling state transitions into depth-wise neural layers, with formal characterizations of required depth, width, and state compression. We demonstrate that DFA transitions are linearly separable, binary threshold activations allow exponential compression, and Myhill-Nerode equivalence classes can be embedded into continuous latent spaces while preserving separability. We also formalize the expressivity boundary: fixed-depth feedforward networks cannot recognize non-regular languages requiring unbounded memory. Unlike prior heuristic or probing-based studies, we provide constructive proofs and design explicit DFA-unrolled neural architectures that empirically validate every claim. Our results bridge deep learning, automata theory, and neural-symbolic computation, offering a rigorous blueprint for how discrete symbolic processes can be realized in continuous neural systems.
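To make the unrolling idea concrete, below is a minimal sketch, not the paper's exact construction: a toy DFA (accepting binary strings with an even number of 1s, chosen here only for brevity) is simulated by stacking one feedforward block per input symbol, with one-hot state encodings and binary threshold (Heaviside) activations. Each transition step is realized as an AND sublayer over (state, symbol) pairs followed by an OR sublayer routing each pair to its successor state; this is one standard threshold-network realization and is assumed here, as the paper's proofs may parameterize the layers differently.

```python
import numpy as np

# Hypothetical toy DFA over {0, 1} accepting strings with an even number of 1s.
num_states, num_symbols = 2, 2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # transition function
accepting = {0}

# Sublayer 1 ("AND"): one hidden unit per (state, symbol) pair; unit (q, a)
# fires only when state unit q and symbol unit a are both active.
W1 = np.zeros((num_states * num_symbols, num_states + num_symbols))
for q in range(num_states):
    for a in range(num_symbols):
        W1[q * num_symbols + a, q] = 1.0
        W1[q * num_symbols + a, num_states + a] = 1.0
b1 = -1.5 * np.ones(num_states * num_symbols)

# Sublayer 2 ("OR"): route each fired pair unit to its successor delta(q, a).
W2 = np.zeros((num_states, num_states * num_symbols))
for (q, a), q_next in delta.items():
    W2[q_next, q * num_symbols + a] = 1.0
b2 = -0.5 * np.ones(num_states)

heaviside = lambda z: (z > 0).astype(float)

def step(state, symbol):
    """One DFA transition as two linear + binary-threshold layers."""
    x = np.zeros(num_symbols)
    x[symbol] = 1.0
    hidden = heaviside(W1 @ np.concatenate([state, x]) + b1)
    return heaviside(W2 @ hidden + b2)          # next one-hot state

def accepts(string):
    """Unroll one transition block per symbol (network depth ~ |string|)."""
    state = np.eye(num_states)[0]               # start state q0
    for ch in string:
        state = step(state, int(ch))
    return int(np.argmax(state)) in accepting

print(accepts("1101"))   # False: odd number of 1s
print(accepts("1100"))   # True:  even number of 1s
```

Note how the depth of the unrolled network grows with the input length, matching the abstract's framing of state transitions as depth-wise layers, while the width scales with the number of states and alphabet symbols.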
Similar Papers
A Constructive Framework for Nondeterministic Automata via Time-Shared, Depth-Unrolled Feedforward Networks
Machine Learning (CS)
Computers learn to recognize patterns like a brain.
Symbolic Feedforward Networks for Probabilistic Finite Automata: Exact Simulation and Learnability
Machine Learning (CS)
Computers learn to predict patterns like a brain.