A Constructive Framework for Nondeterministic Automata via Time-Shared, Depth-Unrolled Feedforward Networks
By: Sahil Rajesh Dhayalkar
Potential Business Impact:
Computers learn to follow rules like a simple machine.
We present a formal and constructive simulation framework for nondeterministic finite automata (NFAs) using time-shared, depth-unrolled feedforward networks (TS-FFNs), i.e., acyclic unrolled computations with shared parameters that are functionally equivalent to unrolled recurrent or state-space models. Unlike prior approaches that rely on explicit recurrent architectures or post hoc extraction methods, our formulation symbolically encodes automaton states as binary vectors, transitions as sparse matrix transformations, and nondeterministic branching (including $\varepsilon$-closures) as compositions of shared thresholded updates. We prove that every regular language can be recognized exactly by such a shared-parameter unrolled feedforward network, with parameter count independent of input length. Our construction establishes a constructive equivalence between NFAs and neural networks and demonstrates \emph{empirical learnability}: these networks can be trained via gradient descent on supervised acceptance data to recover the target automaton's behavior. This learnability, formalized in Proposition 5.1, is the crux of this work. Extensive experiments validate the theoretical results, achieving perfect or near-perfect agreement on acceptance, state propagation, and closure dynamics. This work clarifies the correspondence between automata theory and modern neural architectures, showing that unrolled feedforward networks can perform precise, interpretable, and trainable symbolic computation.
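To make the encoding concrete, here is a minimal sketch in the spirit of the abstract: binary state vectors, 0/1 transition matrices, and a shared thresholded update applied once per unrolled step, with an iterated update standing in for $\varepsilon$-closure. The 3-state example NFA, the `step`/`eps_closure`/`accepts` helpers, and the use of NumPy are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Hypothetical 3-state NFA over {a, b} accepting strings containing "ab".
# States: q0 (start), q1, q2 (accepting). Entry T[s][i][j] = 1 iff qi -s-> qj.
T = {
    "a": np.array([[1, 1, 0],    # q0 -a-> {q0, q1}: nondeterministic branch
                   [0, 0, 0],
                   [0, 0, 1]]),  # q2 -a-> q2
    "b": np.array([[1, 0, 0],    # q0 -b-> q0
                   [0, 0, 1],    # q1 -b-> q2
                   [0, 0, 1]]),  # q2 -b-> q2
}
T_eps = np.zeros((3, 3), dtype=int)  # no epsilon moves in this toy example
start = np.array([1, 0, 0])          # binary indicator of start states
accept = np.array([0, 0, 1])         # binary indicator of accepting states

def step(state, M):
    # Shared thresholded update: state j becomes active iff some active
    # state i has a transition i -> j, i.e. threshold (state @ M) at 1.
    return (state @ M >= 1).astype(int)

def eps_closure(state):
    # Iterate the shared update to a fixpoint to absorb epsilon transitions.
    while True:
        nxt = np.maximum(state, step(state, T_eps))
        if np.array_equal(nxt, state):
            return state
        state = nxt

def accepts(word):
    state = eps_closure(start)
    for sym in word:                              # one shared "layer" per
        state = eps_closure(step(state, T[sym]))  # symbol; params stay fixed
    return bool(state @ accept >= 1)

assert accepts("aab") and not accepts("ba")
```

Because the same matrices and threshold are reused at every unrolled step, the parameter count is fixed by the automaton alone, independent of input length, which is the property the paper's construction relies on.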
Similar Papers
Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory
Machine Learning (CS)
Computers learn to follow rules like a simple machine.
Symbolic Feedforward Networks for Probabilistic Finite Automata: Exact Simulation and Learnability
Machine Learning (CS)
Computers learn to predict patterns like a brain.