Symbolic Feedforward Networks for Probabilistic Finite Automata: Exact Simulation and Learnability
By: Sahil Rajesh Dhayalkar
Potential Business Impact:
Computers learn to predict patterns like a brain.
We present a formal and constructive theory showing that probabilistic finite automata (PFAs) can be exactly simulated using symbolic feedforward neural networks. Our architecture represents state distributions as vectors and transitions as stochastic matrices, enabling probabilistic state propagation via matrix-vector products. This yields a parallel, interpretable, and differentiable simulation of PFA dynamics using soft updates, without recurrence. We formally characterize probabilistic subset construction, $\varepsilon$-closure, and exact simulation via layered symbolic computation, and prove equivalence between PFAs and specific classes of neural networks. We further show that these symbolic simulators are not only expressive but learnable: trained with standard gradient descent-based optimization on labeled sequence data, they recover the exact behavior of ground-truth PFAs. This learnability, formalized in Proposition 5.1, is the crux of this work. Our results unify probabilistic automata theory with neural architectures under a rigorous algebraic framework, bridging the gap between symbolic computation and deep learning.
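To make the simulation mechanism described above concrete, here is a minimal sketch in Python/NumPy, not taken from the paper: a state distribution vector is propagated through per-symbol row-stochastic transition matrices via matrix-vector products, and the acceptance probability is read off the accepting states. The toy alphabet, the specific matrices, and the helper name `acceptance_probability` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy 2-state PFA over alphabet {a, b}. Each symbol has a row-stochastic
# transition matrix T_s, where T_s[i, j] = P(next state j | state i, read s).
transitions = {
    "a": np.array([[0.9, 0.1],
                   [0.2, 0.8]]),
    "b": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
}
initial = np.array([1.0, 0.0])    # start in state 0 with probability 1
accepting = np.array([0.0, 1.0])  # state 1 is accepting

def acceptance_probability(word: str) -> float:
    """Propagate the state distribution one symbol at a time (a soft,
    parallel state update), then sum the mass on accepting states."""
    dist = initial
    for symbol in word:
        dist = dist @ transitions[symbol]  # matrix-vector product per symbol
    return float(dist @ accepting)

print(acceptance_probability("ab"))  # probability the PFA accepts "ab"
```

In the feedforward view sketched by the abstract, each input symbol selects one such stochastic matrix and the sequence of matrix-vector products unrolls into layers, so the whole computation is differentiable and requires no recurrent state.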
Similar Papers
Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory
Machine Learning (CS)
Computers learn to follow rules like a simple machine.
A Constructive Framework for Nondeterministic Automata via Time-Shared, Depth-Unrolled Feedforward Networks
Machine Learning (CS)
Computers learn to recognize patterns like a brain.