Inference of Deterministic Finite Automata via Q-Learning
By: Elaheh Hosseinkhani, Martin Leucker
Potential Business Impact:
Teaches computers to learn patterns from examples.
Traditional approaches to the inference of deterministic finite-state automata (DFA) stem from symbolic AI, including both active learning methods (e.g., Angluin's L* algorithm and its variants) and passive techniques (e.g., Biermann and Feldman's method, RPNI). Meanwhile, sub-symbolic AI, particularly machine learning, offers alternative paradigms for learning from data, such as supervised, unsupervised, and reinforcement learning (RL). This paper investigates the use of Q-learning, a well-known reinforcement learning algorithm, for the passive inference of deterministic finite automata. It builds on the core insight that the learned Q-function, which maps state-action pairs to expected rewards, can be reinterpreted as the transition function of a DFA over a finite domain. This provides a novel bridge between sub-symbolic learning and symbolic representations. The paper demonstrates how Q-learning can be adapted for automaton inference and provides an evaluation on several examples.
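The core reinterpretation can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a tabular Q-function over (state, symbol) pairs whose values have converged near integer state indices, so that rounding a Q-value yields the successor state; the Q-table entries and the target language (an even number of 'a's over {a, b}) are purely illustrative.

```python
# Illustrative sketch (not the paper's algorithm): reading a learned
# tabular Q-function over a finite domain as a DFA transition function.
# Assumed: Q-values have converged close to integer state indices.
Q = {
    (0, 'a'): 0.97,  # from state 0, symbol 'a' leads (approximately) to state 1
    (0, 'b'): 0.02,  # from state 0, symbol 'b' stays (approximately) at state 0
    (1, 'a'): 0.04,
    (1, 'b'): 1.01,
}

def delta(state, symbol):
    """Reinterpret the Q-function as the DFA transition function."""
    return round(Q[(state, symbol)])

def accepts(word, start=0, accepting=frozenset({0})):
    """Run the extracted DFA on a word and test acceptance."""
    state = start
    for symbol in word:
        state = delta(state, symbol)
    return state in accepting

for w in ["", "a", "aa", "ab", "abab"]:
    print(w, accepts(w))
```

Here the rounded Q-table realizes the DFA for "even number of 'a's": rounding recovers a symbolic transition table from sub-symbolic values, which is the bridge the paper exploits.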
Similar Papers
RLAF: Reinforcement Learning from Automaton Feedback
Machine Learning (CS)
Teaches computers to learn tasks with tricky rules.
Active Automata Learning with Advice
Formal Languages and Automata Theory
Teaches computers faster by giving them hints.
Active Learning of Symbolic Automata Over Rational Numbers
Machine Learning (CS)
Teaches computers to learn from numbers, not just letters.