Passive Learning of Lattice Automata from Recurrent Neural Networks
By: Jaouhar Slimi, Tristan Le Gall, Augustin Lemesle
Potential Business Impact:
Turns complex AI models into simple rules people can inspect and check.
We present a passive automata learning algorithm that can extract automata from recurrent networks with very large or even infinite alphabets. Our method combines overapproximations from the field of Abstract Interpretation with passive automata learning from the field of Grammatical Inference. We evaluate our algorithm by first comparing it with the state-of-the-art algorithm for extracting automata from Recurrent Neural Networks trained on Tomita grammars. Then, we extend these experiments to regular languages with infinite alphabets, which we propose as a novel benchmark.
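The abstract gives no implementation details, but the combination it describes can be pictured with a small sketch: over-approximate a large or infinite alphabet into finitely many abstract symbols, then passively learn an acceptor from traces labelled by the network. The Python below is an illustration under assumptions, not the paper's algorithm: the interval bucketing stands in for an Abstract Interpretation overapproximation, the labels stand in for an RNN's accept/reject decisions, and the learner stops at a prefix-tree acceptor instead of a full passive learner. All function names and thresholds are hypothetical.

```python
# Illustrative sketch only (not the paper's method): passive learning of a
# prefix-tree acceptor from labelled traces over real-valued symbols, after
# over-approximating the infinite alphabet with interval "buckets".
from collections import defaultdict

def abstract_symbol(x, thresholds=(0.0, 1.0)):
    """Map a real-valued symbol to a finite abstract symbol (an interval index)."""
    for i, t in enumerate(thresholds):
        if x < t:
            return i
    return len(thresholds)

def build_prefix_tree(labelled_traces):
    """Build a prefix-tree acceptor from (trace, accepted?) pairs."""
    transitions = defaultdict(dict)   # state -> {abstract symbol -> state}
    accepting = set()
    next_state = 1                    # state 0 is the initial state
    for trace, accepted in labelled_traces:
        state = 0
        for x in trace:
            a = abstract_symbol(x)
            if a not in transitions[state]:
                transitions[state][a] = next_state
                next_state += 1
            state = transitions[state][a]
        if accepted:
            accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, trace):
    """Run an abstracted trace through the learned acceptor."""
    state = 0
    for x in trace:
        a = abstract_symbol(x)
        if a not in transitions[state]:
            return False              # unseen behaviour: reject conservatively
        state = transitions[state][a]
    return state in accepting

if __name__ == "__main__":
    # Toy traces over real numbers, labelled as a trained RNN classifier might.
    sample = [
        ([0.5, 0.5], True),
        ([0.5, 1.5], False),
        ([1.5, 0.5], False),
        ([0.5], True),
    ]
    transitions, accepting = build_prefix_tree(sample)
    print(accepts(transitions, accepting, [0.7, 0.2]))   # True: same abstract word as [0.5, 0.5]
    print(accepts(transitions, accepting, [0.7, 2.0]))   # False
```

A real passive learner would typically go on to merge compatible states of the prefix tree (RPNI-style) rather than stop there, and the lattice automata of the paper label transitions with abstract lattice elements rather than fixed buckets; the sketch only conveys the general shape of abstracting the alphabet before learning from labelled traces.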
Similar Papers
Active Automata Learning with Advice
Formal Languages and Automata Theory
Teaches computers faster by giving them hints.
Extracting Robust Register Automata from Neural Networks over Data Sequences
Artificial Intelligence
Lets computers understand and check complex AI.
Compositional Active Learning of Synchronizing Systems through Automated Alphabet Refinement
Machine Learning (CS)
Learns how many parts work together automatically.