Neural Induction of Finite-State Transducers
By: Michael Ginn, Alexis Palmer, Mans Hulden
Potential Business Impact:
Teaches computers to rewrite words accurately using fast, transparent rules.
Finite-State Transducers (FSTs) are effective models for string-to-string rewriting tasks, often providing the efficiency needed for high-performance applications, but constructing transducers by hand is difficult. In this work, we propose a novel method for automatically constructing unweighted FSTs that follow the hidden-state geometry learned by a recurrent neural network. We evaluate our method on real-world datasets for morphological inflection, grapheme-to-phoneme prediction, and historical normalization, showing that the constructed FSTs are highly accurate and robust on many datasets, substantially outperforming classical transducer learning algorithms by up to 87% in accuracy on held-out test sets.
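Since the abstract only gestures at the construction, here is a minimal sketch of the general recipe it describes: run a recurrent network over the training strings, cluster the hidden states it visits, treat each cluster as an FST state, and read transitions off the clustered trajectories. Everything below is an illustrative assumption rather than the authors' implementation: the toy data, the untrained random RNN standing in for a trained one, the k-means clustering, and names like run_rnn and transduce are all hypothetical.

```python
import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy rewriting task (hypothetical data): a -> x, b -> y, character by
# character. A real use would train the RNN on the pairs; a fixed random
# RNN keeps the sketch short while still giving each prefix a state.
PAIRS = [("abba", "xyyx"), ("ab", "xy"), ("ba", "yx"), ("aabb", "xxyy")]
ALPHABET = sorted({c for s, _ in PAIRS for c in s})
C2I = {c: i for i, c in enumerate(ALPHABET)}

H = 16  # hidden size
Wx = rng.normal(scale=0.5, size=(H, len(ALPHABET)))
Wh = rng.normal(scale=0.5, size=(H, H))

def run_rnn(s):
    """Return the sequence of hidden states visited while reading s."""
    h = np.zeros(H)
    states = [h]
    for c in s:
        x = np.zeros(len(ALPHABET))
        x[C2I[c]] = 1.0
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return states

# 1. Collect (state, input, output, next state) tuples from all pairs.
records, all_states = [], []
for inp, out in PAIRS:
    hs = run_rnn(inp)
    all_states.extend(hs)
    for i, (ci, co) in enumerate(zip(inp, out)):
        records.append((hs[i], ci, co, hs[i + 1]))

# 2. Cluster the hidden states; each cluster becomes one FST state.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.array(all_states))

def state_of(h):
    return int(km.predict(h.reshape(1, -1))[0])

# 3. Read transitions off the clustered trajectories by majority vote:
#    (state, input char) -> most frequent (output char, next state).
votes = defaultdict(Counter)
for h_prev, ci, co, h_next in records:
    votes[(state_of(h_prev), ci)][(co, state_of(h_next))] += 1
fst = {key: cnt.most_common(1)[0][0] for key, cnt in votes.items()}

def transduce(s):
    """Apply the induced FST to an input string (unseen (state, char)
    pairs would need smoothing or a fallback in practice)."""
    q, out = state_of(np.zeros(H)), []
    for c in s:
        o, q = fst[(q, c)]
        out.append(o)
    return "".join(out)

print(transduce("abba"))  # 'xyyx' if the clusters keep contexts apart
```

Clustering is one simple way to discretize the continuous state space; the resulting transducer is deterministic by construction, since each (state, input) pair keeps only its majority transition.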
Similar Papers
Certified Symbolic Finite Transducers: Formalization and Applications to String Analysis
Formal Languages and Automata Theory
Proves that text-rewriting machines behave correctly, making string analysis more trustworthy.
Complete Compositional Syntax for Finite Transducers on Finite and Bi-Infinite Words
Logic in Computer Science
Gives a complete rulebook for building text-rewriting machines from smaller pieces.
Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory
Machine Learning (CS)
Computers learn to follow rules like a simple machine.