Safe Language Generation in the Limit
By: Antonios Anastasopoulos, Giuseppe Ateniese, Evgenios M. Kornaropoulos
Recent results in learning a language in the limit have shown that, although language identification is impossible in general, language generation is tractable. As this foundational area expands, we need to consider the implications of language generation in real-world settings. This work offers the first theoretical treatment of safe language generation. Building on the computational paradigm of learning in the limit, we formalize the tasks of safe language identification and safe language generation. We prove that under this model, safe language identification is impossible, and that safe language generation is at least as hard as (vanilla) language identification, which is also impossible. Finally, we discuss several intractable and tractable cases.
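To make the "generation in the limit" setting concrete, here is a minimal toy sketch (not from the paper): an adversary enumerates an unknown target language from a countable collection, and after each new example the generator must output a string it has not yet seen, with all outputs eventually landing inside the target. The collection used here, the languages L_k of positive multiples of k, is an illustrative assumption chosen because a simple gcd-based generator is provably correct for it.

```python
from functools import reduce
from math import gcd

# Toy sketch of generation in the limit (illustrative, not the paper's
# construction). Collection: L_k = { positive multiples of k }.
# The adversary enumerates an unknown target L_t; the generator must
# emit an unseen number, and eventually only members of L_t.

def generate(sample, seen_outputs):
    """Emit an unseen multiple of g = gcd(sample).

    Every consistent language L_k satisfies k | g, and in particular
    the target index t divides every example, so t | g. Hence any
    multiple of g is guaranteed to belong to the target L_t.
    """
    g = reduce(gcd, sample)
    m = g
    while m in seen_outputs or m in sample:
        m += g
    seen_outputs.add(m)
    return m

# Adversary: target L_3, enumerated in increasing order.
target_k = 3
sample, outputs = [], set()
for i in range(1, 6):               # adversary reveals 3, 6, 9, 12, 15
    sample.append(target_k * i)
    out = generate(sample, outputs)
    assert out % target_k == 0      # every output lies in the target
```

Note the contrast the abstract draws: the generator above succeeds without ever having to name the target index, which is exactly why generation can be tractable where identification is not.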