Studies with impossible languages falsify LMs as models of human language
By: Jeffrey S. Bowers, Jeff Mitchell
Potential Business Impact:
Computers do not learn languages the way babies do.
According to Futrell and Mahowald [arXiv:2501.17047], both infants and language models (LMs) find attested languages easier to learn than impossible languages with unnatural structures. We review the literature and show that LMs often learn attested languages and many impossible languages equally well. The impossible languages that are difficult to learn are simply more complex (or more random). LMs lack the human inductive biases that support language acquisition.
Similar Papers
Language models as tools for investigating the distinction between possible and impossible natural languages
Computation and Language
Teaches computers to tell real languages from fake ones.
Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible
Computation and Language
Computers don't know what languages are impossible.
Language Models Model Language
Computation and Language
Makes AI understand language by counting word use.