Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible
By: Imry Ziv, Nur Lan, Emmanuel Chemla, and more
Potential Business Impact:
Computers can't tell which languages are impossible for humans.
Are large language models (LLMs) sensitive to the distinction between humanly possible languages and humanly impossible languages? This question is taken by many to bear on whether LLMs and humans share the same innate learning biases. Previous work has attempted to answer it in the affirmative by comparing LLM learning curves on existing language datasets and on "impossible" datasets derived from them via various perturbation functions. Using the same methodology, we examine this claim on a wider set of languages and impossible perturbations. We find that in most cases, GPT-2 learns each language and its impossible counterpart equally easily, in contrast to previous claims. We also apply a more lenient condition by testing whether GPT-2 provides any kind of separation between the whole set of natural languages and the whole set of impossible languages. By considering cross-linguistic variance in various metrics computed on the perplexity curves, we show that GPT-2 provides no systematic separation between the possible and the impossible. Taken together, these perspectives show that LLMs do not share the human innate biases that shape linguistic typology.
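As a rough illustration of the kind of comparison the abstract describes (this is not the authors' pipeline; the model checkpoint, the perturbation, and the example sentence below are assumptions made for illustration), one can apply a simple "impossible" perturbation such as deterministic word-order reversal to a sentence and compare GPT-2 perplexity on the original and perturbed versions using the Hugging Face transformers library:

```python
# Minimal sketch (not the paper's code): compare GPT-2 perplexity on a natural
# sentence vs. an "impossible" counterpart produced by a toy perturbation
# (full word reversal). Model name and perturbation are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the pretrained GPT-2 checkpoint."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def reverse_perturbation(text: str) -> str:
    """Toy 'impossible language' perturbation: reverse the word order."""
    return " ".join(reversed(text.split()))

natural = "The cat that chased the mouse is sleeping on the sofa."
impossible = reverse_perturbation(natural)

print(f"natural:    {perplexity(natural):.2f}")
print(f"impossible: {perplexity(impossible):.2f}")
```

Note that in the paper's setting the comparison is over full learning curves of models trained from scratch on natural versus perturbed corpora, not over a single pretrained checkpoint; the sketch only illustrates the perturbation step and the perplexity computation that such curves are built from.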
Similar Papers
Studies with impossible languages falsify LMs as models of human language
Computation and Language
Computers learn languages like babies do.
Language models as tools for investigating the distinction between possible and impossible natural languages
Computation and Language
Teaches computers to tell real languages from fake ones.
No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models
Computation and Language
Finds and fixes unfairness in AI language.