Large language models are not about language
By: Johan J. Bolhuis, Andrea Moro, Stephen Crain, and more
Potential Business Impact:
Argues that language models reveal little about how human minds acquire language.
Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.
Similar Papers
Large language models have learned to use language
Computation and Language
Computers now understand language like people do.
Large Language Models Do Not Simulate Human Psychology
Artificial Intelligence
Computers do not behave like people in psychology studies.
What do language models model? Transformers, automata, and the format of thought
Computation and Language
Computers learn language like a machine, not a brain.