Language models as tools for investigating the distinction between possible and impossible natural languages
By: Julie Kallini, Christopher Potts
Potential Business Impact:
Helps test whether computers can tell humanly possible languages from impossible ones.
We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages, and thus for uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, with each refinement supporting linking hypotheses that connect LM behavior to human cognition.
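As a concrete illustration of the kind of possible-vs-impossible comparison the abstract describes, the sketch below scores a normal English sentence against an "impossible" counterpart produced by deterministically reversing word order, using average per-token surprisal under a causal LM. This is a minimal sketch, not the paper's method: the Hugging Face transformers library, the gpt2 checkpoint, the reversal transform, and the example sentence are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): compare a causal LM's average
# per-token surprisal on an attested English sentence vs. an "impossible"
# counterpart formed by reversing word order.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed checkpoint; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_surprisal(text: str) -> float:
    """Average negative log-likelihood (nats per token) under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids returns the mean cross-entropy over tokens.
        loss = model(ids, labels=ids).loss
    return loss.item()

possible = "The students who arrived late missed the lecture."
# "Impossible" variant: a rule-based transform (word-order reversal).
impossible = " ".join(possible.rstrip(".").split()[::-1]) + "."

print(f"possible:   {mean_surprisal(possible):.2f} nats/token")
print(f"impossible: {mean_surprisal(impossible):.2f} nats/token")
```

A higher surprisal on the reversed sentence would show the LM treating the impossible variant as less language-like; the research program the abstract outlines asks how architectural choices shape such differences.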
Similar Papers
Studies with impossible languages falsify LMs as models of human language
Computation and Language
Argues computers do not learn languages the way humans do.
Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible
Computation and Language
Computers don't know what languages are impossible.
Investigating Language Model Capabilities to Represent and Process Formal Knowledge: A Preliminary Study to Assist Ontology Engineering
Artificial Intelligence
Helps small computers reason better with logic.