Comparing human and language model sentence processing difficulties on complex structures
By: Samuel Joseph Amouyal, Aya Meltzer-Asscher, Jonathan Berant
Potential Business Impact:
Computers understand sentences like people do.
Large language models (LLMs) that fluently converse with humans are a reality, but do LLMs experience human-like processing difficulties? We systematically compare human and LLM sentence comprehension across seven challenging linguistic structures. Within a unified experimental framework, we collect sentence comprehension data from humans and from five families of state-of-the-art LLMs varying in size and training procedure. Our results show that LLMs struggle on the target structures overall, and especially on garden path (GP) sentences. Indeed, while the strongest models achieve near-perfect accuracy on non-GP structures (93.7% for GPT-5), they struggle on GP structures (46.8% for GPT-5). Additionally, when ranking structures by average performance, the rank correlation between humans and models increases with parameter count. For each target structure, we also collect data for a matched baseline sentence without the difficult structure. Comparing performance on the target vs. baseline sentences, the performance gap observed in humans also holds for LLMs, with two exceptions: for models that are too weak, performance is uniformly low across both sentence types; for models that are too strong, performance is uniformly high. Together, these results reveal both convergence and divergence in human and LLM sentence comprehension, offering new insights into the similarity of humans and LLMs.
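The structure-ranking comparison described above can be sketched with a Spearman rank correlation between human and model per-structure accuracies. This is a minimal illustration, not the paper's code: all accuracy numbers below are invented, and the authors' actual analysis may use a different correlation measure or tie-handling scheme.

```python
# Sketch: rank correlation between human and model accuracy rankings
# over sentence structures. All numbers are hypothetical.

def rank(values):
    """Assign ranks (1 = highest value); ties receive the mean rank."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group consecutive equal values so ties share a mean rank.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho, computed as the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-structure accuracies (7 structures, as in the study).
human = [0.91, 0.85, 0.78, 0.74, 0.66, 0.60, 0.42]
model = [0.95, 0.80, 0.83, 0.70, 0.71, 0.55, 0.47]
print(round(spearman(human, model), 3))  # → 0.929
```

A rho near 1 would mean the model finds hard the same structures humans find hard; the abstract's finding is that this agreement grows with model size.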
Similar Papers
A suite of LMs comprehend puzzle statements as well as humans
Computation and Language
Computers understand puzzle sentences as well as people do.
Does a Large Language Model Really Speak in Human-Like Language?
Computation and Language
Computers that write like people still sound fake.
Grammaticality Judgments in Humans and Language Models: Revisiting Generative Grammar with LLMs
Computation and Language
Computers learn grammar rules from reading text.