Large Language Models Understanding: an Inherent Ambiguity Barrier
By: Daniel N. Nissani
Potential Business Impact:
Computers can't truly understand what they say.
A lively debate has been ongoing since the extraordinary emergence of Large Language Models (LLMs) regarding their capability to understand the world and capture the meaning of the dialogues in which they are involved. Arguments and counter-arguments have been proposed based upon thought experiments, anecdotal conversations between LLMs and humans, statistical linguistic analysis, philosophical considerations, and more. In this brief paper we present a counter-argument, based upon a thought experiment and semi-formal considerations, leading to an inherent ambiguity barrier which prevents LLMs from having any understanding of what their amazingly fluent dialogues mean.
Similar Papers
An Empirical Study of the Role of Incompleteness and Ambiguity in Interactions with Large Language Models
Computation and Language
Helps computers ask better questions to get answers.
Disambiguation in Conversational Question Answering in the Era of LLMs and Agents: A Survey
Computation and Language
Helps computers understand confusing words better.
Linguistic Blind Spots of Large Language Models
Computation and Language
AI struggles to understand parts of sentences.