Large Language Models Understanding: an Inherent Ambiguity Barrier

Published: May 1, 2025 | arXiv ID: 2505.00654v3

By: Daniel N. Nissani

Potential Business Impact:

Computers can't truly understand what they say.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Since the extraordinary emergence of Large Language Models (LLMs), a lively debate has been taking place regarding their capability to understand the world and capture the meaning of the dialogues in which they are involved. Arguments and counter-arguments have been proposed based on thought experiments, anecdotal conversations between LLMs and humans, statistical linguistic analysis, philosophical considerations, and more. In this brief paper we present a counter-argument based on a thought experiment and semi-formal considerations, leading to an inherent ambiguity barrier which prevents LLMs from having any understanding of what their amazingly fluent dialogues mean.

Country of Origin
🇮🇱 Israel

Page Count
8 pages

Category
Computer Science:
Computation and Language