A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem

Published: December 9, 2025 | arXiv ID: 2512.09117v1

By: Luciano Floridi, Yiyang Jia, Fernando Tohmé

Potential Business Impact:

AI doesn't truly understand language; it convincingly mimics understanding without grounding symbols in meaning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, in order to argue that LLMs do not solve the symbol grounding problem but circumvent it.
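
The abstract's framing of "truth-evaluated propositions about a state space of possible worlds W" admits a compact set-theoretic reading: a proposition can be modelled as a map from W to truth values, equivalently a subset of W. The sketch below illustrates that reading only; the worlds, the example propositions, and the ground() helper are hypothetical stand-ins for exposition, not constructions taken from the paper.

```python
# A minimal sketch of "truth-evaluated propositions over a state space
# of possible worlds W", as framed in the abstract. Everything below
# (the worlds, the sample sentence, the ground() helper) is a
# hypothetical illustration, not the paper's construction.

from typing import Callable, FrozenSet

World = FrozenSet[str]                  # a world = the set of atomic facts true in it
Proposition = Callable[[World], bool]   # a proposition maps each world to a truth value

# A toy state space W of three possible worlds.
W = [
    frozenset({"raining", "cold"}),
    frozenset({"sunny", "cold"}),
    frozenset({"sunny", "warm"}),
]

def ground(sentence: str) -> Proposition:
    """Hypothetical grounding map: linguistic content -> proposition.

    On the paper's account, humans realise such a map via grounded
    meaning, whereas LLMs circumvent this step entirely.
    """
    atom = sentence.lower().strip(".")
    return lambda w: atom in w

# Evaluating a proposition at every world yields its extension,
# i.e. the subset of W where it holds.
p = ground("cold")
extension = [w for w in W if p(w)]
print(f"'cold' is true in {len(extension)} of {len(W)} worlds")
```

On this reading, grounding is precisely the map from linguistic content into such world-indexed truth conditions; the paper's claim is that LLMs produce fluent content without ever constructing that map.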

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Computer Science:
Artificial Intelligence