A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
By: Luciano Floridi, Yiyang Jia, Fernando Tohmé
Potential Business Impact:
AI doesn't truly understand; it just tricks us into thinking it does.
This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, in order to argue that LLMs do not solve but circumvent the symbol grounding problem.
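As a minimal illustrative sketch (the notation below is mine, not the paper's): write C for a collection of content items, and suppose a grounding map sends each item to a proposition over the state space of possible worlds, which a valuation then truth-evaluates at each world:

\[
g : C \to \mathrm{Prop}(W), \qquad v : \mathrm{Prop}(W) \times W \to \{\top, \bot\},
\]

so that a content item c is assessed at a world w via v(g(c), w). On this reading, the grounding question is whether g is anchored in W itself, as the paper holds for human agents, or only in statistical regularities over other content, which is the sense in which LLMs are said to circumvent rather than solve the problem.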
Similar Papers
Evaluating Large Language Models on the Frame and Symbol Grounding Problems: A Zero-shot Benchmark
Artificial Intelligence
Tests whether computers can handle tricky thinking problems.
Cognitive Foundations for Reasoning and Their Manifestation in LLMs
Artificial Intelligence
Examines whether computers reason the way people do.
An Expert-grounded benchmark of General Purpose LLMs in LCA
Computation and Language
AI can help with eco-friendly product checks.