Evaluating Large Language Models on the Frame and Symbol Grounding Problems: A Zero-shot Benchmark
By: Shoko Oka
Potential Business Impact:
Tests whether today's AI models can handle tricky thinking problems.
Recent advances in large language models (LLMs) have revitalized philosophical debates surrounding artificial intelligence. Two of the most fundamental challenges, namely the Frame Problem and the Symbol Grounding Problem, have historically been viewed as unsolvable within traditional symbolic AI systems. This study investigates whether modern LLMs possess the cognitive capacities required to address these problems. To do so, I designed two benchmark tasks reflecting the philosophical core of each problem, administered them under zero-shot conditions to 13 prominent LLMs (both closed-source and open-source), and assessed the quality of the models' outputs across five trials each. Responses were scored along multiple criteria, including contextual reasoning, semantic coherence, and information filtering. The results show that while open-source models varied in performance due to differences in model size, quantization, and instruction tuning, several closed-source models consistently achieved high scores. These findings suggest that select modern LLMs may be acquiring capacities sufficient to produce meaningful and stable responses to these long-standing theoretical challenges.
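The evaluation protocol described in the abstract (zero-shot prompting, five trials per model, rubric scoring along criteria such as contextual reasoning, semantic coherence, and information filtering) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual harness: the model names, the benchmark prompts, and the `query_model` and `score_response` stubs are hypothetical placeholders standing in for the real API calls and rubric scoring.

```python
"""Minimal sketch of a zero-shot benchmark loop (hypothetical, not the paper's code).

Assumptions: query_model stands in for whatever zero-shot API call each LLM
exposes, and score_response stands in for the rubric scoring described in the
abstract (contextual reasoning, semantic coherence, information filtering).
"""
from statistics import mean

CRITERIA = ["contextual_reasoning", "semantic_coherence", "information_filtering"]
TRIALS = 5  # five trials per model, as stated in the abstract

# Hypothetical prompts reflecting the philosophical core of each problem.
TASKS = {
    "frame_problem": "A robot must retrieve a battery from a room that also contains a bomb ...",
    "symbol_grounding": "Explain what the word 'red' refers to without defining it only in terms of other symbols ...",
}

MODELS = ["closed-model-a", "closed-model-b", "open-model-7b"]  # placeholder names


def query_model(model: str, prompt: str) -> str:
    """Placeholder for a zero-shot call to `model` with `prompt` (no examples, no chat history)."""
    return f"[{model}] response to: {prompt[:40]}..."


def score_response(response: str) -> dict[str, int]:
    """Placeholder for rubric scoring of one response, e.g. 0-5 per criterion."""
    return {criterion: 3 for criterion in CRITERIA}  # dummy scores


def evaluate() -> dict[tuple[str, str], dict[str, float]]:
    """Run every model on every task for TRIALS trials and average the per-criterion scores."""
    results = {}
    for model in MODELS:
        for task_name, prompt in TASKS.items():
            trial_scores = [score_response(query_model(model, prompt)) for _ in range(TRIALS)]
            results[(model, task_name)] = {
                criterion: mean(trial[criterion] for trial in trial_scores)
                for criterion in CRITERIA
            }
    return results


if __name__ == "__main__":
    for (model, task), scores in evaluate().items():
        print(model, task, scores)
```

Averaging over five trials per model-task pair, as sketched above, is what allows the study to speak about the stability of a model's responses rather than a single lucky completion.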
Similar Papers
A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
Artificial Intelligence
AI doesn't truly understand, it just tricks us.
A Benchmark for Zero-Shot Belief Inference in Large Language Models
Computation and Language
Helps computers understand what people believe.
Reasoning Capabilities and Invariability of Large Language Models
Computation and Language
Tests if computers can think logically.