Simulated Reasoning is Reasoning
By: Hendrik Kempt, Alon Lavie
Potential Business Impact:
Computers learn to solve problems by "thinking aloud."
Reasoning has long been understood as a pathway between stages of understanding: proper reasoning leads to understanding of a given subject, and this process was traditionally conceptualized in a particular way, namely as "symbolic reasoning". Foundation Models (FMs) demonstrate that symbolic processing is not a necessary condition for many reasoning tasks: they can "reason" by imitating the process of "thinking out loud", testing the produced pathways, and iterating on them on their own. The result is a form of reasoning that can solve problems on its own or with few-shot learning, yet appears fundamentally different from human reasoning: its lack of grounding and common sense makes the reasoning process brittle. These insights promise to substantially alter our assessment of reasoning and its necessary conditions, but they also inform approaches to safety and robust defences against this brittleness of FMs. This paper offers and discusses several philosophical interpretations of this phenomenon, argues that the previously apt metaphor of the "stochastic parrot" has lost its relevance and should thus be abandoned, and reflects on the normative elements of the safety and appropriateness considerations emerging from these reasoning models and their growing capacity.
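As a minimal sketch of the "thinking out loud, then testing the produced pathways" loop the abstract describes, the following Python toy implements sample-and-vote over reasoning chains, in the style of self-consistency decoding for chain-of-thought prompting. The sample_chain stub, the toy arithmetic question, and the "Answer:" convention are illustrative assumptions of ours, not anything specified in the paper; a real system would replace the stub with a model API call.

import random
from collections import Counter

def sample_chain(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical stand-in for a foundation-model call: given a prompt,
    # return one sampled chain of thought ending in "Answer: <value>".
    # The temperature parameter would control sampling diversity in a
    # real model call; here the stub just draws from canned pathways
    # for the toy question "What is 17 + 25?".
    paths = [
        "17 + 25: 17 + 20 = 37, 37 + 5 = 42. Answer: 42",
        "17 + 25: 10 + 20 = 30, 7 + 5 = 12, 30 + 12 = 42. Answer: 42",
        "17 + 25: 17 + 25 = 41. Answer: 41",  # an occasional brittle pathway
    ]
    return random.choices(paths, weights=[0.45, 0.45, 0.10])[0]

def extract_answer(chain: str) -> str:
    # "Testing" a produced pathway here means keeping only the final
    # committed answer so pathways can be compared against each other.
    return chain.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(prompt: str, n_samples: int = 9) -> str:
    # "Think out loud" several times, then vote across the pathways.
    answers = [extract_answer(sample_chain(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(self_consistent_answer("What is 17 + 25? Think step by step."))

Majority voting over sampled pathways damps, but does not remove, the brittleness the abstract points to: errors that are correlated across chains survive the vote, which is one reason the lack of grounding and common sense remains a distinct concern.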