On the Notion that Language Models Reason

Published: November 14, 2025 | arXiv ID: 2511.11810v1

By: Bertram Højer

Potential Business Impact:

Computers learn by copying patterns, not by thinking.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Language models (LMs) are said to exhibit reasoning, but what does this entail? We assess definitions of reasoning and how key papers in the field of natural language processing (NLP) use the notion, and argue that the definitions provided are not consistent with how LMs are trained, process information, and generate new tokens. To illustrate this incommensurability, we adopt the view that transformer-based LMs implement an *implicit* finite-order Markov kernel mapping contexts to conditional token distributions. In this view, reasoning-like outputs correspond to statistical regularities and approximate statistical invariances in the learned kernel rather than to the implementation of explicit logical mechanisms. This view illustrates the claim that LMs are "statistical pattern matchers" and not genuine reasoners, and provides a perspective that clarifies why reasoning-like outputs arise in LMs without any guarantee of logical consistency. This distinction is fundamental to how epistemic uncertainty is evaluated in LMs. We invite a discussion on the importance of how the computational processes of the systems we build and analyze in NLP research are described.
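The abstract's core device is a finite-order Markov kernel: a map from a fixed-length context to a conditional distribution over next tokens. As a rough intuition for that object (not the paper's method; the function name, toy corpus, and order-1 setting below are illustrative assumptions), one can estimate such a kernel from token counts:

```python
from collections import Counter, defaultdict

def fit_markov_kernel(tokens, order=2):
    """Estimate a finite-order Markov kernel from a token sequence:
    for each context of length `order`, the empirical conditional
    distribution P(next token | context)."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        counts[context][tokens[i + order]] += 1
    # Normalize counts into conditional token distributions.
    return {
        ctx: {tok: c / sum(nxt.values()) for tok, c in nxt.items()}
        for ctx, nxt in counts.items()
    }

# Toy whitespace-tokenized corpus (illustrative only).
tokens = "all men are mortal socrates is a man socrates is mortal".split()
kernel = fit_markov_kernel(tokens, order=1)

# The kernel encodes surface regularities, not logical rules: after
# "socrates" the corpus always continues with "is", so the learned
# distribution puts all mass there.
print(kernel[("socrates",)])  # {'is': 1.0}
print(kernel[("is",)])        # {'a': 0.5, 'mortal': 0.5}
```

The point the abstract makes carries over: outputs that look like syllogistic reasoning ("socrates is mortal") can arise purely from such conditional regularities, with no guarantee of logical consistency.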

Country of Origin
🇩🇰 Denmark

Page Count
9 pages

Category
Computer Science:
Computation and Language