Score: 2

Language Models Do Not Follow Occam's Razor: A Benchmark for Inductive and Abductive Reasoning

Published: September 3, 2025 | arXiv ID: 2509.03345v1

By: Yunxin Sun, Abulhair Saparov

Potential Business Impact:

Helps computers guess better with less information.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reasoning is a core capability in artificial intelligence systems, for which large language models (LLMs) have recently shown remarkable progress. However, most work focuses exclusively on deductive reasoning, even though other types of reasoning are also essential to solving real-world problems and remain far less explored. This work focuses on evaluating LLMs' inductive and abductive reasoning capabilities. We introduce a programmable and synthetic dataset, InAbHyD (pronounced in-a-bid), where each reasoning example consists of an incomplete world model and a set of observations. To solve a reasoning example, the intelligent agent must produce hypotheses that explain the observations under the incomplete world model. We propose a new metric to evaluate the quality of hypotheses based on Occam's Razor. We evaluate and analyze some state-of-the-art LLMs. Our analysis shows that LLMs can perform inductive and abductive reasoning in simple scenarios, but struggle with complex world models and producing high-quality hypotheses, even with popular reasoning-enhancing techniques such as in-context learning and reinforcement learning with verifiable rewards (RLVR).
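To make the task format concrete, here is a toy sketch (not the paper's actual dataset or metric) of the abductive setup the abstract describes: a world model as a set of implication rules, a set of observations, and candidate hypothesis sets scored so that any set failing to explain all observations is rejected, while among valid sets the smaller one is preferred, a simple Occam's Razor proxy. All names and the scoring formula here are illustrative assumptions.

```python
def derive(facts, rules):
    """Forward-chain over (premise, conclusion) rules until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def score(rules, hypotheses, observations):
    """Return None if some observation is unexplained; otherwise a score
    that prefers fewer hypotheses (a crude Occam's Razor proxy)."""
    derived = derive(hypotheses, rules)
    if not all(obs in derived for obs in observations):
        return None
    return 1.0 / (1 + len(hypotheses))

# Illustrative world model: penguins are birds, and birds have feathers.
rules = [("is_penguin", "is_bird"), ("is_bird", "has_feathers")]
obs = ["is_bird", "has_feathers"]

print(score(rules, ["is_penguin"], obs))               # one hypothesis explains both observations
print(score(rules, ["is_bird", "has_feathers"], obs))  # a redundant pair scores lower
```

Under this sketch, the single hypothesis `is_penguin` explains both observations and outscores the two-element set that merely restates them, mirroring the abstract's point that hypothesis *quality*, not just coverage, is what the benchmark measures.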

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Artificial Intelligence