Score: 1

Rationality Check! Benchmarking the Rationality of Large Language Models

Published: September 18, 2025 | arXiv ID: 2509.14546v1

By: Zhilun Zhou, Jing Yi Wang, Nicholas Sukiennik, and more

Potential Business Impact:

Tests whether AI models think and act as rationally as people.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs), a recent advance in deep learning and machine intelligence, have demonstrated astonishing capabilities and are now considered among the most promising technologies for artificial general intelligence. With these human-like capabilities, LLMs have been used to simulate humans and to serve as AI assistants across many applications. As a result, great concern has arisen about whether, and under what circumstances, LLMs think and behave like real human agents. Rationality is among the most important concepts for assessing human behavior, both in thinking (i.e., theoretical rationality) and in taking action (i.e., practical rationality). In this work, we propose the first benchmark for evaluating the omnibus rationality of LLMs, covering a wide range of domains and models. The benchmark includes an easy-to-use toolkit, extensive experimental results, and analysis that illuminates where LLMs converge with and diverge from idealized human rationality. We believe the benchmark can serve as a foundational tool for both developers and users of LLMs.
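
The paper's own toolkit is not reproduced here; as a rough illustration of what a practical-rationality probe might look like, below is a minimal, self-contained Python sketch that checks whether a model's stated pairwise preferences are transitive (if it prefers A over B and B over C, it should prefer A over C). Everything in it is hypothetical: ask_model is a stub standing in for any LLM API call, and the scoring scheme is not the authors' method.

# Hypothetical sketch: counting transitivity violations in a model's
# stated preferences, a classic signature of practical irrationality.
# ask_model is a stub; replace it with a real LLM API call.
from itertools import combinations, permutations

def ask_model(option_a: str, option_b: str) -> str:
    """Stub for an LLM query; returns the option the model prefers.
    The toy policy below (lexicographic preference) just makes the
    example runnable standalone."""
    return min(option_a, option_b)

def preference_graph(options: list[str]) -> dict[tuple[str, str], str]:
    """Ask about every unordered pair of options and record the winner."""
    return {(a, b): ask_model(a, b) for a, b in combinations(options, 2)}

def transitivity_violations(options: list[str]) -> int:
    """Count ordered triples (a, b, c) where a beats b and b beats c,
    yet c beats a."""
    prefs = preference_graph(options)

    def beats(x: str, y: str) -> bool:
        # Pairs are stored in one orientation only, so check both.
        return prefs.get((x, y), prefs.get((y, x))) == x

    return sum(
        1
        for a, b, c in permutations(options, 3)
        if beats(a, b) and beats(b, c) and beats(c, a)
    )

if __name__ == "__main__":
    gambles = ["$50 for sure", "$100 at 60%", "$220 at 25%"]
    print("intransitive triples:", transitivity_violations(gambles))

With the deterministic stub, the count is zero; wired to a real model, a nonzero count would flag one narrow kind of divergence from idealized rationality, which a full benchmark like the paper's would aggregate across many such tests and domains.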

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
29 pages

Category
Computer Science:
Artificial Intelligence