Large Language Models are Near-Optimal Decision-Makers with a Non-Human Learning Behavior

Published: June 19, 2025 | arXiv ID: 2506.16163v1

By: Hao Li, Gengrui Zhang, Petter Holme, and more

Potential Business Impact:

Leading LLMs made near-optimal decisions and often outperformed human participants on controlled experimental tasks, though their learning behavior differs fundamentally from humans'.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Human decision-making lies at the foundation of our society and civilization, yet we are on the verge of a future where much of it will be delegated to artificial intelligence. The arrival of Large Language Models (LLMs) has transformed the nature and scope of AI-supported decision-making; however, the process by which they learn to make decisions, compared to humans, remains poorly understood. In this study, we examined the decision-making behavior of five leading LLMs across three core dimensions of real-world decision-making: uncertainty, risk, and set-shifting. Using three well-established experimental psychology tasks designed to probe these dimensions, we benchmarked the LLMs against 360 newly recruited human participants. Across all tasks, the LLMs often outperformed humans, approaching near-optimal performance, yet the processes underlying their decisions diverged fundamentally from those of humans. On the one hand, our findings demonstrate the ability of LLMs to manage uncertainty, calibrate risk, and adapt to change. On the other hand, this disparity highlights the risks of relying on them as substitutes for human judgment and calls for further inquiry.
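The paper's exact tasks and prompts are not given in this summary, but a minimal sketch can illustrate the general shape of such a benchmark: present a trial-by-trial choice task, record the agent's choices, and compare its reward rate to the optimal policy. Everything below (the two-armed bandit as the uncertainty task, the `query_llm` placeholder, the win-stay/lose-shift heuristic standing in for a real model call) is an illustrative assumption, not the authors' method.

```python
import random

# Hidden Bernoulli reward probabilities for a two-armed bandit, a standard
# probe of decision-making under uncertainty (assumed here for illustration).
ARM_PROBS = {"A": 0.7, "B": 0.3}

def query_llm(history: list[str]) -> str:
    """Placeholder for a real LLM call. An actual benchmark would format the
    trial history as a prompt, send it to the model, and parse the chosen arm
    from the reply. A simple win-stay/lose-shift heuristic stands in so the
    sketch runs without any API."""
    if not history:
        return random.choice(list(ARM_PROBS))
    last_arm, last_outcome = history[-1].split(":")
    if last_outcome == "win":
        return last_arm          # win-stay
    return "B" if last_arm == "A" else "A"  # lose-shift

def run_bandit(n_trials: int = 100) -> float:
    """Run the task and return the observed reward rate."""
    history: list[str] = []
    wins = 0
    for _ in range(n_trials):
        arm = query_llm(history)
        win = random.random() < ARM_PROBS[arm]
        wins += win
        history.append(f"{arm}:{'win' if win else 'loss'}")
    return wins / n_trials

if __name__ == "__main__":
    # Near-optimal behavior would approach max(ARM_PROBS.values()) = 0.70.
    print(f"observed reward rate: {run_bandit():.2f} (optimal: 0.70)")
```

Comparing the observed reward rate against the optimal arm's payoff is one simple way to quantify "near-optimal" performance; the study additionally contrasts the learning process itself, not just the final score.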

Country of Origin
🇺🇸 United States, 🇫🇮 Finland, 🇨🇳 China

Page Count
41 pages

Category
Computer Science:
Artificial Intelligence