Rationality Check! Benchmarking the Rationality of Large Language Models
By: Zhilun Zhou, Jing Yi Wang, Nicholas Sukiennik, and more
Potential Business Impact:
Tests if AI thinks and acts like people.
Large language models (LLMs), a recent advance in deep learning and machine intelligence, have demonstrated astonishing capabilities and are now considered among the most promising paths toward artificial general intelligence. Given these human-like capabilities, LLMs have been used to simulate humans and to serve as AI assistants across many applications. As a result, great concern has arisen about whether, and under what circumstances, LLMs think and behave like real human agents. Rationality is among the most important concepts for assessing human behavior, both in thinking (i.e., theoretical rationality) and in taking action (i.e., practical rationality). In this work, we propose the first benchmark for evaluating the omnibus rationality of LLMs, covering a wide range of domains and models. The benchmark includes an easy-to-use toolkit, extensive experimental results, and analysis that illuminates where LLMs align with, and where they diverge from, idealized human rationality. We believe the benchmark can serve as a foundational tool for both developers and users of LLMs.
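The paper's own toolkit and test items are not reproduced on this page; purely as a hedged illustration, the Python sketch below shows what scoring an LLM on rationality probes of this kind might look like: one theoretical-rationality item (the classic conjunction fallacy) and one practical-rationality item (an expected-value choice), each with a normative answer. Every name here, including the `query_llm` stub, is a hypothetical stand-in, not the benchmark's actual API.

```python
# Minimal sketch (NOT the authors' toolkit) of scoring an LLM against
# normative answers on rationality probes. `query_llm` is a hypothetical
# placeholder; replace it with a real model client.

PROBES = [
    {
        # Theoretical rationality: the conjunction fallacy.
        "prompt": (
            "Linda is 31, outspoken, and was a philosophy major active in "
            "social-justice causes. Which is more probable? "
            "(A) Linda is a bank teller. "
            "(B) Linda is a bank teller and a feminist. Answer A or B."
        ),
        "rational_answer": "A",  # P(A) >= P(A and B) for any events A, B
    },
    {
        # Practical rationality: pick the higher expected value.
        "prompt": (
            "Choose one: (A) a guaranteed $50, or "
            "(B) a 50% chance of $120, otherwise $0. Answer A or B."
        ),
        "rational_answer": "B",  # EV(B) = 0.5 * $120 = $60 > $50
    },
]


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in; a dummy reply so the sketch runs end to end."""
    return "A"


def rationality_score(probes) -> float:
    """Fraction of probes where the model's first A/B choice is the normative one."""
    hits = 0
    for probe in probes:
        reply = query_llm(probe["prompt"]).strip().upper()
        choice = next((ch for ch in reply if ch in "AB"), "")
        hits += choice == probe["rational_answer"]
    return hits / len(probes)


if __name__ == "__main__":
    print(f"rationality score: {rationality_score(PROBES):.2f}")
```

Swapping in a real model call and enlarging the probe set across many domains and models is, in essence, what an omnibus benchmark of this kind does at scale.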
Similar Papers
Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning
Artificial Intelligence
Tests if AI can think like a person.
Thinking Machines: A Survey of LLM-based Reasoning Strategies
Computation and Language
Makes AI think better to solve hard problems.
From Language to Action: A Review of Large Language Models as Autonomous Agents and Tool Users
Computation and Language
AI learns to think, plan, and improve itself.