Reasoning and Behavioral Equilibria in LLM-Nash Games: From Mindsets to Actions
By: Quanyan Zhu
Potential Business Impact:
Helps AI agents make smarter strategic choices by changing how they reason, not just what they pick.
We introduce the LLM-Nash framework, a game-theoretic model where agents select reasoning prompts to guide decision-making via Large Language Models (LLMs). Unlike classical games that assume utility-maximizing agents with full rationality, this framework captures bounded rationality by modeling the reasoning process explicitly. Equilibrium is defined over the prompt space, with actions emerging as the behavioral output of LLM inference. This approach enables the study of cognitive constraints, mindset expressiveness, and epistemic learning. Through illustrative examples, we show how reasoning equilibria can diverge from classical Nash outcomes, offering a new foundation for strategic interaction in LLM-enabled systems.
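To make the equilibrium notion concrete, here is a minimal formalization consistent with the abstract; the notation (prompt space $\mathcal{P}_i$, inference map $\mathrm{LLM}_i$, utility $u_i$) is illustrative shorthand, not necessarily the paper's own. Each agent $i$ selects a reasoning prompt rather than an action, and the action is whatever the LLM produces from that prompt:

\[
a_i = \mathrm{LLM}_i(p_i), \qquad p_i \in \mathcal{P}_i .
\]

A prompt profile $(p_1^*, \ldots, p_n^*)$ is then a reasoning equilibrium if no agent can improve its utility by deviating to a different prompt, holding the others' prompts fixed:

\[
u_i\big(\mathrm{LLM}_i(p_i^*),\, \mathrm{LLM}_{-i}(p_{-i}^*)\big) \;\ge\; u_i\big(\mathrm{LLM}_i(p_i),\, \mathrm{LLM}_{-i}(p_{-i}^*)\big) \quad \text{for all } p_i \in \mathcal{P}_i \text{ and all } i .
\]

Because optimization runs over prompts rather than actions, and the inference map may be stochastic or only boundedly expressive, the induced action profile need not coincide with a Nash equilibrium of the underlying game; this is the divergence from classical outcomes that the abstract highlights.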
Similar Papers
Beyond Nash Equilibrium: Bounded Rationality of LLMs and Humans in Strategic Decision-Making
Artificial Intelligence
Computers copy human thinking, but less flexibly.
The Illusion of Rationality: Tacit Bias and Strategic Dominance in Frontier LLM Negotiation Games
CS and Game Theory
AI negotiators don't play fair.
From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium
Machine Learning (CS)
Makes AI teams work smarter, faster, and cheaper.