Score: 1

LLMs for Game Theory: Entropy-Guided In-Context Learning and Adaptive CoT Reasoning

Published: January 15, 2026 | arXiv ID: 2601.10775v1

By: Tommaso Felice Banfi, Sashenka Gamage

Potential Business Impact:

Teaches computers to win games by thinking smarter.

Business Areas:
Semantic Search, Internet Services

We propose a novel LLM-based framework for reasoning in discrete, game-theoretic tasks, illustrated with Tic-Tac-Toe. The method integrates in-context learning with entropy-guided chain-of-thought (CoT) reasoning and adaptive context retrieval. The model dynamically adjusts both the number of retrieved examples and reasoning paths according to token-level uncertainty: concise reasoning with minimal context is used when uncertainty is low, whereas higher uncertainty triggers expanded multi-path CoT exploration. Experimental evaluation against a sub-optimal algorithmic opponent shows that entropy-aware adaptive reasoning substantially improves decision quality, increasing the average game outcome from -11.6% with the baseline LLM to +9.5% with entropy-guided adaptive reasoning over 100 games (win = +1, tie = 0, loss = -1), while maintaining a relatively low number of LLM queries per game. Statistical validation confirms that the improvement is significant, and correlation analysis reveals a negative association between token-level entropy and move optimality. These findings demonstrate that uncertainty-guided adaptive reasoning effectively enhances LLM performance in sequential decision-making environments.
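The gating mechanism described in the abstract can be sketched in a few lines: run a cheap first pass, compute token-level entropy from the returned log-probabilities, and scale the number of retrieved in-context examples and CoT paths with that uncertainty. The following Python sketch is illustrative only; the thresholds, budget sizes, and function names are assumptions for exposition, not the paper's actual values.

```python
import math

def token_entropy(logprobs):
    """Shannon entropy (in nats) of a next-token distribution,
    given top-k log-probabilities as commonly returned by LLM APIs.
    The entropy is computed over the (truncated) distribution."""
    return -sum(math.exp(lp) * lp for lp in logprobs)

def adaptive_budget(entropies, low=0.5, high=1.5):
    """Map mean token-level entropy to a retrieval/reasoning budget.

    Thresholds (low, high) and budget sizes are hypothetical
    placeholders, not values reported in the paper.
    """
    mean_h = sum(entropies) / len(entropies)
    if mean_h < low:       # confident: minimal context, single concise path
        return {"examples": 1, "cot_paths": 1}
    elif mean_h < high:    # moderate uncertainty: modest expansion
        return {"examples": 3, "cot_paths": 3}
    else:                  # high uncertainty: full multi-path CoT exploration
        return {"examples": 5, "cot_paths": 5}

# Example: per-token log-probability vectors from an initial cheap pass
# (made-up numbers; a real run would read these from the API response).
per_token_logprobs = [
    [-0.1, -2.5, -3.0],  # confident token
    [-0.9, -1.1, -1.3],  # uncertain token
]
entropies = [token_entropy(lp) for lp in per_token_logprobs]
print(adaptive_budget(entropies))  # -> {'examples': 3, 'cot_paths': 3}
```

The appeal of such a scheme is that extra retrieval and multi-path sampling, which multiply the LLM query count, are spent only on moves where the model's own token distribution signals uncertainty, which matches the paper's reported trade-off of improved outcomes at a relatively low number of queries per game.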

Country of Origin
🇮🇹 🇭🇰 Italy, Hong Kong

Page Count
10 pages

Category
Computer Science:
Computation and Language