DipLLM: Fine-Tuning LLM for Strategic Decision-making in Diplomacy

Published: June 11, 2025 | arXiv ID: 2506.09655v2

By: Kaixuan Xu, Jiajun Chai, Sicheng Li, and more

Potential Business Impact:

AI agents learn to play complex strategy games using far less training data than prior methods.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Diplomacy is a complex multiplayer game that requires both cooperation and competition, posing significant challenges for AI systems. Traditional methods rely on equilibrium search to generate extensive game data for training, which demands substantial computational resources. Large Language Models (LLMs) offer a promising alternative, leveraging pre-trained knowledge to achieve strong performance with relatively small-scale fine-tuning. However, applying LLMs to Diplomacy remains challenging due to the exponential growth of possible action combinations and the intricate strategic interactions among players. To address this challenge, we propose DipLLM, a fine-tuned LLM-based agent that learns equilibrium policies for Diplomacy. DipLLM employs an autoregressive factorization framework to simplify the complex task of multi-unit action assignment into a sequence of unit-level decisions. By defining an equilibrium policy within this framework as the learning objective, we fine-tune the model using only 1.5% of the data required by the state-of-the-art Cicero model, surpassing its performance. Our results demonstrate the potential of fine-tuned LLMs for tackling complex strategic decision-making in multiplayer games.
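The autoregressive factorization idea can be illustrated with a toy sketch: instead of scoring every joint combination of orders for all units at once (which grows exponentially), orders are assigned one unit at a time, each choice conditioned on the orders already picked. The `score` function below is a stand-in heuristic, not the paper's model; in DipLLM the per-unit decision would come from the fine-tuned LLM conditioned on the game state and the textual sequence of prior unit decisions. All unit names, orders, and the scoring rule here are illustrative assumptions.

```python
def joint_action_space(unit_orders):
    """Size of the flat joint action space (grows exponentially in units)."""
    size = 1
    for orders in unit_orders.values():
        size *= len(orders)
    return size

def score(state, unit, order, chosen):
    """Toy stand-in for a per-unit policy score. A real model would
    condition on the full game state and the prior unit decisions."""
    # Tiny bonus for orders that coordinate with an earlier pick
    # (e.g. a support ending in the same destination province).
    bonus = 1 if any(order.endswith(prev.split()[-1]) for prev in chosen.values()) else 0
    return len(order) + bonus  # arbitrary heuristic for illustration

def autoregressive_assign(state, unit_orders):
    """Assign orders unit by unit, greedily, conditioning on prior picks.

    This evaluates sum(len(orders)) candidates instead of the
    product over all units required by a flat joint search.
    """
    chosen = {}
    for unit, orders in unit_orders.items():
        chosen[unit] = max(orders, key=lambda o: score(state, unit, o, chosen))
    return chosen

# Hypothetical France opening with three units and a few candidate orders each.
unit_orders = {
    "A PAR": ["A PAR H", "A PAR - BUR"],
    "A MAR": ["A MAR H", "A MAR S A PAR - BUR"],
    "F BRE": ["F BRE H", "F BRE - MAO"],
}

plan = autoregressive_assign("Spring 1901", unit_orders)
```

Here the joint space has 2 × 2 × 2 = 8 combinations, while the sequential pass evaluates only 2 + 2 + 2 = 6 candidate orders; for a full 18+ unit Diplomacy position this gap is what makes the factorization tractable.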

Page Count
18 pages

Category
Computer Science:
Artificial Intelligence