Achieving Logarithmic Regret in KL-Regularized Zero-Sum Markov Games

Published: October 15, 2025 | arXiv ID: 2510.13060v1

By: Anupam Nayak, Tong Yang, Osman Yagan, and more

Potential Business Impact:

Teaches computers to learn faster with less data.

Business Areas:
A/B Testing, Data and Analytics

Reverse Kullback-Leibler (KL) divergence-based regularization with respect to a fixed reference policy is widely used in modern reinforcement learning to preserve the desired traits of the reference policy and sometimes to promote exploration (using a uniform reference policy, known as entropy regularization). Beyond serving as a mere anchor, the reference policy can also be interpreted as encoding prior knowledge about good actions in the environment. In the context of alignment, recent game-theoretic approaches have leveraged KL regularization with pretrained language models as reference policies, achieving notable empirical success in self-play methods. Despite these advances, the theoretical benefits of KL regularization in game-theoretic settings remain poorly understood. In this work, we develop and analyze algorithms that provably achieve improved sample efficiency under KL regularization. We study both two-player zero-sum matrix games and Markov games: for matrix games, we propose OMG, an algorithm based on best-response sampling with optimistic bonuses, and we extend this idea to Markov games through the algorithm SOMG, which also uses best-response sampling together with a novel concept of superoptimistic bonuses. Both algorithms achieve a logarithmic regret in $T$ that scales inversely with the KL regularization strength $\beta$, in addition to the standard $\widetilde{\mathcal{O}}(\sqrt{T})$ regret, independent of $\beta$, that is attained in both the regularized and unregularized settings.
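
For intuition only, here is a minimal sketch (not the paper's OMG or SOMG algorithms) of the closed-form KL-regularized best response that such methods build on: maximizing expected payoff minus $\beta$ times the KL divergence to a reference policy yields a softmax over payoffs tilted by that reference. The payoff matrix, opponent strategy, and $\beta$ below are hypothetical values chosen for illustration.

```python
import numpy as np

def kl_regularized_best_response(payoffs, ref_policy, beta):
    """KL-regularized best response for the row player of a matrix game.

    Maximizes  E_{a~pi}[payoffs[a]] - beta * KL(pi || ref_policy),
    whose maximizer has the closed form
        pi(a)  proportional to  ref_policy(a) * exp(payoffs[a] / beta).
    """
    logits = np.log(ref_policy) + payoffs / beta
    logits -= logits.max()          # subtract max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

# Toy 2x2 zero-sum matrix game (hypothetical numbers).
A = np.array([[1.0, -1.0],
              [-0.5, 0.5]])
mu_ref = np.array([0.5, 0.5])       # uniform reference = entropy regularization
nu_opponent = np.array([0.3, 0.7])  # fixed column-player mixed strategy

row_payoffs = A @ nu_opponent       # expected payoff of each row action
print(kl_regularized_best_response(row_payoffs, mu_ref, beta=0.1))
```

As $\beta \to \infty$ the response collapses to the reference policy, while small $\beta$ concentrates on the empirically best action; the paper's regret bounds quantify how this regularization strength trades off against sample efficiency.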

Country of Origin
🇺🇸 United States

Page Count
52 pages

Category
Computer Science: Machine Learning (CS)