Playing Markov Games Without Observing Payoffs
By: Daniel Ablin, Alon Cohen
Potential Business Impact:
Helps players win games without seeing scores.
Optimization under uncertainty is a fundamental problem in learning and decision-making, particularly in multi-agent systems. Feldman, Kalai, and Tennenholtz [2010] previously demonstrated that a player can efficiently compete in repeated symmetric two-player matrix games without observing payoffs, as long as the opponent's actions are observed. In this paper, we introduce and formalize a new class of zero-sum symmetric Markov games, which extends the notion of symmetry from matrix games to the Markovian setting. We show that even without observing payoffs, a player who knows the transition dynamics and observes only the opponent's sequence of actions can still compete against an adversary who may have complete knowledge of the game. We formalize three distinct notions of symmetry in this setting and show that, under these conditions, the learning problem reduces to an instance of online learning, enabling the player to asymptotically match the opponent's return despite lacking payoff observations. Our algorithms apply to both matrix and Markov games and run in time polynomial in the size of the game and the number of episodes. Our work broadens the class of games in which robust learning is possible under a severe informational disadvantage and deepens the connection between online learning and adversarial game theory.
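To convey the flavor of the matrix-game case, here is a minimal Python sketch, illustrative only and not the paper's algorithm: a payoff-blind learner imitates the empirical distribution of the opponent's observed actions in a symmetric zero-sum matrix game. The rock-paper-scissors payoff matrix and the best-responding adversary are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: rock-paper-scissors, a symmetric zero-sum game.
# Symmetry here means A = -A.T, so the value of the game is 0.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])
n_actions, T = A.shape[0], 20_000

opp_counts = np.ones(n_actions)  # smoothed counts of observed opponent actions
total_payoff = 0.0

for t in range(T):
    # Payoff-blind learner: it never reads A or any payoff signal; it only
    # imitates the empirical distribution of the opponent's past actions.
    p = opp_counts / opp_counts.sum()
    my_action = rng.choice(n_actions, p=p)

    # Fully informed adversary: best-responds to the learner's mixed strategy,
    # i.e. picks the column minimizing the learner's expected payoff p @ A.
    opp_action = int(np.argmin(p @ A))

    total_payoff += A[my_action, opp_action]  # bookkeeping only, never shown to learner
    opp_counts[opp_action] += 1

print(f"learner's average payoff over {T} rounds: {total_payoff / T:+.4f}")
```

Because the game is symmetric, matching the opponent's marginal play drives the learner's average payoff toward the game's value of 0, so it asymptotically matches the opponent's return even against a fully informed adversary. The paper's contribution is extending this kind of guarantee to Markov games, where the reduction is to online learning rather than simple imitation.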
Similar Papers
The Sample Complexity of Online Strategic Decision Making with Information Asymmetry and Knowledge Transportability
Machine Learning (CS)
Learns how to win games with secret info.
Learning a Game by Paying the Agents
CS and Game Theory
Lets you guess what players want to win.