Learning a Game by Paying the Agents

Published: March 3, 2025 | arXiv ID: 2503.01976v1

By: Brian Hu Zhang, Tao Lin, Yiling Chen, and more

Potential Business Impact:

Lets a platform infer what payoffs players are trying to maximize by observing their play and offering action-contingent payments, and then use those inferred utilities to steer players toward desired outcomes.

Business Areas:
Gamification, Gaming

We study the problem of learning the utility functions of agents in a normal-form game by observing the agents play the game repeatedly. Differing from most prior literature, we introduce a principal with the power to observe the agents playing the game, send the agents signals, and send the agents payments as a function of their actions. Under reasonable behavioral models for the agents, such as iterated removal of dominated actions or a no-regret assumption, we show that the principal can, using a number of rounds polynomial in the size of the game, learn the utility functions of all agents to any desired precision $\varepsilon > 0$. We also show lower bounds in both models, which nearly match the upper bounds in the former model and also strictly separate the two models: the principal can learn strictly faster in the iterated dominance model. Finally, we discuss implications for the problem of steering agents to a desired equilibrium: in particular, we introduce, using our utility-learning algorithm as a subroutine, the first algorithm for steering learning agents without prior knowledge of their utilities.
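For intuition only, here is a minimal sketch of how action-contingent payments can reveal hidden utilities. It is not the paper's algorithm: it assumes a single, myopic, best-responding agent with utilities in [0, 1], which is far simpler than the paper's iterated-dominance and no-regret settings, and the names `make_agent` and `estimate_gap` are hypothetical. The principal attaches a bonus to one action and binary-searches for the crossover payment at which the agent switches, which pins down that action's utility shortfall to precision $\varepsilon$ in $O(\log(1/\varepsilon))$ rounds.

```python
# Toy sketch (illustrative, not the paper's algorithm): a principal learns a
# utility gap of a single myopic best-responding agent by binary-searching
# over a bonus payment attached to one action. Utilities are assumed in [0, 1].

def make_agent(utilities):
    """Agent with hidden utilities who best-responds to utility + payment."""
    def act(payments):
        totals = [u + p for u, p in zip(utilities, payments)]
        return max(range(len(totals)), key=lambda a: totals[a])
    return act

def estimate_gap(agent, target, n_actions, eps=1e-3):
    """Binary-search the smallest bonus that makes the agent switch to `target`.

    The crossover bonus approximates max_a u[a] - u[target] (the regret of the
    target action), to precision eps, using O(log(1/eps)) rounds of play.
    """
    lo, hi = 0.0, 1.0                   # a bonus of 1 always suffices for u in [0, 1]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        payments = [0.0] * n_actions
        payments[target] = mid          # pay a bonus only for the target action
        if agent(payments) == target:   # bonus large enough: agent switches
            hi = mid
        else:                           # bonus too small: agent stays put
            lo = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    hidden = [0.3, 0.7, 0.5]            # utilities unknown to the principal
    agent = make_agent(hidden)
    gap = estimate_gap(agent, target=0, n_actions=3)
    print(f"estimated regret of action 0: {gap:.3f} (true {max(hidden) - hidden[0]:.3f})")
```

In this toy setting, repeating the probe for every action of every player recovers each player's utilities up to an additive shift; the paper's harder problem is achieving this with polynomially many rounds when agents are not myopic best responders but instead follow iterated dominance or no-regret behavior.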

Page Count
23 pages

Category
Computer Science:
CS and Game Theory