Optimal Algorithms for Bandit Learning in Matching Markets
By: Tejas Pagare, Agniv Bandyopadhyay, Sandeep Juneja
Potential Business Impact:
Finds best job matches faster with less guessing.
We study the problem of pure exploration in matching markets under uncertain preferences, where the goal is to identify a stable matching with error probability at most $\delta$ and minimal sample complexity. Agents learn their preferences from stochastic rewards whose expected values encode the underlying preference order. This setting arises in labor-market platforms such as Upwork, where firms and freelancers must be matched quickly and stably, despite noisy observations and no prior knowledge, so that no participant is left dissatisfied. We consider markets with a unique stable matching and establish information-theoretic lower bounds on sample complexity for (1) one-sided learning, where one side of the market knows its true preferences, and (2) two-sided learning, where both sides are uncertain. We propose a computationally efficient algorithm and prove that it asymptotically ($\delta\to 0$) matches the lower bound up to a constant for one-sided learning. Using insights from the lower bound, we extend our algorithm to the two-sided learning setting and provide experimental results showing that its sample complexity closely matches the lower bound. Finally, using a system of ODEs, we characterize the idealized fluid path that our algorithm tracks.
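To make the setting concrete, below is a minimal Python sketch of the one-sided case: firms' rankings of workers are known, each worker's value for each firm must be estimated from noisy samples, and a stable matching is returned once the empirical preferences are certified at level $\delta$. The uniform sampling rule, the Gaussian unit-variance rewards, the union-bound stopping rule, and all function names are illustrative assumptions for exposition; they are not the paper's algorithm, whose sampling is guided by the lower bound.

```python
import numpy as np

def gale_shapley(worker_prefs, firm_rank):
    """Worker-proposing deferred acceptance.

    worker_prefs[w] -- firms ordered from most to least preferred by worker w.
    firm_rank[f][w] -- rank firm f assigns to worker w (lower is better).
    Assumes at least as many firms as workers and complete preference lists.
    Returns a dict mapping each worker to its matched firm.
    """
    n = len(worker_prefs)
    next_prop = [0] * n          # index of the next firm each worker proposes to
    held = {}                    # firm -> worker it currently holds
    free = list(range(n))
    while free:
        w = free.pop()
        f = worker_prefs[w][next_prop[w]]
        next_prop[w] += 1
        if f not in held:
            held[f] = w
        elif firm_rank[f][w] < firm_rank[f][held[f]]:
            free.append(held[f])
            held[f] = w
        else:
            free.append(w)
    return {w: f for f, w in held.items()}

def one_sided_stable_matching_id(true_means, firm_rank, delta, rng=None):
    """Hypothetical uniform-sampling sketch of one-sided pure exploration.

    true_means[w, f] -- unknown mean reward worker w receives from firm f;
    rewards are assumed Gaussian with unit variance. Every worker-firm pair
    is sampled once per round, and the procedure stops when all of each
    worker's empirical preferences are separated by non-overlapping
    confidence intervals (a crude union-bound rule), then returns the
    Gale-Shapley matching under the empirical preferences.
    """
    rng = np.random.default_rng() if rng is None else rng
    means = np.asarray(true_means, dtype=float)
    n_workers, n_firms = means.shape
    counts = np.zeros((n_workers, n_firms))
    sums = np.zeros((n_workers, n_firms))
    while True:
        sums += rng.normal(means, 1.0)        # one fresh sample of every pair
        counts += 1.0
        emp = sums / counts
        # Anytime confidence radius via a union bound over pairs and rounds.
        radius = np.sqrt(2.0 * np.log(4.0 * n_workers * n_firms
                                      * counts ** 2 / delta) / counts)
        lo, hi = emp - radius, emp + radius
        # Preferences are certified when, for each worker, the intervals of
        # any two firms do not overlap.
        certified = all(
            lo[w, i] > hi[w, j] or lo[w, j] > hi[w, i]
            for w in range(n_workers)
            for i in range(n_firms) for j in range(i + 1, n_firms)
        )
        if certified:
            worker_prefs = [list(np.argsort(-emp[w])) for w in range(n_workers)]
            return gale_shapley(worker_prefs, firm_rank), int(counts.sum())
```

For example, on a two-by-two market with means `[[0.9, 0.5], [0.4, 0.8]]` and firm ranks `[[0, 1], [1, 0]]`, calling `one_sided_stable_matching_id(np.array([[0.9, 0.5], [0.4, 0.8]]), [[0, 1], [1, 0]], delta=0.05)` should, with probability at least 0.95, return the matching {0: 0, 1: 1} together with the total number of samples drawn.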
Similar Papers
Learning Equilibria in Matching Games with Bandit Feedback
Machine Learning (CS)
Helps matching systems learn fair deals for everyone.
Bandit Learning in Housing Markets
CS and Game Theory
Helps people find best homes by learning preferences.
Bandits with Single-Peaked Preferences and Limited Resources
Machine Learning (CS)
Helps computers pick the best things for people.