Optimal Algorithms for Bandit Learning in Matching Markets

Published: September 17, 2025 | arXiv ID: 2509.14466v1

By: Tejas Pagare, Agniv Bandyopadhyay, Sandeep Juneja

Potential Business Impact:

Matches firms and freelancers into stable pairs faster, using fewer noisy trial interactions.

Business Areas:
A/B Testing, Data and Analytics

We study the problem of pure exploration in matching markets under uncertain preferences, where the goal is to identify a stable matching with confidence parameter $\delta$ and minimal sample complexity. Agents learn preferences via stochastic rewards, with expected values indicating preferences. This setting arises in labor-market platforms such as Upwork, where firms and freelancers must be matched quickly, despite noisy observations and no prior knowledge, in a stable manner that prevents dissatisfaction. We consider markets with a unique stable matching and establish information-theoretic lower bounds on sample complexity for (1) one-sided learning, where one side of the market knows its true preferences, and (2) two-sided learning, where both sides are uncertain. We propose a computationally efficient algorithm and prove that it asymptotically ($\delta\to 0$) matches the lower bound up to a constant for one-sided learning. Using insights from the lower bound, we extend our algorithm to the two-sided learning setting and provide experimental results showing that it closely matches the lower bound on sample complexity. Finally, using a system of ODEs, we characterize the idealized fluid path that our algorithm chases.
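To make the one-sided setup concrete, here is a minimal Python sketch. It is not the paper's algorithm: it uniformly samples every (firm, worker) pair, certifies each firm's full preference ranking with Hoeffding-style anytime confidence intervals (a crude union bound over pairs and rounds), and then runs firm-proposing Gale-Shapley deferred acceptance on the learned preferences. The market instance, Gaussian noise model, stopping rule, and all function names are illustrative assumptions.

```python
# A hedged sketch of one-sided pure exploration, NOT the paper's algorithm:
# firms learn preferences from noisy rewards; uniform sampling continues until
# every pairwise mean comparison is certified, then deferred acceptance runs.
import numpy as np


def gale_shapley(firm_prefs, worker_prefs):
    """Firm-proposing deferred acceptance; returns a dict firm -> worker."""
    n = len(firm_prefs)
    nxt = [0] * n            # next worker each free firm will propose to
    holder = [None] * n      # firm currently held by each worker
    rank = [[0] * n for _ in range(n)]
    for w in range(n):       # rank[w][f] = position of firm f for worker w
        for r, f in enumerate(worker_prefs[w]):
            rank[w][f] = r
    free = list(range(n))
    while free:
        f = free.pop()
        w = firm_prefs[f][nxt[f]]
        nxt[f] += 1
        if holder[w] is None:
            holder[w] = f
        elif rank[w][f] < rank[w][holder[w]]:
            free.append(holder[w])    # worker w trades up to firm f
            holder[w] = f
        else:
            free.append(f)            # worker w rejects firm f
    return {holder[w]: w for w in range(n)}


def one_sided_pure_exploration(mu, worker_prefs, delta=0.05, sigma=0.5, seed=0):
    """Sample every (firm, worker) pair each round until, for every firm, all
    pairwise comparisons are separated by the confidence radius; then match."""
    rng = np.random.default_rng(seed)
    n = mu.shape[0]
    sums = np.zeros_like(mu)
    t = 0
    while True:
        t += 1
        sums += mu + sigma * rng.standard_normal(mu.shape)  # one pull per pair
        means = sums / t
        # anytime Hoeffding radius via a union bound over n^2 arms and rounds
        rad = sigma * np.sqrt(2.0 * np.log(4.0 * n * n * t * t / delta) / t)
        done = all(
            abs(means[f, a] - means[f, b]) > 2.0 * rad
            for f in range(n) for a in range(n) for b in range(a + 1, n)
        )
        if done:
            firm_prefs = [list(np.argsort(-means[f])) for f in range(n)]
            return gale_shapley(firm_prefs, worker_prefs), t * n * n


if __name__ == "__main__":
    mu = np.array([[0.9, 0.5, 0.1],    # firms' true mean rewards (unknown
                   [0.3, 0.9, 0.6],    # to the algorithm); unique stable
                   [0.2, 0.5, 0.9]])   # matching here is the identity
    worker_prefs = [[0, 1, 2], [1, 0, 2], [2, 1, 0]]  # known in one-sided case
    matching, samples = one_sided_pure_exploration(mu, worker_prefs)
    print(f"stable matching under learned preferences: {matching}")
    print(f"total samples used: {samples}")
```

Note that certifying every firm's full ranking is stronger, and costlier, than certifying the stable matching itself; the paper's algorithm instead allocates samples adaptively so that its sample complexity matches the information-theoretic lower bound.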

Page Count
42 pages

Category
Computer Science:
Computer Science and Game Theory