Score: 1

Sample-Optimal Agnostic Boosting with Unlabeled Data

Published: March 6, 2025 | arXiv ID: 2503.04706v1

By: Udaya Ghai, Karan Singh

BigTech Affiliations: Amazon

Potential Business Impact:

Teaches computers to learn accurate predictors from far fewer labeled examples by exploiting cheap, abundant unlabeled data.

Business Areas:
A/B Testing, Data and Analytics

Boosting provides a practical and provably effective framework for constructing accurate learning algorithms from inaccurate rules of thumb. It extends the promise of sample-efficient learning to settings where direct Empirical Risk Minimization (ERM) may not be implementable efficiently. In the realizable setting, boosting is known to offer this computational reprieve without compromising on sample efficiency. However, in the agnostic case, existing boosting algorithms fall short of the optimal sample complexity. This paper highlights an unexpected and previously unexplored avenue of improvement: unlabeled samples. We design a computationally efficient agnostic boosting algorithm that matches the sample complexity of ERM, given polynomially many additional unlabeled samples. In fact, we show that the total number of samples needed, labeled and unlabeled combined, is never more than that of the best known agnostic boosting algorithm -- so this result is never worse -- while only a vanishing fraction of these need to be labeled for the algorithm to succeed. This is particularly fortunate for learning-theoretic applications of agnostic boosting, which often take place in the distribution-specific setting, where unlabeled samples are available for free. We detail other applications of this result in reinforcement learning.
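The abstract describes the algorithm only at the level of its sample-complexity guarantees. As a rough illustration of the general template it gestures at, the Python sketch below runs a boosting loop in which labeled examples are reweighted by a smooth potential of the current margin, while unlabeled examples are pseudo-labeled by the running aggregate, so only a small fraction of the weak learner's inputs carry true labels. Every concrete choice here (the decision-stump weak learner, the sigmoid potential, the confidence-weighted pseudo-labels, and all function names) is an assumption made for illustration; this is a minimal sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def stump_weak_learner(X, y, w):
    """Hypothetical weak learner: the best single-feature threshold stump
    under example weights w. Returns h: array -> {-1, +1} predictions."""
    best_corr, best = -np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] > thr, 1, -1)
                corr = np.sum(w * y * pred)
                if corr > best_corr:
                    best_corr, best = corr, (j, thr, s)
    j, thr, s = best
    return lambda Z: s * np.where(Z[:, j] > thr, 1, -1)

def agnostic_boost(X_lab, y_lab, X_unlab, rounds=20, eta=0.3):
    """Schematic agnostic boosting loop with unlabeled data (illustration
    only, not the paper's algorithm). Labeled points are reweighted by a
    smooth potential of the current margin; unlabeled points are
    pseudo-labeled by the running aggregate, so the weak learner trains on
    many examples while only a small fraction carry true labels."""
    F_lab, F_unlab = np.zeros(len(X_lab)), np.zeros(len(X_unlab))
    X_all, hyps = np.vstack([X_lab, X_unlab]), []
    for _ in range(rounds):
        # Sigmoid-of-negative-margin weights, as in potential-based
        # agnostic boosters (an assumed choice of potential).
        w_lab = 1.0 / (1.0 + np.exp(y_lab * F_lab))
        # Pseudo-label unlabeled points with the aggregate's current sign,
        # weighted by its confidence.
        y_pseudo = np.where(F_unlab >= 0, 1, -1)
        w_unlab = np.abs(np.tanh(F_unlab))
        y_all = np.concatenate([y_lab, y_pseudo])
        w_all = np.concatenate([w_lab, w_unlab])
        h = stump_weak_learner(X_all, y_all, w_all / w_all.sum())
        hyps.append(h)
        F_lab += eta * h(X_lab)
        F_unlab += eta * h(X_unlab)
    # Final predictor: sign of the unweighted vote of all weak hypotheses.
    return lambda Z: np.where(sum(h(Z) for h in hyps) >= 0, 1, -1)

# Toy usage: 30 labeled + 300 unlabeled points from a noisy linear concept.
X = rng.normal(size=(330, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
y = np.where(rng.random(330) < 0.1, -y, y)  # 10% agnostic label noise
clf = agnostic_boost(X[:30], y[:30], X[30:])
print("labeled-set accuracy:", np.mean(clf(X[:30]) == y[:30]))
```

One design note on the sketch: the confidence weights make unlabeled points inert in early rounds and influential only once the aggregate commits to a sign, which loosely mirrors how unlabeled samples can stand in for labeled ones without the weak learner ever seeing more than a vanishing fraction of true labels.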

Country of Origin
🇺🇸 United States

Page Count
23 pages

Category
Computer Science: Machine Learning (CS)