Revisiting Agnostic Boosting
By: Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice, and others
Potential Business Impact:
Makes computer learning work better with less data.
Boosting is a key method in statistical learning, enabling the conversion of weak learners into strong ones. While well studied in the realizable case, the statistical properties of weak-to-strong learning remain less understood in the agnostic setting, where no assumptions are made on the distribution of the labels. In this work, we propose a new agnostic boosting algorithm with substantially improved sample complexity compared to prior works under very general assumptions. Our approach is based on a reduction to the realizable case, followed by a margin-based filtering step to select high-quality hypotheses. We conjecture that the error rate achieved by our proposed method is optimal up to logarithmic factors.
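To make the two-stage idea concrete, here is a toy sketch in the spirit of the abstract: a standard boosting loop (a plain AdaBoost-style loop with decision stumps stands in for the reduction to the realizable case), followed by a margin-based filter that keeps only hypotheses with positive empirical margin. The weak-learner interface, the stump learner, and the threshold `margin_tau` are all illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def stump_learner(X, y, w):
    """Hypothetical weak learner: best decision stump on feature 0.

    Returns a predictor h(Z) -> {-1, +1} and its weighted error under w.
    """
    best = None
    for thr in np.unique(X[:, 0]):
        for sign in (1, -1):
            pred = np.where(X[:, 0] <= thr, sign, -sign)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    err, thr, sign = best
    return (lambda Z, t=thr, s=sign: np.where(Z[:, 0] <= t, s, -s)), err

def boost_with_margin_filter(X, y, rounds=10, margin_tau=0.0):
    """AdaBoost-style loop, then a margin-based filtering step.

    The filter (keep hypotheses whose mean margin on the sample exceeds
    margin_tau) is only an illustrative stand-in for the paper's
    high-quality-hypothesis selection.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)
    hyps, alphas = [], []
    for _ in range(rounds):
        h, err = stump_learner(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # clamp to avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = h(X)
        w *= np.exp(-alpha * y * pred)  # reweight misclassified points up
        w /= w.sum()
        hyps.append(h)
        alphas.append(alpha)
    # Margin-based filter: keep hypotheses with positive empirical edge.
    kept = [(a, h) for a, h in zip(alphas, hyps)
            if np.mean(y * h(X)) > margin_tau]

    def predict(Z):
        score = sum(a * h(Z) for a, h in kept)
        return np.sign(score)

    return predict
```

On a linearly separable toy sample, e.g. `X = [[0.], [1.], [2.], [3.]]` with labels `[1, 1, -1, -1]`, the returned `predict` recovers the labels exactly; on noisy (agnostic) data, the filter discards hypotheses whose empirical margin falls below the threshold.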
Similar Papers
Sample-Optimal Agnostic Boosting with Unlabeled Data
Machine Learning (CS)
Teaches computers using fewer examples.
Agnostic Reinforcement Learning: Foundations and Algorithms
Machine Learning (CS)
Teaches computers to learn from mistakes better.
Agnostic Learning under Targeted Poisoning: Optimal Rates and the Role of Randomness
Machine Learning (CS)
Protects computer learning from bad data.