Sample-Near-Optimal Agnostic Boosting with Improved Running Time
By: Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice
Potential Business Impact:
Turns weak computer guesses into smart answers.
Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (arXiv:2503.09384), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity that runs in time polynomial in the sample size when the other parameters of the problem are held fixed.
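For readers unfamiliar with boosting, the sketch below illustrates the classic (realizable-setting) boosting template, AdaBoost: reweight examples so the weak learner focuses on past mistakes, then combine the weak hypotheses by a weighted majority vote. This is background only, not the paper's agnostic algorithm, whose details are not given in the abstract; the decision-stump weak learner and all function names here are illustrative assumptions.

```python
# Minimal AdaBoost sketch (classic boosting template, NOT the paper's
# agnostic algorithm). Labels y are assumed to be in {-1, +1}.
import numpy as np

def stump_weak_learner(X, y, w):
    """Return the best threshold stump (feature, threshold, sign) under weights w."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best, best_err

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)              # start with uniform weights
    ensemble = []                        # list of (alpha, stump) pairs
    for _ in range(rounds):
        (j, t, s), err = stump_weak_learner(X, y, w)
        err = max(err, 1e-12)            # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= t, s, -s)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, (j, t, s)))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, (j, t, s) in ensemble:
        score += alpha * np.where(X[:, j] <= t, s, -s)
    return np.sign(score)
```

In the agnostic setting no weak learner is guaranteed to beat random guessing on every reweighting of the data, which is precisely why agnostic boosting requires different techniques and why its sample and time complexity are harder to pin down.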
Similar Papers
Revisiting Agnostic Boosting
Machine Learning (CS)
Makes computer learning work better with less data.
Sample-Optimal Agnostic Boosting with Unlabeled Data
Machine Learning (CS)
Teaches computers using fewer examples.
Sample Complexity of Agnostic Multiclass Classification: Natarajan Dimension Strikes Back
Machine Learning (CS)
Teaches computers to learn from more kinds of data.