Smoothed Agnostic Learning of Halfspaces over the Hypercube

Published: November 21, 2025 | arXiv ID: 2511.17782v1

By: Yiwen Kou, Raghu Meka

Potential Business Impact:

Enables faster learning of simple linear decision rules from binary (yes/no) data.

Business Areas:
A/B Testing, Data and Analytics

Agnostic learning of Boolean halfspaces is a fundamental problem in computational learning theory, but it is known to be computationally hard even for weak learning. Recent work [CKKMK24] proposed smoothed analysis as a way to bypass such hardness, but existing frameworks rely on additive Gaussian perturbations, making them unsuitable for discrete domains. We introduce a new smoothed agnostic learning framework for Boolean inputs in which perturbations are modeled as random bit flips; this defines a natural discrete analogue of smoothed optimality that generalizes the Gaussian case. Under strictly subexponential assumptions on the input distribution, we give an efficient algorithm for learning halfspaces in this model with runtime and sample complexity roughly n^{poly(1/(sigma * epsilon))}. Previously, such algorithms were known over the discrete hypercube only under strong structural assumptions, such as independent coordinates or symmetric distributions. Our result provides the first computationally efficient guarantee for smoothed agnostic learning of halfspaces over the Boolean hypercube, bridging the gap between worst-case intractability and practical learnability in discrete settings.
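The abstract describes the perturbation model only at a high level (independent random bit flips on Boolean inputs), so the Python sketch below is an illustration under assumptions, not the paper's exact definition. In particular, the flip probability sigma/2, the {-1, +1} encoding of the hypercube, and the function name `smooth_bits` are choices made here for concreteness.

```python
import numpy as np

def smooth_bits(x, sigma, rng):
    """Random bit-flip perturbation (sketch): flip each coordinate of
    x in {-1, +1}^n independently with probability sigma/2. The exact
    flip rate and its relation to the smoothing parameter sigma are
    assumptions of this sketch, not taken from the paper."""
    flips = rng.random(x.shape) < sigma / 2
    return np.where(flips, -x, x)

# Usage: perturb a hypercube point, then evaluate a candidate halfspace
# on the smoothed input.
rng = np.random.default_rng(0)
n = 16
x = rng.choice([-1, 1], size=n)      # a point on the Boolean hypercube
w = rng.standard_normal(n)           # candidate halfspace weights (illustrative)
x_smoothed = smooth_bits(x, sigma=0.1, rng=rng)
print(int(np.sign(w @ x_smoothed)))  # smoothed halfspace prediction
```

This mirrors the role of additive Gaussian noise in the continuous smoothed-analysis frameworks the abstract contrasts with: the learner is compared against the best halfspace on the bit-flip-smoothed distribution rather than on the worst-case discrete distribution.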

Country of Origin
🇺🇸 United States

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)