Learning with Monotone Adversarial Corruptions

Published: January 5, 2026 | arXiv ID: 2601.02193v1

By: Kasper Green Larsen, Chirag Pabbaraju, Abhishek Shetty

Potential Business Impact:

Shows that standard machine learning algorithms can be misled into higher error even by added training data that is labeled correctly.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We study the extent to which standard machine learning algorithms rely on exchangeability and independence of data by introducing a monotone adversarial corruption model. In this model, an adversary, upon looking at a "clean" i.i.d. dataset, inserts additional "corrupted" points of their choice into the dataset. These added points are constrained to be monotone corruptions, in that they get labeled according to the ground-truth target function. Perhaps surprisingly, we demonstrate that in this setting, all known optimal learning algorithms for binary classification can be made to achieve suboptimal expected error on a new independent test point drawn from the same distribution as the clean dataset. On the other hand, we show that uniform convergence-based algorithms do not degrade in their guarantees. Our results showcase how optimal learning algorithms break down in the face of seemingly helpful monotone corruptions, exposing their overreliance on exchangeability.
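The corruption model described in the abstract can be sketched in a few lines. The snippet below is a toy illustration only, using an assumed one-dimensional threshold function as the ground truth; it is not the paper's actual construction, just a minimal picture of what "monotone corruptions" means: the adversary may insert any points it likes after seeing the clean data, but every inserted point must carry the label the true target function assigns it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth target: a 1-D threshold classifier.
def target(x):
    return (x >= 0.5).astype(int)

# "Clean" i.i.d. dataset drawn from the underlying distribution.
X_clean = rng.uniform(0.0, 1.0, size=20)
y_clean = target(X_clean)

# Monotone adversarial corruption: after inspecting the clean data,
# the adversary inserts extra points of its choice, but each one is
# labeled by the same ground-truth function (labels stay correct).
X_adv = np.full(100, 0.5001)   # adversary's chosen points (illustrative)
y_adv = target(X_adv)          # the monotonicity constraint

# The learner trains on the corrupted dataset...
X_train = np.concatenate([X_clean, X_adv])
y_train = np.concatenate([y_clean, y_adv])

# ...but is evaluated on a fresh independent point from the clean
# distribution. Note the corrupted set is no longer exchangeable:
# the adversary's points can skew where training mass concentrates.
assert np.all(y_train == target(X_train))  # every label is still correct
```

The point of the example is that nothing in the added data is mislabeled, yet the inserted points break the exchangeability that, per the abstract, optimal learners implicitly rely on, while uniform convergence-based guarantees are unaffected.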

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)