Kernel Learning with Adversarial Features: Numerical Efficiency and Adaptive Regularization
By: Antônio H. Ribeiro, David Vävinggren, Dave Zachariah, and more
Potential Business Impact:
Makes AI harder to fool with tricky inputs, without the usual heavy training cost.
Adversarial training has emerged as a key technique to enhance model robustness against adversarial input perturbations. Many of the existing methods rely on computationally expensive min-max problems that limit their application in practice. We propose a novel formulation of adversarial training in reproducing kernel Hilbert spaces, shifting from input to feature-space perturbations. This reformulation enables the exact solution of inner maximization and efficient optimization. It also provides a regularized estimator that naturally adapts to the noise level and the smoothness of the underlying function. We establish conditions under which the feature-perturbed formulation is a relaxation of the original problem and propose an efficient optimization algorithm based on iterative kernel ridge regression. We provide generalization bounds that help to understand the properties of the method. We also extend the formulation to multiple kernel learning. Empirical evaluation shows good performance in both clean and adversarial settings.
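The abstract's key idea, replacing the expensive min-max over input perturbations with feature-space perturbations whose inner maximization has a closed form, leading to an iterative kernel ridge regression scheme, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the RBF kernel, the squared loss, and in particular the adaptive rule that re-tunes the ridge level from the current residuals and RKHS norm are assumptions standing in for the method's actual update.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def iterative_krr(X, y, delta=0.1, gamma=1.0, n_iter=20, eps=1e-8):
    """Illustrative iterative kernel ridge regression.

    Each step solves a standard KRR problem; the ridge parameter is then
    re-tuned from the current residuals and RKHS norm, mimicking how a
    feature-perturbation radius `delta` induces regularization that adapts
    to the noise level. The specific update rule below is a hypothetical
    stand-in for the paper's algorithm.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    lam = delta  # initial ridge level
    for _ in range(n_iter):
        # Standard kernel ridge regression solve at the current ridge level.
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        resid = y - K @ alpha
        f_norm = np.sqrt(max(alpha @ K @ alpha, eps))  # RKHS norm of the fit
        # Assumed adaptive rule: larger residuals (more noise) -> stronger shrinkage.
        lam = delta * np.abs(resid).mean() / (f_norm + eps)
    return alpha, lam

# Toy usage: noisy samples of a smooth function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
alpha, lam = iterative_krr(X, y)
```

Each iteration costs one linear solve, so the loop stays close to plain kernel ridge regression in cost, which is the practical point of avoiding the inner maximization.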
Similar Papers
RegMix: Adversarial Mutual and Generalization Regularization for Enhancing DNN Robustness
Machine Learning (CS)
Makes computer programs harder to trick.
Source-Condition Analysis of Kernel Adversarial Estimators
Statistics Theory
Improves computer learning from tricky data.
Adversarial learning for nonparametric regression: Minimax rate and adaptive estimation
Machine Learning (Stat)
Protects computers from tricky, fake data.