Stability and Generalization of Adversarial Diffusion Training
By: Hesam Hosseini, Ying Cao, Ali H. Sayed
Potential Business Impact:
Makes AI learn better even when tricked.
Algorithmic stability is an established tool for analyzing generalization. While adversarial training enhances model robustness, it often suffers from robust overfitting and an enlarged generalization gap. Although recent work has established the convergence of adversarial training in decentralized networks, its generalization properties remain unexplored. This work presents a stability-based generalization analysis of adversarial training under the diffusion strategy for convex losses. We derive a bound showing that the generalization error grows with both the adversarial perturbation strength and the number of training steps, a finding consistent with the single-agent case but novel for decentralized settings. Numerical experiments on logistic regression validate these theoretical predictions.
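The setting the abstract describes (adversarial training of a convex logistic loss under the diffusion, i.e. adapt-then-combine, strategy) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the network topology, step size, FGSM-style perturbation, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): K agents on a ring, each holding
# local logistic-regression data; A is a doubly stochastic combination matrix.
K, d, n = 4, 5, 50
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k - 1) % K] = 0.25
    A[k, (k + 1) % K] = 0.25

X = [rng.normal(size=(n, d)) for _ in range(K)]
w_true = rng.normal(size=d)
y = [np.sign(Xk @ w_true + 0.1 * rng.normal(size=n)) for Xk in X]

def logistic_grad(w, Xk, yk):
    # Gradient of the mean logistic loss (1/n) * sum log(1 + exp(-y x^T w)).
    p = 1.0 / (1.0 + np.exp(yk * (Xk @ w)))      # sigmoid(-y x^T w)
    return -(Xk * (p * yk)[:, None]).mean(axis=0)

def adversarial_perturb(w, Xk, yk, eps):
    # FGSM-style l_inf feature perturbation (an illustrative attack choice):
    # move each sample by eps in the sign of the per-sample loss gradient.
    p = 1.0 / (1.0 + np.exp(yk * (Xk @ w)))
    g = -(p * yk)[:, None] * w[None, :]          # d(loss)/d(x) per sample
    return Xk + eps * np.sign(g)

def diffusion_adversarial_train(eps, steps, mu=0.1):
    W = np.zeros((K, d))                         # one weight vector per agent
    for _ in range(steps):
        # Adapt: each agent takes a gradient step on its perturbed local loss.
        psi = np.array([
            W[k] - mu * logistic_grad(
                W[k], adversarial_perturb(W[k], X[k], y[k], eps), y[k])
            for k in range(K)
        ])
        # Combine: agents average their neighbors' intermediates through A.
        W = A @ psi
    return W

W = diffusion_adversarial_train(eps=0.05, steps=200)
acc = np.mean([(np.sign(X[k] @ W[k]) == y[k]).mean() for k in range(K)])
print(round(acc, 2))
```

The adapt-then-combine split mirrors the diffusion strategy: local adversarial gradient steps, followed by neighborhood averaging, so all agents converge toward a common robust model.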
Similar Papers
Generalization Error Analysis for Attack-Free and Byzantine-Resilient Decentralized Learning with Data Heterogeneity
Machine Learning (CS)
Helps computers learn together without sharing private data.
On the Escaping Efficiency of Distributed Adversarial Training Algorithms
Machine Learning (CS)
Makes AI stronger against sneaky attacks.
Algorithms for Adversarially Robust Deep Learning
Machine Learning (CS)
Makes AI safer from tricks and mistakes.