Does Flatness imply Generalization for Logistic Loss in Univariate Two-Layer ReLU Network?
By: Dan Qiao, Yu-Xiang Wang
Potential Business Impact:
Helps predict when trained AI models will work reliably on new data.
We consider the problem of generalization of arbitrarily overparameterized two-layer ReLU neural networks with univariate input. Recent work showed that under the square loss, flat solutions (motivated by flat/stable minima and the Edge of Stability phenomenon) provably cannot overfit, but it remains unclear whether the same phenomenon holds for the logistic loss. This is a puzzling open problem because existing work on the logistic loss shows that gradient descent with increasing step size converges to interpolating solutions (at infinity, in the margin-separable case). In this paper, we prove that the \emph{flatness implies generalization} phenomenon is more delicate under the logistic loss. On the positive side, we show that flat solutions enjoy near-optimal generalization bounds within the region between the left-most and right-most \emph{uncertain} sets determined by each candidate solution. On the negative side, we show that there exist arbitrarily flat yet overfitting solutions at infinity that are (falsely) certain everywhere, thus certifying that flatness alone is insufficient for generalization in general. We demonstrate the effects predicted by our theory in a well-controlled simulation study.
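As a rough illustration of the objects in the abstract (not the paper's construction), the sketch below fits a univariate two-layer ReLU network f(x) = sum_j a_j relu(w_j x + b_j) to +/-1 labels with the logistic loss via full-batch gradient descent, and measures "flatness" as the largest eigenvalue of the training-loss Hessian, estimated by power iteration on Hessian-vector products. The data, network width, step size, and iteration counts are illustrative assumptions only.

```python
# Minimal sketch (assumed hyperparameters): univariate two-layer ReLU network,
# logistic loss, gradient descent, and sharpness = top Hessian eigenvalue.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def predict(params, x):
    # f(x) = sum_j a_j * relu(w_j * x + b_j)
    w, b, a = params
    return jnp.dot(jax.nn.relu(jnp.outer(x, w) + b), a)

def logistic_loss(params, x, y):
    # y in {-1, +1}; mean of log(1 + exp(-y * f(x)))
    margins = y * predict(params, x)
    return jnp.mean(jnp.log1p(jnp.exp(-margins)))

def top_hessian_eigenvalue(loss_fn, params, n_iter=50, seed=0):
    # Power iteration using Hessian-vector products on the flattened parameters.
    flat, unravel = ravel_pytree(params)
    flat_loss = lambda p: loss_fn(unravel(p))
    hvp = lambda v: jax.jvp(jax.grad(flat_loss), (flat,), (v,))[1]
    v = jax.random.normal(jax.random.PRNGKey(seed), flat.shape)
    v = v / jnp.linalg.norm(v)
    for _ in range(n_iter):
        hv = hvp(v)
        v = hv / (jnp.linalg.norm(hv) + 1e-12)
    return jnp.dot(v, hvp(v))

# Toy 1-D data with +/-1 labels (illustrative only).
key = jax.random.PRNGKey(0)
x = jnp.linspace(-1.0, 1.0, 40)
y = jnp.sign(x + 0.05 * jax.random.normal(key, x.shape))

# Width chosen much larger than the 40 samples to mimic overparameterization.
width = 200
kw, kb, ka = jax.random.split(jax.random.PRNGKey(1), 3)
params = (0.5 * jax.random.normal(kw, (width,)),
          0.5 * jax.random.normal(kb, (width,)),
          0.1 * jax.random.normal(ka, (width,)))

loss_fn = lambda p: logistic_loss(p, x, y)
grad_fn = jax.jit(jax.grad(loss_fn))
step_size = 0.05
for t in range(2000):
    g = grad_fn(params)
    params = jax.tree_util.tree_map(lambda p, gp: p - step_size * gp, params, g)

print("train loss:", float(loss_fn(params)))
print("sharpness (top Hessian eigenvalue):", float(top_hessian_eigenvalue(loss_fn, params)))
```

Tracking the reported sharpness along training, or at solutions whose weights are scaled toward infinity, is one simple way to probe the positive and negative regimes described above; the paper's controlled simulation study should be consulted for the precise experimental protocol.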
Similar Papers
When Flatness Does (Not) Guarantee Adversarial Robustness
Machine Learning (CS)
Makes AI less fooled by tricky mistakes.
Generalization Below the Edge of Stability: The Role of Data Geometry
Machine Learning (Stat)
Helps computers learn better by understanding data shapes.
Flat Minima and Generalization: Insights from Stochastic Convex Optimization
Machine Learning (CS)
Makes computers learn better, even when they're wrong.