
Does Flatness imply Generalization for Logistic Loss in Univariate Two-Layer ReLU Network?

Published: December 1, 2025 | arXiv ID: 2512.01473v1

By: Dan Qiao, Yu-Xiang Wang

Potential Business Impact:

Clarifies when training that favors flat (stable) minima yields machine-learning models that generalize reliably rather than overfit, informing more trustworthy model selection.

Business Areas:
A/B Testing, Data and Analytics

We consider the problem of generalization for arbitrarily overparameterized two-layer ReLU neural networks with univariate input. Recent work showed that under square loss, flat solutions (motivated by flat/stable minima and the Edge of Stability phenomenon) provably cannot overfit, but it remains unclear whether the same phenomenon holds for logistic loss. This is a puzzling open problem because existing work on logistic loss shows that gradient descent with increasing step size converges to interpolating solutions (at infinity, in the margin-separable case). In this paper, we prove that flatness-implied generalization is more delicate under logistic loss. On the positive side, we show that flat solutions enjoy near-optimal generalization bounds within a region between the left-most and right-most "uncertain" sets determined by each candidate solution. On the negative side, we show that there exist arbitrarily flat yet overfitting solutions at infinity that are (falsely) certain everywhere, certifying that flatness alone is insufficient for generalization in general. We demonstrate the effects predicted by our theory in a well-controlled simulation study.
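To make the setting concrete, the following is a minimal sketch (not the authors' code) of the kind of object the abstract studies: a univariate two-layer ReLU network f(x) = sum_j a_j ReLU(w_j x + b_j) evaluated under the logistic (log) loss, with "flatness" proxied here by the largest eigenvalue of the Hessian of the empirical loss. The parameterization, the sharpness proxy, and all function and variable names are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch only: univariate two-layer ReLU network, logistic loss,
# and a finite-difference Hessian eigenvalue as a flatness/sharpness proxy.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def network(x, params, m):
    # params = [w_1..w_m, b_1..b_m, a_1..a_m]; f(x) = sum_j a_j * relu(w_j * x + b_j)
    w, b, a = params[:m], params[m:2*m], params[2*m:]
    return relu(np.outer(x, w) + b) @ a

def logistic_loss(params, x, y, m):
    # y in {-1, +1}; empirical logistic loss = mean log(1 + exp(-y f(x)))
    margins = y * network(x, params, m)
    return np.mean(np.log1p(np.exp(-margins)))

def sharpness(params, x, y, m, eps=1e-4):
    # Largest eigenvalue of a finite-difference Hessian of the loss (flatness proxy).
    d = params.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (logistic_loss(params + e_i + e_j, x, y, m)
                       - logistic_loss(params + e_i, x, y, m)
                       - logistic_loss(params + e_j, x, y, m)
                       + logistic_loss(params, x, y, m)) / eps**2
    return np.max(np.linalg.eigvalsh((H + H.T) / 2))

rng = np.random.default_rng(0)
m = 8                                        # number of hidden ReLU units
x = rng.uniform(-1, 1, size=50)              # univariate inputs
y = np.sign(x + 0.1 * rng.normal(size=50))   # noisy binary labels in {-1, +1}
params = rng.normal(scale=0.5, size=3 * m)
print("logistic loss:", logistic_loss(params, x, y, m))
print("sharpness (top Hessian eigenvalue):", sharpness(params, x, y, m))
```

Under this kind of proxy, the paper's contrast can be probed empirically: compare solutions with small top Hessian eigenvalue that fit noisy labels near the data (where the "uncertain" regions matter) against ones driven toward infinity on separable data, where the loss surface flattens even though the predictor interpolates.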

Country of Origin
🇺🇸 United States

Page Count
59 pages

Category
Computer Science:
Machine Learning (CS)