Overfitting has a limitation: a model-independent generalization error bound based on Rényi entropy
By: Atsushi Suzuki
Potential Business Impact:
Lets computers learn with more data, not bigger models.
Will further scaling of machine learning models continue to bring success? A significant challenge in answering this question lies in understanding generalization error, the performance loss caused by overfitting. The generalization behavior of increasingly large models remains an open question, because conventional analyses tie error bounds to model complexity and therefore fail to fully explain the success of extremely large architectures. This research introduces a novel perspective by establishing a model-independent upper bound on generalization error that applies to any algorithm whose output is determined solely by the data's histogram, a class that includes empirical risk minimization and gradient-based methods. Crucially, this bound depends only on the Rényi entropy of the data-generating distribution, suggesting that a small generalization error can be maintained even with arbitrarily large models, provided the amount of data is sufficient relative to this entropy. The framework also offers a direct explanation for the phenomenon where generalization performance degrades sharply when random noise is injected into the data: the degradation is attributed to the consequent increase in the data distribution's Rényi entropy. Furthermore, we adapt the no-free-lunch theorem to be data-distribution-dependent, demonstrating that an amount of data corresponding to the Rényi entropy is indeed essential for successful learning, thereby highlighting the tightness of our proposed generalization bound.
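To make the entropy argument concrete, here is a minimal sketch (not from the paper, and the distributions are illustrative assumptions) of the Rényi entropy H_α(p) = (1/(1−α)) · log Σᵢ pᵢ^α of a discrete distribution, showing how mixing random noise into a sharply peaked distribution raises its Rényi entropy, which under the paper's bound would demand more data for the same generalization error:

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of order alpha (in nats) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()  # normalize, defensively
    if alpha == 1.0:
        # The limit alpha -> 1 recovers Shannon entropy.
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# A sharply peaked (low-entropy) data distribution over 4 outcomes...
clean = np.array([0.97, 0.01, 0.01, 0.01])
# ...and the same distribution with 30% uniform random noise mixed in,
# modeling noise injection into the data.
noisy = 0.7 * clean + 0.3 * np.ones(4) / 4

print(renyi_entropy(clean, alpha=2.0))   # small
print(renyi_entropy(noisy, alpha=2.0))   # strictly larger
```

The order α = 2 (collision entropy) is chosen here only for illustration; the qualitative effect, that noise injection increases entropy, holds for the other orders as well.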
Similar Papers
High-entropy Advantage in Neural Networks' Generalizability
Machine Learning (CS)
Makes computers learn better by using "energy" ideas.
Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability
Machine Learning (CS)
Teaches computers to learn from more data.
A Classical View on Benign Overfitting: The Role of Sample Size
Machine Learning (CS)
Helps computers learn from messy data well.