The Double Descent Behavior in Two Layer Neural Network for Binary Classification
By: Chathurika S Abeykoon, Aleksandr Beknazaryan, Hailin Sang
Potential Business Impact:
Finds a sweet spot for computer learning accuracy.
Recent studies have observed a surprising phenomenon in model test error called double descent, where increasing model complexity first decreases the test error, then increases it, and then decreases it again. To observe this, we work with a two-layer neural network with a ReLU activation function designed for binary classification under supervised learning. Our aim is to observe and investigate the mathematical theory behind the double descent behavior of the model test error for varying model sizes. We quantify the model size by the ratio of the number of training samples to the model dimension. Due to the complexity of the empirical risk minimization procedure, we use the Convex Gaussian Min-Max Theorem to find a suitable candidate for the global training loss.
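The sketch below is a minimal numerical illustration of the double descent shape described in the abstract, not the paper's analysis: it assumes a simplified two-layer ReLU setting in which the first-layer weights are random and fixed and only the second layer is fit by min-norm least squares on ±1 labels from a hypothetical linear teacher. The sample size, dimension, noise level, and width grid are all illustrative choices; the test error typically peaks when the hidden width is near the number of training samples and falls again as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_features(X, W):
    """Hidden-layer activations of a two-layer ReLU network with fixed first layer W."""
    return np.maximum(X @ W, 0.0)

def run_trial(width, n_train=200, n_test=2000, d=20, noise=0.5):
    """One draw: sample data from a noisy linear teacher, fit the second layer, return test error."""
    w_star = rng.standard_normal(d)                          # hypothetical teacher direction
    X_tr = rng.standard_normal((n_train, d)) / np.sqrt(d)
    X_te = rng.standard_normal((n_test, d)) / np.sqrt(d)
    y_tr = np.sign(X_tr @ w_star + noise * rng.standard_normal(n_train))
    y_te = np.sign(X_te @ w_star + noise * rng.standard_normal(n_test))
    W = rng.standard_normal((d, width)) / np.sqrt(d)         # random, fixed first-layer weights
    H_tr, H_te = relu_features(X_tr, W), relu_features(X_te, W)
    a = np.linalg.pinv(H_tr) @ y_tr                          # min-norm least-squares second layer
    return np.mean(np.sign(H_te @ a) != y_te)                # binary classification test error

# Sweep the hidden width through the interpolation threshold (width ~ n_train = 200).
widths = [5, 20, 50, 100, 150, 190, 200, 210, 250, 400, 800, 1600]
for m in widths:
    err = np.mean([run_trial(m) for _ in range(5)])          # average a few random draws
    print(f"width={m:5d}  samples/width={200 / m:6.2f}  test error={err:.3f}")
```

Printing the error against the samples-to-width ratio makes the two descents visible in the console output; plotting the same numbers gives the familiar double descent curve.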
Similar Papers
A dynamic view of some anomalous phenomena in SGD
Optimization and Control
Helps computers learn better by finding hidden patterns.
On the Relationship Between Double Descent of CNNs and Shape/Texture Bias Under Learning Process
CV and Pattern Recognition
Helps computers see better by understanding shapes and textures.
Double Descent and Overparameterization in Particle Physics Data
High Energy Physics - Experiment
Makes computer models better at guessing physics results.