On the Convergence of Overparameterized Problems: Inherent Properties of the Compositional Structure of Neural Networks
By: Arthur Castello Branco de Oliveira, Dhruv Jatkar, Eduardo Sontag
Potential Business Impact:
Could make AI training faster by exploiting the network's own structure.
This paper investigates how the compositional structure of neural networks shapes their optimization landscape and training dynamics. We analyze the gradient flow associated with overparameterized optimization problems, which can be interpreted as training a neural network with linear activations. Remarkably, we show that global convergence properties can be derived for any cost function that is proper and real analytic. We then specialize the analysis to scalar-valued cost functions, where the geometry of the landscape can be fully characterized. In this setting, we demonstrate that key structural features -- such as the location and stability of saddle points -- are universal across all admissible costs, depending solely on the overparameterized representation rather than on problem-specific details. Moreover, we show that convergence can be arbitrarily accelerated depending on the initialization, as quantified by an imbalance metric introduced in this work. Finally, we discuss how these insights may generalize to neural networks with sigmoidal activations, showing through a simple example which geometric and dynamical properties persist beyond the linear case.
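As an illustrative sketch of the kind of dynamics the abstract refers to, consider the simplest overparameterized problem, a depth-two scalar factorization. The factorization $\theta = uv$ and the quantity $u^2 - v^2$ below are standard illustrative choices assumed here for concreteness; they are not necessarily the paper's exact overparameterization or its imbalance metric. Replacing a scalar cost $H(\theta)$ by $H(uv)$ and running gradient flow on $(u, v)$,

\[
\dot u = -v\,H'(uv), \qquad \dot v = -u\,H'(uv),
\]

the imbalance $\delta := u^2 - v^2$ is conserved along trajectories, since $\tfrac{d}{dt}(u^2 - v^2) = 2u\dot u - 2v\dot v = -2uv\,H'(uv) + 2uv\,H'(uv) = 0$, while the product $\theta := uv$ evolves as

\[
\dot\theta = \dot u\,v + u\,\dot v = -(u^2 + v^2)\,H'(\theta) = -\sqrt{\delta^2 + 4\theta^2}\;H'(\theta).
\]

In this toy setting the overparameterization rescales the effective step size by $\sqrt{\delta^2 + 4\theta^2} \ge |\delta|$, so the more imbalanced the initialization, the faster the induced flow on $\theta$ -- a minimal version of the acceleration-by-initialization phenomenon described above.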
Similar Papers
Entropic Confinement and Mode Connectivity in Overparameterized Neural Networks
Machine Learning (CS)
Keeps AI focused on one good answer.
Scalable Evaluation and Neural Models for Compositional Generalization
Machine Learning (CS)
Teaches computers to understand new things from old.
Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics
Machine Learning (CS)
Makes AI games find fair wins for everyone.