A Saddle Point Remedy: Power of Variable Elimination in Non-convex Optimization

Published: November 3, 2025 | arXiv ID: 2511.01234v1

By: Min Gan, Guang-Yong Chen, Yang Yi, and more

Potential Business Impact:

Shows why eliminating variables makes hard optimization problems easier to solve, enabling more stable and efficient training of machine-learning models.

Business Areas:
A/B Testing, Data and Analytics

The proliferation of saddle points, rather than poor local minima, is increasingly understood to be a primary obstacle in large-scale non-convex optimization for machine learning. Variable elimination algorithms, such as Variable Projection (VarPro), have long been observed to exhibit superior convergence and robustness in practice, yet a principled understanding of why they so effectively navigate these complex energy landscapes has remained elusive. In this work, we provide a rigorous geometric explanation by comparing the optimization landscapes of the original and reduced formulations. Through an analysis based on Hessian inertia and the Schur complement, we prove that variable elimination fundamentally reshapes the critical point structure of the objective function, revealing that local maxima in the reduced landscape are created from, and correspond directly to, saddle points in the original formulation. Our findings are illustrated on the canonical problem of non-convex matrix factorization, visualized directly on two-parameter neural networks, and finally validated in training deep Residual Networks, where our approach yields dramatic improvements in stability and convergence to superior minima. This work goes beyond explaining an existing method; it establishes landscape simplification via saddle point transformation as a powerful principle that can guide the design of a new generation of more robust and efficient optimization algorithms.
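
To make the Hessian-inertia argument concrete, the following is the standard Schur-complement identity that analyses of this kind rest on; the notation here is illustrative and need not match the paper's exact statement.

```latex
% Hessian of f(u, v) at a critical point, in block form:
%   H = [ H_uu   H_uv ]
%       [ H_uv'  H_vv ]
% If H_vv is positive definite, the reduced function g(u) = min_v f(u, v)
% has Hessian equal to the Schur complement of H_vv in H, and Haynsworth's
% inertia additivity formula splits the inertia of H accordingly:
\nabla^2 g(u) \;=\; H_{uu} - H_{uv} H_{vv}^{-1} H_{uv}^{\top} \;=:\; H / H_{vv},
\qquad
\operatorname{In}(H) \;=\; \operatorname{In}(H_{vv}) + \operatorname{In}(H / H_{vv}).
```

Every negative eigenvalue of $H$ is thus inherited by the reduced Hessian: a saddle point of the full objective whose positive curvature lives entirely in the eliminated block becomes a strict local maximum of the reduced objective, which is exactly the correspondence the abstract describes.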
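Below is a minimal NumPy/SciPy sketch of variable elimination on the canonical matrix-factorization problem the abstract mentions: the inner factor is solved in closed form by least squares, so the outer optimizer only ever sees the reduced landscape. Problem sizes, variable names, and the choice of L-BFGS are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic exactly rank-r target matrix A (m x n).
m, n, r = 20, 15, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

def reduced_objective(u_flat):
    """Reduced objective after eliminating the second factor:
    g(U) = min_V ||A - U V^T||_F^2 = ||(I - P_U) A||_F^2,
    where P_U projects onto range(U). The inner minimization
    is an ordinary least-squares solve."""
    U = u_flat.reshape(m, r)
    X, *_ = np.linalg.lstsq(U, A, rcond=None)  # U @ X equals P_U @ A
    return np.linalg.norm(A - U @ X) ** 2

# Optimize over U alone; V is recovered in closed form at the end.
u0 = rng.standard_normal(m * r)
res = minimize(reduced_objective, u0, method="L-BFGS-B")

U_opt = res.x.reshape(m, r)
Vt_opt, *_ = np.linalg.lstsq(U_opt, A, rcond=None)
print(f"reduced objective at solution: {res.fun:.3e}")  # ~0 for exact rank-r A
```

The joint problem over both factors is riddled with saddle points at rank-deficient critical points; per the paper's analysis, the elimination step reshapes many of these into local maxima of the reduced objective, which descent methods avoid by construction.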

Country of Origin
🇨🇳 China

Page Count
33 pages

Category
Computer Science:
Machine Learning (CS)