The Geometry of Machine Learning Models
By: Pawel Gajer, Jacques Ravel
Potential Business Impact:
Makes machine learning models easier to interpret, regularize, and diagnose.
This paper presents a mathematical framework for analyzing machine learning models through the geometry of their induced partitions. By representing partitions as Riemannian simplicial complexes, we capture not only adjacency relationships but also geometric properties including cell volumes, volumes of faces where cells meet, and dihedral angles between adjacent cells. For neural networks, we introduce a differential-forms approach that tracks geometric structure through layers via pullback operations, making computations tractable by focusing on data-containing cells. The framework enables geometric regularization that directly penalizes problematic spatial configurations and provides new tools for model refinement through extended Laplacians and simplicial splines. We also explore how the data distribution induces effective geometric curvature in model partitions, developing discrete curvature measures for vertices that quantify local geometric complexity and a statistical Ricci curvature for edges that captures pairwise relationships between cells. While the paper focuses on mathematical foundations, this geometric perspective offers new approaches to model interpretation, regularization, and diagnostic tools for understanding learning dynamics.
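To make the abstract's objects concrete, here is a minimal sketch (not the authors' code) of one instance of the framework for a toy ReLU network: the induced partition into linear regions ("cells") restricted to data-containing cells, empirical cell masses, a heuristic cell adjacency, bending (dihedral-style) angles across shared faces, a mass-weighted cell-graph Laplacian, and a geometric regularization penalty on sharp bends. The Hamming-distance-1 adjacency rule and the names `bending_angle` and `walk_measure` are illustrative assumptions, not the paper's definitions or API.

```python
# Sketch: geometric quantities on the partition induced by a toy ReLU network.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network f(x) = w2 . relu(W1 x + b1) + b2.
d, h = 2, 8                      # input dimension, hidden width
W1 = rng.normal(size=(h, d))
b1 = rng.normal(size=h)
w2 = rng.normal(size=h)
b2 = 0.0

X = rng.normal(size=(500, d))    # stand-in data set

# Each point's activation pattern indexes the linear region (cell) containing it.
patterns = (X @ W1.T + b1 > 0)   # shape (n, h), boolean

# Group points by pattern: only data-containing cells are kept (the tractability
# device the abstract mentions), with empirical masses standing in for volumes.
cells = {}
for i, p in enumerate(map(tuple, patterns)):
    cells.setdefault(p, []).append(i)
cell_keys = list(cells)
mass = np.array([len(cells[k]) / len(X) for k in cell_keys])

def gradient(pattern):
    """Gradient of f on the cell with this activation pattern (f is affine there)."""
    return W1.T @ (np.asarray(pattern, float) * w2)

def bending_angle(p, q):
    """Angle between the graph normals (grad f, -1) of two adjacent linear pieces:
    a dihedral-style measure of how sharply f bends across the shared face."""
    ni = np.append(gradient(p), -1.0)
    nj = np.append(gradient(q), -1.0)
    c = ni @ nj / (np.linalg.norm(ni) * np.linalg.norm(nj))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# Heuristic adjacency: patterns differing in exactly one unit are treated as
# sharing a facet on that unit's hyperplane (an assumption; exact adjacency
# would require a feasibility check on the shared face).
n_cells = len(cell_keys)
A = np.zeros((n_cells, n_cells))
penalty = 0.0
for i, j in itertools.combinations(range(n_cells), 2):
    p, q = cell_keys[i], cell_keys[j]
    if sum(a != b for a, b in zip(p, q)) == 1:
        theta = bending_angle(p, q)
        w = np.sqrt(mass[i] * mass[j])   # mass-weighted edge, one possible choice
        A[i, j] = A[j, i] = w
        penalty += w * theta**2          # geometric regularizer: penalize sharp bends

L = np.diag(A.sum(axis=1)) - A           # weighted cell-graph Laplacian

print(f"{n_cells} data-containing cells out of 2^{h} possible patterns")
print(f"geometric bending penalty: {penalty:.4f}")
print(f"Laplacian spectral gap: {np.sort(np.linalg.eigvalsh(L))[1]:.4f}")
```

The Laplacian built here is the plain graph Laplacian of the cell-adjacency graph; the paper's extended Laplacians and simplicial splines would carry more structure (face volumes, higher-order simplices), which this sketch deliberately omits.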
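The abstract also introduces a statistical Ricci curvature for edges between cells. The paper's definition may differ in detail; the sketch below uses the standard Ollivier-style construction as one plausible instance: curvature on an edge is positive when the random-walk neighborhoods of two adjacent cells overlap more than their graph distance suggests, and negative when they spread apart. The small adjacency matrix and masses are stand-ins for the data-derived quantities built above.

```python
# Sketch: Ollivier-style Ricci curvature on edges of a cell-adjacency graph.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import shortest_path

# Tiny cell-adjacency graph with empirical cell masses (illustrative values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
mass = np.array([0.4, 0.3, 0.2, 0.1])
D = shortest_path(A, method="D", unweighted=True)   # graph distances between cells

def walk_measure(i, alpha=0.5):
    """Lazy random-walk measure at cell i: stay with probability alpha, else move
    to a neighbor with probability proportional to the neighbor's cell mass."""
    mu = np.zeros(len(mass))
    mu[i] = alpha
    nbrs = np.flatnonzero(A[i])
    mu[nbrs] = (1 - alpha) * mass[nbrs] / mass[nbrs].sum()
    return mu

def wasserstein1(mu, nu):
    """W1 distance via the optimal-transport linear program with ground metric D."""
    n = len(mu)
    c = D.reshape(-1)                        # cost of moving mass from i to j
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1       # sum_j pi[i, j] = mu[i]
        A_eq[n + i, i::n] = 1                # sum_i pi[i, j] = nu[j]
    res = linprog(c, A_eq=A_eq, b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

# Edge curvature: kappa(i, j) = 1 - W1(mu_i, mu_j) / d(i, j).
for i, j in zip(*np.triu_indices_from(A, k=1)):
    if A[i, j]:
        kappa = 1 - wasserstein1(walk_measure(i), walk_measure(j)) / D[i, j]
        print(f"edge ({i}, {j}): statistical Ricci curvature = {kappa:+.3f}")
```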
Similar Papers
Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization
Machine Learning (CS)
Builds adaptive manifold models by optimizing the metric they learn on.
A roadmap for curvature-based geometric data analysis and learning
Machine Learning (CS)
Maps out how curvature can be used to analyze and learn from data.
Emergent Riemannian geometry over learning discrete computations on continuous manifolds
Machine Learning (CS)
Studies the Riemannian geometry that emerges when models learn discrete computations on continuous manifolds.