Scale-Agnostic Kolmogorov-Arnold Geometry in Neural Networks
By: Mathew Vanherreweghe, Michael H. Freedman, Keith M. Adams
Potential Business Impact: Neural networks learn by organizing data like a map.
Recent work by Freedman and Mulligan demonstrated that shallow multilayer perceptrons spontaneously develop Kolmogorov-Arnold geometric (KAG) structure during training on synthetic three-dimensional tasks. However, it remained unclear whether this phenomenon persists in realistic high-dimensional settings and what spatial properties this geometry exhibits. We extend KAG analysis to MNIST digit classification (784 dimensions) using 2-layer MLPs with systematic spatial analysis at multiple scales. We find that KAG emerges during training and appears consistently across spatial scales, from local 7-pixel neighborhoods to the full 28x28 image. This scale-agnostic property holds across different training procedures: both standard training and training with spatial augmentation produce the same qualitative pattern. These findings reveal that neural networks spontaneously develop organized, scale-invariant geometric structure during learning on realistic high-dimensional data.
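To make the setup concrete, the sketch below illustrates the kind of pipeline the abstract describes: a 2-layer MLP on 784-dimensional MNIST-shaped inputs, probed at several spatial scales by restricting attention to square pixel neighborhoods. It is a minimal, hypothetical illustration, not the authors' code: the hidden width (256), the patch placement, the random stand-in data, and the kag_proxy statistic (a simple effective-rank measure standing in for the paper's actual KAG measurement) are all assumptions made for demonstration.

```python
# Minimal sketch (not the authors' code) of a multi-scale KAG-style probe.
# Assumptions: 2-layer MLP with hidden width 256, square patches anchored at the
# top-left corner, random stand-in data instead of MNIST, and a placeholder
# "kag_proxy" statistic in place of the paper's KAG measurement.
import torch
import torch.nn as nn

class TwoLayerMLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=256, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def patch_mask(size, img_side=28, top=0, left=0):
    """Boolean mask selecting a size x size neighborhood of the 28x28 image."""
    mask = torch.zeros(img_side, img_side, dtype=torch.bool)
    mask[top:top + size, left:left + size] = True
    return mask.flatten()

def kag_proxy(hidden):
    """Placeholder geometric statistic on hidden activations.

    Stands in for the paper's KAG measurement (not specified here); reports the
    effective rank of the activation covariance via its eigenvalue entropy.
    """
    cov = torch.cov(hidden.T)
    eig = torch.linalg.eigvalsh(cov).clamp(min=1e-12)
    p = eig / eig.sum()
    return torch.exp(-(p * p.log()).sum()).item()

model = TwoLayerMLP()
x = torch.rand(512, 784)  # stand-in batch; the experiments use MNIST digits

for scale in (7, 14, 28):                    # local -> global neighborhoods
    masked = x * patch_mask(scale).float()   # keep only pixels in the patch
    h = torch.relu(model.fc1(masked))        # first-layer representation
    print(f"{scale:2d}-pixel neighborhood: proxy = {kag_proxy(h):.2f}")
```

In this reading, "scale-agnostic" would correspond to the geometric statistic telling a qualitatively similar story at each patch size; the actual metric and training procedure are those of the paper, not the placeholders used here.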
Similar Papers
Spontaneous Kolmogorov-Arnold Geometry in Shallow MLPs
Machine Learning (CS)
Helps computers learn better by understanding data texture.
Series of quasi-uniform scatterings with fast search, root systems and neural network classifications
Algebraic Geometry
Teaches computers to learn new things faster.
KAN or MLP? Point Cloud Shows the Way Forward
Computer Vision and Pattern Recognition
Helps computers understand 3D shapes better.