Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry
By: Chi-Ning Chou, Hang Le, Yichen Wang, and more
Potential Business Impact:
Shows how neural networks untangle task-relevant information as they learn, revealing learning strategies beyond the standard lazy-rich view.
Integrating task-relevant information into neural representations is a fundamental ability of both biological and artificial intelligence systems. Recent theories have categorized learning into two regimes: the rich regime, where neural networks actively learn task-relevant features, and the lazy regime, where networks behave like random feature models. Yet this simple lazy-rich dichotomy overlooks a diverse underlying taxonomy of feature learning, shaped by differences in learning algorithms, network architectures, and data properties. To address this gap, we introduce an analysis framework to study feature learning via the geometry of neural representations. Rather than inspecting individual learned features, we characterize how task-relevant representational manifolds evolve throughout the learning process. We show, in both theoretical and empirical settings, that as networks learn features, task-relevant manifolds untangle, with changes in manifold geometry revealing distinct learning stages and strategies beyond the lazy-rich dichotomy. This framework provides novel insights into feature learning across neuroscience and machine learning, shedding light on structural inductive biases in neural circuits and the mechanisms underlying out-of-distribution generalization.
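To make the idea of tracking manifold geometry during learning concrete, below is a minimal sketch, not the authors' actual framework, of how one might summarize the geometry of class-conditional representation manifolds (per-class radius, participation-ratio dimension, and separation between class centers) across training checkpoints of a toy network. All names here (manifold_geometry, the toy MLP, the synthetic data) are hypothetical stand-ins for illustration; the paper's own analysis uses manifold-capacity-style measures rather than these simple summaries.

```python
# A minimal sketch (assumed, not the paper's exact method) of tracking how
# class-conditional manifolds of hidden representations evolve during training.
import numpy as np
import torch
import torch.nn as nn

def manifold_geometry(acts: np.ndarray, labels: np.ndarray):
    """Summarize per-class manifold geometry of hidden activations.

    Returns mean radius, mean participation-ratio dimension, and the
    mean pairwise distance between class centers (a crude "untangling" signal).
    """
    radii, dims, centers = [], [], []
    for c in np.unique(labels):
        X = acts[labels == c]                      # points on one class manifold
        center = X.mean(axis=0)
        centers.append(center)
        deltas = X - center
        radii.append(np.linalg.norm(deltas, axis=1).mean())
        # Participation ratio: (sum of eigenvalues)^2 / sum of squared eigenvalues.
        eigs = np.linalg.eigvalsh(np.cov(deltas.T))
        dims.append(eigs.sum() ** 2 / (eigs ** 2).sum())
    centers = np.stack(centers)
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sep = dists[np.triu_indices(len(centers), k=1)].mean()
    return np.mean(radii), np.mean(dims), sep

# Toy setup: a small MLP on synthetic two-class data.
torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(51):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if epoch % 10 == 0:
        with torch.no_grad():
            hidden = model[1](model[0](X)).numpy()   # post-ReLU hidden layer
        r, d, s = manifold_geometry(hidden, y.numpy())
        print(f"epoch {epoch:3d}  loss {loss.item():.3f}  "
              f"radius {r:.2f}  dim {d:.2f}  center-sep {s:.2f}")
```

Under these assumptions, a run in which class centers separate while per-class radius and dimension shrink would correspond to the "untangling" picture described in the abstract; a run in which the geometry barely changes would look more like the lazy regime.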
Similar Papers
Neural Feature Geometry Evolves as Discrete Ricci Flow
Machine Learning (CS)
Describes how the geometry of learned neural features evolves, drawing an analogy to discrete Ricci flow.
Emergent Riemannian geometry over learning discrete computations on continuous manifolds
Machine Learning (CS)
Shows how Riemannian geometric structure emerges as networks learn discrete computations on continuous manifolds.
Why all roads don't lead to Rome: Representation geometry varies across the human visual cortical hierarchy
Neurons and Cognition
Shows that representational geometry varies across the human visual cortical hierarchy.