Bhargava Cube-Inspired Quadratic Regularization for Structured Neural Embeddings
By: S Sairam, Prateek P Kulkarni
We present a novel approach to neural representation learning that incorporates algebraic constraints inspired by Bhargava cubes from number theory. Traditional deep learning methods learn representations in unstructured latent spaces that lack interpretability and mathematical consistency. Our framework maps input data to a constrained 3-dimensional latent space in which embeddings are regularized to satisfy learned quadratic relationships derived from Bhargava's combinatorial structures. The architecture employs a differentiable auxiliary loss function that operates independently of the classification objective, guiding the model toward mathematically structured representations. We evaluate on MNIST, achieving 99.46% accuracy while producing interpretable 3D embeddings that cluster naturally by digit class and satisfy the learned quadratic constraints. Unlike existing manifold learning approaches, which require explicit geometric supervision, our method imposes weak algebraic priors through differentiable constraints, ensuring compatibility with standard optimization. This represents the first application of number-theoretic constructs to neural representation learning, establishing a foundation for incorporating structured mathematical priors into neural networks.
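The abstract does not spell out the exact form of the auxiliary loss, so the following is a minimal PyTorch sketch of one plausible reading: each 3-D embedding z is pushed toward the zero set of a learned quadratic form q(z) = z^T A z + b^T z + c, added as a weighted term alongside the usual cross-entropy. All names here (QuadraticRegularizer, lambda_quad, the encoder sizes) are illustrative assumptions, not the authors' published code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuadraticRegularizer(nn.Module):
    """Hypothetical auxiliary loss: penalizes deviation of 3-D embeddings
    from the zero set of a learned quadratic form q(z) = z^T A z + b^T z + c.
    This is a sketch of the paper's stated idea, not the authors' method."""

    def __init__(self, dim: int = 3):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, dim) * 0.1)  # learned quadratic coefficients
        self.b = nn.Parameter(torch.zeros(dim))             # learned linear coefficients
        self.c = nn.Parameter(torch.zeros(()))              # learned constant term

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Symmetrize A so q is a well-defined quadratic form, then evaluate per batch item.
        A_sym = 0.5 * (self.A + self.A.t())
        q = torch.einsum('bi,ij,bj->b', z, A_sym, z) + z @ self.b + self.c
        return (q ** 2).mean()  # drive embeddings toward the quadric q(z) = 0

class Classifier(nn.Module):
    """Toy MNIST-style classifier with a 3-D bottleneck embedding."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 3)
        )
        self.head = nn.Linear(3, 10)

    def forward(self, x):
        z = self.encoder(x)            # constrained 3-D embedding
        return self.head(z), z

model, reg = Classifier(), QuadraticRegularizer(dim=3)
opt = torch.optim.Adam(list(model.parameters()) + list(reg.parameters()), lr=1e-3)
lambda_quad = 0.1  # assumed weight of the auxiliary term

x = torch.randn(64, 1, 28, 28)          # stand-in batch of MNIST-shaped inputs
y = torch.randint(0, 10, (64,))
logits, z = model(x)
loss = F.cross_entropy(logits, y) + lambda_quad * reg(z)
opt.zero_grad()
loss.backward()
opt.step()
```

Note that as written the form has a trivial minimizer (A, b, c all zero); in practice one would fix a coefficient (e.g., c = 1) or normalize the parameters, a detail the paper itself would have to resolve.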