Using predefined vector systems as latent space configuration for neural network supervised training on data with arbitrarily large number of classes
By: Nikita Gabdullin
Potential Business Impact:
Lets the same computer model learn to recognize millions of categories without being redesigned.
Supervised learning (SL) methods are indispensable for training neural networks (NNs) to perform classification tasks. While yielding very high accuracy, SL training often requires making the number of NN parameters dependent on the number of classes, which limits applicability when the number of classes is extremely large or unknown in advance. In this paper we propose a methodology that allows the same NN architecture to be trained regardless of the number of classes. This is achieved by using predefined vector systems as the target latent space configuration (LSC) during NN training. We discuss the desired properties of target configurations and choose randomly perturbed vectors of the A_n root system for our experiments. These vectors are used to successfully train encoders and vision transformers (ViT) on CINIC-10 and ImageNet-1K in low- and high-dimensional cases by matching NN predictions with the predefined vectors. Finally, a ViT is trained on a dataset with 1.28 million classes, illustrating the applicability of the method to datasets with extremely large numbers of classes. In addition, potential applications of LSC in lifelong learning and NN distillation are discussed, illustrating the versatility of the proposed methodology.
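To make the mechanism concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: each class is assigned a fixed, randomly perturbed A_n root-system vector, and the network is trained to match its output to the target vector of the input's class. This is an illustrative sketch rather than the authors' implementation; the toy encoder, cosine-matching loss, perturbation scale, and dimensions are assumptions made here for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def a_n_root_vectors(n: int) -> torch.Tensor:
    """Positive roots e_i - e_j (i < j) of the A_n root system, embedded in R^(n+1)."""
    roots = []
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            v = torch.zeros(n + 1)
            v[i], v[j] = 1.0, -1.0
            roots.append(v)
    return torch.stack(roots)  # shape: (n*(n+1)/2, n+1)

def make_targets(num_classes: int, dim_n: int, noise: float = 0.05, seed: int = 0) -> torch.Tensor:
    """Pick num_classes randomly perturbed root vectors as fixed (non-learned) class targets."""
    g = torch.Generator().manual_seed(seed)
    roots = a_n_root_vectors(dim_n)
    idx = torch.randperm(roots.shape[0], generator=g)[:num_classes]
    targets = roots[idx] + noise * torch.randn(num_classes, dim_n + 1, generator=g)
    return F.normalize(targets, dim=1)  # unit-norm targets

# Toy encoder mapping 3x32x32 images into the target space;
# its architecture does not depend on the number of classes.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 11))

targets = make_targets(num_classes=10, dim_n=10)  # fixed target LSC, never trained
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pull each prediction toward its class's predefined vector (cosine matching)."""
    z = F.normalize(encoder(images), dim=1)
    loss = (1.0 - F.cosine_similarity(z, targets[labels], dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

def predict(images: torch.Tensor) -> torch.Tensor:
    """Assign the class whose target vector is most similar to the network output."""
    z = F.normalize(encoder(images), dim=1)
    return (z @ targets.T).argmax(dim=1)
```

Because classification reduces to a nearest-target lookup, growing the number of classes under this scheme only means selecting more target vectors; the network itself needs no class-dependent output head.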
Similar Papers
Series of quasi-uniform scatterings with fast search, root systems and neural network classifications
Algebraic Geometry
Teaches computers to learn new things faster.
Deep Hierarchical Learning with Nested Subspace Networks
Machine Learning (CS)
Lets one smart computer program use less power.
Efficient Long-Tail Learning in Latent Space by sampling Synthetic Data
Machine Learning (CS)
Makes computer learning fair for rare things.