Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport
By: Antonio Candelieri, Alessandro Quadrio
This thesis examines the training of self-attention through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks the intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including the Wasserstein distance, the Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP block on synthetic data partially improves convergence but is sensitive to initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment between the data and these dummies, and trains an MLP to generalize the resulting mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful design of the dummy geometry. All experiments and implementations are carried out in R.
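As a concrete illustration of the proposed pipeline, the sketch below implements in R a minimal version of the three steps described above: class-specific dummy Gaussians, a discrete OT alignment between the data and the dummies, and an MLP trained to generalize that alignment. This is not the thesis code; the toy two-class data, the choice of the clue (Hungarian assignment) and nnet packages, and the nearest-dummy decision rule are assumptions made for the example.

# Minimal sketch of the OT-based alternative (assumed toy data and packages, not the thesis code)
library(clue)   # solve_LSAP(): Hungarian algorithm for the balanced assignment problem
library(nnet)   # nnet(): single-hidden-layer MLP used here as a stand-in regressor

set.seed(1)

# Toy standardized two-class data with two features
n <- 100
X <- rbind(matrix(rnorm(n * 2, mean = 0, sd = 1), ncol = 2),
           matrix(rnorm(n * 2, mean = 2, sd = 1), ncol = 2))
y <- rep(0:1, each = n)

# Step 1: class-specific dummy Gaussian targets with well-separated means
dummies <- rbind(matrix(rnorm(n * 2, mean = -4, sd = 0.5), ncol = 2),
                 matrix(rnorm(n * 2, mean =  4, sd = 0.5), ncol = 2))

# Step 2: discrete OT alignment per class (uniform weights, squared-Euclidean cost),
# solved as an optimal assignment; also returns the resulting W2 distance
ot_align <- function(src, tgt) {
  D    <- as.matrix(dist(rbind(src, tgt)))              # all pairwise distances
  cost <- D[1:nrow(src), nrow(src) + 1:nrow(tgt)]^2     # source-to-target squared costs
  perm <- as.integer(clue::solve_LSAP(cost))            # assignment minimizing total cost
  list(target = tgt[perm, , drop = FALSE],
       w2     = sqrt(mean(cost[cbind(1:nrow(src), perm)])))
}

aligned <- X
for (k in 0:1) {
  res <- ot_align(X[y == k, ], dummies[y == k, ])
  aligned[y == k, ] <- res$target
  cat("Class", k, "W2 between data and dummy:", round(res$w2, 3), "\n")
}

# Step 3: train an MLP to generalize the OT mapping X -> aligned dummy points
mlp <- nnet::nnet(x = X, y = aligned, size = 16, linout = TRUE,
                  maxit = 500, trace = FALSE)

# Classify by pushing points through the MLP and picking the nearest dummy centroid
pred       <- predict(mlp, X)
centroids  <- rbind(colMeans(dummies[y == 0, ]), colMeans(dummies[y == 1, ]))
d_to       <- function(pts, c) rowSums(sweep(pts, 2, c)^2)
pred_class <- ifelse(d_to(pred, centroids[2, ]) < d_to(pred, centroids[1, ]), 1, 0)
cat("Training accuracy of the sketch:", mean(pred_class == y), "\n")

With uniform weights and a squared-Euclidean cost, the discrete OT problem between two equally sized point clouds reduces to a balanced assignment, which is why a Hungarian solver suffices here; other costs or unbalanced weights would require a general OT solver.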