Sparse Transformer Architectures via Regularized Wasserstein Proximal Operator with $L_1$ Prior
By: Fuqun Han, Stanley Osher, Wuchen Li
Potential Business Impact:
Makes generative AI models learn faster and produce more accurate, sparser outputs.
In this work, we propose a sparse transformer architecture that incorporates prior information about the underlying data distribution directly into the transformer structure of the network. The design is motivated by a special optimal transport problem, the regularized Wasserstein proximal operator, which admits a closed-form solution that can itself be read as a particular transformer architecture. Compared with classical flow-based models, the proposed approach improves the convexity of the underlying optimization problem and promotes sparsity in the generated samples. Through both theoretical analysis and numerical experiments, including applications to generative modeling and Bayesian inverse problems, we demonstrate that the sparse transformer achieves higher accuracy and faster convergence to the target distribution than classical neural ODE-based methods.
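The abstract points out that the regularized Wasserstein proximal operator has a closed-form solution whose structure matches softmax attention. As a rough illustration of that connection only, and not the paper's actual algorithm, the sketch below implements an attention-style particle update in NumPy: pairwise transport costs and a potential V, here a Gaussian term plus an L1 sparsity prior, form the softmax logits, and each particle is mapped to an attention-weighted average of the ensemble. The function names (rwpo_attention_step, soft_threshold) and the parameters beta, T, and lam are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def soft_threshold(y, lam):
    """Elementwise soft-thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def rwpo_attention_step(x, V, beta=1.0, T=0.1):
    """
    One attention-like particle update suggested by the kernel form of a
    regularized Wasserstein proximal operator (illustrative sketch).

    x    : (n, d) array of particles approximating the current density
    V    : callable mapping (n, d) -> (n,), potential of the target exp(-V)
    beta : regularization strength (assumed parameter)
    T    : proximal step size (assumed parameter)
    """
    # Pairwise squared distances ||x_i - x_j||^2, shape (n, n)
    sq_dist = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    # Logits combine the transport cost and the potential at the "keys"
    logits = -(V(x)[None, :] + sq_dist / (2.0 * T)) / (2.0 * beta)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # softmax over particles
    return w @ x                                   # attention-weighted mean

# Illustrative target with an L1 (sparsity) prior:
# V(x) = 0.5 * ||x - mu||^2 / sigma^2 + lam * ||x||_1
rng = np.random.default_rng(0)
mu, sigma, lam = np.array([1.0, 0.0, 0.0]), 1.0, 0.5
V = lambda x: (0.5 * np.sum((x - mu) ** 2, axis=-1) / sigma**2
               + lam * np.sum(np.abs(x), axis=-1))

particles = rng.normal(size=(256, 3))
updated = rwpo_attention_step(particles, V, beta=0.5, T=0.1)
updated = soft_threshold(updated, lam * 0.1)       # optional explicit sparsity step
```

The softmax weights play the role of attention scores, which is the structural analogy the abstract refers to; adding the L1 term to the potential (and, optionally, a soft-thresholding step) is one simple way such a prior could bias the update toward sparse samples.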
Similar Papers
Preconditioned Regularized Wasserstein Proximal Sampling
Machine Learning (Stat)
Speeds up computer learning for tough problems.
Neural Local Wasserstein Regression
Machine Learning (Stat)
Teaches computers to understand complex data patterns.
Accelerated Regularized Wasserstein Proximal Sampling Algorithms
Machine Learning (Stat)
Makes computer learning faster and better.