Sparse Transformer Architectures via Regularized Wasserstein Proximal Operator with $L_1$ Prior

Published: October 18, 2025 | arXiv ID: 2510.16356v1

By: Fuqun Han, Stanley Osher, Wuchen Li

Potential Business Impact:

Could let generative AI models converge to a target distribution faster while producing more accurate, sparser outputs.

Business Areas:
A/B Testing, Data and Analytics

In this work, we propose a sparse transformer architecture that incorporates prior information about the underlying data distribution directly into the transformer structure of the neural network. The design is motivated by a special optimal transport problem, the regularized Wasserstein proximal operator, which admits a closed-form solution that turns out to be a particular instance of the transformer architecture. Compared with classical flow-based models, the proposed approach improves the convexity properties of the optimization problem and promotes sparsity in the generated samples. Through theoretical analysis and numerical experiments, including applications in generative modeling and Bayesian inverse problems, we demonstrate that the sparse transformer achieves higher accuracy and faster convergence to the target distribution than classical neural ODE-based methods.
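As a rough illustration of how such an operator can resemble a transformer layer, the sketch below records the kernel formula that regularized Wasserstein proximal operators admit in closely related work by the same group; the terminal time $T$, regularization strength $\beta$, and potential $V$ (into which an $L_1$ prior term such as $\lambda\|x\|_1$ would enter) are notational assumptions here, not quantities quoted from this paper.

\[
\rho_T(x) \;=\; \int_{\mathbb{R}^d} K(x,y)\,\rho_0(y)\,\mathrm{d}y,
\qquad
K(x,y) \;=\;
\frac{\exp\!\Big(-\tfrac{1}{2\beta}\big(V(x) + \tfrac{\|x-y\|^2}{2T}\big)\Big)}
     {\displaystyle\int_{\mathbb{R}^d}\exp\!\Big(-\tfrac{1}{2\beta}\big(V(z) + \tfrac{\|z-y\|^2}{2T}\big)\Big)\,\mathrm{d}z}.
\]

For a finite set of particles, the normalized exponential weights over pairwise squared distances play the same role as softmax attention scores, which is the sense in which the closed-form solution resembles a transformer block; an $L_1$ term inside $V$ then biases the generated samples toward sparsity.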

Country of Origin
🇺🇸 United States

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)