Improving Matrix Exponential for Generative AI Flows: A Taylor-Based Approach Beyond Paterson--Stockmeyer
By: Jorge Sastre, Daniel Faronbi, José Miguel Alonso, and more
The matrix exponential is a fundamental operator in scientific computing and system simulation, with applications ranging from control theory and quantum mechanics to modern generative machine learning. While Padé approximants combined with scaling and squaring have long served as the standard, recent Taylor-based methods, which utilize polynomial evaluation schemes that surpass the classical Paterson--Stockmeyer technique, offer superior accuracy and reduced computational complexity. This paper presents an optimized Taylor-based algorithm for the matrix exponential, specifically designed for the high-throughput requirements of generative AI flows. We provide a rigorous error analysis and develop a dynamic selection strategy for the Taylor order and scaling factor to minimize computational effort under a prescribed error tolerance. Extensive numerical experiments demonstrate that our approach provides significant acceleration and maintains high numerical stability compared to existing state-of-the-art implementations. These results establish the proposed method as a highly efficient tool for large-scale generative modeling.
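To make the core idea concrete, the following is a minimal sketch of a Taylor-based matrix exponential with scaling and squaring. It is illustrative only: it evaluates the truncated Taylor series with plain Horner recursion and a simple norm-based scaling heuristic, not the optimized beyond-Paterson--Stockmeyer evaluation schemes or the dynamic order/scaling selection strategy developed in the paper. The function name `expm_taylor` and the fixed default order are assumptions for this example.

```python
# Illustrative sketch (not the paper's optimized scheme): matrix exponential
# via a truncated Taylor series combined with scaling and squaring.
import numpy as np

def expm_taylor(A, order=16):
    """Approximate exp(A) with a degree-`order` Taylor polynomial,
    using scaling and squaring to keep the series accurate."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Simple heuristic: scale A by 2^{-s} so its 1-norm is at most 1.
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1 else 0
    B = A / (2.0 ** s)
    # Horner evaluation of sum_{k=0}^{order} B^k / k!
    P = np.eye(n)
    for k in range(order, 0, -1):
        P = np.eye(n) + (B @ P) / k
    # Undo the scaling by repeated squaring: exp(A) = exp(A / 2^s)^{2^s}.
    for _ in range(s):
        P = P @ P
    return P
```

In a production implementation, the Horner loop would be replaced by a polynomial evaluation scheme requiring fewer matrix products than Paterson--Stockmeyer, and `order` and `s` would be chosen jointly from a backward error analysis rather than fixed heuristics, as the paper describes.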