The Anatomy of a Triton Attention Kernel
By: Burkhard Ringlein, Jan van Lunteren, Radu Stoica, and more
Potential Business Impact:
Enables fast, portable LLM inference across GPUs from different vendors without hand-tuned, vendor-specific kernels.
A long-standing goal in both industry and academia is to develop an LLM inference platform that is portable across hardware architectures, eliminates the need for low-level hand-tuning, and still delivers best-in-class efficiency. In this work, we demonstrate that portable, efficient cross-platform LLM inference is indeed possible and share our experience. We develop a state-of-the-art paged attention kernel, the core performance-critical component of many LLM deployments, that builds exclusively on the domain-specific just-in-time compiled language Triton to achieve state-of-the-art performance on both NVIDIA and AMD GPUs. We describe our high-level approach, the key algorithmic and system-level improvements, the parameter auto-tuning required to unlock efficiency, and the integrations into a popular inference server that are necessary to bring the performance of a generic Triton attention kernel from 19.7% of the state-of-the-art to 105.9%. Our results highlight how open-source domain-specific languages can be leveraged to unlock model portability across different GPU vendors.
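The abstract describes the kernel only at a high level. For readers unfamiliar with the technique, here is a minimal, illustrative Triton sketch of decode-time paged attention: each program handles one sequence's single query vector, walks a block table that maps logical KV-cache blocks to physical pages, and accumulates a flash-attention-style online softmax. All names, the memory layout, and the single-head simplification are assumptions made for illustration; this is not the authors' kernel.

```python
import triton
import triton.language as tl

@triton.jit
def paged_attention_sketch(
    out_ptr,          # [num_seqs, HEAD_DIM] attention output, one row per sequence
    q_ptr,            # [num_seqs, HEAD_DIM] one query vector per sequence (decode step)
    k_cache_ptr,      # [num_pages, BLOCK_SIZE, HEAD_DIM] paged key cache
    v_cache_ptr,      # [num_pages, BLOCK_SIZE, HEAD_DIM] paged value cache
    block_table_ptr,  # [num_seqs, max_blocks_per_seq] logical block -> physical page
    seq_lens_ptr,     # [num_seqs] current context length of each sequence
    max_blocks_per_seq,
    sm_scale,         # softmax scaling, typically 1/sqrt(HEAD_DIM)
    BLOCK_SIZE: tl.constexpr,  # KV-cache page size (power of two)
    HEAD_DIM: tl.constexpr,    # head dimension (power of two)
):
    seq = tl.program_id(0)                    # one program per sequence
    d = tl.arange(0, HEAD_DIM)
    t = tl.arange(0, BLOCK_SIZE)

    q = tl.load(q_ptr + seq * HEAD_DIM + d).to(tl.float32)
    seq_len = tl.load(seq_lens_ptr + seq)

    # Online-softmax running state (flash-attention style).
    m = tl.full([1], -float("inf"), dtype=tl.float32)  # running max of logits
    l = tl.zeros([1], dtype=tl.float32)                # running softmax denominator
    acc = tl.zeros([HEAD_DIM], dtype=tl.float32)       # unnormalized output

    for b in range(0, tl.cdiv(seq_len, BLOCK_SIZE)):
        # The block table adds one level of indirection: logical block b of this
        # sequence lives in physical page `phys` of the shared KV cache.
        phys = tl.load(block_table_ptr + seq * max_blocks_per_seq + b)
        offs = phys * BLOCK_SIZE * HEAD_DIM + t[:, None] * HEAD_DIM + d[None, :]
        k = tl.load(k_cache_ptr + offs).to(tl.float32)  # [BLOCK_SIZE, HEAD_DIM]
        v = tl.load(v_cache_ptr + offs).to(tl.float32)

        logits = tl.sum(k * q[None, :], axis=1) * sm_scale      # [BLOCK_SIZE]
        # Mask slots past the end of the sequence in its last, partially filled page.
        logits = tl.where(b * BLOCK_SIZE + t < seq_len, logits, -float("inf"))

        # Numerically stable online-softmax update.
        m_new = tl.maximum(m, tl.max(logits, axis=0))
        p = tl.exp(logits - m_new)
        alpha = tl.exp(m - m_new)                # rescale old state to the new max
        l = l * alpha + tl.sum(p, axis=0)
        acc = acc * alpha + tl.sum(p[:, None] * v, axis=0)
        m = m_new

    tl.store(out_ptr + seq * HEAD_DIM + d, acc / l)

# Hypothetical launch: one program per sequence.
#   paged_attention_sketch[(num_seqs,)](out, q, k_cache, v_cache, block_tables,
#       seq_lens, max_blocks_per_seq, sm_scale, BLOCK_SIZE=16, HEAD_DIM=128)
```

The "parameter auto-tuning" the abstract mentions maps naturally onto Triton's built-in @triton.autotune decorator, which benchmarks a list of triton.Config candidates (e.g., different num_warps and num_stages values) keyed on input shapes. The abstract's point is that closing the gap from 19.7% to 105.9% of state-of-the-art performance takes this tuning together with algorithmic improvements and inference-server integration, not the bare kernel alone.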
Similar Papers
TritonRL: Training LLMs to Think and Code Triton Without Cheating
Software Engineering
Trains LLMs to generate correct Triton GPU kernels without reward hacking.
TritonForge: Profiling-Guided Framework for Automated Triton Kernel Optimization
Software Engineering
Automatically optimizes Triton kernels using profiling feedback.
ML-Triton, A Multi-Level Compilation and Language Extension to Triton GPU Programming
Computation and Language
Extends Triton with multi-level compilation and language features for GPU programming.