TritonForge: Profiling-Guided Framework for Automated Triton Kernel Optimization
By: Haonan Li, Keyu Man, Partha Kanuparthy, et al.
High-performance GPU kernel optimization remains a critical yet labor-intensive task in modern machine learning workloads. Although Triton, a domain-specific language for GPU programming, enables developers to write efficient kernels with concise code, achieving expert-level performance still requires a deep understanding of GPU architectures and low-level performance trade-offs. We present TritonForge, a profiling-guided framework for automated Triton kernel optimization. TritonForge integrates kernel analysis, runtime profiling, and iterative code transformation to streamline the optimization process. By incorporating data-driven feedback from profiling results, the system identifies performance bottlenecks, proposes targeted code modifications, and evaluates their impact automatically. While our prototype leverages large language models (LLMs) to assist in code reasoning and transformation, the framework remains modular and model-agnostic. Across diverse kernel types and GPU architectures, TritonForge achieves up to 5x performance improvement over baseline implementations and a 1.76x average speedup across the cases it successfully optimizes, providing a foundation for future research in automated GPU performance optimization.
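The profile-transform-evaluate loop the abstract describes can be sketched in miniature. Everything below is a hypothetical illustration, not TritonForge's actual code: `profile`, `propose`, and the toy "kernels" are stand-ins for GPU profiling, LLM-proposed Triton rewrites, and real Triton kernels, but the feedback structure (propose candidates, measure them, keep only what profiling shows to be faster) follows the loop the abstract outlines.

```python
import time

def profile(kernel, data):
    """Measure a candidate's wall-clock runtime (stand-in for real GPU
    profiling, e.g. hardware counters or CUDA event timing)."""
    start = time.perf_counter()
    kernel(data)
    return time.perf_counter() - start

def optimize(kernel, propose_transforms, data, max_iters=5):
    """Iteratively apply proposed transformations, keeping only those
    that profiling confirms are faster (the feedback loop sketched in
    the abstract)."""
    best, best_time = kernel, profile(kernel, data)
    for _ in range(max_iters):
        improved = False
        for candidate in propose_transforms(best):
            t = profile(candidate, data)
            if t < best_time:
                best, best_time, improved = candidate, t, True
        if not improved:  # converged: no proposed transform helped
            break
    return best, best_time

# Toy example: the "kernels" are plain Python sum-of-squares reductions.
baseline = lambda xs: sum(x * x for x in xs)
candidate = lambda xs: sum(map(lambda x: x * x, xs))

def propose(_current):
    # A real proposer would emit rewritten Triton kernel source
    # (e.g. different tiling, vectorization, or fusion choices).
    return [candidate]

data = list(range(100_000))
best, best_time = optimize(baseline, propose, data)
assert best(data) == baseline(data)  # optimization must preserve semantics
```

The correctness check at the end mirrors a constraint any such framework must enforce: a transformed kernel is only accepted if it produces the same result as the baseline.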