TritonRL: Training LLMs to Think and Code Triton Without Cheating
By: Jiin Woo, Shaowei Zhu, Allen Nie, and more
Potential Business Impact:
Makes computer code for AI run much faster.
With the rapid evolution of large language models (LLMs), the automated generation of high-performance system kernels has emerged as a key enabler for accelerating development and deployment. We introduce TritonRL, a domain-specialized LLM for Triton kernel generation, trained with a novel framework that enables robust, automated kernel synthesis. Unlike code generation for general-purpose programming languages, Triton kernel generation faces unique challenges: training data is scarce, and evaluation criteria are incomplete, leaving training vulnerable to reward hacking. Our approach addresses these challenges end-to-end by distilling Triton-specific knowledge through supervised fine-tuning on curated datasets, and further improving code quality via reinforcement learning (RL) with robust, verifiable rewards and hierarchical reward assignment. Our RL framework detects reward hacking and guides both reasoning traces and code tokens through fine-grained verification and hierarchical reward decomposition, enabling the model to generate high-quality Triton kernels that can truly replace existing modules. Under this robust, fine-grained evaluation, our experiments on KernelBench demonstrate that TritonRL achieves state-of-the-art correctness and speedup, surpassing all other Triton-specific models and underscoring the effectiveness of our RL-based training paradigm.
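To make the "verifiable rewards and hierarchical reward assignment" idea concrete, here is a minimal sketch of what such a reward could look like for a generated Triton kernel. It is an illustrative assumption, not the paper's actual reward: the component names (uses_triton, correctness_reward, speedup_reward, hierarchical_reward), the anti-cheating heuristic, the tolerances, and the weighting are all hypothetical. The key idea it demonstrates is gating: a candidate that bypasses Triton (a common form of reward hacking, e.g. just calling the reference PyTorch op) or fails the correctness check earns no speed bonus.

```python
"""Sketch of a gated, verifiable reward for Triton kernel candidates.

All names and thresholds are illustrative assumptions, not TritonRL's
actual implementation.
"""
import ast
import torch


def uses_triton(source: str) -> bool:
    """Anti-reward-hacking heuristic: reject candidates that never define a
    Triton kernel (e.g. code that simply wraps the reference torch op)."""
    tree = ast.parse(source)
    imports_triton = any(
        isinstance(node, (ast.Import, ast.ImportFrom)) and "triton" in ast.dump(node)
        for node in ast.walk(tree)
    )
    return imports_triton and "@triton.jit" in source


def correctness_reward(candidate_fn, reference_fn, input_shapes, atol=1e-3, rtol=1e-3):
    """1.0 if the candidate matches the reference on random inputs, else 0.0."""
    for shape in input_shapes:
        x = torch.randn(*shape, device="cuda")
        if not torch.allclose(candidate_fn(x), reference_fn(x), atol=atol, rtol=rtol):
            return 0.0
    return 1.0


def speedup_reward(candidate_fn, reference_fn, x, iters=100):
    """Capped ratio of reference to candidate wall-clock time."""
    def bench(fn):
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            fn(x)
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end)

    return min(bench(reference_fn) / bench(candidate_fn), 10.0)


def hierarchical_reward(source, candidate_fn, reference_fn, input_shapes):
    """Gate the components: cheating or incorrect kernels get zero reward,
    so speed can never compensate for a wrong (or fake) kernel."""
    if not uses_triton(source):
        return 0.0
    if correctness_reward(candidate_fn, reference_fn, input_shapes) == 0.0:
        return 0.0
    x = torch.randn(*input_shapes[0], device="cuda")
    return 1.0 + 0.5 * speedup_reward(candidate_fn, reference_fn, x)
```

The gating order mirrors the abstract's emphasis: verification (is it real Triton, is it numerically correct) comes before any performance credit, which is what keeps the RL objective from being hacked by superficially fast but non-functional or wrapper code.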
Similar Papers
The Anatomy of a Triton Attention Kernel
Machine Learning (CS)
Makes AI work fast on different computers.
Language Models that Think, Chat Better
Computation and Language
Makes AI better at thinking and chatting.
TritonForge: Profiling-Guided Framework for Automated Triton Kernel Optimization
Software Engineering
Makes computer programs run much faster automatically.