AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units
By: Xinzi Cao, Jianyang Zhai, Pengfei Li, and more
Potential Business Impact:
Uses AI to automatically write fast, chip-specific code for AI hardware.
To meet the ever-increasing demand for computational efficiency, Neural Processing Units (NPUs) have become critical in modern AI infrastructure. However, unlocking their full potential requires developing high-performance compute kernels using vendor-specific Domain-Specific Languages (DSLs), a task that demands deep hardware expertise and is labor-intensive. While Large Language Models (LLMs) have shown promise in general code generation, they struggle with the strict constraints and scarcity of training data in the NPU domain. Our preliminary study reveals that state-of-the-art general-purpose LLMs fail to generate functional complex kernels for Ascend NPUs, yielding a near-zero success rate. To address these challenges, we propose AscendKernelGen, a generation-evaluation integrated framework for NPU kernel development. We introduce Ascend-CoT, a high-quality dataset incorporating chain-of-thought reasoning derived from real-world kernel implementations, and KernelGen-LM, a domain-adaptive model trained via supervised fine-tuning and reinforcement learning with execution feedback. Furthermore, we design NPUKernelBench, a comprehensive benchmark for assessing compilation, correctness, and performance across varying complexity levels. Experimental results demonstrate that our approach significantly bridges the gap between general LLMs and hardware-specific coding. Specifically, the compilation success rate on complex Level-2 kernels improves from 0% to 95.5% (Pass@10), while functional correctness achieves 64.3% compared to the baseline's complete failure. These results highlight the critical role of domain-specific reasoning and rigorous evaluation in automating accelerator-aware code generation.
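The abstract reports results as Pass@10 (e.g., compilation success improving from 0% to 95.5%). Assuming this refers to the standard unbiased pass@k estimator commonly used in code-generation evaluation (the paper does not spell out its formula here), the metric can be sketched as follows, where n samples are drawn per problem and c of them pass:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: the probability that at least one
    of k randomly drawn samples (out of n generated, c of which are
    correct) passes. Assumed to match the abstract's Pass@10 usage."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must include at least one correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# With 10 samples per kernel and none correct, pass@10 is 0.0;
# a single correct sample out of 10 already gives pass@10 = 1.0.
print(pass_at_k(10, 0, 10))  # 0.0
print(pass_at_k(10, 1, 10))  # 1.0
print(pass_at_k(4, 1, 2))    # 0.5
```

A per-benchmark score such as the 95.5% above would then be this quantity averaged over all problems in the suite.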
Similar Papers
MultiKernelBench: A Multi-Platform Benchmark for Kernel Generation
Distributed, Parallel, and Cluster Computing
Helps AI build faster computer programs for different chips.
NPUEval: Optimizing NPU Kernels with LLMs and Open Source Compilers
Programming Languages
Tests AI chips to make them run faster.