KernelBand: Boosting LLM-based Kernel Optimization with a Hierarchical and Hardware-aware Multi-armed Bandit
By: Dezhi Ran, Shuxiao Xie, Mingfang Ji, and more
Potential Business Impact:
Helps AI learn faster by finding the best computer code.
High-quality kernels are critical for reducing training and inference costs of Large Language Models (LLMs), yet they traditionally require significant expertise in hardware architecture and software optimization. While recent advances in LLM-based code generation show promise for complex optimization, existing methods struggle with the vast optimization space due to insufficient hardware domain knowledge and fail to effectively balance exploration and exploitation. We present KernelBand, a novel framework that formulates kernel optimization as a hierarchical multi-armed bandit problem, enabling LLM agents to strategically navigate the optimization space by treating kernel selection and optimization strategy application as sequential decision-making processes. Our approach leverages hardware profiling information to identify promising optimization strategies and employs runtime behavior clustering to reduce exploration overhead across kernel candidates. Extensive experiments on TritonBench demonstrate that KernelBand significantly outperforms state-of-the-art methods, achieving superior performance with fewer tokens while exhibiting consistent improvement without saturation as computational resources increase.
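To illustrate the hierarchical bandit formulation described in the abstract, the sketch below shows a minimal two-level UCB1 bandit: an outer bandit chooses which kernel candidate to work on, and an inner bandit chooses which optimization strategy to apply to it, with profiled speedup as the reward. This is an illustrative assumption of how such a scheme could be structured, not the paper's actual implementation; all names (UCBArm, HierarchicalBandit, measure_speedup) and the example kernels and strategies are hypothetical.

```python
import math
import random


class UCBArm:
    """Tracks pull count and mean reward for one arm (UCB1)."""

    def __init__(self, name):
        self.name = name
        self.pulls = 0
        self.total_reward = 0.0

    def mean(self):
        return self.total_reward / self.pulls if self.pulls else 0.0

    def ucb(self, total_pulls, c=1.4):
        if self.pulls == 0:
            return float("inf")  # force each arm to be tried at least once
        return self.mean() + c * math.sqrt(math.log(total_pulls) / self.pulls)


class HierarchicalBandit:
    """Outer bandit over kernel candidates, inner bandit over strategies."""

    def __init__(self, kernels, strategies):
        self.kernel_arms = {k: UCBArm(k) for k in kernels}
        self.strategy_arms = {k: {s: UCBArm(s) for s in strategies} for k in kernels}
        self.rounds = 0

    def select(self):
        # Level 1: pick the kernel candidate with the highest UCB score.
        self.rounds += 1
        kernel = max(self.kernel_arms.values(), key=lambda a: a.ucb(self.rounds)).name
        # Level 2: pick an optimization strategy for that kernel.
        inner = self.strategy_arms[kernel]
        inner_pulls = sum(a.pulls for a in inner.values()) + 1
        strategy = max(inner.values(), key=lambda a: a.ucb(inner_pulls)).name
        return kernel, strategy

    def update(self, kernel, strategy, reward):
        # Credit the observed reward to both levels of the hierarchy.
        for arm in (self.kernel_arms[kernel], self.strategy_arms[kernel][strategy]):
            arm.pulls += 1
            arm.total_reward += reward


def measure_speedup(kernel, strategy):
    """Placeholder reward: in a real loop, an LLM would apply `strategy` to
    `kernel`, the result would be compiled and profiled on hardware, and the
    speedup over the baseline kernel would be returned."""
    return random.random()


bandit = HierarchicalBandit(
    kernels=["softmax_v1", "softmax_v2"],
    strategies=["tile_sizing", "vectorize_loads", "shared_memory"],
)
for _ in range(20):
    k, s = bandit.select()
    bandit.update(k, s, measure_speedup(k, s))
```

In this sketch the hardware-aware elements of KernelBand (profiling-guided strategy selection and runtime behavior clustering) would enter through the reward function and through how strategy arms are shared across similar kernels; here they are reduced to a random placeholder for brevity.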
Similar Papers
MultiKernelBench: A Multi-Platform Benchmark for Kernel Generation
Distributed, Parallel, and Cluster Computing
Helps AI build faster computer programs for different chips.
BOAD: Discovering Hierarchical Software Engineering Agents via Bandit Optimization
Machine Learning (CS)
Helps computers fix complex code problems better.