STARK: Strategic Team of Agents for Refining Kernels
By: Juncheng Dong, Yang Yang, Tao Liu, and more
Potential Business Impact:
AI automatically rewrites GPU code so that AI programs run much faster.
The efficiency of GPU kernels is central to the progress of modern AI, yet optimizing them remains a difficult and labor-intensive task due to complex interactions between memory hierarchies, thread scheduling, and hardware-specific characteristics. While recent advances in large language models (LLMs) provide new opportunities for automated code generation, existing approaches largely treat LLMs as single-shot generators or naive refinement tools, limiting their effectiveness in navigating the irregular kernel optimization landscape. We introduce an LLM agentic framework for GPU kernel optimization that systematically explores the design space through multi-agent collaboration, grounded instruction, dynamic context management, and strategic search. This framework mimics the workflow of expert engineers, enabling LLMs to reason about hardware trade-offs, incorporate profiling feedback, and refine kernels iteratively. We evaluate our approach on KernelBench, a benchmark for LLM-based kernel optimization, and demonstrate substantial improvements over baseline agents: our system produces correct solutions where baselines often fail, and achieves kernels with up to 16x faster runtime performance. These results highlight the potential of agentic LLM frameworks to advance fully automated, scalable GPU kernel optimization.
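To make the abstract's generate-profile-refine idea concrete, here is a minimal Python sketch (not the authors' code) of an iterative kernel-refinement loop with profiling feedback. The functions query_llm and compile_and_profile, the Candidate dataclass, and the greedy selection rule are all illustrative assumptions, standing in for a real LLM API and a CUDA build/benchmark harness; the paper's actual framework additionally uses multi-agent collaboration, dynamic context management, and strategic search.

from dataclasses import dataclass

@dataclass
class Candidate:
    source: str        # CUDA kernel source code
    correct: bool      # passed numerical checks against the reference
    runtime_ms: float  # measured runtime (inf if incorrect)

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns candidate kernel source."""
    raise NotImplementedError  # swap in a real model API

def compile_and_profile(source: str) -> Candidate:
    """Hypothetical build + benchmark step (e.g., compile, check, time)."""
    raise NotImplementedError  # swap in nvcc + a correctness/timing harness

def refine_kernel(task: str, budget: int = 8) -> Candidate:
    """Iteratively refine a kernel, feeding measured results back to the LLM."""
    best = Candidate(source="", correct=False, runtime_ms=float("inf"))
    feedback = "No previous attempt."
    for _ in range(budget):
        prompt = (
            f"Task: {task}\n"
            f"Feedback on previous attempt: {feedback}\n"
            "Write an improved CUDA kernel."
        )
        cand = compile_and_profile(query_llm(prompt))
        if cand.correct and cand.runtime_ms < best.runtime_ms:
            best = cand  # keep the fastest correct kernel seen so far
        feedback = (
            f"correct, runtime {cand.runtime_ms:.3f} ms"
            if cand.correct else "failed correctness check"
        )
    return best

A single loop like this is the simplest baseline the abstract contrasts against; the paper's contribution is replacing the naive greedy step with multiple specialized agents and a strategic search over candidate kernels.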
Similar Papers
Astra: A Multi-Agent System for GPU Kernel Performance Optimization
Distributed, Parallel, and Cluster Computing
Makes computer programs run much faster automatically.
Automated Design Optimization via Strategic Search with Large Language Models
Machine Learning (CS)
Helps computers design better code faster and cheaper.
Optimizing PyTorch Inference with LLM-Based Multi-Agent Systems
Multiagent Systems
Makes AI run much faster on computers.