Optimizing PyTorch Inference with LLM-Based Multi-Agent Systems
By: Kirill Nagaitsev, Luka Grbcic, Samuel Williams, and more
Potential Business Impact:
Speeds up AI inference on existing GPUs, averaging a 2.88x speedup on an H100, without hand-written custom kernels.
Maximizing performance on available GPU hardware is an ongoing challenge for modern AI inference systems. Traditional approaches include writing custom GPU kernels and using specialized model compilers to tune high-level code for specific GPU targets. Recent work shows that LLM-based multi-agent systems can effectively perform such tuning, often outperforming existing compilers and eliminating the need for manual kernel development. However, the dynamics of multi-agent systems for this task remain unexplored. In this work, we present a logical framework for comparing multi-agent PyTorch optimization systems. Our evaluation shows that exploit-heavy strategies perform best when paired with error-fixing agents, and that performance correlates with the granularity of optimization steps. The best implementation achieves an average 2.88x speedup on an H100 GPU across diverse tasks in KernelBench, a benchmark suite covering a range of machine learning architectures in PyTorch.
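To make the setup concrete, below is a minimal sketch of the kind of optimize-then-fix loop the abstract describes: a baseline PyTorch task is benchmarked, a candidate optimization is proposed, and an error-fixing step is invoked if the candidate fails. Everything here is illustrative, not the paper's implementation: `BaselineModel`, `benchmark`, `propose_optimization`, and `fix_errors` are hypothetical names, and `torch.compile` merely stands in for code that an LLM optimizer agent would actually generate.

```python
# Sketch of an exploit-style optimization loop with an error-fixing fallback.
# Assumptions: the real system uses LLM agents to rewrite PyTorch/kernel code;
# here those agents are stubbed out so the harness itself is runnable.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class BaselineModel(nn.Module):
    """Stand-in for a KernelBench-style task: a small two-layer MLP."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def benchmark(model, x, iters: int = 50) -> float:
    """Average wall-clock time per forward pass after a short warmup."""
    with torch.no_grad():
        for _ in range(5):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

def propose_optimization(model):
    """Placeholder for the LLM optimizer agent; torch.compile stands in for
    an agent-generated candidate (e.g., fused or custom-kernel code)."""
    return torch.compile(model)

def fix_errors(model, error):
    """Placeholder for the error-fixing agent; a real agent would inspect the
    error message and edit the candidate code before retrying."""
    return model

x = torch.randn(64, 1024, device=device)
baseline = BaselineModel().to(device).eval()
t_base = benchmark(baseline, x)

candidate = propose_optimization(baseline)
try:
    # Correctness gate before timing, as a KernelBench-style check would require.
    with torch.no_grad():
        assert torch.allclose(baseline(x), candidate(x), atol=1e-3)
    t_cand = benchmark(candidate, x)
except Exception as err:
    candidate = fix_errors(candidate, err)
    t_cand = benchmark(candidate, x)

print(f"speedup: {t_base / t_cand:.2f}x")
```

In the paper's framing, the interesting design choices live in the stubbed parts: how aggressively the optimizer agent exploits a promising candidate, how the error-fixing agent repairs failures, and how fine-grained each optimization step is.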
Similar Papers
Astra: A Multi-Agent System for GPU Kernel Performance Optimization
Distributed, Parallel, and Cluster Computing
A multi-agent system that automatically tunes GPU kernels for higher performance.
STARK: Strategic Team of Agents for Refining Kernels
Artificial Intelligence
A strategic team of LLM agents that iteratively refines GPU kernels for speed.
Query Optimization Beyond Data Systems: The Case for Multi-Agent Systems
Databases
Makes the case for applying query-optimization techniques from databases to multi-agent systems.