FlashInfer-Bench: Building the Virtuous Cycle for AI-driven LLM Systems
By: Shanli Xing, Yiyan Zhai, Alexander Jiang, and more
Potential Business Impact:
Helps AI write faster GPU code, so AI services respond more quickly.
Recent advances show that large language models (LLMs) can act as autonomous agents capable of generating GPU kernels, but integrating these AI-generated kernels into real-world inference systems remains challenging. FlashInfer-Bench addresses this gap by establishing a standardized, closed-loop framework that connects kernel generation, benchmarking, and deployment. At its core, FlashInfer Trace provides a unified schema describing kernel definitions, workloads, implementations, and evaluations, enabling consistent communication between agents and systems. Built on real serving traces, FlashInfer-Bench includes a curated dataset, a robust correctness- and performance-aware benchmarking framework, a public leaderboard to track LLM agents' GPU programming capabilities, and a dynamic substitution mechanism (apply()) that seamlessly injects the best-performing kernels into production LLM engines such as SGLang and vLLM. Using FlashInfer-Bench, we further evaluate the performance and limitations of LLM agents, compare the trade-offs among different GPU programming languages, and provide insights for future agent design. FlashInfer-Bench thus establishes a practical, reproducible pathway for continuously improving AI-generated kernels and deploying them into large-scale LLM inference.
Similar Papers
FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster.
Bench360: Benchmarking Local LLM Inference from 360 Degrees
Computation and Language
Tests computer brains for best speed and smarts.
AIConfigurator: Lightning-Fast Configuration Optimization for Multi-Framework LLM Serving
Machine Learning (CS)
Finds best settings for AI to run faster.