Framing AI System Benchmarking as a Learning Task: FlexBench and the Open MLPerf Dataset
By: Grigori Fursin, Daniel Altunay
Potential Business Impact:
Makes benchmarking AI systems faster, cheaper, and more informative.
Existing AI system benchmarks such as MLPerf often struggle to keep pace with the rapidly evolving AI landscape, making it difficult to support informed deployment, optimization, and co-design decisions for AI systems. We suggest that benchmarking itself can be framed as an AI task: one in which models are continuously evaluated and optimized across diverse datasets, software, and hardware, using key metrics such as accuracy, latency, throughput, energy consumption, and cost. To support this perspective, we present FlexBench: a modular extension of the MLPerf LLM inference benchmark, integrated with HuggingFace and designed to provide relevant and actionable insights. Benchmarking results and metadata are collected into an Open MLPerf Dataset, which can be collaboratively curated, extended, and leveraged for predictive modeling and feature engineering. We validated the FlexBench concept through MLPerf Inference submissions, including evaluations of DeepSeek R1 and LLaMA 3.3 on commodity servers. The broader objective is to enable practitioners to make cost-effective AI deployment decisions that reflect their available resources, requirements, and constraints.
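To make the "benchmarking as a learning task" framing concrete, the sketch below shows one way records from the Open MLPerf Dataset could feed predictive modeling: train a regressor on past benchmark results to estimate throughput for unseen model/hardware configurations. This is an illustrative sketch only, not the authors' implementation; the file name and column names (model_params_b, batch_size, gpu_count, precision, tokens_per_second) are hypothetical placeholders rather than the dataset's actual schema.

```python
# Minimal sketch: benchmark records as training data for a predictive model.
# All column names and the CSV path are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical export of a slice of the Open MLPerf Dataset.
df = pd.read_csv("open_mlperf_dataset.csv")

# Feature engineering over model/system metadata (one-hot encode precision).
features = pd.get_dummies(
    df[["model_params_b", "batch_size", "gpu_count", "precision"]],
    columns=["precision"],
)
target = df["tokens_per_second"]  # measured throughput, as a hypothetical metric column

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=0
)

# Predictive model: estimate throughput for configurations that were never benchmarked,
# supporting cost/performance screening before committing to full benchmark runs.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE (tokens/s):", mean_absolute_error(y_test, model.predict(X_test)))
```

A model along these lines would let practitioners rank candidate deployments by predicted cost and performance using only the collaboratively curated dataset, reserving actual benchmark runs for the most promising configurations.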