The ML.ENERGY Benchmark: Toward Automated Inference Energy Measurement and Optimization
By: Jae-Won Chung, Jiachen Liu, Jeff J. Ma, and more
Potential Business Impact:
Measures AI energy use and helps save power.
As the adoption of Generative AI in real-world services grows explosively, energy has emerged as a critical bottleneck resource. However, energy remains a metric that is often overlooked, under-explored, or poorly understood in the context of building ML systems. We present the ML.ENERGY Benchmark, a benchmark suite and tool for measuring inference energy consumption under realistic service environments, and the corresponding ML.ENERGY Leaderboard, which have served as a valuable resource for those hoping to understand and optimize the energy consumption of their generative AI services. In this paper, we explain four key design principles for benchmarking ML energy that we have acquired over time, and then describe how they are implemented in the ML.ENERGY Benchmark. We then highlight results from the latest iteration of the benchmark, including energy measurements of 40 widely used model architectures across 6 different tasks, case studies of how ML design choices impact energy consumption, and how automated optimization recommendations can lead to significant (sometimes more than 40%) energy savings without changing what is being computed by the model. The ML.ENERGY Benchmark is open-source and can be easily extended to various customized models and application scenarios.
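To give a rough sense of what "measuring inference energy" involves, the sketch below integrates sampled power draw over time to estimate energy in joules. This is a minimal illustration, not the ML.ENERGY Benchmark's actual implementation; the function name and sample values are invented for this example, and a real setup would poll a hardware power counter (e.g. via NVML) while the model serves requests.

```python
# Sketch: estimate energy (J) by integrating sampled power (W) over
# time (s) with the trapezoidal rule. Samples here are illustrative.

def energy_joules(timestamps, power_watts):
    """Integrate aligned (time, power) samples into total energy in joules."""
    if len(timestamps) != len(power_watts) or len(timestamps) < 2:
        raise ValueError("need >= 2 aligned (time, power) samples")
    total = 0.0
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]          # seconds
        avg_p = (power_watts[i] + power_watts[i - 1]) / 2  # watts
        total += dt * avg_p
    return total

# Example: 2 seconds of inference at a steady 300 W -> 600 J
print(energy_joules([0.0, 1.0, 2.0], [300.0, 300.0, 300.0]))
```

An optimization that cuts average power or shortens runtime without changing the computed output (e.g. a better GPU frequency setting) shows up directly as a smaller integral, which is the kind of saving the abstract's 40% figure refers to.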
Similar Papers
Energy Considerations of Large Language Model Inference and Efficiency Optimizations
Computation and Language
Cuts AI's energy use by 73%.
How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference
Computers and Society
Measures AI's energy use and pollution.
Breaking the ICE: Exploring promises and challenges of benchmarks for Inference Carbon & Energy estimation for LLMs
Machine Learning (CS)
Tracks computer's energy use to help the planet.