Reducing Compute Waste in LLMs through Kernel-Level DVFS
By: Jeffrey Spaan, Kuan-Hsun Chen, Ana-Lucia Varbanescu
The rapid growth of AI has fueled the expansion of accelerator- and GPU-based data centers. However, the rising operational energy consumption has emerged as a critical bottleneck and a major sustainability concern. Dynamic Voltage and Frequency Scaling (DVFS) is a well-known technique for reducing energy consumption, and thus improving energy efficiency, since it requires little effort and works with existing hardware. Reducing the energy consumption of training and inference of Large Language Models (LLMs) through DVFS or power capping is feasible: related work has shown that energy savings can be significant, but at the cost of significant slowdowns. In this work, we focus on reducing waste in LLM operations, i.e., reducing energy consumption without losing performance. We propose a fine-grained, kernel-level DVFS approach that explores new frequency configurations, and show that these save more energy than previous, pass- or iteration-level solutions. For example, for a GPT-3 training run, a pass-level approach could reduce energy consumption by 2% (without losing performance), while our kernel-level approach saves as much as 14.6% (with a 0.6% slowdown). We further investigate the effect of data and tensor parallelism, and show that our discovered clock frequencies translate well to both. We conclude that kernel-level DVFS is a suitable technique to reduce waste in LLM operations, providing significant energy savings with negligible slowdown.
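The core idea of kernel-level DVFS is to give each GPU kernel its own clock frequency, chosen so that energy drops while runtime stays near that of the fastest clock. The sketch below illustrates one plausible form of that per-kernel selection logic; the function name, profile format, and all numbers are illustrative assumptions, not the paper's actual implementation or measurements.

```python
def pick_frequency(profile, slowdown_budget=0.01):
    """Pick the lowest-energy clock for one kernel.

    profile: {freq_mhz: (runtime_s, avg_power_w)} measured offline
             for this kernel at each candidate frequency.
    slowdown_budget: allowed relative slowdown vs. the fastest clock
             (e.g. 0.01 = at most 1% slower).
    """
    f_max = max(profile)                 # fastest supported clock
    t_ref = profile[f_max][0]            # reference runtime at f_max
    best_freq, best_energy = f_max, t_ref * profile[f_max][1]
    for freq, (runtime, power) in profile.items():
        if runtime <= t_ref * (1 + slowdown_budget):
            energy = runtime * power     # energy = time x avg power
            if energy < best_energy:
                best_freq, best_energy = freq, energy
    return best_freq

# A memory-bound kernel barely slows down at lower clocks, so a
# reduced frequency wins; a compute-bound kernel scales with the
# clock, so it stays at the maximum. (Hypothetical profiles.)
memory_bound = {1410: (1.00, 300.0), 1200: (1.005, 240.0), 900: (1.10, 180.0)}
compute_bound = {1410: (1.00, 300.0), 1200: (1.17, 250.0), 900: (1.56, 190.0)}
```

This is why a kernel-level policy can beat pass- or iteration-level ones: a coarse policy must pick one clock for a mix of compute- and memory-bound kernels, while the per-kernel policy lowers the clock only where it is nearly free.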