Camel: Energy-Aware LLM Inference on Resource-Constrained Devices
By: Hao Xu, Long Peng, Shezheng Song, and more
Potential Business Impact:
Makes smart computer programs run faster, using less power.
Most Large Language Models (LLMs) are currently deployed in the cloud, with users relying on internet connectivity for access. However, this paradigm faces challenges such as network latency, privacy concerns, and bandwidth limitations. Deploying LLMs on edge devices has therefore become an important research focus. In edge inference, request latency is critical, as high latency can impair real-time tasks. At the same time, edge devices usually have limited battery capacity, making energy consumption another major concern. Balancing energy consumption and inference latency is therefore essential. To address this, we propose an LLM inference energy management framework that optimizes GPU frequency and batch size to balance latency and energy consumption. By effectively managing the exploration-exploitation dilemma in configuration search, the framework finds optimal settings. We implemented the framework on the NVIDIA Jetson AGX Orin platform and validated it through a series of experiments. Results demonstrate that, compared to the default configuration, our framework reduces the energy-delay product (EDP) by 12.4%-29.9%, achieving a better balance between energy consumption and latency.
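The abstract does not spell out the search algorithm, so the sketch below illustrates one plausible realization of the idea: an epsilon-greedy search over (GPU frequency, batch size) configurations that minimizes the per-request energy-delay product (EDP = energy x latency). All names, candidate values, and the synthetic measurement model are assumptions for illustration, not the paper's implementation.

```python
import random

# Candidate configurations: GPU frequency (MHz) and batch size.
# These values are illustrative, not taken from the paper.
FREQUENCIES_MHZ = [420, 610, 815, 1005, 1300]
BATCH_SIZES = [1, 2, 4, 8]


def run_inference_and_measure(freq_mhz, batch_size):
    """Hypothetical stand-in for running a batch of LLM requests at a given
    GPU frequency and measuring (latency_s, energy_j). On a Jetson board this
    would set the clock and read the onboard power sensors; here a synthetic
    model keeps the sketch runnable."""
    latency = batch_size * 0.8 * (1300 / freq_mhz) + random.gauss(0, 0.02)
    power_w = 5.0 + 0.02 * freq_mhz  # rough static + dynamic power model
    energy = power_w * latency
    return max(latency, 1e-3), max(energy, 1e-3)


def edp(latency_s, energy_j):
    """Energy-delay product, the metric the paper reports (lower is better)."""
    return energy_j * latency_s


def epsilon_greedy_search(rounds=200, epsilon=0.1):
    """Epsilon-greedy search over (frequency, batch size) pairs: one simple
    way to trade off exploration of untried configurations against
    exploitation of the best configuration seen so far."""
    configs = [(f, b) for f in FREQUENCIES_MHZ for b in BATCH_SIZES]
    stats = {c: {"n": 0, "mean_edp": float("inf")} for c in configs}

    for _ in range(rounds):
        if random.random() < epsilon or all(s["n"] == 0 for s in stats.values()):
            cfg = random.choice(configs)  # explore a random configuration
        else:
            cfg = min(configs, key=lambda c: stats[c]["mean_edp"])  # exploit

        latency, energy = run_inference_and_measure(*cfg)
        # Normalize per request so different batch sizes are comparable.
        per_req_edp = edp(latency / cfg[1], energy / cfg[1])

        s = stats[cfg]
        s["n"] += 1
        if s["n"] == 1:
            s["mean_edp"] = per_req_edp
        else:
            s["mean_edp"] += (per_req_edp - s["mean_edp"]) / s["n"]

    best = min(configs, key=lambda c: stats[c]["mean_edp"])
    return best, stats


if __name__ == "__main__":
    best, stats = epsilon_greedy_search()
    print(f"best (freq_MHz, batch) = {best}, "
          f"mean per-request EDP = {stats[best]['mean_edp']:.4f} J*s")
```

In a real deployment, the synthetic measurement function would be replaced by actual frequency control and power readings on the target device, and the search could run online as workload conditions change.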
Similar Papers
Understanding the Performance and Power of LLM Inferencing on Edge Accelerators
Distributed, Parallel, and Cluster Computing
Runs smart AI on small computers, not just big ones.
Characterizing and Understanding Energy Footprint and Efficiency of Small Language Model on Edges
Distributed, Parallel, and Cluster Computing
Makes smart gadgets run AI without internet.
Quantifying the Energy Consumption and Carbon Emissions of LLM Inference via Simulations
Distributed, Parallel, and Cluster Computing
Makes AI use less electricity and pollution.