Green-LLM: Optimal Workload Allocation for Environmentally-Aware Distributed Inference

Published: July 14, 2025 | arXiv ID: 2507.09942v1

By: Jiaming Cheng, Duong Tung Nguyen

Potential Business Impact:

Routes AI inference tasks to data centers with cheap, clean power, cutting energy costs and emissions.

Business Areas:
Cloud Computing, Internet Services, Software

This letter investigates the optimal allocation of large language model (LLM) inference workloads across heterogeneous edge data centers (DCs) over time. Each DC features on-site renewable generation and faces dynamic electricity prices and spatiotemporal variability in renewable availability. The central question is: how can inference workloads be optimally distributed across the DCs to minimize energy consumption, carbon emissions, and water usage while enhancing user experience? To address this, the letter proposes a novel optimization model that enables LLM service providers to reduce both operational costs and environmental impacts. Numerical results validate the efficacy of the proposed approach.
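The core idea can be illustrated with a minimal toy sketch: treat workload allocation as a linear program that assigns inference load to the DCs with the lowest combined energy/carbon cost, subject to capacity limits. All numbers and the single combined cost coefficient below are hypothetical illustrations, not the paper's actual model, which handles time dynamics, renewables, and water usage jointly.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical parameters for 3 edge DCs (not taken from the paper):
# cost[i] folds electricity price plus a carbon/water penalty into one
# $/unit-of-load coefficient; capacity[i] caps each DC's serviceable load.
cost = np.array([0.12, 0.08, 0.15])      # DC 1 has the cheapest (greenest) power
capacity = np.array([40.0, 30.0, 50.0])  # maximum load each DC can absorb
demand = 70.0                            # total inference workload to place

# LP: minimize cost @ x  subject to  sum(x) == demand, 0 <= x_i <= capacity_i
res = linprog(
    c=cost,
    A_eq=np.ones((1, 3)),
    b_eq=[demand],
    bounds=list(zip(np.zeros(3), capacity)),
    method="highs",
)
allocation = res.x  # greedy-looking result: cheapest DCs fill up first
print(allocation)
```

The solver saturates the lowest-cost DC first (30 units at $0.08), places the remaining 40 units at the next-cheapest DC, and leaves the most expensive one idle. The paper's model extends this kind of formulation with temporal coupling and multiple environmental objectives.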

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Networking and Internet Architecture