Splitwise: Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL

Published: December 29, 2025 | arXiv ID: 2512.23310v1

By: Abolfazl Younesi, Abbas Shabrang Maryan, Elyas Oustad and more

Potential Business Impact:

Lets large AI language models run faster and use less battery on phones and other small devices by splitting the work between the device and the cloud.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Deploying large language models (LLMs) on edge devices is challenging due to their limited memory and power resources. Cloud-only inference reduces the device's burden but introduces high latency and cost. Static edge-cloud partitions optimize a single metric and struggle when bandwidth fluctuates. We propose Splitwise, a novel Lyapunov-assisted deep reinforcement learning (DRL) framework for fine-grained, adaptive partitioning of LLMs across edge and cloud environments. Splitwise decomposes transformer layers into attention heads and feed-forward sub-blocks, exposing more partition choices than layer-wise schemes. A hierarchical DRL policy, guided by Lyapunov optimization, jointly minimizes latency, energy consumption, and accuracy degradation while guaranteeing queue stability under stochastic workloads and variable network bandwidth. Splitwise also provides robustness through partition checkpoints with exponential-backoff recovery in the event of communication failures. Experiments on Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 with GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B show that Splitwise reduces end-to-end latency by 1.4x-2.8x and cuts energy consumption by up to 41% compared with existing partitioners. It lowers 95th-percentile latency by 53-61% relative to cloud-only execution while preserving accuracy and keeping memory requirements modest.
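The abstract's claim that sub-layer decomposition "exposes more partition choices than layer-wise schemes" can be made concrete with a small counting sketch. The function below is illustrative only; the names and the assumption that every attention head and feed-forward sub-block can be placed on edge or cloud independently are ours, not the paper's exact formulation:

```python
def partition_choices(n_layers: int, heads_per_layer: int,
                      ffn_blocks_per_layer: int) -> tuple[int, int]:
    """Compare the size of a sub-layer partition space against a
    layer-wise scheme, assuming each attention head and each
    feed-forward sub-block is independently placed on edge or cloud."""
    units_per_layer = heads_per_layer + ffn_blocks_per_layer
    fine_grained = 2 ** (n_layers * units_per_layer)  # one edge/cloud bit per unit
    layer_wise = n_layers + 1  # a single cut point somewhere in the stack
    return fine_grained, layer_wise

# Even a toy 4-layer model with 8 heads and 2 FFN sub-blocks per layer
# exposes 2**40 sub-layer placements, versus only 5 layer-wise cuts.
fine, coarse = partition_choices(4, 8, 2)
```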
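"Lyapunov-assisted" most likely refers to the standard drift-plus-penalty technique, in which a virtual queue tracks backlog and the per-slot objective trades the cost penalty against queue growth. Below is a minimal sketch of how such a score could steer partition selection; the trade-off parameter V, the single-queue model, and the candidate fields are assumptions for illustration, not the paper's exact formulation:

```python
def drift_plus_penalty(cost: float, backlog: float,
                       arrival: float, service: float,
                       V: float = 10.0) -> float:
    """Score one candidate partition with the drift-plus-penalty bound.

    cost    -- weighted latency + energy + accuracy-degradation penalty
    backlog -- current virtual-queue length Q(t)
    arrival, service -- work entering/leaving the queue under this choice
    V       -- trade-off knob: larger V favors low cost over stability
    """
    drift = backlog * (arrival - service)  # penalizes actions that grow backlog
    return V * cost + drift

def pick_partition(candidates: list[dict], backlog: float) -> dict:
    """Greedy per-slot minimization, the usual way the bound is applied."""
    return min(candidates,
               key=lambda c: drift_plus_penalty(
                   c["cost"], backlog, c["arrival"], c["service"]))
```

Minimizing this bound at every time slot is what yields queue stability under stochastic arrivals, while V exposes the latency/energy/accuracy trade-off to the learned policy.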
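The checkpoint-with-exponential-backoff recovery the abstract mentions follows a well-known retry pattern, sketched below. Everything here (function names, retry counts, using ConnectionError as the failure signal) is a hypothetical illustration, not the paper's implementation:

```python
import random
import time

def send_with_recovery(activations, send_fn, restore_checkpoint,
                       max_retries: int = 5, base_delay: float = 0.1):
    """Ship intermediate activations across the edge-cloud link; on a
    failure, roll back to the last partition checkpoint and retry with
    exponentially growing, jittered delays."""
    for attempt in range(max_retries):
        try:
            return send_fn(activations)
        except ConnectionError:
            activations = restore_checkpoint()  # last consistent partition state
            # delay doubles each attempt; jitter avoids synchronized retries
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
    raise RuntimeError("link unrecoverable after retries; fall back to edge-only")
```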

Country of Origin
🇦🇹 Austria

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)