Score: 2

Large Language Model-Based Task Offloading and Resource Allocation for Digital Twin Edge Computing Networks

Published: July 25, 2025 | arXiv ID: 2507.19050v1

By: Qiong Wu, Yu Xie, Pingyi Fan, and more

Potential Business Impact:

Helps vehicles offload computing tasks to edge servers so shared computing power can be used without long delays.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this paper, we propose a general digital twin edge computing network comprising multiple vehicles and a server. Each vehicle generates multiple computing tasks within a time slot, leading to queuing challenges when tasks are offloaded to the server. The study investigates task offloading strategies, queue stability, and resource allocation. Lyapunov optimization is employed to transform long-term constraints into tractable short-term decisions. To solve the resulting problem, an in-context learning approach based on a large language model (LLM) is adopted, replacing the conventional multi-agent reinforcement learning (MARL) framework. Experimental results demonstrate that the LLM-based method achieves performance comparable to, or even better than, MARL.
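For readers unfamiliar with the Lyapunov step mentioned in the abstract, the following is a minimal sketch of the standard drift-plus-penalty transformation. The notation is a placeholder and not necessarily the paper's: Q_i(t) is vehicle i's task queue, A_i(t) its task arrivals, b_i(t) the workload served in slot t, and V a tunable weight.

```latex
% Per-vehicle task-queue dynamics (placeholder notation):
\[
  Q_i(t+1) = \max\!\bigl(Q_i(t) + A_i(t) - b_i(t),\, 0\bigr)
\]
% Quadratic Lyapunov function over all vehicle queues:
\[
  L(t) = \tfrac{1}{2} \sum_i Q_i(t)^2
\]
% Drift-plus-penalty: instead of the long-term constrained problem,
% each slot the offloading and resource-allocation decisions minimize
\[
  V \cdot \mathrm{cost}(t) \;+\; \sum_i Q_i(t)\,\bigl(A_i(t) - b_i(t)\bigr),
\]
% where a larger V favors lower per-slot cost and a smaller V favors
% keeping the queues (and hence the queuing delays) stable.
```

In the paper's setup, this per-slot decision problem is what the LLM-based in-context learning approach solves in place of a trained MARL policy.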

Country of Origin
🇬🇧 🇨🇳 🇭🇰 United Kingdom, China, Hong Kong

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Networking and Internet Architecture