Large Language Model-Based Task Offloading and Resource Allocation for Digital Twin Edge Computing Networks
By: Qiong Wu, Yu Xie, Pingyi Fan, and more
Potential Business Impact:
Helps cars share computing power to avoid delays.
In this paper, we propose a general digital twin edge computing network comprising multiple vehicles and a server. Each vehicle generates multiple computing tasks within a time slot, leading to queuing challenges when tasks are offloaded to the server. The study investigates task offloading strategies, queue stability, and resource allocation. Lyapunov optimization is employed to transform long-term constraints into tractable per-slot decisions. To solve the resulting problem, an in-context learning approach based on a large language model (LLM) is adopted, replacing the conventional multi-agent reinforcement learning (MARL) framework. Experimental results demonstrate that the LLM-based method achieves comparable or even superior performance to MARL.
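The Lyapunov step the abstract mentions can be illustrated with a minimal drift-plus-penalty sketch: a long-term queue-stability constraint is replaced by a per-slot choice that minimizes a weighted sum of immediate cost and queue growth. All names below (`Q`, `V`, the cost/arrival/service numbers) are illustrative assumptions, not the paper's notation or actual model.

```python
def queue_update(q, arrivals, served):
    """Standard queue dynamics: Q(t+1) = max(Q(t) - served, 0) + arrivals."""
    return max(q - served, 0.0) + arrivals

def drift_plus_penalty_choice(q, options, V):
    """Pick the offloading option minimizing V*cost + Q*(arrivals - served),
    the per-slot surrogate of the long-term constrained objective.
    Each option is a tuple (cost, arrivals, served)."""
    return min(options, key=lambda o: V * o[0] + q * (o[1] - o[2]))

# Toy run: one vehicle choosing between local compute and offloading each slot.
# (Hypothetical numbers; V trades off cost against queue backlog.)
q, V = 0.0, 2.0
options = [
    (1.0, 3.0, 1.0),  # local: cheap, drains the queue slowly
    (2.5, 3.0, 4.0),  # offload: costlier, drains the queue fast
]
for t in range(20):
    cost, arr, srv = drift_plus_penalty_choice(q, options, V)
    q = queue_update(q, arr, srv)
# The backlog stays bounded: once q grows, the queue term dominates and
# the controller switches to the faster (offload) option.
```

In this toy trace the controller picks local compute while the queue is empty, then switches to offloading once backlog accumulates, keeping the queue bounded, which is the qualitative behavior Lyapunov drift-plus-penalty control guarantees.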
Similar Papers
Toward Edge General Intelligence with Multiple-Large Language Model (Multi-LLM): Architecture, Trust, and Orchestration
Networking and Internet Architecture
Smart computers work better with many AI brains.
Semantic-Aware LLM Orchestration for Proactive Resource Management in Predictive Digital Twin Vehicular Networks
Networking and Internet Architecture
Cars predict and manage their computer needs.
Privacy-Preserving Offloading for Large Language Models in 6G Vehicular Networks
Cryptography and Security
Keeps car data private when using smart driving AI.