A Hybrid Reinforcement Learning Framework for Hard Latency Constrained Resource Scheduling
By: Luyuan Zhang, An Liu, Kexuan Wang
Potential Business Impact:
Makes virtual worlds run smoother, even with lots of people.
In the forthcoming 6G era, extended reality (XR) is regarded as an emerging application for ultra-reliable and low-latency communications (URLLC), with new traffic characteristics and more stringent requirements. Beyond the quasi-periodic traffic typical of XR, burst traffic with both large frame sizes and random arrivals in some real-world low-latency communication scenarios has become a leading cause of network congestion or even collapse, and an efficient algorithm for resource scheduling under burst traffic with hard latency constraints is still lacking. We propose a novel hybrid reinforcement learning framework for resource scheduling with hard latency constraints (HRL-RSHLC), which reuses both old policies learned in other, similar environments and domain-knowledge-based (DK) policies constructed from expert knowledge to improve performance. The joint optimization of the policy reuse probabilities and the new policy is formulated as a Markov Decision Process (MDP) that maximizes the hard-latency-constrained effective throughput (HLC-ET) of users. We prove that the proposed HRL-RSHLC converges to KKT points from an arbitrary initial point. Simulations show that HRL-RSHLC achieves superior performance with faster convergence compared to baseline algorithms.
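The core idea of the framework — mixing a newly learned policy with reused old and DK policies according to optimized reuse probabilities — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function and variable names (`hybrid_action`, `reuse_probs`) are hypothetical, and in the actual HRL-RSHLC framework the reuse probabilities are jointly optimized with the new policy rather than fixed.

```python
import random

def hybrid_action(state, new_policy, reused_policies, reuse_probs):
    """Pick an action by first sampling which policy to follow.

    new_policy:      the policy currently being learned (callable: state -> action)
    reused_policies: old policies from similar environments and/or
                     domain-knowledge-based (DK) policies
    reuse_probs:     probability of following each reused policy; the new
                     policy is followed with the remaining probability.
                     (In the paper these probabilities are jointly optimized
                     with the new policy; here they are fixed for illustration.)
    """
    r = random.random()
    cumulative = 0.0
    for policy, prob in zip(reused_policies, reuse_probs):
        cumulative += prob
        if r < cumulative:
            return policy(state)  # reuse an old or DK policy's action
    return new_policy(state)      # otherwise act with the new policy
```

As training progresses, one would expect the optimized reuse probabilities to shift weight toward the new policy once it outperforms the reused ones, so the reused policies mainly accelerate early exploration.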
Similar Papers
An Explainable AI Framework for Dynamic Resource Management in Vehicular Network Slicing
Machine Learning (CS)
Makes car internet faster and more reliable.
Improving Mixed-Criticality Scheduling with Reinforcement Learning
Machine Learning (CS)
Makes computers finish important jobs faster.
Multi-Agent Reinforcement Learning Scheduling to Support Low Latency in Teleoperated Driving
Networking and Internet Architecture
Makes self-driving cars react faster to avoid crashes.