Improving Mixed-Criticality Scheduling with Reinforcement Learning

Published: April 4, 2025 | arXiv ID: 2504.03994v2

By: Muhammad El-Mahdy, Nourhan Sakr, Rodrigo Carrasco

Potential Business Impact:

Enables real-time and safety-critical systems to complete more high-priority tasks on time, even when processor performance degrades.

Business Areas:
Scheduling, Information Technology, Software

This paper introduces a novel reinforcement learning (RL) approach to scheduling mixed-criticality (MC) systems on processors with varying speeds. Building upon the foundation laid by [1], we extend their work to address the non-preemptive scheduling problem, which is known to be NP-hard. By modeling this scheduling challenge as a Markov Decision Process (MDP), we develop an RL agent capable of generating near-optimal schedules for real-time MC systems. Our RL-based scheduler prioritizes high-criticality tasks while maintaining overall system performance. Through extensive experiments, we demonstrate the scalability and effectiveness of our approach. The RL scheduler significantly improves task completion rates, achieving around 80% overall and 85% for high-criticality tasks across 100,000 instances of synthetic and real data under varying system conditions. Moreover, under stable conditions without degradation, the scheduler achieves 94% overall task completion and 93% for high-criticality tasks. These results highlight the potential of RL-based schedulers for real-time and safety-critical applications, offering substantial improvements in handling complex and dynamic scheduling scenarios.
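
To make the MDP framing concrete, below is a minimal Python sketch of how a non-preemptive mixed-criticality scheduling problem might be cast as an environment an RL agent could interact with. The Task fields, the reward weighting that favors high-criticality tasks, and the stochastic speed-degradation model are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    release: int      # earliest start time
    deadline: int     # absolute deadline
    wcet: int         # worst-case execution time at nominal speed
    criticality: int  # 0 = low, 1 = high

class MCSchedulingEnv:
    """Toy MDP for non-preemptive mixed-criticality scheduling on a
    single processor whose speed may degrade between jobs (assumed model)."""

    def __init__(self, tasks, degrade_prob=0.1):
        self.tasks = tasks
        self.degrade_prob = degrade_prob
        self.reset()

    def reset(self):
        self.time = 0.0
        self.speed = 1.0
        self.pending = list(range(len(self.tasks)))
        return self._state()

    def _state(self):
        # State: current time, processor speed, and per-pending-task
        # features (slack to deadline, WCET, criticality level).
        feats = [(self.tasks[i].deadline - self.time,
                  self.tasks[i].wcet,
                  self.tasks[i].criticality) for i in self.pending]
        return (self.time, self.speed, tuple(feats))

    def step(self, action):
        """Action: index into self.pending choosing the next task to run
        to completion (non-preemptive)."""
        i = self.pending.pop(action)
        t = self.tasks[i]
        start = max(self.time, t.release)
        duration = t.wcet / self.speed  # a slower processor stretches execution
        self.time = start + duration
        # Reward on-time completion; weight high-criticality tasks more
        # heavily (illustrative reward shaping, not the paper's).
        met = self.time <= t.deadline
        reward = (2.0 if t.criticality == 1 else 1.0) if met else -1.0
        # Processor speed may degrade stochastically between jobs.
        if random.random() < self.degrade_prob:
            self.speed = max(0.5, self.speed - 0.1)
        done = not self.pending
        return self._state(), reward, done

if __name__ == "__main__":
    tasks = [Task(0, 10, 4, 1), Task(0, 12, 5, 0), Task(2, 9, 3, 1)]
    env = MCSchedulingEnv(tasks)
    state, done, total = env.reset(), False, 0.0
    while not done:
        # Placeholder policy (earliest deadline first); an RL agent would
        # replace this with a learned mapping from state to action.
        action = min(range(len(env.pending)),
                     key=lambda a: env.tasks[env.pending[a]].deadline)
        state, reward, done = env.step(action)
        total += reward
    print("episode reward:", total)
```

In such a formulation, the agent learns a policy from the state returned by `step` to an action index; the earliest-deadline-first loop above serves only to demonstrate the environment interface.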

Country of Origin
🇺🇸 🇨🇱 🇪🇬 United States, Chile, Egypt

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)