Policy-Based Deep Reinforcement Learning Hyperheuristics for Job-Shop Scheduling Problems
By: Sofiene Lassoued, Asrat Gobachew, Stefan Lier, and more
Potential Business Impact:
Teaches computers to schedule jobs so work finishes faster.
This paper proposes a policy-based deep reinforcement learning hyper-heuristic framework for solving the Job Shop Scheduling Problem (JSSP). The hyper-heuristic agent learns to dynamically switch scheduling rules based on the current system state. We extend the hyper-heuristic framework with two key mechanisms. First, action prefiltering restricts decision-making to feasible low-level actions, enabling low-level heuristics to be evaluated independently of environmental constraints and providing an unbiased assessment of each heuristic. Second, a commitment mechanism regulates how frequently the agent switches heuristics. We investigate the impact of different commitment strategies, from step-wise switching to full-episode commitment, on both training behavior and makespan. Additionally, we compare two action selection strategies at the policy level: deterministic greedy selection and stochastic sampling. Computational experiments on standard JSSP benchmarks demonstrate that the proposed approach outperforms traditional heuristics, metaheuristics, and recent neural network-based scheduling methods.
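As a rough illustration only (not the paper's implementation), the sketch below shows how the three mechanisms described in the abstract could fit together: action prefiltering masks out low-level heuristics whose choice is infeasible, a commitment window holds the selected heuristic for a fixed number of decision steps, and the policy chooses either greedily or by sampling. All names here (RULES, ToyState, dispatch, run_episode) and the single-machine toy objective are assumptions made for the example; the paper optimizes JSSP makespan with a trained policy network.

```python
import numpy as np
from dataclasses import dataclass

RULES = ["SPT", "LPT", "FIFO"]  # assumed low-level dispatching rules

@dataclass
class ToyState:
    queue: list  # remaining processing times of queued jobs

def dispatch(rule, state):
    """Apply a low-level heuristic: choose which queued job runs next.
    Returns an index into state.queue, or None if nothing is feasible."""
    if not state.queue:
        return None
    if rule == "SPT":                      # shortest processing time first
        return int(np.argmin(state.queue))
    if rule == "LPT":                      # longest processing time first
        return int(np.argmax(state.queue))
    return 0                               # FIFO: first queued job

def prefilter(state):
    """Action prefiltering: boolean mask keeping only heuristics whose
    chosen action is feasible in the current state."""
    return np.array([dispatch(r, state) is not None for r in RULES])

def select(logits, mask, stochastic, rng):
    """Policy-level action selection over masked logits:
    stochastic sampling vs. deterministic greedy (argmax)."""
    masked = np.where(mask, logits, -np.inf)   # infeasible rules excluded
    if not stochastic:
        return int(np.argmax(masked))
    p = np.exp(masked - masked.max())          # softmax over feasible rules
    p /= p.sum()
    return int(rng.choice(len(RULES), p=p))

def run_episode(jobs, policy_logits, commit_k=1, stochastic=True, seed=0):
    """Schedule all jobs, re-selecting the heuristic every commit_k steps.
    Returns total completion time on a single toy machine (a stand-in for
    the JSSP makespan the paper optimizes)."""
    rng = np.random.default_rng(seed)
    state = ToyState(list(jobs))
    t = total = 0.0
    rule_idx, hold = None, 0
    while state.queue:
        if hold == 0:                      # commitment window expired
            rule_idx = select(policy_logits, prefilter(state), stochastic, rng)
            hold = commit_k
        j = dispatch(RULES[rule_idx], state)
        t += state.queue.pop(j)            # chosen job finishes at time t
        total += t
        hold -= 1
    return total

# Fixed logits stand in for a trained policy network's output.
print(run_episode([3, 1, 4, 1, 5], np.array([0.5, 0.1, 0.2]), commit_k=2))
```

In this sketch, commit_k=1 corresponds to step-wise switching, while a commit_k at least as long as the episode mimics full-episode commitment; the two selection strategies compared in the paper map to the stochastic flag.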
Similar Papers
Policy-Based Reinforcement Learning with Action Masking for Dynamic Job Shop Scheduling under Uncertainty: Handling Random Arrivals and Machine Failures
Artificial Intelligence
Helps factories make things faster, even when problems happen.
A Production Scheduling Framework for Reinforcement Learning Under Real-World Constraints
Machine Learning (CS)
Helps factories make things faster and better.
Generalizing Beyond Suboptimality: Offline Reinforcement Learning Learns Effective Scheduling through Random Data
Machine Learning (CS)
Teaches factories to make things faster.