Trustworthy and Explainable Deep Reinforcement Learning for Safe and Energy-Efficient Process Control: A Use Case in Industrial Compressed Air Systems
By: Vincent Bezold, Patrick Wagner, Jakob Hofmann, and more
This paper presents a trustworthy reinforcement learning approach for the control of industrial compressed air systems. We develop a framework that enables safe and energy-efficient operation under realistic boundary conditions and introduce a multi-level explainability pipeline combining input perturbation tests, gradient-based sensitivity analysis, and SHAP (SHapley Additive exPlanations) feature attribution. An empirical evaluation across multiple compressor configurations shows that the learned policy is physically plausible, anticipates future demand, and consistently respects system boundaries. Compared to the installed industrial controller, the proposed approach reduces unnecessary overpressure and achieves energy savings of approximately 4% without relying on explicit physics models. The results further indicate that system pressure and forecast information dominate policy decisions, while compressor-level inputs play a secondary role. Overall, the combination of efficiency gains, predictive behavior, and transparent validation supports the trustworthy deployment of reinforcement learning in industrial energy systems.
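To illustrate what two of the named explainability checks could look like in practice, the following is a minimal sketch, not the paper's implementation: gradient-based sensitivity and input perturbation tests applied to a trained actor network. The network architecture, feature names (system pressure, demand forecast, compressor states), and perturbation scale are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): gradient-based
# sensitivity and input perturbation tests for a trained RL policy.
import torch
import torch.nn as nn

# Hypothetical observation features; the paper's state space may differ.
FEATURES = ["system_pressure", "demand_forecast",
            "compressor_1_state", "compressor_2_state"]

# Stand-in actor network; in practice this would be the trained policy.
policy = nn.Sequential(nn.Linear(len(FEATURES), 64), nn.ReLU(),
                       nn.Linear(64, 1), nn.Tanh())

def gradient_sensitivity(policy, states):
    """Mean absolute gradient of the action w.r.t. each input feature."""
    states = states.clone().requires_grad_(True)
    policy(states).sum().backward()
    return states.grad.abs().mean(dim=0)

def perturbation_effect(policy, states, idx, delta=0.1):
    """Change in mean action when one feature is shifted by `delta`."""
    with torch.no_grad():
        perturbed = states.clone()
        perturbed[:, idx] += delta
        return (policy(perturbed) - policy(states)).mean().item()

# Placeholder batch of observed (normalized) states.
states = torch.randn(256, len(FEATURES))
grads = gradient_sensitivity(policy, states)
for i, name in enumerate(FEATURES):
    print(f"{name}: grad sensitivity {grads[i]:.3f}, "
          f"perturbation effect {perturbation_effect(policy, states, i):+.3f}")
```

Under the paper's findings, one would expect such diagnostics to assign the largest sensitivities to the pressure and forecast inputs; the SHAP attribution stage would be layered on top of these simpler checks.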