Higher-Order Action Regularization in Deep Reinforcement Learning: From Continuous Control to Building Energy Management
By: Faizan Ahmed, Aniket Dixit, James Brusey
Potential Business Impact:
Makes robots move smoothly, saving energy and wear.
Deep reinforcement learning agents often exhibit erratic, high-frequency control behaviors that hinder real-world deployment due to excessive energy consumption and mechanical wear. We systematically investigate action smoothness regularization through higher-order derivative penalties, progressing from theoretical understanding in continuous control benchmarks to practical validation in building energy management. Our comprehensive evaluation across four continuous control environments demonstrates that third-order derivative penalties (jerk minimization) consistently achieve superior smoothness while maintaining competitive performance. We extend these findings to HVAC control systems where smooth policies reduce equipment switching by 60%, translating to significant operational benefits. Our work establishes higher-order action regularization as an effective bridge between RL optimization and operational constraints in energy-critical applications.
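The core idea — penalizing higher-order derivatives of the action sequence — can be illustrated with a minimal sketch. The code below is a hypothetical illustration (function names and weights are assumptions, not from the paper): it approximates the n-th action derivative with finite differences over a trajectory of actions, so `order=1` penalizes action velocity, `order=2` acceleration, and `order=3` jerk, the variant the abstract reports as most effective.

```python
import numpy as np

def action_derivative_penalty(actions, order=3, weight=1e-2):
    """Penalty on the n-th finite difference of an action sequence.

    order=1 penalizes action velocity, order=2 acceleration,
    order=3 jerk (the third-order variant highlighted in the abstract).
    `weight` is an illustrative regularization coefficient.
    """
    actions = np.asarray(actions, dtype=float)
    # n-th order finite difference along the time axis
    diffs = np.diff(actions, n=order, axis=0)
    return weight * float(np.sum(diffs ** 2))

# A jittery action trace incurs a far larger jerk penalty than a smooth ramp
jittery = [0.0, 1.0, -1.0, 1.0, -1.0]
smooth = [0.0, 0.25, 0.5, 0.75, 1.0]
print(action_derivative_penalty(jittery))  # large
print(action_derivative_penalty(smooth))   # 0.0 (constant velocity, zero jerk)
```

In a training loop, such a penalty would typically be subtracted from the environment reward at each step (or computed over a sliding window of recent actions), trading a small amount of task return for smoother, lower-wear control.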
Similar Papers
High-order Regularization for Machine Learning and Learning-based Control
Machine Learning (CS)
Makes smart computer programs more understandable.
Data-regularized Reinforcement Learning for Diffusion Models at Scale
Machine Learning (CS)
Makes AI create better videos that people like.
Deep Reinforcement Learning for Real-Time Green Energy Integration in Data Centers
Machine Learning (CS)
Saves money and energy in computer centers.