UACER: An Uncertainty-Aware Critic Ensemble Framework for Robust Adversarial Reinforcement Learning
By: Jiaxi Wu, Tiantian Zhang, Yuxing Wang, and more
Potential Business Impact:
Teaches robots to learn better from mistakes.
Robust adversarial reinforcement learning has emerged as an effective paradigm for training agents to handle uncertain disturbances in real environments, with critical applications in sequential decision-making domains such as autonomous driving and robotic control. Within this paradigm, agent training is typically formulated as a zero-sum Markov game between a protagonist and an adversary to enhance policy robustness. However, the trainable nature of the adversary inevitably induces non-stationarity in the learning dynamics, exacerbating training instability and convergence difficulties, particularly in high-dimensional, complex environments. In this paper, we propose a novel approach, Uncertainty-Aware Critic Ensemble for robust adversarial Reinforcement learning (UACER), which consists of two strategies: 1) Diversified critic ensemble: a diverse set of K critic networks is exploited in parallel, in place of the conventional single-critic architecture, to stabilize Q-value estimation through both variance reduction and robustness enhancement. 2) Time-varying Decay Uncertainty (TDU) mechanism: going beyond simple linear combinations, we develop a variance-derived Q-value aggregation strategy that explicitly incorporates epistemic uncertainty to dynamically regulate the exploration-exploitation trade-off while stabilizing the training process. Comprehensive experiments on several MuJoCo control problems validate the effectiveness of UACER, which outperforms state-of-the-art methods in overall performance, stability, and efficiency.
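The abstract does not give implementation details, but its two ingredients follow a common pattern in deep RL: a K-critic ensemble whose per-sample spread serves as an epistemic-uncertainty signal, folded into the aggregated Q-value with a coefficient that decays over training. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern; the network sizes, the linear decay schedule, the sign of the uncertainty term, and the names `CriticEnsemble` and `tdu_aggregate` are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a K-critic ensemble whose Q-estimates
# are aggregated with a time-decaying uncertainty term, loosely following the
# abstract's description. Sizes, schedule, and sign choices are assumptions.
import torch
import torch.nn as nn


class Critic(nn.Module):
    """Single Q-network mapping (state, action) -> scalar Q-value."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


class CriticEnsemble(nn.Module):
    """K independently initialized critics; epistemic uncertainty is read off
    as the standard deviation of their Q-estimates."""

    def __init__(self, state_dim: int, action_dim: int, k: int = 5):
        super().__init__()
        self.critics = nn.ModuleList(Critic(state_dim, action_dim) for _ in range(k))

    def forward(self, state: torch.Tensor, action: torch.Tensor):
        qs = torch.stack([c(state, action) for c in self.critics], dim=0)  # (K, B, 1)
        return qs.mean(dim=0), qs.std(dim=0)


def tdu_aggregate(q_mean, q_std, step, total_steps, beta0: float = 1.0):
    """Hypothetical time-varying decay: the uncertainty term is weighted more
    early in training (exploration) and annealed toward zero (exploitation)."""
    beta = beta0 * max(0.0, 1.0 - step / total_steps)  # linear decay (assumption)
    return q_mean + beta * q_std


if __name__ == "__main__":
    # Dimensions roughly matching a MuJoCo locomotion task (e.g. HalfCheetah).
    ensemble = CriticEnsemble(state_dim=17, action_dim=6, k=5)
    states, actions = torch.randn(32, 17), torch.randn(32, 6)
    q_mean, q_std = ensemble(states, actions)
    q_agg = tdu_aggregate(q_mean, q_std, step=10_000, total_steps=1_000_000)
    print(q_agg.shape)  # torch.Size([32, 1])
```

In this reading, the ensemble spread stands in for epistemic uncertainty and the decaying coefficient plays the role the abstract assigns to TDU: regulating the exploration-exploitation trade-off while keeping the aggregated target stable as training progresses.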
Similar Papers
RLAC: Reinforcement Learning with Adversarial Critic for Free-Form Generation Tasks
Machine Learning (CS)
Makes AI write better stories and code.
Unveiling Uncertainty-Aware Autonomous Cooperative Learning Based Planning Strategy
Robotics
Cars learn to drive safely together, even with mistakes.
Adversarial Reinforcement Learning for Robust Control of Fixed-Wing Aircraft under Model Uncertainty
Optimization and Control
Drones fly straighter even when the air is tricky.