Mitigating Estimation Bias with Representation Learning in TD Error-Driven Regularization
By: Haohui Chen, Zhiyong Chen, Aoxiang Liu, and more
Potential Business Impact:
Teaches robots to learn better by balancing risks.
Deterministic policy gradient algorithms for continuous control suffer from value estimation biases that degrade performance. While double critics reduce such biases, the exploration potential of double actors remains underexplored. Building on temporal-difference error-driven regularization (TDDR), a double actor-critic framework, this work introduces enhanced methods that achieve flexible bias control and stronger representation learning. We propose three convex combination strategies, both symmetric and asymmetric, that balance pessimistic estimates, which mitigate overestimation, against optimistic exploration via double actors, which alleviates underestimation. A single hyperparameter governs this mechanism, enabling tunable control across the bias spectrum. To further improve performance, we integrate augmented state and action representations into the actor and critic networks. Extensive experiments show that our approach consistently outperforms benchmarks, demonstrating the value of tunable bias control and revealing that both overestimation and underestimation can be exploited differently depending on the environment.
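To make the bias-control idea concrete, below is a minimal sketch of a convex combination of two critic estimates governed by a single hyperparameter. The names (`beta`, `q1`, `q2`) and the specific min/max form are illustrative assumptions for a symmetric variant, not the paper's exact formulation.

```python
import numpy as np

def convex_combination_target(q1: np.ndarray, q2: np.ndarray, beta: float) -> np.ndarray:
    """Blend pessimistic and optimistic estimates from double critics.

    beta = 1.0 recovers the pessimistic minimum (clipped double-Q style),
    beta = 0.0 recovers the optimistic maximum, and intermediate values
    give tunable control over the estimation-bias spectrum.
    """
    pessimistic = np.minimum(q1, q2)   # counters overestimation
    optimistic = np.maximum(q1, q2)    # counters underestimation
    return beta * pessimistic + (1.0 - beta) * optimistic

# Example: target values from two critics for a small batch of transitions
q1 = np.array([1.2, 0.8, 2.5])
q2 = np.array([1.0, 1.1, 2.0])
target_q = convex_combination_target(q1, q2, beta=0.7)
```

In an actual actor-critic update, the blended value would replace the usual min-of-critics term when forming the bootstrapped TD target; here it is shown in isolation only to illustrate how one hyperparameter spans the bias spectrum.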
Similar Papers
Quasi-Newton Compatible Actor-Critic for Deterministic Policies
Machine Learning (CS)
Teaches computers to learn faster by watching mistakes.
Moderate Actor-Critic Methods: Controlling Overestimation Bias via Expectile Loss
Machine Learning (CS)
Fixes computer learning mistakes for better results.
On The Presence of Double-Descent in Deep Reinforcement Learning
Machine Learning (CS)
Makes smart computer players learn better and faster.