Wasserstein Barycenter Soft Actor-Critic
By: Zahra Shahrooei, Ali Baheri
Potential Business Impact:
Teaches robots to learn faster with less practice.
Deep off-policy actor-critic algorithms have emerged as the leading framework for reinforcement learning in continuous control domains. However, most of these algorithms suffer from poor sample efficiency, especially in environments with sparse rewards. In this paper, we take a step towards addressing this issue by providing a principled directed exploration strategy. We propose the Wasserstein Barycenter Soft Actor-Critic (WBSAC) algorithm, which pairs a pessimistic actor used for temporal-difference learning with an optimistic actor that promotes exploration. This is achieved by using the Wasserstein barycenter of the pessimistic and optimistic policies as the exploration policy and adjusting the degree of exploration throughout the learning process. We compare WBSAC with state-of-the-art off-policy actor-critic algorithms and show that WBSAC is more sample-efficient on MuJoCo continuous control tasks.
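To make the barycenter idea concrete, here is a minimal sketch, not the authors' implementation, of how an exploration policy could be formed from two diagonal-Gaussian SAC actors. It relies on the standard closed form for the 2-Wasserstein barycenter of Gaussians with diagonal covariances (mean and standard deviation are interpolated linearly). The function name, the interpolation weight `lam`, and the way it would be annealed over training are illustrative assumptions, not details taken from the paper.

```python
import torch

def wasserstein_barycenter_policy(mu_p, sigma_p, mu_o, sigma_o, lam):
    """Barycenter of a pessimistic and an optimistic diagonal Gaussian policy.

    For diagonal Gaussians, the 2-Wasserstein barycenter with weights
    (lam, 1 - lam) is again Gaussian, with linearly interpolated mean and std.
    lam = 1.0 recovers the pessimistic policy, lam = 0.0 the optimistic one.
    """
    mu_bar = lam * mu_p + (1.0 - lam) * mu_o
    sigma_bar = lam * sigma_p + (1.0 - lam) * sigma_o
    return mu_bar, sigma_bar

# Example: sample an exploration action from the barycenter policy
# (action dimension and parameter values are placeholders).
mu_p, sigma_p = torch.zeros(6), 0.2 * torch.ones(6)        # pessimistic head
mu_o, sigma_o = 0.5 * torch.ones(6), 0.6 * torch.ones(6)   # optimistic head
mu_bar, sigma_bar = wasserstein_barycenter_policy(mu_p, sigma_p, mu_o, sigma_o, lam=0.7)
action = torch.distributions.Normal(mu_bar, sigma_bar).sample()
```

In this sketch, adjusting the degree of exploration over training would amount to scheduling `lam`, shifting the sampled actions from the optimistic actor toward the pessimistic one as learning progresses.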
Similar Papers
Effective Reinforcement Learning Control using Conservative Soft Actor-Critic
Robotics
Teaches robots to learn and move better.
Wasserstein-Barycenter Consensus for Cooperative Multi-Agent Reinforcement Learning
Systems and Control
Teaches robots to work together better.
Wasserstein Policy Optimization
Machine Learning (CS)
Teaches robots to move smoothly and learn faster.