SACn: Soft Actor-Critic with n-step Returns
By: Jakub Łyskawa, Jakub Lewandowski, Paweł Wawrzyński
Soft Actor-Critic (SAC) is widely used in practical applications and remains one of the most relevant off-policy, online, model-free reinforcement learning (RL) methods. The technique of n-step returns is known to increase the convergence speed of RL algorithms compared to their 1-step counterparts. However, SAC is notoriously difficult to combine with n-step returns, since the usual combination introduces bias into off-policy algorithms due to changes in the action distribution. While this bias can be corrected with importance sampling, a method for estimating expected values under one distribution using samples from another, importance sampling may lead to numerical instability. In this work, we combine SAC with n-step returns in a way that overcomes this issue. We present an approach to applying numerically stable importance sampling with simplified hyperparameter selection. Furthermore, we analyze the entropy estimation approach of Soft Actor-Critic in the context of the n-step maximum entropy framework and formulate the $\tau$-sampled entropy estimation to reduce the variance of the learning target. Finally, we formulate the Soft Actor-Critic with n-step returns (SAC$n$) algorithm, which we verify experimentally on simulated MuJoCo environments.
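For intuition, a minimal sketch of the quantity at issue: a generic per-decision importance-weighted n-step soft return target, written under standard maximum entropy RL conventions rather than the paper's specific SAC$n$ formulation (the symbols $\mu$ for the behavior policy, $\pi_\phi$ for the current policy, $Q_{\bar\theta}$ for the target critic, and $\alpha$ for the temperature are assumed notation, not taken from the abstract):

$$
\hat{Q}(s_t, a_t) = r_t + \sum_{k=1}^{n-1} \gamma^k w_k \big( r_{t+k} - \alpha \log \pi_\phi(a_{t+k} \mid s_{t+k}) \big) + \gamma^n w_{n-1}\, \mathbb{E}_{a' \sim \pi_\phi}\!\big[ Q_{\bar\theta}(s_{t+n}, a') - \alpha \log \pi_\phi(a' \mid s_{t+n}) \big],
$$
$$
w_k = \prod_{i=1}^{k} \frac{\pi_\phi(a_{t+i} \mid s_{t+i})}{\mu(a_{t+i} \mid s_{t+i})}.
$$

The product of likelihood ratios $w_k$ can collapse toward zero or blow up as $n$ grows, which is the numerical instability the abstract refers to; the paper's contributions concern a stable way of applying this kind of correction and a lower-variance entropy estimator.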