Application of linear regression and quasi-Newton methods to the deep reinforcement learning in continuous action cases
By: Hisato Komatsu
Potential Business Impact:
Teaches robots to move smoothly and learn.
The linear regression (LR) method offers the advantage that optimal parameters can be calculated relatively easily, although its representation capability is more limited than that of deep learning techniques. To improve deep reinforcement learning, Levine et al. proposed the Least Squares Deep Q Network (LS-DQN) method, which combines the Deep Q Network (DQN) with the LR method. However, the LS-DQN method assumes that actions are discrete. In this study, we propose the Double Least Squares Deep Deterministic Policy Gradient (DLS-DDPG) method to address this limitation. This method combines the LR method with the Deep Deterministic Policy Gradient (DDPG) technique, one of the representative deep reinforcement learning algorithms for continuous-action cases. For the LR update of the critic network, DLS-DDPG uses an algorithm similar to Fitted Q iteration, the method that LS-DQN adopted. In addition, we calculated the optimal action using the quasi-Newton method and used it both as the agent's action and as the training data for the LR update of the actor network. Numerical experiments conducted in MuJoCo environments showed that the proposed method improved performance in at least some tasks, although difficulties remain, such as the inability to make the regularization terms small.
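The quasi-Newton step described in the abstract, searching for the action that maximizes the critic's Q-value, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quadratic `q_value` function is a hypothetical stand-in for a trained critic network, chosen so the true maximizer is known, and L-BFGS-B is used as a representative quasi-Newton optimizer over a box-bounded action space.

```python
import numpy as np
from scipy.optimize import minimize

def q_value(state, action):
    # Hypothetical critic: Q(s, a) = -||a - s||^2, maximized at a = s.
    # A real DLS-DDPG critic would be a trained neural network.
    return -np.sum((action - state) ** 2)

def optimal_action(state, a_init, bounds):
    # Quasi-Newton (L-BFGS-B) search for argmax_a Q(s, a),
    # implemented as minimizing -Q(s, a) within the action bounds.
    res = minimize(lambda a: -q_value(state, a), a_init,
                   method="L-BFGS-B", bounds=bounds)
    return res.x

state = np.array([0.3, -0.5])
a0 = np.zeros(2)                  # initial guess for the action
bounds = [(-1.0, 1.0)] * 2        # box constraints of the action space
a_star = optimal_action(state, a0, bounds)
print(np.round(a_star, 3))        # converges near the known maximizer
```

The resulting `a_star` would serve double duty as in the abstract: executed as the agent's action and recorded as a regression target for the least-squares update of the actor network.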
Similar Papers
Frictional Q-Learning
Machine Learning (CS)
Teaches robots to learn new skills safely.
Quasi-Newton Compatible Actor-Critic for Deterministic Policies
Machine Learning (CS)
Teaches computers to learn faster by watching mistakes.
A Practical Introduction to Deep Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn and make smart choices.