Application of linear regression and quasi-Newton methods to the deep reinforcement learning in continuous action cases

Published: March 19, 2025 | arXiv ID: 2503.14976v3

By: Hisato Komatsu

Potential Business Impact:

Teaches robots to learn smooth, continuous movements.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The linear regression (LR) method offers the advantage that optimal parameters can be calculated relatively easily, although its representation capability is more limited than that of deep learning techniques. To improve deep reinforcement learning, Levine et al. proposed the Least Squares Deep Q Network (LS-DQN) method, which combines the Deep Q Network (DQN) with the LR method. However, LS-DQN assumes that actions are discrete. In this study, we propose the Double Least Squares Deep Deterministic Policy Gradient (DLS-DDPG) method to address this limitation. This method combines the LR method with Deep Deterministic Policy Gradient (DDPG), one of the representative deep reinforcement learning algorithms for continuous-action settings. For the LR update of the critic network, DLS-DDPG uses an algorithm similar to Fitted Q Iteration, the method adopted by LS-DQN. In addition, we calculate the optimal action using a quasi-Newton method and use it both as the agent's action and as training data for the LR update of the actor network. Numerical experiments in MuJoCo environments showed that the proposed method improves performance in at least some tasks, although difficulties remain, such as the inability to keep the regularization terms small.
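The two ingredients the abstract describes can be sketched in miniature. Below is a minimal, hypothetical illustration (not the paper's actual implementation): a closed-form ridge-regression critic update in the spirit of Fitted Q Iteration, and a quasi-Newton (L-BFGS-B) search for the Q-maximizing action. The feature map `phi`, the network sizes, and the synthetic batch are all stand-in assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
gamma, ridge = 0.99, 1e-2  # discount; regularization strength (assumed values)

# Hypothetical random-feature map for (state, action) pairs, standing in
# for the last hidden layer of a critic network (4-D state, 2-D action).
W = rng.normal(size=(6, 64))
def phi(s, a):
    return np.tanh(np.concatenate([s, a]) @ W)

# Least-squares (ridge) critic update, Fitted-Q-Iteration style: regress
# bootstrapped targets onto features and solve the normal equations.
def ls_critic_update(S, A, R, S2, A2, w_prev=None):
    X = np.stack([phi(s, a) for s, a in zip(S, A)])
    X2 = np.stack([phi(s2, a2) for s2, a2 in zip(S2, A2)])
    if w_prev is None:
        w_prev = np.zeros(X.shape[1])  # zero critic on the first pass
    y = R + gamma * (X2 @ w_prev)      # one-step bootstrap targets
    # Closed-form ridge solution: (X^T X + ridge*I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

# Quasi-Newton (L-BFGS-B) search for the action maximizing the linear
# critic Q(s, a) = phi(s, a) @ w, within box bounds on the action.
def optimal_action(w, s, a0, low=-1.0, high=1.0):
    res = minimize(lambda a: -(phi(s, a) @ w), a0, method="L-BFGS-B",
                   bounds=[(low, high)] * len(a0))
    return res.x

# Tiny synthetic batch just to exercise the two updates.
S = rng.normal(size=(32, 4)); A = rng.uniform(-1, 1, size=(32, 2))
R = rng.normal(size=32);      S2 = rng.normal(size=(32, 4))
A2 = rng.uniform(-1, 1, size=(32, 2))

w = ls_critic_update(S, A, R, S2, A2)
a_star = optimal_action(w, S[0], np.zeros(2))
print(w.shape, a_star.shape)
```

In the full method, the refined action `a_star` would serve both as the behavior action and as a regression target for the actor's own LR update; the abstract notes that the ridge term cannot be made small in practice, which is why it is kept explicit here.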

Country of Origin
🇯🇵 Japan

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)