Dynamic Entropy Tuning in Reinforcement Learning Low-Level Quadcopter Control: Stochasticity vs Determinism
By: Youssef Mahran, Zeyad Gamal, Ayman El-Badawy
This paper explores the impact of dynamic entropy tuning in Reinforcement Learning (RL) algorithms that train a stochastic policy, and compares their performance against algorithms that train a deterministic one. Stochastic policies optimize a probability distribution over actions to maximize rewards, while deterministic policies map each state to a single action. The effect of training a stochastic policy with both static and dynamic entropy, then executing deterministic actions to control the quadcopter, is explored and compared against training a deterministic policy and executing its actions directly. For this research, the Soft Actor-Critic (SAC) algorithm was chosen as the stochastic algorithm and the Twin Delayed Deep Deterministic Policy Gradient (TD3) as the deterministic one. Training and simulation results show the positive effect of dynamic entropy tuning on quadcopter control: it prevents catastrophic forgetting and improves exploration efficiency.
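For readers unfamiliar with how dynamic entropy tuning works, the sketch below shows the standard automatic temperature adjustment used in SAC (Haarnoja et al., 2018): the entropy coefficient alpha is learned by gradient descent so that policy entropy tracks a fixed target. This is a minimal illustration under assumed names and hyperparameters (the action dimension, learning rate, and target-entropy heuristic are assumptions), not the authors' implementation.

```python
import torch

# Sketch of SAC-style dynamic entropy tuning (automatic temperature
# adjustment). All names and hyperparameters here are illustrative.
action_dim = 4                        # assumed: e.g. four rotor thrust commands
target_entropy = -float(action_dim)   # common heuristic: -dim(action space)

log_alpha = torch.zeros(1, requires_grad=True)  # optimize log(alpha) so alpha > 0
alpha_optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

def update_temperature(log_prob: torch.Tensor) -> float:
    """One gradient step on the entropy temperature alpha.

    log_prob holds log pi(a|s) for a batch of actions sampled from the
    current policy. When policy entropy falls below the target, this loss
    pushes alpha up, strengthening the exploration bonus; once entropy
    exceeds the target, alpha decays and the policy is free to sharpen.
    """
    alpha_loss = -(log_alpha * (log_prob + target_entropy).detach()).mean()
    alpha_optimizer.zero_grad()
    alpha_loss.backward()
    alpha_optimizer.step()
    return log_alpha.exp().item()
```

With static entropy, alpha would instead be a fixed constant in the actor loss; and at deployment the trained stochastic policy can still act deterministically by taking the mean of its action distribution, which is the execution scheme compared in the paper.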