A New Perspective on Transformers in Online Reinforcement Learning for Continuous Control
By: Nikita Kachaev, Daniil Zelezetsky, Egor Cherepanov, and more
Potential Business Impact:
Helps robots learn continuous control tasks faster and more reliably.
Despite their effectiveness and popularity in offline and model-based reinforcement learning (RL), transformers remain underexplored in online model-free RL because of their sensitivity to training setups and model design decisions, such as how to structure the policy and value networks, whether to share components between them, and how to handle temporal information. In this paper, we show that transformers can be strong baselines for continuous control in online model-free RL. We investigate the key design questions: how to condition inputs, how to share components between the actor and critic, and how to slice sequential data for training. Our experiments reveal stable architectural and training strategies that enable competitive performance across fully and partially observable tasks, in both vector- and image-based settings. These findings offer practical guidance for applying transformers in online RL.
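The abstract names three design choices: input conditioning, component sharing between actor and critic, and sequence slicing. As a minimal sketch of what one such configuration could look like, assuming a PyTorch setup, the snippet below shares a transformer trunk between actor and critic heads and slices a trajectory into fixed-length context windows. The class SharedTransformerActorCritic, its hyperparameters, and slice_sequences are hypothetical illustrations, not the paper's actual architecture.

```python
# Hypothetical sketch of a shared actor-critic transformer for
# continuous control; not the authors' implementation.
import torch
import torch.nn as nn


class SharedTransformerActorCritic(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2, context_len: int = 8):
        super().__init__()
        self.context_len = context_len
        self.embed = nn.Linear(obs_dim, d_model)            # input conditioning
        self.pos = nn.Parameter(torch.zeros(context_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # shared trunk
        self.actor_head = nn.Linear(d_model, act_dim)       # policy mean
        self.critic_head = nn.Linear(d_model, 1)            # state value

    def forward(self, obs_seq: torch.Tensor):
        # obs_seq: (batch, context_len, obs_dim), one slice of a trajectory
        h = self.embed(obs_seq) + self.pos                  # add learned positions
        h = self.encoder(h)                                 # temporal mixing
        last = h[:, -1]                                     # current timestep only
        return torch.tanh(self.actor_head(last)), self.critic_head(last)


def slice_sequences(traj: torch.Tensor, context_len: int) -> torch.Tensor:
    """Slice one trajectory (T, obs_dim) into overlapping training windows."""
    windows = [traj[t:t + context_len]
               for t in range(traj.size(0) - context_len + 1)]
    return torch.stack(windows)                             # (N, context_len, obs_dim)
```

Whether the actor and critic should share this trunk or use separate encoders, and how long the context slices should be, are exactly the kinds of choices the paper's experiments evaluate.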
Similar Papers
A Comparison Between Decision Transformers and Traditional Offline Reinforcement Learning Algorithms
Machine Learning (CS)
Helps computers learn better from past actions.
Do We Need Transformers to Play FPS Video Games?
Machine Learning (CS)
Makes game AI learn from past games.
Online Finetuning Decision Transformers with Pure RL Gradients
Machine Learning (CS)
Teaches AI to learn from its own actions.