A New Perspective on Transformers in Online Reinforcement Learning for Continuous Control

Published: October 15, 2025 | arXiv ID: 2510.13367v1

By: Nikita Kachaev, Daniil Zelezetsky, Egor Cherepanov, and more

Potential Business Impact:

Could let robots and other control systems learn more quickly and reliably from online interaction.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Despite their effectiveness and popularity in offline or model-based reinforcement learning (RL), transformers remain underexplored in online model-free RL due to their sensitivity to training setups and model design decisions such as how to structure the policy and value networks, share components, or handle temporal information. In this paper, we show that transformers can be strong baselines for continuous control in online model-free RL. We investigate key design questions: how to condition inputs, share components between actor and critic, and slice sequential data for training. Our experiments reveal stable architectural and training strategies enabling competitive performance across fully and partially observable tasks, and in both vector- and image-based settings. These findings offer practical guidance for applying transformers in online RL.
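One of the design questions the paper investigates, how to slice sequential data for training, can be illustrated with a minimal sketch. The window length and stride below are illustrative assumptions, not the paper's actual settings, and the transitions are stand-ins for (observation, action, reward) tuples.

```python
def slice_episode(episode, context_len, stride):
    """Slice one episode (a list of transitions) into fixed-length,
    possibly overlapping windows that a transformer policy can
    consume as temporal context during training.

    Episodes shorter than context_len are skipped here for brevity;
    a real pipeline might left-pad them instead.
    """
    windows = []
    for start in range(0, len(episode) - context_len + 1, stride):
        windows.append(episode[start:start + context_len])
    return windows

# Example: a 10-step episode sliced into windows of length 4 with stride 2.
episode = list(range(10))  # stand-in for transition tuples
windows = slice_episode(episode, context_len=4, stride=2)
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

The stride controls the trade-off between how many training windows each episode yields and how correlated those windows are, which is one reason slicing strategy matters for training stability.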

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)