Score: 1

Online Finetuning Decision Transformers with Pure RL Gradients

Published: January 1, 2026 | arXiv ID: 2601.00167v1

By: Junkai Luo, Yinglun Zhu

Potential Business Impact:

Enables AI decision-making models to keep improving after deployment by learning directly from the outcomes of their own actions.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Decision Transformers (DTs) have emerged as a powerful framework for sequential decision making by formulating offline reinforcement learning (RL) as a sequence modeling problem. However, extending DTs to online settings with pure RL gradients remains largely unexplored, as existing approaches continue to rely heavily on supervised sequence-modeling objectives during online finetuning. We identify hindsight return relabeling -- a standard component in online DTs -- as a critical obstacle to RL-based finetuning: while beneficial for supervised learning, it is fundamentally incompatible with importance sampling-based RL algorithms such as GRPO, leading to unstable training. Building on this insight, we propose new algorithms that enable online finetuning of Decision Transformers using pure reinforcement learning gradients. We adapt GRPO to DTs and introduce several key modifications, including sub-trajectory optimization for improved credit assignment, sequence-level likelihood objectives for enhanced stability and efficiency, and active sampling to encourage exploration in uncertain regions. Through extensive experiments, we demonstrate that our methods outperform existing online DT baselines and achieve new state-of-the-art performance across multiple benchmarks, highlighting the effectiveness of pure-RL-based online finetuning for Decision Transformers.
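
To make the abstract's main ingredients concrete, the sketch below shows what a GRPO-style clipped update at the sequence level over sub-trajectories could look like. This is a minimal illustration under assumptions, not the paper's implementation: the helper name grpo_subtrajectory_update, the policy.log_prob interface, and the data layout are all hypothetical. The comment about keeping the return-to-go conditioning as it was at sampling time (rather than hindsight-relabeled) reflects the abstract's stated reason that relabeling breaks importance-sampling ratios.

```python
import torch

def grpo_subtrajectory_update(policy, old_policy, subtrajs, optimizer,
                              clip_eps=0.2):
    """One GRPO-style update over a group of sub-trajectories (sketch).

    Each element of `subtrajs` is assumed to hold:
      states, actions -- tensors for one sub-trajectory, conditioned on the
                         return-to-go tokens used at sampling time (no
                         hindsight relabeling, so the ratio stays valid)
      ret             -- scalar return of the rollout it came from
    Group-relative advantages replace a learned critic, as in GRPO.
    """
    returns = torch.tensor([s["ret"] for s in subtrajs])
    # Group-relative advantage: normalize returns within the sampled group.
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)

    losses = []
    for a_i, s in zip(adv, subtrajs):
        # Sequence-level log-likelihood: sum over the sub-trajectory's
        # action tokens, giving a single importance ratio per sub-trajectory.
        logp_new = policy.log_prob(s["states"], s["actions"]).sum()
        with torch.no_grad():
            logp_old = old_policy.log_prob(s["states"], s["actions"]).sum()
        ratio = torch.exp(logp_new - logp_old)
        # PPO/GRPO-style clipped surrogate objective.
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
        losses.append(-torch.min(ratio * a_i, clipped * a_i))

    optimizer.zero_grad()
    torch.stack(losses).mean().backward()
    optimizer.step()
```

Computing one ratio per sub-trajectory, rather than per timestep, corresponds to the sequence-level likelihood objective the abstract credits with improved stability and efficiency; the sub-trajectory granularity is what the abstract refers to as improved credit assignment.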

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)