Score: 2

Improving and Accelerating Offline RL in Large Discrete Action Spaces with Structured Policy Initialization

Published: January 7, 2026 | arXiv ID: 2601.04441v1

By: Matthew Landers, Taylor W. Killian, Thomas Hartvigsen, and more

Potential Business Impact:

Helps agents quickly pick coherent combinations of many simultaneous actions, such as coordinating a robot's joints.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning in discrete combinatorial action spaces requires searching over exponentially many joint actions to simultaneously select multiple sub-actions that form coherent combinations. Existing approaches either simplify policy learning by assuming independence across sub-actions, which often yields incoherent or invalid actions, or attempt to learn action structure and control jointly, which is slow and unstable. We introduce Structured Policy Initialization (SPIN), a two-stage framework that first pre-trains an Action Structure Model (ASM) to capture the manifold of valid actions, then freezes this representation and trains lightweight policy heads for control. On challenging discrete DM Control benchmarks, SPIN improves average return by up to 39% over the state of the art while reducing time to convergence by up to 12.8×.
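The abstract only names the two stages, so the sketch below is one plausible reading of the recipe, not the paper's actual implementation. It assumes the ASM is an autoencoder over multi-discrete joint actions, that the policy head maps states into the frozen ASM latent space and decodes through the frozen decoder, and that a simple supervised objective stands in for whatever offline RL loss the paper uses. All sizes (STATE_DIM, N_SUB_ACTIONS, N_BINS, LATENT_DIM) are hypothetical.

```python
# Hedged sketch of SPIN's two-stage structure (assumptions noted above; PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 24       # hypothetical state size
N_SUB_ACTIONS = 6    # hypothetical number of sub-action dimensions
N_BINS = 11          # hypothetical discretization bins per sub-action
LATENT_DIM = 32      # hypothetical ASM latent size

class ActionStructureModel(nn.Module):
    """Stage 1 (assumed form): autoencode logged joint actions so the latent
    space approximates the manifold of valid sub-action combinations."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_BINS, 16)
        self.encoder = nn.Sequential(
            nn.Linear(N_SUB_ACTIONS * 16, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_SUB_ACTIONS * N_BINS),
        )

    def forward(self, actions):  # actions: (B, N_SUB_ACTIONS) int64
        z = self.encoder(self.embed(actions).flatten(1))
        logits = self.decoder(z).view(-1, N_SUB_ACTIONS, N_BINS)
        return z, logits

class PolicyHead(nn.Module):
    """Stage 2 (assumed form): a lightweight head maps states into the frozen
    ASM latent space; the frozen decoder produces per-sub-action logits."""
    def __init__(self, asm):
        super().__init__()
        self.asm = asm
        self.head = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, state):
        z = self.head(state)
        return self.asm.decoder(z).view(-1, N_SUB_ACTIONS, N_BINS)

# --- Stage 1: pre-train the ASM to reconstruct joint actions from the offline data ---
asm = ActionStructureModel()
opt = torch.optim.Adam(asm.parameters(), lr=1e-3)
actions = torch.randint(0, N_BINS, (256, N_SUB_ACTIONS))   # stand-in offline batch
_, logits = asm(actions)
recon = F.cross_entropy(logits.reshape(-1, N_BINS), actions.reshape(-1))
opt.zero_grad(); recon.backward(); opt.step()

# --- Stage 2: freeze the ASM, train only the small policy head for control ---
for p in asm.parameters():
    p.requires_grad_(False)
policy = PolicyHead(asm)
opt2 = torch.optim.Adam(policy.head.parameters(), lr=1e-3)
states = torch.randn(256, STATE_DIM)                        # stand-in offline states
policy_logits = policy(states)
# Placeholder supervised loss; the paper's actual offline RL objective is not given here.
loss = F.cross_entropy(policy_logits.reshape(-1, N_BINS), actions.reshape(-1))
opt2.zero_grad(); loss.backward(); opt2.step()
```

The point the sketch tries to capture is the decoupling: structure over valid joint actions is learned once and frozen, so the control stage only optimizes a small head, which is consistent with the reported speedup in time to convergence.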

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)