Improving and Accelerating Offline RL in Large Discrete Action Spaces with Structured Policy Initialization
By: Matthew Landers, Taylor W. Killian, Thomas Hartvigsen, and more
Potential Business Impact:
Teaches robots to pick the best actions.
Reinforcement learning in discrete combinatorial action spaces requires searching over exponentially many joint actions to simultaneously select multiple sub-actions that form coherent combinations. Existing approaches either simplify policy learning by assuming independence across sub-actions, which often yields incoherent or invalid actions, or attempt to learn action structure and control jointly, which is slow and unstable. We introduce Structured Policy Initialization (SPIN), a two-stage framework that first pre-trains an Action Structure Model (ASM) to capture the manifold of valid actions, then freezes this representation and trains lightweight policy heads for control. On challenging discrete DM Control benchmarks, SPIN improves average return by up to 39% over the state of the art while reducing time to convergence by up to 12.8×.
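To make the two-stage idea concrete, below is a minimal sketch of what such a pipeline could look like in PyTorch. The ASM is stood in for by a simple autoencoder over one-hot joint actions, and the data, dimensions, and policy-head design are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-stage sketch of SPIN-style training (not the authors' code).
import torch
import torch.nn as nn

N_SUB_ACTIONS, N_CHOICES, LATENT, STATE_DIM = 4, 8, 16, 10  # assumed sizes

class ActionStructureModel(nn.Module):
    """Placeholder ASM: learns a latent space over valid joint actions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_SUB_ACTIONS * N_CHOICES, 64), nn.ReLU(), nn.Linear(64, LATENT))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, N_SUB_ACTIONS * N_CHOICES))

    def forward(self, a_onehot):
        z = self.encoder(a_onehot)
        logits = self.decoder(z).view(-1, N_SUB_ACTIONS, N_CHOICES)
        return z, logits

# Stage 1: pre-train the ASM to reconstruct joint actions from the dataset,
# so its latent space captures the manifold of valid sub-action combinations.
asm = ActionStructureModel()
opt = torch.optim.Adam(asm.parameters(), lr=1e-3)
for _ in range(200):
    actions = torch.randint(0, N_CHOICES, (32, N_SUB_ACTIONS))  # placeholder offline data
    onehot = nn.functional.one_hot(actions, N_CHOICES).float().view(32, -1)
    _, logits = asm(onehot)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, N_CHOICES), actions.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the ASM and train a lightweight policy head for control
# on top of the fixed action representation.
for p in asm.parameters():
    p.requires_grad_(False)
policy_head = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT))
policy_opt = torch.optim.Adam(policy_head.parameters(), lr=1e-3)

states = torch.randn(32, STATE_DIM)
z_pred = policy_head(states)                    # policy output in the frozen latent space
action_logits = asm.decoder(z_pred).view(32, N_SUB_ACTIONS, N_CHOICES)
joint_action = action_logits.argmax(-1)         # decode to a coherent joint action
```

The point of the split is that the frozen ASM already encodes which sub-action combinations are coherent, so the control stage only has to learn a small mapping from states into that latent space rather than searching the exponential joint-action space directly.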
Similar Papers
Deep SPI: Safe Policy Improvement via World Models
Machine Learning (CS)
Makes AI learn better and safer.
Data-Efficient Safe Policy Improvement Using Parametric Structure
Artificial Intelligence
Makes AI learn better with less data.
Partial Action Replacement: Tackling Distribution Shift in Offline MARL
Machine Learning (CS)
Helps AI learn better from past experiences.