Deep SPI: Safe Policy Improvement via World Models
By: Florent Delgrange, Raphael Avalos, Willem Röpke
Potential Business Impact:
Makes AI learn better and more safely.
Safe policy improvement (SPI) offers theoretical control over policy updates, yet existing guarantees largely concern offline, tabular reinforcement learning (RL). We study SPI in the general online setting, where it is combined with world-model and representation learning. We develop a theoretical framework showing that restricting policy updates to a well-defined neighborhood of the current policy ensures monotonic improvement and convergence. This analysis links transition and reward prediction losses to representation quality, yielding online, "deep" analogues of classical SPI theorems from the offline RL literature. Building on these results, we introduce DeepSPI, a principled on-policy algorithm that couples local transition and reward losses with regularised policy updates. On the ALE-57 benchmark, DeepSPI matches or exceeds strong baselines, including PPO and DeepMDPs, while retaining theoretical guarantees.
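To make the abstract's description concrete, the sketch below shows the general shape of an update that couples local transition- and reward-prediction losses for a latent world model with a KL-regularised policy objective that keeps the new policy in a neighborhood of the current one. It is not the authors' implementation: all module names, loss weights, and the neighborhood threshold `epsilon` are assumptions made for illustration.

```python
# Illustrative sketch only (PyTorch assumed); not the DeepSPI reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentWorldModel(nn.Module):
    """Encoder plus latent transition and reward heads (hypothetical architecture)."""
    def __init__(self, obs_dim, n_actions, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.transition = nn.Sequential(nn.Linear(latent_dim + n_actions, 128), nn.ReLU(),
                                        nn.Linear(128, latent_dim))
        self.reward = nn.Sequential(nn.Linear(latent_dim + n_actions, 128), nn.ReLU(),
                                    nn.Linear(128, 1))

    def forward(self, obs, action_onehot):
        z = self.encoder(obs)
        za = torch.cat([z, action_onehot], dim=-1)
        return z, self.transition(za), self.reward(za).squeeze(-1)

def deepspi_style_losses(model, policy, old_policy, batch,
                         kl_weight=1.0, model_weight=1.0, epsilon=0.05):
    """One combined loss computation in the spirit of the abstract.

    batch: dict with 'obs', 'action' (one-hot), 'next_obs', 'reward', 'advantage'.
    """
    z, z_next_pred, r_pred = model(batch["obs"], batch["action"])
    with torch.no_grad():
        z_next = model.encoder(batch["next_obs"])

    # Local world-model losses: latent transition and reward prediction.
    transition_loss = F.mse_loss(z_next_pred, z_next)
    reward_loss = F.mse_loss(r_pred, batch["reward"])

    # Regularised on-policy update: advantage-weighted objective plus a KL
    # penalty to the current (behaviour) policy.
    logits_new = policy(z)
    with torch.no_grad():
        logits_old = old_policy(z)
    log_p_new = F.log_softmax(logits_new, dim=-1)
    log_p_old = F.log_softmax(logits_old, dim=-1)
    p_old = log_p_old.exp()

    logp_a_new = (log_p_new * batch["action"]).sum(-1)
    logp_a_old = (log_p_old * batch["action"]).sum(-1)
    ratio = torch.exp(logp_a_new - logp_a_old)
    policy_loss = -(ratio * batch["advantage"]).mean()

    kl = (p_old * (log_p_old - log_p_new)).sum(-1).mean()

    total = (policy_loss
             + kl_weight * kl
             + model_weight * (transition_loss + reward_loss))
    # `epsilon` stands in for the neighborhood constraint: an assumed mechanism
    # for flagging updates that drift too far from the current policy.
    within_neighborhood = bool(kl.item() <= epsilon)
    return total, within_neighborhood
```

In this sketch the KL term plays the role of the "well-defined neighborhood" the abstract refers to, while the transition and reward losses stand in for the local prediction losses that the paper links to representation quality; the exact constraint and loss forms in DeepSPI may differ.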
Similar Papers
Internalizing World Models via Self-Play Finetuning for Agentic RL
Machine Learning (CS)
Teaches AI to learn and solve new problems.
Soft Policy Optimization: Online Off-Policy RL for Sequence Models
Machine Learning (CS)
Teaches computers to learn from more examples faster.
Mitigating the Safety Alignment Tax with Null-Space Constrained Policy Optimization
Machine Learning (CS)
Keeps AI smart while making it safe.