DAWM: Diffusion Action World Models for Offline Reinforcement Learning via Action-Inferred Transitions
By: Zongyue Li, Xiao Han, Yusong Li, and more
Potential Business Impact:
Teaches robots to learn from past experiences.
Diffusion-based world models have demonstrated strong capabilities in synthesizing realistic long-horizon trajectories for offline reinforcement learning (RL). However, many existing methods do not directly generate actions alongside states and rewards, limiting their compatibility with standard value-based offline RL algorithms that rely on one-step temporal difference (TD) learning. While prior work has explored joint modeling of states, rewards, and actions to address this issue, such formulations often lead to increased training complexity and reduced performance in practice. We propose DAWM, a diffusion-based world model that generates future state-reward trajectories conditioned on the current state, action, and return-to-go, paired with an inverse dynamics model (IDM) for efficient action inference. This modular design produces complete synthetic transitions suitable for one-step TD-based offline RL, enabling effective and computationally efficient training. Empirically, we show that conservative offline RL algorithms such as TD3BC and IQL benefit significantly from training on these augmented trajectories, consistently outperforming prior diffusion-based baselines across multiple tasks in the D4RL benchmark.
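A minimal sketch of how such transitions could be assembled, assuming a diffusion sampler that returns state-reward trajectories conditioned on the current state, action, and return-to-go as the abstract describes. The sampler interface, network sizes, and helper names below are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class InverseDynamicsModel(nn.Module):
    """Predicts the action connecting two consecutive states: a_t = f(s_t, s_{t+1})."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, s_t: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_t, s_next], dim=-1))


def synthesize_transitions(diffusion_model, idm, s0, a0, rtg, horizon: int):
    """Build (s, a, r, s') tuples for one-step TD learning.

    Assumption: `diffusion_model.sample` returns a state-reward trajectory of
    shape (batch, horizon, state_dim + 1), conditioned on the current state,
    action, and return-to-go.
    """
    traj = diffusion_model.sample(s0, a0, rtg, horizon)   # (B, H, state_dim + 1)
    states, rewards = traj[..., :-1], traj[..., -1]       # split states and rewards
    s_t, s_next = states[:, :-1], states[:, 1:]           # consecutive state pairs
    actions = idm(s_t, s_next)                            # infer the connecting actions
    return s_t, actions, rewards[:, :-1], s_next          # add to the TD3BC / IQL replay buffer
```

The returned tuples are complete one-step transitions, so a standard conservative offline RL learner can consume them alongside the real dataset without any change to its TD update.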
Similar Papers
DiWA: Diffusion Policy Adaptation with World Models
Robotics
Teaches robots new tricks without real-world practice.
World4RL: Diffusion World Models for Policy Refinement with Reinforcement Learning for Robotic Manipulation
Robotics
Teaches robots new skills without real-world practice.
LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation
Robotics
Helps robots learn to do tasks better.