GrndCtrl: Grounding World Models via Self-Supervised Reward Alignment
By: Haoyang He, Jay Patrikar, Dong-Ki Kim, and more
Potential Business Impact:
Helps robots navigate safely and understand their surroundings.
Recent advances in video world modeling have enabled large-scale generative models to simulate embodied environments with high visual fidelity, providing strong priors for prediction, planning, and control. Yet, despite their realism, these models often lack geometric grounding, limiting their use in navigation tasks that require spatial coherence and long-horizon stability. We introduce Reinforcement Learning with World Grounding (RLWG), a self-supervised post-training framework that aligns pretrained world models with physically verifiable structure through geometric and perceptual rewards. Analogous to reinforcement learning with verifiable rewards (RLVR) in language models, RLWG can use multiple rewards that measure pose cycle-consistency, depth reprojection, and temporal coherence. We instantiate this framework with GrndCtrl, a reward-aligned adaptation method based on Group Relative Policy Optimization (GRPO), yielding world models that maintain stable trajectories, consistent geometry, and reliable rollouts for embodied navigation. Like post-training alignment in large language models, GrndCtrl leverages verifiable rewards to bridge generative pretraining and grounded behavior, achieving superior spatial coherence and navigation stability over supervised fine-tuning in outdoor environments.
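As a rough illustration of how such verifiable rewards and group-relative advantages might be computed, the sketch below combines pose cycle-consistency, depth reprojection, and temporal coherence terms and then standardizes rewards within a group of rollouts in GRPO style. The function names, reward weights, and data layout (SE(3) pose matrices, per-frame depth maps, a dict-shaped rollout) are assumptions made for this sketch, not the authors' released implementation.

# Illustrative sketch only: reward definitions and grouping are assumptions
# inferred from the abstract, not the paper's actual code.
import numpy as np

def pose_cycle_consistency_reward(poses_fwd, poses_bwd):
    """Reward agreement between forward and backward pose chains.
    poses_fwd, poses_bwd: (T, 4, 4) arrays of SE(3) matrices (hypothetical layout).
    A perfect cycle gives poses_fwd[t] @ poses_bwd[t] == identity."""
    errors = [np.linalg.norm(P_f @ P_b - np.eye(4))
              for P_f, P_b in zip(poses_fwd, poses_bwd)]
    return -float(np.mean(errors))  # negative error: higher is better

def depth_reprojection_reward(depth_pred, depth_reproj):
    """Penalize disagreement between predicted depth and depth reprojected
    from a neighboring frame via the estimated pose (both (H, W) arrays)."""
    return -float(np.mean(np.abs(depth_pred - depth_reproj)))

def temporal_coherence_reward(frames):
    """Penalize large frame-to-frame changes in the rollout (frames: (T, H, W, C));
    a crude proxy for temporal stability."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return -float(np.mean(diffs))

def grounding_reward(rollout, w=(1.0, 1.0, 0.1)):
    """Weighted sum of the verifiable rewards (weights are illustrative)."""
    return (w[0] * pose_cycle_consistency_reward(rollout["poses_fwd"], rollout["poses_bwd"])
            + w[1] * depth_reprojection_reward(rollout["depth_pred"], rollout["depth_reproj"])
            + w[2] * temporal_coherence_reward(rollout["frames"]))

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: standardize rewards within a group of rollouts
    sampled from the same conditioning, so no learned critic is needed."""
    r = np.asarray(rewards, dtype=np.float32)
    return (r - r.mean()) / (r.std() + eps)

In a full training loop, these group-relative advantages would weight policy-gradient updates of the action-conditioned video generator, mirroring how GRPO post-trains language models against verifiable rewards.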
Similar Papers
Taming Camera-Controlled Video Generation with Verifiable Geometry Reward
CV and Pattern Recognition
Makes AI videos move cameras more accurately.
Lessons from Training Grounded LLMs with Verifiable Rewards
Computation and Language
Makes AI answers more truthful and verifiable.
RoboScape-R: Unified Reward-Observation World Models for Generalizable Robotics Training via RL
Robotics
Teaches robots to learn new tasks faster.