RLGF: Reinforcement Learning with Geometric Feedback for Autonomous Driving Video Generation
By: Tianyi Yan, Wencheng Han, Xia Zhou, and more
Potential Business Impact:
Makes self-driving cars see better with fake videos.
Synthetic data is crucial for advancing autonomous driving (AD) systems, yet current state-of-the-art video generation models, despite their visual realism, suffer from subtle geometric distortions that limit their utility for downstream perception tasks. We identify and quantify this critical issue, demonstrating a significant performance gap in 3D object detection when using synthetic versus real data. To address this, we introduce Reinforcement Learning with Geometric Feedback (RLGF), which refines video diffusion models by incorporating rewards from specialized latent-space AD perception models. Its core components are an efficient Latent-Space Windowing Optimization technique for targeted feedback during diffusion, and a Hierarchical Geometric Reward (HGR) system providing multi-level rewards for point-line-plane alignment and scene occupancy coherence. To quantify geometric distortions, we also propose the GeoScores metrics. Applied to models like DiVE on nuScenes, RLGF substantially reduces geometric errors (e.g., vanishing-point error by 21%, depth error by 57%) and improves 3D object detection mAP by 12.7%, narrowing the gap to real-data performance. RLGF offers a plug-and-play solution for generating geometrically sound and reliable synthetic videos for AD development.
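The abstract describes a Hierarchical Geometric Reward that fuses several levels of geometric agreement (point, line, plane, occupancy) into feedback for the diffusion model. The sketch below illustrates the general idea of such a multi-level reward: it is a minimal, hypothetical construction, assuming each level is summarized as an error array from some latent-space perception model; the keys, weights, and exponential reward mapping are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def hierarchical_geometric_reward(pred, ref, weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine multi-level geometric errors into one scalar reward.

    `pred` and `ref` are dicts of arrays with hypothetical keys:
      'point' - per-point depth estimates
      'line'  - vanishing-point / lane-line positions
      'plane' - ground-plane normal components
      'occ'   - scene-occupancy logits
    All keys and weights are illustrative, not from the paper.
    """
    w_pt, w_ln, w_pl, w_occ = weights
    # Mean absolute deviation between generated and reference geometry.
    errors = {k: float(np.mean(np.abs(pred[k] - ref[k])))
              for k in ('point', 'line', 'plane', 'occ')}
    # Map each error to a bounded reward in (0, 1]; lower error -> higher reward.
    rewards = {k: float(np.exp(-e)) for k, e in errors.items()}
    total = (w_pt * rewards['point'] + w_ln * rewards['line']
             + w_pl * rewards['plane'] + w_occ * rewards['occ'])
    return total, rewards
```

With weights summing to 1, a perfect geometric match yields a total reward of 1.0, and any distortion monotonically lowers it, giving the RL fine-tuning loop a smooth scalar signal to maximize.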
Similar Papers
Taming Camera-Controlled Video Generation with Verifiable Geometry Reward
CV and Pattern Recognition
Makes AI videos move cameras more accurately.
GenFlowRL: Shaping Rewards with Generative Object-Centric Flow in Visual Reinforcement Learning
Robotics
Teaches robots to do tasks better using fake videos.
GrndCtrl: Grounding World Models via Self-Supervised Reward Alignment
CV and Pattern Recognition
Makes robots navigate safely and understand spaces.