Learning to Drive from a World Model
By: Mitchell Goff, Greg Hogan, George Hotz, and more
Potential Business Impact:
Teaches cars to drive by watching humans.
Most self-driving systems rely on hand-coded perception outputs and engineered driving rules. Learning directly from human driving data with an end-to-end method allows for a simpler training architecture that scales well with compute and data. In this work, we propose an end-to-end training architecture that uses real driving data to train a driving policy in an on-policy simulator. We present two simulation methods: reprojective simulation and a learned world model. Both can be used to train a policy that learns driving behavior without any hand-coded driving rules. We evaluate the performance of these policies in closed-loop simulation and when deployed in a real-world advanced driver-assistance system.
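The core loop the abstract describes, fitting a world model to logged human driving and then training a policy on-policy inside that model, can be sketched in a toy form. The sketch below is an illustrative assumption, not the authors' architecture: it uses a 2-D lane-keeping state (lateral offset, heading error), a linear world model fit by least squares, and a simple random-search optimizer in place of a learned neural simulator and gradient-based policy training.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 1. Log "human driving" transitions (toy stand-in for real data) ---
# State: [lateral offset, heading error]; action: steering command.
# true_step is the unknown real dynamics, used only to generate the log.
def true_step(s, a):
    offset, heading = s
    heading = heading + 0.5 * a          # steering changes heading
    offset = offset + 0.2 * heading      # heading drifts the car sideways
    return np.array([offset, heading])

states, actions, next_states = [], [], []
s = rng.normal(0, 0.5, 2)
for _ in range(2000):
    # Noisy "human" steering toward the lane center.
    a = -1.0 * s[0] - 1.5 * s[1] + rng.normal(0, 0.05)
    s2 = true_step(s, a)
    states.append(s); actions.append(a); next_states.append(s2)
    s = s2

# --- 2. Fit a linear world model s' ~= [s, a] @ W by least squares ---
X = np.hstack([np.array(states), np.array(actions)[:, None]])  # (N, 3)
Y = np.array(next_states)                                      # (N, 2)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)                      # (3, 2)

# --- 3. Train a linear policy a = k @ s by unrolling in the world model ---
def rollout_cost(k, horizon=40):
    """Closed-loop cost of policy k, simulated entirely in the world model."""
    s = np.array([1.0, 0.0])  # start 1 unit off the lane center
    cost = 0.0
    for _ in range(horizon):
        a = float(k @ s)
        s = np.array([s[0], s[1], a]) @ W   # world-model step, no real car
        cost += s[0] ** 2 + 0.01 * a ** 2   # penalize offset and steering effort
    return cost

k = np.zeros(2)
best = rollout_cost(k)
for _ in range(500):  # simple random search; keeps only improvements
    cand = k + rng.normal(0, 0.3, 2)
    c = rollout_cost(cand)
    if c < best:
        k, best = cand, c

print("learned steering gains:", k)
print("closed-loop cost in world model:", best)
```

The point of the sketch is the data flow: the policy never touches hand-coded driving rules or the real dynamics during training; it is optimized purely against rollouts of the model that was fit from logged human driving, which is the on-policy property the paper relies on.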
Similar Papers
A Survey of World Models for Autonomous Driving
Robotics
Helps self-driving cars predict and plan driving.
AD-R1: Closed-Loop Reinforcement Learning for End-to-End Autonomous Driving with Impartial World Models
CV and Pattern Recognition
Teaches self-driving cars to avoid crashes.
SimScale: Learning to Drive via Real-World Simulation at Scale
CV and Pattern Recognition
Teaches self-driving cars to handle new situations.