Driving on Registers
By: Ellington Kirby, Alexandre Boulch, Yihong Xu, and more
Potential Business Impact:
Teaches cars to drive safely and smoothly.
We present DrivoR, a simple and efficient transformer-based architecture for end-to-end autonomous driving. Our approach builds on pretrained Vision Transformers (ViTs) and introduces camera-aware register tokens that compress multi-camera features into a compact scene representation, significantly reducing downstream computation without sacrificing accuracy. These tokens drive two lightweight transformer decoders that generate and then score candidate trajectories. The scoring decoder learns to mimic an oracle and predicts interpretable sub-scores representing aspects such as safety, comfort, and efficiency, enabling behavior-conditioned driving at inference. Despite its minimal design, DrivoR outperforms or matches strong contemporary baselines across NAVSIM-v1, NAVSIM-v2, and the photorealistic closed-loop HUGSIM benchmark. Our results show that a pure-transformer architecture, combined with targeted token compression, is sufficient for accurate, efficient, and adaptive end-to-end driving. Code and checkpoints will be made available via the project page.
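To make the two core ideas in the abstract concrete, here is a minimal PyTorch sketch of (1) camera-aware register tokens that compress multi-camera ViT features into a compact scene representation, and (2) a lightweight transformer decoder that scores candidate trajectories with interpretable sub-scores (safety, comfort, efficiency). All module and parameter names (`CameraAwareRegisters`, `regs_per_cam`, `TrajectoryScorer`, etc.) and the dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CameraAwareRegisters(nn.Module):
    """Compress per-camera ViT patch tokens into a small set of learned
    register tokens via cross-attention (one register group per camera)."""
    def __init__(self, dim=256, num_cameras=3, regs_per_cam=8, heads=8):
        super().__init__()
        # Learned register tokens, one group per camera ("camera-aware").
        self.registers = nn.Parameter(0.02 * torch.randn(num_cameras, regs_per_cam, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cam_feats):
        # cam_feats: list of [B, N_patches, dim] tensors, one per camera.
        out = []
        for i, feats in enumerate(cam_feats):
            regs = self.registers[i].expand(feats.size(0), -1, -1)
            comp, _ = self.attn(regs, feats, feats)  # registers attend to that camera's patches
            out.append(comp)
        # Compact scene representation: [B, num_cameras * regs_per_cam, dim]
        return torch.cat(out, dim=1)

class TrajectoryScorer(nn.Module):
    """Score candidate trajectories against the compact scene tokens and
    emit interpretable sub-scores (e.g. safety, comfort, efficiency)."""
    def __init__(self, dim=256, heads=8, num_subscores=3):
        super().__init__()
        self.decoder = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, num_subscores)

    def forward(self, traj_tokens, scene_tokens):
        # traj_tokens: [B, K, dim] embeddings of K candidate trajectories.
        x = self.decoder(traj_tokens, scene_tokens)
        return self.head(x)  # [B, K, num_subscores]

# Toy usage: 3 cameras with 196 patch tokens each, 16 candidate trajectories.
feats = [torch.randn(2, 196, 256) for _ in range(3)]
scene = CameraAwareRegisters()(feats)                          # [2, 24, 256]
scores = TrajectoryScorer()(torch.randn(2, 16, 256), scene)    # [2, 16, 3]
```

The point of the compression step is that downstream decoders attend over a few dozen register tokens instead of thousands of patch tokens, which is where the claimed reduction in downstream computation would come from; weighting or thresholding the predicted sub-scores at inference is one plausible way to realize the behavior-conditioned driving the abstract mentions.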
Similar Papers
Enhanced Drift-Aware Computer Vision Architecture for Autonomous Driving
CV and Pattern Recognition
Makes self-driving cars see better in bad weather.
SymDrive: Realistic and Controllable Driving Simulator via Symmetric Auto-regressive Online Restoration
CV and Pattern Recognition
Makes self-driving cars see better in 3D.
DriveVGGT: Visual Geometry Transformer for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see better in 3D.