Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving
By: Zongchuang Zhao, Haoyu Fu, Dingkang Liang, and more
Potential Business Impact:
Helps self-driving cars understand whole multi-view scenes and locate objects in 3D from natural-language instructions.
Large Vision-Language Models (LVLMs) have significantly advanced image understanding. Their comprehension and reasoning capabilities enable promising applications in autonomous driving scenarios. However, existing research typically focuses on front-view perspectives and partial objects within scenes, struggling to achieve comprehensive scene understanding. Meanwhile, existing LVLMs lack a mapping between 2D images and 3D space and only weakly integrate 3D object localization with instruction understanding. To tackle these limitations, we first introduce NuInteract, a large-scale dataset with over 1.5M multi-view image-language pairs spanning dense scene captions and diverse interactive tasks. Furthermore, we propose DriveMonkey, a simple yet effective framework that seamlessly integrates LVLMs with a spatial processor using a series of learnable queries. The spatial processor, designed as a plug-and-play component, can be initialized with pre-trained 3D detectors to improve 3D perception. Our experiments show that DriveMonkey outperforms general LVLMs, achieving a notable 9.86% improvement on the 3D visual grounding task. The dataset and code will be released at https://github.com/zc-zhao/DriveMonkey.
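To make the described coupling concrete, below is a minimal sketch of how an LVLM could be bridged to a plug-and-play spatial processor through learnable queries, as the abstract outlines. All class and parameter names (`SpatialProcessor`, `DriveMonkeySketch`, `to_lvlm`, the box-head output format, dimensions) are illustrative assumptions, not the released design; the authoritative implementation is the code at the GitHub link above.

```python
import torch
import torch.nn as nn

class SpatialProcessor(nn.Module):
    """Hypothetical plug-and-play spatial processor.

    The paper says this component can be initialized from a pre-trained 3D
    detector; here it is stubbed as a small transformer decoder that turns
    learnable queries plus multi-view image features into 3D box estimates.
    """
    def __init__(self, d_model=256, num_queries=32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(d_model, 7)  # assumed (x, y, z, w, l, h, yaw)

    def forward(self, img_feats):                      # img_feats: (B, N, d_model)
        B = img_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        q = self.decoder(q, img_feats)                 # queries attend to image features
        return q, self.box_head(q)                     # refined queries + 3D boxes


class DriveMonkeySketch(nn.Module):
    """Minimal sketch of coupling an LVLM with the spatial processor.

    `lvlm_backbone` stands in for any vision-language model that can accept
    extra embedded tokens; the linear bridge into the LVLM token space is an
    assumption made for illustration only.
    """
    def __init__(self, lvlm_backbone, d_model=256, lvlm_dim=4096):
        super().__init__()
        self.lvlm = lvlm_backbone
        self.spatial = SpatialProcessor(d_model=d_model)
        self.to_lvlm = nn.Linear(d_model, lvlm_dim)    # project queries into LVLM space

    def forward(self, img_feats, text_tokens):
        queries, boxes = self.spatial(img_feats)       # 3D-aware query features
        spatial_tokens = self.to_lvlm(queries)         # inject as extra LVLM tokens
        answer = self.lvlm(text_tokens, extra_embeds=spatial_tokens)
        return answer, boxes
```

In this sketch the learnable queries serve double duty: they carry 3D localization information out of the detector-initialized spatial processor and, after projection, condition the LVLM's text generation, which is one plausible reading of how instruction understanding and 3D grounding are integrated.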
Similar Papers
V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see in 3D.
VLM-3D: End-to-End Vision-Language Models for Open-World 3D Perception
CV and Pattern Recognition
Helps self-driving cars see new things safely.
Toward Automatic Safe Driving Instruction: A Large-Scale Vision Language Model Approach
CV and Pattern Recognition
Helps cars watch drivers and roads for safety.