VLM-3D: End-to-End Vision-Language Models for Open-World 3D Perception
By: Fuhao Chang, Shuxin Li, Yabei Li, and more
Potential Business Impact:
Helps self-driving cars see new things safely.
Open-set perception in complex traffic environments poses a critical challenge for autonomous driving systems, particularly in identifying previously unseen object categories, which is vital for ensuring safety. Vision-Language Models (VLMs), with their rich world knowledge and strong semantic reasoning capabilities, offer new possibilities for addressing this task. However, existing approaches typically use VLMs to extract visual features and couple them with traditional object detectors, resulting in multi-stage error propagation that limits perception accuracy. To overcome this limitation, we propose VLM-3D, the first end-to-end framework that enables VLMs to perform 3D geometric perception in autonomous driving scenarios. VLM-3D incorporates Low-Rank Adaptation (LoRA) to efficiently adapt VLMs to driving tasks with minimal computational overhead, and introduces a joint semantic-geometric loss design: a token-level semantic loss is applied during early training to ensure stable convergence, while a 3D IoU loss is introduced in later stages to refine 3D bounding-box predictions. Evaluations on the nuScenes dataset show that the joint semantic-geometric loss yields a 12.8% improvement in perception accuracy, validating the effectiveness of our method.
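The abstract does not include code, but the staged loss design can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `warmup_steps` gate, the box encoding (center x, y, z plus width, length, height), and the axis-aligned IoU are all assumptions; the paper's boxes are likely oriented, which would require a rotated-box IoU.

```python
# Minimal sketch of a joint semantic-geometric loss schedule: token-level
# cross-entropy only during early training, then add a 3D IoU term.
# All names and the schedule are hypothetical, not from the paper.
import torch
import torch.nn.functional as F


def axis_aligned_3d_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """IoU between axis-aligned 3D boxes encoded as (cx, cy, cz, w, l, h)."""
    p_min = pred[..., :3] - pred[..., 3:] / 2
    p_max = pred[..., :3] + pred[..., 3:] / 2
    g_min = gt[..., :3] - gt[..., 3:] / 2
    g_max = gt[..., :3] + gt[..., 3:] / 2
    # Overlap extent per axis, clamped at zero when boxes are disjoint.
    inter = (torch.min(p_max, g_max) - torch.max(p_min, g_min)).clamp(min=0)
    inter_vol = inter.prod(dim=-1)
    union = (p_max - p_min).prod(dim=-1) + (g_max - g_min).prod(dim=-1) - inter_vol
    return inter_vol / union.clamp(min=1e-6)


def joint_loss(logits, target_tokens, pred_boxes, gt_boxes, step, warmup_steps=10_000):
    # Stage 1: token-level semantic loss only, for stable convergence.
    semantic = F.cross_entropy(logits.flatten(0, 1), target_tokens.flatten())
    if step < warmup_steps:
        return semantic
    # Stage 2: add a 3D IoU loss to refine bounding-box geometry.
    geometric = (1.0 - axis_aligned_3d_iou(pred_boxes, gt_boxes)).mean()
    return semantic + geometric


if __name__ == "__main__":
    logits = torch.randn(2, 8, 100)           # (batch, tokens, vocab)
    targets = torch.randint(0, 100, (2, 8))   # target token ids
    pred = torch.rand(2, 4, 6)                # (batch, boxes, cx cy cz w l h)
    gt = torch.rand(2, 4, 6)
    print(joint_loss(logits, targets, pred, gt, step=0))       # semantic only
    print(joint_loss(logits, targets, pred, gt, step=20_000))  # semantic + IoU
```

Gating the geometric term on a step threshold mirrors the abstract's description of applying the semantic loss first for stability; an actual implementation might instead blend the two terms with a ramped weight.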
Similar Papers
V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see in 3D.
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
CV and Pattern Recognition
Makes 3D pictures match words better.
dVLM-AD: Enhance Diffusion Vision-Language-Model for Driving via Controllable Reasoning
CV and Pattern Recognition
Makes self-driving cars better at handling tricky situations.