LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving
By: Zhijie Qiao, Haowei Li, Zhong Cao, and more
Potential Business Impact:
Helps self-driving cars learn and improve faster.
Vision-Language Models (VLMs) have demonstrated significant potential for end-to-end autonomous driving. However, the field still lacks a practical platform that enables dynamic model updates, rapid validation, fair comparison, and intuitive performance assessment. To that end, we introduce LightEMMA, a Lightweight End-to-End Multimodal Model for Autonomous driving. LightEMMA provides a unified, VLM-based autonomous driving framework without ad hoc customizations, enabling easy integration with evolving state-of-the-art commercial and open-source models. We construct twelve autonomous driving agents using various VLMs and evaluate their performance on the challenging nuScenes prediction task, comprehensively assessing computational metrics and providing critical insights. Illustrative examples show that, although VLMs exhibit strong scenario interpretation capabilities, their practical performance in autonomous driving tasks remains a concern. Additionally, increased model complexity and extended reasoning do not necessarily lead to better performance, emphasizing the need for further improvements and task-specific designs. The code is available at https://github.com/michigan-traffic-lab/LightEMMA.
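To make the abstract's "unified framework without ad hoc customizations" idea concrete, here is a minimal, hypothetical sketch of how a VLM-based driving agent could expose one common interface across different backends. The names (VLMBackend, DrivingAgent, predict_trajectory) are illustrative assumptions, not the actual LightEMMA API; see the linked repository for the real implementation.

```python
# Hypothetical sketch: a plug-and-play VLM driving agent interface.
# Class and method names are assumptions, not the LightEMMA codebase.
from dataclasses import dataclass
from typing import List, Protocol, Tuple


class VLMBackend(Protocol):
    """Any commercial or open-source VLM exposed through one common call."""

    def generate(self, images: List[bytes], prompt: str) -> str:
        ...


@dataclass
class DrivingAgent:
    """Wraps a VLM backend and turns camera frames into future waypoints."""

    backend: VLMBackend
    horizon: int = 6  # number of future waypoints to request

    def predict_trajectory(
        self,
        camera_frames: List[bytes],
        ego_history: List[Tuple[float, float]],
    ) -> List[Tuple[float, float]]:
        prompt = (
            "You are a driving planner. Given the front-camera frames and the "
            f"ego trajectory history {ego_history}, output {self.horizon} "
            "future (x, y) waypoints as a Python list of tuples."
        )
        raw = self.backend.generate(camera_frames, prompt)
        return self._parse_waypoints(raw)

    @staticmethod
    def _parse_waypoints(raw: str) -> List[Tuple[float, float]]:
        # Forgiving parser for illustration only; real systems need stricter
        # validation of VLM output before acting on it.
        import ast

        try:
            waypoints = ast.literal_eval(raw.strip())
            return [(float(x), float(y)) for x, y in waypoints]
        except (ValueError, SyntaxError):
            return []  # fall back to an empty plan on malformed output
```

Under this sketch, swapping one VLM for another only requires supplying a different object that satisfies VLMBackend, which is the kind of easy integration with evolving commercial and open-source models the abstract describes.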
Similar Papers
A Vision-Language-Action Model with Visual Prompt for OFF-Road Autonomous Driving
Robotics
Helps self-driving cars navigate rough ground better.
V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see in 3D.
Distilling Multi-modal Large Language Models for Autonomous Driving
CV and Pattern Recognition
Teaches self-driving cars to plan better, safely.