Score: 1

LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving

Published: May 1, 2025 | arXiv ID: 2505.00284v2

By: Zhijie Qiao, Haowei Li, Zhong Cao, and more

Potential Business Impact:

Provides a lightweight, unified framework for plugging in and benchmarking vision-language models on autonomous driving tasks, enabling faster validation and fairer comparison of new models.

Business Areas:
Autonomous Vehicles, Transportation

Vision-Language Models (VLMs) have demonstrated significant potential for end-to-end autonomous driving. However, the field still lacks a practical platform that enables dynamic model updates, rapid validation, fair comparison, and intuitive performance assessment. To that end, we introduce LightEMMA, a Lightweight End-to-End Multimodal Model for Autonomous driving. LightEMMA provides a unified, VLM-based autonomous driving framework without ad hoc customizations, enabling easy integration with evolving state-of-the-art commercial and open-source models. We construct twelve autonomous driving agents using various VLMs and evaluate their performance on the challenging nuScenes prediction task, comprehensively assessing computational metrics and providing critical insights. Illustrative examples show that, although VLMs exhibit strong scenario interpretation capabilities, their practical performance in autonomous driving tasks remains a concern. Additionally, increased model complexity and extended reasoning do not necessarily lead to better performance, emphasizing the need for further improvements and task-specific designs. The code is available at https://github.com/michigan-traffic-lab/LightEMMA.
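The abstract describes a unified, plug-and-play interface for wrapping different commercial and open-source VLMs as driving agents that predict future trajectories. Below is a minimal sketch of what such an interface might look like; the class and method names (VLMAgent, predict_trajectory, vlm_call), the text-based waypoint format, and the stub model are illustrative assumptions, not the repository's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of a unified VLM driving-agent interface.
# Names are illustrative and do not mirror LightEMMA's real code.

Waypoint = Tuple[float, float]  # (x, y) in the ego frame, metres


@dataclass
class DrivingContext:
    """Inputs available to the agent at one planning step."""
    front_camera_description: str   # stand-in for an encoded camera image
    ego_history: List[Waypoint]     # past ego positions, oldest first
    horizon: int = 6                # number of future waypoints to predict


class VLMAgent:
    """Wraps any text-in/text-out VLM behind a common trajectory-prediction interface."""

    def __init__(self, vlm_call: Callable[[str], str]):
        # vlm_call abstracts over the backing model, e.g. an API-served
        # commercial VLM or a locally hosted open-source one.
        self.vlm_call = vlm_call

    def build_prompt(self, ctx: DrivingContext) -> str:
        history = "; ".join(f"({x:.1f}, {y:.1f})" for x, y in ctx.ego_history)
        return (
            "You are a driving planner.\n"
            f"Scene: {ctx.front_camera_description}\n"
            f"Past ego positions (x, y in metres): {history}\n"
            f"Output exactly {ctx.horizon} future waypoints as 'x,y' pairs, one per line."
        )

    def parse_waypoints(self, text: str, horizon: int) -> List[Waypoint]:
        # Tolerant parsing of the model's free-form text output.
        points: List[Waypoint] = []
        for line in text.splitlines():
            parts = line.replace("(", "").replace(")", "").split(",")
            if len(parts) == 2:
                try:
                    points.append((float(parts[0]), float(parts[1])))
                except ValueError:
                    continue
        return points[:horizon]

    def predict_trajectory(self, ctx: DrivingContext) -> List[Waypoint]:
        response = self.vlm_call(self.build_prompt(ctx))
        return self.parse_waypoints(response, ctx.horizon)


if __name__ == "__main__":
    # Stub model standing in for a real VLM: drives straight ahead at constant speed.
    def stub_vlm(prompt: str) -> str:
        return "\n".join(f"{2.0 * (i + 1):.1f},0.0" for i in range(6))

    agent = VLMAgent(stub_vlm)
    ctx = DrivingContext(
        front_camera_description="clear highway, no lead vehicle",
        ego_history=[(-4.0, 0.0), (-2.0, 0.0), (0.0, 0.0)],
    )
    print(agent.predict_trajectory(ctx))
```

Keeping the model behind a single callable is what would make it easy to swap in new VLMs without per-model customization, which is the design goal the abstract emphasizes.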

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://github.com/michigan-traffic-lab/LightEMMA

Page Count
8 pages

Category
Computer Science: Robotics