Large Multimodal Models for Embodied Intelligent Driving: The Next Frontier in Self-Driving?
By: Long Zhang, Yuchen Xia
The advent of Large Multimodal Models (LMMs) offers a promising technology to tackle the limitations of modular design in autonomous driving, which often falters in open-world scenarios requiring sustained environmental understanding and logical reasoning. In addition, embodied artificial intelligence enables policy optimization through closed-loop interaction, providing the continuous learning capability that advances autonomous driving toward embodied intelligent (EI) driving. However, this capability remains constrained when EI driving relies solely on LMMs without joint decision-making. This article introduces a novel semantics-and-policy dual-driven hybrid decision framework to tackle this challenge, ensuring both continuous learning and joint decision-making. The framework merges LMMs, for semantic understanding and cognitive representation, with deep reinforcement learning (DRL), for real-time policy optimization. We start by introducing the foundational principles of EI driving and LMMs. We then examine the emerging opportunities this framework enables, covering potential benefits and representative use cases. A case study is conducted to experimentally validate the performance superiority of our framework on a lane-change planning task. Finally, several future research directions for empowering EI driving are identified to guide subsequent work.
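To make the dual-driven idea concrete, below is a minimal sketch of how an LMM-style semantic encoder and a DRL policy could be composed in one decision step for lane-change planning. All names and dimensions here (SemanticEncoder, DrlPolicy, the toy scene features and ego state) are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of a semantics-and-policy dual-driven hybrid decision step:
# an LMM-like encoder summarizes the scene into a cognitive representation,
# and a DRL policy maps that representation (plus the raw ego state) to a
# lane-change action. Component names and feature choices are assumptions.

import numpy as np

ACTIONS = ["keep_lane", "change_left", "change_right"]


class SemanticEncoder:
    """Stand-in for an LMM: turns a multimodal observation into a fixed-size
    semantic embedding (here, a random projection of hand-crafted features)."""

    def __init__(self, feat_dim: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((4, feat_dim))

    def encode(self, obs: dict) -> np.ndarray:
        # Toy "semantic" features: gap ahead, relative speed, lane occupancy flags.
        feats = np.array([
            obs["gap_ahead_m"] / 100.0,
            obs["rel_speed_mps"] / 30.0,
            float(obs["left_lane_free"]),
            float(obs["right_lane_free"]),
        ])
        return np.tanh(feats @ self.proj)


class DrlPolicy:
    """Stand-in for a trained DRL actor: a linear scorer over the concatenated
    semantic embedding and ego state, followed by an argmax over actions."""

    def __init__(self, in_dim: int, n_actions: int = len(ACTIONS), seed: int = 1):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_dim, n_actions)) * 0.1

    def act(self, features: np.ndarray) -> str:
        scores = features @ self.w
        return ACTIONS[int(np.argmax(scores))]


def decide(obs: dict, ego_state: np.ndarray,
           encoder: SemanticEncoder, policy: DrlPolicy) -> str:
    """Hybrid decision step: semantics from the encoder, action from the policy."""
    z = encoder.encode(obs)
    return policy.act(np.concatenate([z, ego_state]))


if __name__ == "__main__":
    encoder = SemanticEncoder()
    ego = np.array([0.6, 0.0, 0.0, 1.0])  # e.g., normalized speed, lateral offset, ...
    policy = DrlPolicy(in_dim=16 + ego.size)
    obs = {"gap_ahead_m": 18.0, "rel_speed_mps": -4.0,
           "left_lane_free": True, "right_lane_free": False}
    print(decide(obs, ego, encoder, policy))
```

In a full system, the random-projection encoder would be replaced by an actual LMM producing the cognitive representation, and the linear scorer by a DRL actor trained in closed-loop interaction, but the composition pattern remains the same.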