Vehicle-to-Infrastructure Collaborative Spatial Perception via Multimodal Large Language Models
By: Kimia Ehsani, Walid Saad
Potential Business Impact:
Helps cars talk to roadside towers better, even in bad weather.
Accurate prediction of communication link quality metrics is essential for vehicle-to-infrastructure (V2I) systems, enabling smooth handovers, efficient beam management, and reliable low-latency communication. The increasing availability of sensor data from modern vehicles motivates the use of multimodal large language models (MLLMs) because of their adaptability across tasks and their reasoning capabilities. However, MLLMs inherently lack three-dimensional spatial understanding. To overcome this limitation, a lightweight, plug-and-play bird's-eye view (BEV) injection connector is proposed. In this framework, a BEV of the environment is constructed from sensing data collected by neighboring vehicles. This BEV representation is then fused with the ego vehicle's input to provide spatial context for the large language model. To support realistic multimodal learning, a co-simulation environment combining the CARLA simulator with MATLAB-based ray tracing is developed to generate RGB, LiDAR, GPS, and wireless signal data across varied scenarios. Instructions and ground-truth responses are programmatically extracted from the ray-tracing outputs. Extensive experiments are conducted across three V2I link prediction tasks: line-of-sight (LoS) versus non-line-of-sight (NLoS) classification, link availability prediction, and blockage prediction. Simulation results show that the proposed BEV injection framework consistently improves performance across all tasks. Compared to an ego-only baseline, the proposed approach improves macro-averaged accuracy by up to 13.9%. The results also show that this performance gain increases by up to 32.7% under challenging rainy and nighttime conditions, confirming the robustness of the framework in adverse settings.
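The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of what a lightweight, plug-and-play BEV injection connector could look like: a small encoder over the BEV raster whose output cells are projected into the language model's token-embedding space and prepended to the ego vehicle's input embeddings. The module names, dimensions, and prepend-style fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a lightweight
# connector that encodes a BEV raster and prepends the resulting tokens
# to the ego vehicle's multimodal embeddings before the LLM.
import torch
import torch.nn as nn

class BEVInjectionConnector(nn.Module):
    """Project a BEV feature map into LLM token-embedding space."""

    def __init__(self, bev_channels: int = 64, llm_dim: int = 4096):
        super().__init__()
        # Small CNN encoder over the BEV raster (hypothetical design).
        self.encoder = nn.Sequential(
            nn.Conv2d(bev_channels, 128, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool2d(4),  # -> (B, 256, 4, 4): 16 spatial cells
        )
        # Project each spatial cell into the LLM embedding dimension.
        self.proj = nn.Linear(256, llm_dim)

    def forward(self, bev: torch.Tensor) -> torch.Tensor:
        # bev: (B, C, H, W), rasterized from neighboring vehicles' sensing.
        feats = self.encoder(bev)                  # (B, 256, 4, 4)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 16, 256)
        return self.proj(tokens)                   # (B, 16, llm_dim)

def inject_bev(bev_tokens: torch.Tensor,
               ego_embeds: torch.Tensor) -> torch.Tensor:
    # Prepend BEV tokens to the ego input embeddings; the concatenated
    # sequence is fed to the LLM as added spatial context.
    return torch.cat([bev_tokens, ego_embeds], dim=1)
```

In a sketch like this, the connector would be the only newly trained module, while the LLM and ego-side encoders could stay frozen, which is what "plug-and-play" would typically imply.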
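Likewise, the exact format of the programmatically extracted instruction/ground-truth pairs is not given in the abstract. The sketch below shows one plausible way a ray-tracing record could be turned into supervision for the three tasks; the record fields, threshold value, and instruction wording are all hypothetical.

```python
# Minimal sketch (hypothetical format): turning one ray-tracing record
# into instruction / ground-truth pairs for the three V2I tasks.
from dataclasses import dataclass

@dataclass
class RayTraceRecord:
    los: bool                  # line-of-sight flag from the ray tracer
    path_gain_db: float        # gain of the strongest propagation path
    blocked_in_horizon: bool   # blockage within the prediction horizon

LINK_THRESHOLD_DB = -110.0     # illustrative availability threshold

def make_samples(rec: RayTraceRecord) -> list[dict]:
    return [
        {"instruction": "Is the V2I link line-of-sight or non-line-of-sight?",
         "response": "LoS" if rec.los else "NLoS"},
        {"instruction": "Is the V2I link currently available?",
         "response": "available" if rec.path_gain_db > LINK_THRESHOLD_DB
                     else "unavailable"},
        {"instruction": "Will the V2I link be blocked in the near future?",
         "response": "blocked" if rec.blocked_in_horizon else "clear"},
    ]
```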
Similar Papers
Multimodal Large Language Model Framework for Safe and Interpretable Grid-Integrated EVs
Artificial Intelligence
Helps electric cars warn drivers about dangers.
BEV-LLM: Leveraging Multimodal BEV Maps for Scene Captioning in Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars describe what they see.
Are VLMs Ready for Lane Topology Awareness in Autonomous Driving?
CV and Pattern Recognition
Helps cars understand road turns and paths.