A Comprehensive LLM-powered Framework for Driving Intelligence Evaluation
By: Shanhe You, Xuewen Luo, Xinhe Liang, and others
Potential Business Impact:
Measures how intelligently and human-like self-driving cars behave in complex traffic.
Evaluation methods for autonomous driving are crucial for algorithm optimization. However, because driving intelligence is so complex, there is currently no comprehensive method for evaluating the intelligence level of autonomous driving. In this paper, we propose an evaluation framework for driving behavior intelligence in complex traffic environments, aiming to fill this gap. We constructed a natural language evaluation dataset from human professional drivers and passengers through naturalistic driving experiments and post-driving behavior evaluation interviews. Based on this dataset, we developed an LLM-powered driving evaluation framework. The effectiveness of this framework was validated through simulated experiments in the CARLA urban traffic simulator and further corroborated by human assessment. Our research provides valuable insights for evaluating and designing more intelligent, human-like autonomous driving agents. The implementation details of the framework and detailed information about the dataset are available on GitHub.
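The abstract describes the pipeline but not the implementation (which is on the linked GitHub), so the following is only a minimal illustrative sketch of what an LLM-as-judge evaluation loop of this kind can look like. It assumes the OpenAI Python client; the model name, the safety/comfort/efficiency rubric, and the maneuver-log format are all hypothetical placeholders, not the authors' design.

```python
# Minimal sketch of an LLM-as-judge loop for grading driving behavior.
# Assumptions (not from the paper): an OpenAI-compatible chat API, a
# hypothetical three-dimension rubric, and a plain-text maneuver log.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are an expert driving examiner. Score the described maneuver "
    "from 1 (poor) to 5 (excellent) on three dimensions: safety, "
    "comfort, and efficiency. Reply with JSON only, e.g. "
    '{"safety": 4, "comfort": 3, "efficiency": 5, "rationale": "..."}'
)

def evaluate_maneuver(log_text: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM to grade one maneuver described in natural language."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": log_text},
        ],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    scores = evaluate_maneuver(
        "Ego vehicle merged onto the highway at 55 km/h, leaving a 1.2 s "
        "gap to the rear vehicle, with moderate longitudinal jerk."
    )
    print(scores)
```

In practice, a framework like the one described would ground such a rubric in the collected driver and passenger interview data rather than in a hand-written prompt, and would validate the scores against human assessment, as the paper reports.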
Similar Papers
A Framework for a Capability-driven Evaluation of Scenario Understanding for Multimodal Large Language Models in Autonomous Driving
Computer Vision and Pattern Recognition
Tests how well multimodal AI models understand driving scenes.
A Multi-Agent LLM Framework for Design Space Exploration in Autonomous Driving Systems
Robotics
Uses teams of AI agents to explore self-driving system designs faster.
Evaluation of Large Language Models for Anomaly Detection in Autonomous Vehicles
Robotics
Checks how well AI models spot unusual problems on the road.