Modular Autonomy with Conversational Interaction: An LLM-driven Framework for Decision Making in Autonomous Driving
By: Marvin Seegert, Korbinian Moller, Johannes Betz
Potential Business Impact:
Lets you talk to self-driving cars.
Recent advancements in Large Language Models (LLMs) offer new opportunities to create natural language interfaces for Autonomous Driving Systems (ADSs), moving beyond rigid inputs. This paper addresses the challenge of mapping the complexity of human language to the structured action space of modular ADS software. We propose a framework that integrates an LLM-based interaction layer with Autoware, a widely used open-source autonomous driving software stack. The system enables passengers to issue high-level commands, from querying status information to modifying driving behavior. Our methodology is grounded in three key components: a taxonomy of interaction categories, an application-centric Domain Specific Language (DSL) for command translation, and a safety-preserving validation layer. A two-stage LLM architecture ensures high transparency by providing feedback based on the definitive execution status. Evaluation confirms the system's timing efficiency and translation robustness. Simulation successfully validated command execution across all five interaction categories. This work provides a foundation for extensible, DSL-assisted interaction in modular and safety-conscious autonomy stacks.
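The abstract's pipeline (natural language → DSL command → validation → execution status → feedback) can be sketched as follows. This is a minimal illustrative sketch only: the DSL command names, interaction categories, and parameter bounds are hypothetical assumptions, not the paper's actual DSL or the Autoware API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical DSL whitelist: each command belongs to one interaction
# category and carries validated parameter bounds (illustrative values).
ALLOWED_COMMANDS = {
    "set_speed_limit": {"category": "behavior", "min": 0.0, "max": 130.0},
    "query_eta": {"category": "status", "min": None, "max": None},
}

@dataclass
class DslCommand:
    name: str
    value: Optional[float] = None

def validate(cmd: DslCommand) -> bool:
    """Safety-preserving validation layer: reject commands outside the
    whitelisted action space or their parameter bounds."""
    spec = ALLOWED_COMMANDS.get(cmd.name)
    if spec is None:
        return False
    if spec["min"] is not None and cmd.value is not None:
        return spec["min"] <= cmd.value <= spec["max"]
    return True

def execute(cmd: DslCommand) -> str:
    """Stub for dispatch into the ADS stack. Returns the definitive
    execution status that the second LLM stage would verbalize back
    to the passenger as feedback."""
    if not validate(cmd):
        return "rejected"
    return "executed"

# Stage 1 (first LLM) would translate "please slow down to 30 km/h"
# into a DSL command; stage 2 (second LLM) reports the outcome.
print(execute(DslCommand("set_speed_limit", 30.0)))  # executed
print(execute(DslCommand("open_doors", None)))       # rejected: not whitelisted
```

The separation of a deterministic validation layer from the LLM translation step mirrors the paper's safety-preserving design: the LLM proposes, but only whitelisted, bounded DSL commands ever reach the driving stack.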
Similar Papers
Multi-Agent Autonomous Driving Systems with Large Language Models: A Survey of Recent Advances
Multiagent Systems
Cars talk to each other to drive safer.
Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models
Artificial Intelligence
Helps self-driving cars handle unexpected situations better.