A Cascading Cooperative Multi-agent Framework for On-ramp Merging Control Integrating Large Language Models
By: Miao Zhang, Zhenlong Fang, Tianyi Wang and more
Potential Business Impact:
Helps self-driving cars work together better.
Traditional Reinforcement Learning (RL) struggles to replicate human-like behaviors, generalize effectively in multi-agent scenarios, and overcome inherent interpretability issues. These challenges are compounded when deep environment understanding, agent coordination, and dynamic optimization are required. While Large Language Model (LLM)-enhanced methods have shown promise in generalization and interpretability, they often neglect the multi-agent coordination these tasks require. We therefore introduce the Cascading Cooperative Multi-agent (CCMA) framework, which integrates RL for individual interactions, a fine-tuned LLM for regional cooperation, a reward function for global optimization, and a Retrieval-Augmented Generation (RAG) mechanism that dynamically refines decision-making across complex driving scenarios. Experiments demonstrate that CCMA outperforms existing RL methods, with significant improvements in both micro- and macro-level performance in complex driving environments.
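The cascade described in the abstract can be sketched as three stacked decision layers: an individual RL policy proposes per-vehicle actions, a regional coordinator (the fine-tuned LLM in the paper, stubbed with a simple rule here) adjusts them cooperatively, and a global reward scores the joint outcome. This is a minimal illustrative sketch, not the paper's implementation; all names (`rl_policy`, `regional_coordinator`, `global_reward`) and the toy dynamics are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed: float      # m/s
    gap_ahead: float  # meters to the vehicle in front

def rl_policy(state: VehicleState) -> float:
    """Individual layer: a toy RL-style policy mapping state to acceleration."""
    # Accelerate when the gap ahead is large, brake when it is small.
    return 1.0 if state.gap_ahead > 30.0 else -1.0

def regional_coordinator(actions: list) -> list:
    """Regional layer: stand-in for the fine-tuned LLM coordinator; here it
    damps simultaneous hard accelerations to mimic cooperative merging."""
    if sum(a > 0 for a in actions) > 1:
        return [a * 0.5 if a > 0 else a for a in actions]
    return actions

def global_reward(actions: list) -> float:
    """Global layer: reward smooth, coordinated behavior (penalize total |accel|)."""
    return -sum(abs(a) for a in actions)

# Three vehicles near an on-ramp merge point (hypothetical states).
states = [VehicleState(25.0, 40.0), VehicleState(22.0, 35.0), VehicleState(20.0, 10.0)]
proposed = [rl_policy(s) for s in states]      # individual decisions
coordinated = regional_coordinator(proposed)   # regional cooperation
print(coordinated, global_reward(coordinated))
```

In the full framework the coordinator would query the LLM (with RAG over past scenarios) rather than apply a fixed rule, and the global reward would feed back into training rather than just score one step.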
Similar Papers
Controlling Performance and Budget of a Centralized Multi-agent LLM System with Reinforcement Learning
Computation and Language
Smart AI teams work together to save money.
Enhancing Multi-Agent Systems via Reinforcement Learning with LLM-based Planner and Graph-based Policy
CV and Pattern Recognition
Helps robots work together on hard jobs.
Large Language Model Integration with Reinforcement Learning to Augment Decision-Making in Autonomous Cyber Operations
Cryptography and Security
Teaches computers to fight cyber threats faster.