Goal-Oriented Multi-Agent Reinforcement Learning for Decentralized Agent Teams
By: Hung Du, Hy Nguyen, Srikanth Thudumu, and more
Potential Business Impact:
Helps self-driving vehicles coordinate with each other and reach their goals faster.
Connected and autonomous vehicles across land, water, and air must often operate in dynamic, unpredictable environments with limited communication, no centralized control, and partial observability. These real-world constraints pose significant challenges for coordination, particularly when vehicles pursue individual objectives. To address this, we propose a decentralized Multi-Agent Reinforcement Learning (MARL) framework that enables vehicles, acting as agents, to communicate selectively based on local goals and observations. This goal-aware communication strategy allows agents to share only relevant information, enhancing collaboration while respecting visibility limitations. We validate our approach in complex multi-agent navigation tasks featuring obstacles and dynamic agent populations. Results show that our method significantly improves task success rates and reduces time-to-goal compared to non-cooperative baselines. Moreover, task performance remains stable as the number of agents increases, demonstrating scalability. These findings highlight the potential of decentralized, goal-driven MARL to support effective coordination in realistic multi-vehicle systems operating across diverse domains.
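To make the idea of goal-aware, selective communication more concrete, below is a minimal sketch of how an agent might filter incoming messages using only local information: a visibility radius for partial observability and a similarity check between goal directions. The class and method names, thresholds, and the cosine-similarity gating rule are illustrative assumptions, not the paper's actual framework.

```python
# Hypothetical sketch: decentralized agents share messages only with visible,
# goal-aligned neighbours. The gating rule and parameters are assumptions made
# for illustration, not the method described in the paper.
import numpy as np

class Agent:
    def __init__(self, agent_id, position, goal,
                 comm_radius=2.0, relevance_threshold=0.5):
        self.id = agent_id
        self.position = np.asarray(position, dtype=float)  # local observation: own position
        self.goal = np.asarray(goal, dtype=float)           # individual goal location
        self.comm_radius = comm_radius                      # visibility limit (partial observability)
        self.relevance_threshold = relevance_threshold      # gate for goal-aware sharing

    def goal_direction(self):
        # Unit vector from current position toward the agent's own goal.
        d = self.goal - self.position
        n = np.linalg.norm(d)
        return d / n if n > 1e-8 else np.zeros_like(d)

    def relevant_messages(self, others):
        """Collect messages only from visible agents whose goal direction
        is sufficiently aligned with this agent's own goal direction."""
        messages = []
        for other in others:
            if other.id == self.id:
                continue
            # Visibility constraint: only nearby agents can be heard.
            if np.linalg.norm(other.position - self.position) > self.comm_radius:
                continue
            # Goal-aware gating: cosine similarity between goal directions.
            alignment = float(self.goal_direction() @ other.goal_direction())
            if alignment >= self.relevance_threshold:
                messages.append({"sender": other.id,
                                 "position": other.position.copy(),
                                 "alignment": alignment})
        return messages

# Usage: agent 0 keeps the message from a nearby, goal-aligned neighbour (agent 1)
# and filters out a nearby neighbour heading the opposite way (agent 2).
agents = [
    Agent(0, position=[0.0, 0.0], goal=[5.0, 0.0]),
    Agent(1, position=[1.0, 0.5], goal=[5.0, 1.0]),
    Agent(2, position=[0.5, 1.0], goal=[-5.0, 0.0]),
]
print(agents[0].relevant_messages(agents))
```

In a full MARL system, the received messages would feed into each agent's policy input rather than being printed, but the gating step above captures the core idea of sharing only goal-relevant information under visibility limits.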
Similar Papers
Multi-Agent Reinforcement Learning in Intelligent Transportation Systems: A Comprehensive Survey
Machine Learning (CS)
Helps self-driving cars learn to work together.
Multi-Agent Reinforcement Learning and Real-Time Decision-Making in Robotic Soccer for Virtual Environments
Robotics
Teaches robot soccer teams to play better together.
Multi-Agent Reinforcement Learning for Task Offloading in Wireless Edge Networks
Machine Learning (CS)
Helps robots share resources without talking much.