Preference-Guided Learning for Sparse-Reward Multi-Agent Reinforcement Learning
By: The Viet Bui, Tien Mai, Hong Thanh Nguyen
Potential Business Impact:
Teaches teams of robots to learn from sparse, end-of-task rewards.
We study online multi-agent reinforcement learning (MARL) in environments with sparse rewards, where reward feedback is not provided at each interaction but only revealed at the end of a trajectory. This setting, though realistic, presents a fundamental challenge: the absence of intermediate rewards prevents standard MARL algorithms from effectively guiding policy learning. To address this issue, we propose a novel framework that integrates online inverse preference learning with multi-agent on-policy optimization in a unified architecture. At its core, our approach introduces an implicit multi-agent reward learning model, built on a preference-based value-decomposition network, that produces both global and local reward signals. These signals are then used to construct dual advantage streams, enabling differentiated learning targets for the centralized critic and the decentralized actors. In addition, we show how large language models (LLMs) can be leveraged to provide preference labels that improve the quality of the learned reward model. Empirical evaluations on state-of-the-art benchmarks, including MAMuJoCo and SMACv2, show that our method outperforms existing baselines, highlighting its effectiveness in addressing sparse-reward challenges in online MARL.
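To make the core idea concrete, below is a minimal sketch of a preference-based value-decomposition reward model of the kind the abstract describes: per-agent local reward heads are mixed into a global per-step reward, and the model is trained with a Bradley-Terry preference loss over pairs of trajectories. All names (PreferenceRewardModel, preference_loss), the softmax mixing, and the network sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch only: illustrates a preference-trained, value-decomposed reward model;
# it is not the paper's code, and all architectural choices here are assumptions.
import torch
import torch.nn as nn


class PreferenceRewardModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, n_agents: int, hidden: int = 128):
        super().__init__()
        self.n_agents = n_agents
        # Local reward head shared across agents: r_i = f(o_i, a_i).
        self.local_head = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Mixing weights conditioned on the joint observation, combining local
        # rewards into one global reward (the value-decomposition idea).
        self.mix_weights = nn.Sequential(
            nn.Linear(obs_dim * n_agents, hidden), nn.ReLU(), nn.Linear(hidden, n_agents)
        )

    def forward(self, obs, act):
        # obs: (B, T, N, obs_dim), act: (B, T, N, act_dim)
        local_r = self.local_head(torch.cat([obs, act], dim=-1)).squeeze(-1)  # (B, T, N)
        joint_obs = obs.flatten(start_dim=-2)                                 # (B, T, N*obs_dim)
        w = torch.softmax(self.mix_weights(joint_obs), dim=-1)                # non-negative weights
        global_r = (w * local_r).sum(-1)                                      # (B, T)
        return local_r, global_r


def preference_loss(model, traj_a, traj_b, label):
    """Bradley-Terry loss; label is a float tensor, 1.0 if trajectory A is preferred, 0.0 otherwise."""
    _, r_a = model(*traj_a)                  # per-step global rewards along trajectory A
    _, r_b = model(*traj_b)
    ret_a, ret_b = r_a.sum(-1), r_b.sum(-1)  # implicit trajectory returns
    return nn.functional.binary_cross_entropy_with_logits(ret_a - ret_b, label)
```

In the full method, one would expect the learned global reward to feed the centralized critic's advantage estimates while the learned local rewards feed per-agent actor advantages (the "dual advantage streams"), with the preference labels supplied by humans or, as the paper proposes, by an LLM comparing trajectory pairs.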
Similar Papers
From Pixels to Cooperation: Multi-Agent Reinforcement Learning based on Multimodal World Models
Multiagent Systems
Teaches robots to work together using sight and sound.
Remembering the Markov Property in Cooperative MARL
Machine Learning (CS)
Teaches robots to work together by learning rules.
MAESTRO: Multi-Agent Environment Shaping through Task and Reward Optimization
Machine Learning (CS)
Teaches AI to control traffic better using smart lessons.