Interactive Distillation for Cooperative Multi-Agent Reinforcement Learning

Published: January 8, 2026 | arXiv ID: 2601.05407v1

By: Minwoo Cho, Batuhan Altundas, Matthew Gombolay

Potential Business Impact:

Trains teams of decentralized AI agents to cooperate more effectively on complex tasks.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Knowledge distillation (KD) has the potential to accelerate MARL by employing a centralized teacher for decentralized students but faces key bottlenecks. Specifically, there are (1) challenges in synthesizing high-performing teaching policies in complex domains, (2) difficulties when teachers must reason in out-of-distribution (OOD) states, and (3) mismatches between the decentralized students' and the centralized teacher's observation spaces. To address these limitations, we propose HINT (Hierarchical INteractive Teacher-based transfer), a novel KD framework for MARL in a centralized training, decentralized execution setup. By leveraging hierarchical RL, HINT provides a scalable, high-performing teacher. Our key innovation, pseudo off-policy RL, enables the teacher policy to be updated using both teacher and student experience, thereby improving OOD adaptation. HINT also applies performance-based filtering to retain only outcome-relevant guidance, reducing observation mismatches. We evaluate HINT on challenging cooperative domains (e.g., FireCommander for resource allocation, MARINE for tactical combat). Across these benchmarks, HINT outperforms baselines, achieving improvements of 60% to 165% in success rate.
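The paper's full algorithm is not reproduced here, but the three ingredients the abstract names (a distillation signal from teacher to student, a teacher buffer mixing teacher and student experience, and performance-based filtering) can be sketched minimally. All function names below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Distillation signal: KL(teacher || student) between two
    discrete action distributions over the same action set."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def filter_by_return(trajectories, threshold):
    """Performance-based filtering: keep only trajectories whose
    episodic return meets the threshold, so students receive only
    outcome-relevant guidance despite observation mismatches."""
    return [t for t in trajectories if t["return"] >= threshold]

def pseudo_off_policy_buffer(teacher_trajs, student_trajs, threshold):
    """Sketch of the pseudo off-policy idea: the teacher update draws
    on both its own experience and the students' filtered experience,
    exposing it to states it would not visit on-policy (OOD states)."""
    return (filter_by_return(teacher_trajs, threshold)
            + filter_by_return(student_trajs, threshold))
```

For example, if the students' trajectories visit states the teacher never reaches, those transitions (when they cleared the return threshold) still enter the teacher's update buffer, which is how the OOD-adaptation claim in the abstract is operationalized.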

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)