Adaptive Context Length Optimization with Low-Frequency Truncation for Multi-Agent Reinforcement Learning
By: Wenchang Duan, Yaoliang Yu, Jiwan He, and more
Potential Business Impact:
Helps teams of AI agents learn challenging tasks faster and more effectively.
Recently, deep multi-agent reinforcement learning (MARL) has demonstrated promising performance on challenging tasks involving long-term dependencies and non-Markovian environments. Its success is partly attributed to conditioning policies on a large fixed context length. However, such large fixed context lengths can limit exploration efficiency and introduce redundant information. In this paper, we propose a novel MARL framework that obtains adaptive and effective contextual information. Specifically, we design a central agent that dynamically optimizes the context length via temporal gradient analysis, enhancing exploration and facilitating convergence to global optima in MARL. Furthermore, to strengthen this adaptive optimization of the context length, we present an efficient input representation for the central agent that filters out redundant information. By leveraging a Fourier-based low-frequency truncation method, we extract global temporal trends across decentralized agents, providing an effective and efficient representation of the MARL environment. Extensive experiments demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on long-term dependency benchmarks, including PettingZoo, MiniGrid, Google Research Football (GRF), and the StarCraft Multi-Agent Challenge v2 (SMACv2).
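To make the Fourier-based low-frequency truncation concrete, below is a minimal sketch of the general idea: take each agent's observation history, apply a real FFT along the time axis, zero out high-frequency components, and invert the transform to obtain a smoothed, global-trend representation. This is an illustration only, not the authors' implementation; the array name `histories`, its shape, and the `keep` cutoff are assumptions for the example.

```python
# Illustrative sketch of Fourier-based low-frequency truncation
# (hypothetical shapes and names; not the paper's exact method).
import numpy as np

def low_frequency_truncation(histories: np.ndarray, keep: int = 8) -> np.ndarray:
    """Retain only the lowest `keep` frequency components along the time axis.

    histories: assumed shape (n_agents, T, obs_dim), one observation
    history per decentralized agent.
    """
    # Real FFT over the temporal dimension (axis=1).
    spectrum = np.fft.rfft(histories, axis=1)
    # Truncate everything above the cutoff, keeping global temporal trends.
    spectrum[:, keep:, :] = 0.0
    # Back to the time domain: a smoothed, low-frequency summary per agent.
    return np.fft.irfft(spectrum, n=histories.shape[1], axis=1)

# Example: 4 agents, 64 timesteps, 16-dimensional observations.
histories = np.random.randn(4, 64, 16)
trend = low_frequency_truncation(histories, keep=8)
print(trend.shape)  # (4, 64, 16)
```

In this sketch, the truncated representation would serve as a compact input to the central agent that adapts the context length, since it removes high-frequency noise while preserving the slow temporal trends shared across agents.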
Similar Papers
cMALC-D: Contextual Multi-Agent LLM-Guided Curriculum Learning with Diversity-Based Context Blending
Machine Learning (CS)
Teaches robots to handle new situations better.
Scaling Long-Horizon LLM Agent via Context-Folding
Computation and Language
Helps AI remember more for long tasks.
Multi-agent In-context Coordination via Decentralized Memory Retrieval
Multiagent Systems
Helps robot teams learn new jobs faster together.