Multiagent Reinforcement Learning with Neighbor Action Estimation

Published: January 8, 2026 | arXiv ID: 2601.04511v1

By: Zhenglong Luo, Zhiyong Chen, Aoxiang Liu

Potential Business Impact:

Robots learn to work together without talking.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Multiagent reinforcement learning, a prominent paradigm in artificial intelligence, enables collaborative decision-making within complex systems. However, existing approaches often rely on explicit action exchange between agents to evaluate action-value functions, which is frequently impractical in real-world engineering environments due to communication constraints, latency, energy consumption, and reliability requirements. This paper proposes an enhanced multiagent reinforcement learning framework that employs action estimation neural networks to infer agent behaviors. By integrating a lightweight action estimation module, each agent infers neighboring agents' behaviors using only locally observable information, enabling collaborative policy learning without explicit action sharing. The approach is fully compatible with the standard TD3 algorithm and scales to larger multiagent systems. At the engineering level, the framework has been implemented and validated on dual-arm robotic manipulation tasks in which two robotic arms collaboratively lift objects. Experimental results demonstrate that the approach significantly enhances the robustness and deployment feasibility of real-world robotic systems while reducing dependence on information infrastructure. Overall, this research advances decentralized multiagent artificial intelligence and enables AI to operate effectively in dynamic, information-constrained real-world environments.
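To make the idea concrete, here is a minimal PyTorch sketch of the mechanism the abstract describes: each agent trains a small action estimation network that predicts a neighbor's action from the agent's own local observation, and feeds that estimate (rather than a communicated action) into its TD3-style critic. The module names, network sizes, and supervised estimation loss below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ActionEstimator(nn.Module):
    """Predicts a neighboring agent's action from this agent's local observation."""
    def __init__(self, obs_dim: int, neighbor_act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, neighbor_act_dim), nn.Tanh(),  # actions assumed in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class Critic(nn.Module):
    """TD3-style Q-network over (local obs, own action, estimated neighbor action)."""
    def __init__(self, obs_dim: int, act_dim: int, neighbor_act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + neighbor_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, own_act, est_neighbor_act):
        return self.net(torch.cat([obs, own_act, est_neighbor_act], dim=-1))


# Toy usage: the estimator can be fit during training, where neighbor actions
# are logged, and then used purely locally at deployment with no action exchange.
obs_dim, act_dim, n_act_dim = 12, 4, 4
estimator = ActionEstimator(obs_dim, n_act_dim)
critic = Critic(obs_dim, act_dim, n_act_dim)

obs = torch.randn(32, obs_dim)
own_act = torch.rand(32, act_dim) * 2 - 1
logged_neighbor_act = torch.rand(32, n_act_dim) * 2 - 1  # available only at training time

est = estimator(obs)
estimation_loss = nn.functional.mse_loss(est, logged_neighbor_act)
q_value = critic(obs, own_act, est.detach())  # critic consumes the estimate, not a shared action
```

Because the critic conditions only on locally available quantities at execution time, each agent's policy can run without any communication link to its neighbors, which is the deployment property the abstract emphasizes.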

Country of Origin
🇦🇺 Australia

Page Count
21 pages

Category
Computer Science:
Robotics