Scalable Multi-Agent Diffusion Policies for Coverage Control

Published: September 21, 2025 | arXiv ID: 2509.17244v1

By: Frederic Vatnsdal, Romina Garcia Camargo, Saurav Agarwal, and more

Potential Business Impact:

Enables decentralized robot swarms to coordinate coverage of an area without a central controller, which could improve monitoring, mapping, and surveying tasks.

Business Areas:
Autonomous Vehicles, Transportation

We propose MADP, a novel diffusion-model-based approach for collaboration in decentralized robot swarms. MADP leverages diffusion models to generate samples from complex, high-dimensional action distributions that capture the interdependencies between agents' actions. Each robot conditions policy sampling on a fused representation of its own observations and perceptual embeddings received from peers. To evaluate this approach, we task a team of holonomic robots piloted by MADP with coverage control, a canonical multi-agent navigation problem. The policy is trained via imitation learning from a clairvoyant expert on the coverage control problem, with the diffusion process parameterized by a spatial transformer architecture to enable decentralized inference. We evaluate the system under varying numbers, locations, and variances of importance density functions, capturing the robustness demands of real-world coverage tasks. Experiments demonstrate that our model inherits valuable properties from diffusion models, generalizing across agent densities and environments and consistently outperforming state-of-the-art baselines.
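The expert policy mentioned in the abstract is not specified here, but coverage control with an importance density is classically solved with Lloyd-style updates: each robot moves toward the density-weighted centroid of its Voronoi cell. The sketch below illustrates that standard baseline; the function names, grid resolution, and Gaussian density are illustrative assumptions, not the paper's implementation.

```python
# Illustrative Lloyd-style step for coverage control (standard baseline,
# not the paper's MADP policy). Robots move toward the importance-weighted
# centroid of their Voronoi cell on a discretized unit square.
import numpy as np

def gaussian_density(grid, centers, var):
    """Importance density: mixture of isotropic Gaussians over grid points."""
    phi = np.zeros(grid.shape[0])
    for c in centers:
        d2 = np.sum((grid - np.asarray(c)) ** 2, axis=1)
        phi += np.exp(-d2 / (2.0 * var))
    return phi

def lloyd_step(robots, grid, phi, step=0.5):
    """Move each robot toward the phi-weighted centroid of its Voronoi cell."""
    # Assign each grid point to its nearest robot (discrete Voronoi partition).
    d2 = np.sum((grid[:, None, :] - robots[None, :, :]) ** 2, axis=2)
    owner = np.argmin(d2, axis=1)
    new_robots = robots.copy()
    for i in range(len(robots)):
        mask = owner == i
        w = phi[mask]
        if w.sum() > 0:
            centroid = (grid[mask] * w[:, None]).sum(axis=0) / w.sum()
            new_robots[i] += step * (centroid - robots[i])
    return new_robots

# Usage: 5 robots on the unit square with two importance peaks.
xs = np.linspace(0.0, 1.0, 50)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
phi = gaussian_density(grid, centers=[[0.2, 0.8], [0.7, 0.3]], var=0.02)
rng = np.random.default_rng(0)
robots = rng.uniform(0.0, 1.0, size=(5, 2))
for _ in range(20):
    robots = lloyd_step(robots, grid, phi)
```

Because each update is a convex step toward a centroid that lies inside the workspace, the robots remain in the unit square while concentrating around the importance peaks.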
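Sampling an action from a diffusion policy, as described above, means starting from Gaussian noise and iteratively denoising conditioned on the robot's fused observation embedding. A minimal sketch of that loop follows; the toy denoiser stands in for the paper's spatial-transformer network, and the noise schedule, dimensions, and names are all assumptions for illustration.

```python
# Hedged sketch of diffusion-policy action sampling (DDPM-style loop).
# The toy denoiser is a placeholder for a learned noise-prediction network
# conditioned on a fused observation embedding; not the paper's model.
import numpy as np

def toy_denoiser(noisy_action, obs_embed, t):
    """Placeholder noise predictor: treats the first two embedding entries
    as encoding a goal action and predicts the residual toward it."""
    target = np.tanh(obs_embed[:2])
    return noisy_action - target

def sample_action(obs_embed, steps=50, rng=None):
    """Denoise pure Gaussian noise into a 2-D action over `steps` iterations."""
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, steps)     # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(2)                 # start from pure noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, obs_embed, t)
        # DDPM posterior mean; the stochastic noise term is dropped here
        # for a deterministic sketch.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    return x

# Usage: sample an action from a fake 4-D observation embedding.
action = sample_action(np.array([0.5, -0.3, 0.1, 0.2]))
```

In the decentralized setting the paper describes, each robot would run this loop locally, with `obs_embed` fusing its own observations with embeddings received from neighboring robots.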

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Robotics