Ant-inspired Walling Strategies for Scalable Swarm Separation: Reinforcement Learning Approaches Based on Finite State Machines
By: Shenbagaraj Kannapiran, Elena Oikonomou, Albert Chu, and more
Potential Business Impact:
Robots build walls so separate groups can do their jobs without bumping into each other.
In natural systems, emergent structures often arise to balance competing demands. Army ants, for example, form temporary "walls" that prevent interference between foraging trails. Inspired by this behavior, we developed two decentralized controllers that let heterogeneous robotic swarms maintain spatial separation while executing concurrent tasks. The first is a finite-state machine (FSM) controller that uses encounter-triggered transitions to create rigid, stable walls. The second integrates the FSM states with a Deep Q-Network (DQN), dynamically optimizing separation through emergent "demilitarized zones." In simulation, both controllers reduce mixing between subgroups; the DQN-enhanced controller improves adaptability, reduces mixing by 40-50%, and converges faster.
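To make the encounter-triggered idea concrete, here is a minimal Python sketch of what such an FSM controller could look like. It is not the authors' implementation: the states, the `encounter_radius` and `release_steps` parameters, and the rule that an agent freezes in place after meeting a robot from the other subgroup are all illustrative assumptions consistent with the abstract's description, not details taken from the paper.

```python
from enum import Enum, auto


class State(Enum):
    FORAGE = auto()  # move toward the subgroup's task site
    WALL = auto()    # hold position as part of a separating wall


class FSMWallAgent:
    """Illustrative encounter-triggered FSM controller (assumed behavior).

    Hypothetical rule: an agent that detects a robot from the other
    subgroup within `encounter_radius` switches to WALL and holds its
    position, so repeated encounters accumulate into a static barrier.
    """

    def __init__(self, group, encounter_radius=1.0, release_steps=20):
        self.group = group
        self.encounter_radius = encounter_radius
        self.release_steps = release_steps  # how long to hold the wall
        self.state = State.FORAGE
        self.hold_timer = 0

    def step(self, pos, goal, neighbors):
        """neighbors: list of (group, position) pairs sensed by this agent."""
        other_nearby = [
            p for g, p in neighbors
            if g != self.group and _dist(p, pos) < self.encounter_radius
        ]

        if self.state == State.FORAGE:
            if other_nearby:
                # Encounter-triggered transition: freeze into the wall.
                self.state = State.WALL
                self.hold_timer = self.release_steps
                return (0.0, 0.0)
            return _toward(pos, goal)

        # State.WALL: hold position until the timer expires and no
        # opposing robots remain in range, then resume foraging.
        self.hold_timer -= 1
        if self.hold_timer <= 0 and not other_nearby:
            self.state = State.FORAGE
        return (0.0, 0.0)


def _dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def _toward(pos, goal, speed=0.1):
    d = _dist(pos, goal) or 1e-9
    return ((goal[0] - pos[0]) / d * speed, (goal[1] - pos[1]) / d * speed)
```

In the DQN-enhanced variant described above, one would expect the hand-coded trigger (freeze whenever an opposing robot is within range) to be replaced by a learned policy: the agent's local observations and current FSM state feed a Q-network that selects among the same discrete behaviors, allowing the separation margin to adapt rather than remain fixed.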
Similar Papers
VariAntNet: Learning Decentralized Control of Multi-Agent Systems
Machine Learning (CS)
Robots work together to fight fires better.
Sensor to Pixels: Decentralized Swarm Gathering via Image-Based Reinforcement Learning
Machine Learning (CS)
Robots learn to move together by watching each other.
Rule-Based Conflict-Free Decision Framework in Swarm Confrontation
Artificial Intelligence
Helps robot groups work together without getting stuck.