Finite-State Decentralized Policy-Based Control With Guaranteed Ground Coverage
By: Hossein Rastgoftar
Potential Business Impact:
Robots work together to cover areas efficiently.
We propose a finite-state, decentralized decision and control framework for multi-agent ground coverage. The approach decomposes the problem into two coupled components: (i) the structural design of a deep neural network (DNN) induced by the reference configuration of the agents, and (ii) policy-based decentralized coverage control. Agents are classified as anchors and followers, yielding a generic and scalable communication architecture in which each follower interacts with exactly three in-neighbors from the preceding layer, forming an enclosing triangular communication structure. The trained DNN weights implicitly encode the spatial configuration of the agent team, thereby providing a geometric representation of the environmental target set. Within this architecture, we formulate a computationally efficient decentralized Markov decision process (MDP) whose components are time-invariant except for a time-varying cost function defined by the deviation from the centroid of the target set contained within each agent's communication triangle. By introducing the concept of Anyway Output Controllability (AOC) and assuming each agent is AOC, we establish decentralized convergence to a desired configuration that optimally represents the environmental target set.
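To make the time-varying cost concrete, the sketch below is a minimal Python illustration (not from the paper; function names and the zero-cost convention for an empty triangle are assumptions) of one follower's stage cost: the deviation of its position from the centroid of the target points enclosed by the triangle formed by its three in-neighbors.

```python
import numpy as np

def points_in_triangle(points, tri):
    """Return the target points lying inside the triangle spanned by an
    agent's three in-neighbors (standard barycentric-coordinate test)."""
    a, b, c = tri
    v0, v1 = c - a, b - a
    v2 = points - a                      # shape (N, 2)
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return points[(u >= 0) & (v >= 0) & (w >= 0)]

def stage_cost(agent_pos, tri, targets):
    """Hypothetical stage cost: distance from the agent to the centroid of
    the targets enclosed by its communication triangle."""
    inside = points_in_triangle(targets, tri)
    if len(inside) == 0:
        return 0.0                       # assumption: no enclosed targets, no penalty
    centroid = inside.mean(axis=0)
    return float(np.linalg.norm(agent_pos - centroid))

# Example: one follower, its three in-neighbors, and a small target set.
tri = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
targets = np.array([[1.0, 1.0], [0.5, 2.0], [5.0, 5.0]])
print(stage_cost(np.array([2.0, 2.0]), tri, targets))
```

Because the targets enclosed by a triangle change as the formation moves, this term is the only time-varying component of the MDP; the rest of the decision process stays fixed across time steps.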
Similar Papers
Connectivity-Preserving Multi-Agent Area Coverage via Optimal-Transport-Based Density-Driven Optimal Control (D2OC)
Systems and Control
Keeps robots connected while they cover areas.
Deep Neural Network-Based Aerial Transport in the Presence of Cooperative and Uncooperative UAS
Systems and Control
Drones work together even when some don't listen.