Time-Constrained Intelligent Adversaries for Automation Vulnerability Testing: A Multi-Robot Patrol Case Study
By: James C. Ward, Alex Bott, Connor York, and more
Potential Business Impact:
Simulates intruders sneaking past robot patrols to expose security gaps.
Simulating hostile attacks on physical autonomous systems can be a useful tool for examining their robustness to attack and informing vulnerability-aware design. In this work, we examine this through the lens of multi-robot patrol by presenting a machine learning-based adversary model that observes robot patrol behavior in order to attempt to gain undetected access to a secure environment within a limited time window. Such a model allows a patrol system to be evaluated against a realistic potential adversary, offering insight into future patrol strategy design. We show that our new model outperforms existing baselines, thus providing a more stringent test, and examine its performance against multiple leading decentralized multi-robot patrol strategies.
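For intuition, the sketch below shows one way an adversary could turn observed patrol timings into an attack decision: estimate how often each candidate entry point is revisited and pick the point with the largest expected undetected window inside a time budget. This is a minimal illustrative heuristic, not the paper's learned model; all names (choose_attack, gate_a, etc.) are hypothetical.

```python
# Illustrative sketch only: a timing-based adversary heuristic, not the
# machine learning model described in the paper. All identifiers are
# hypothetical examples.
from collections import defaultdict
from statistics import mean


def choose_attack(observations, time_budget, attack_duration, now):
    """observations: list of (entry_point, visit_time) pairs seen so far.

    Returns (entry_point, start_time) for the most promising attack, or None
    if no window both fits the time budget and ends before the next expected
    patrol visit.
    """
    visits = defaultdict(list)
    for point, t in observations:
        visits[point].append(t)

    best = None
    for point, times in visits.items():
        times.sort()
        if len(times) < 2:
            continue  # not enough data to estimate the revisit interval
        gaps = [b - a for a, b in zip(times, times[1:])]
        expected_gap = mean(gaps)
        # Expected time of the next patrol visit to this entry point.
        next_visit = times[-1] + expected_gap
        # Slack left over if the attack starts now and must finish unseen.
        slack = next_visit - now - attack_duration
        if slack > 0 and (next_visit - now) <= time_budget:
            if best is None or slack > best[2]:
                best = (point, now, slack)

    return (best[0], best[1]) if best else None


if __name__ == "__main__":
    obs = [("gate_a", 0.0), ("gate_a", 55.0), ("gate_a", 112.0),
           ("gate_b", 10.0), ("gate_b", 40.0), ("gate_b", 72.0)]
    # gate_a is revisited roughly every 56 s, gate_b every ~31 s, so the
    # heuristic prefers gate_a for a 20 s intrusion starting at t = 115 s.
    print(choose_attack(obs, time_budget=120.0, attack_duration=20.0, now=115.0))
```

A heuristic like this serves only as a weak baseline; the paper's contribution is an adversary that learns from patrol behavior and therefore poses a more stringent test than fixed timing rules.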
Similar Papers
Attacking Autonomous Driving Agents with Adversarial Machine Learning: A Holistic Evaluation with the CARLA Leaderboard
Cryptography and Security
Makes self-driving cars safer from fake signs.
Practical Handling of Dynamic Environments in Decentralised Multi-Robot Patrol
Robotics
Robots learn to patrol changing areas together.
TAMAS: Benchmarking Adversarial Risks in Multi-Agent LLM Systems
Multiagent Systems
Tests if AI teams can be tricked.