PROTEA: Securing Robot Task Planning and Execution
By: Zainab Altaweel, Mohaiminul Al Nahian, Jake Juettner, and more
Potential Business Impact:
Protects robot plans from being tricked.
Robots need task planning methods to generate action sequences for complex tasks. Recent work on adversarial attacks has revealed significant vulnerabilities in existing robot task planners, especially those built on foundation models. In this paper, we aim to address these security challenges by introducing PROTEA, an LLM-as-a-Judge defense mechanism that evaluates the security of task plans. PROTEA is designed to address the dimensionality and history challenges in plan safety assessment. We used different LLMs to implement multiple versions of PROTEA for comparison. For systematic evaluation, we created a dataset containing both benign and malicious task plans, where the harmful behaviors were injected at varying levels of stealthiness. Our results provide actionable insights for robotics practitioners seeking to enhance the robustness and security of their task planning systems. Details, the dataset, and demos are available at: https://protea-secure.github.io/PROTEA/
Similar Papers
Robo-Troj: Attacking LLM-based Task Planners
Robotics
Makes robots do bad things when told.
Safety Aware Task Planning via Large Language Models in Robotics
Robotics
Makes robots safer by checking their plans.
Plan Verification for LLM-Based Embodied Task Completion Agents
Artificial Intelligence
Makes robots learn better by fixing their mistakes.