SafePlan: Leveraging Formal Logic and Chain-of-Thought Reasoning for Enhanced Safety in LLM-based Robotic Task Planning
By: Ike Obi, Vishnunandan L. N. Venkatesh, Weizheng Wang, and more
Potential Business Impact:
Keeps robots safe from bad instructions.
Robotics researchers increasingly leverage large language models (LLMs) in robotics systems, using them as interfaces to receive task commands, generate task plans, form team coalitions, and allocate tasks among multi-robot and human agents. However, despite their benefits, the growing adoption of LLMs in robotics has raised several safety concerns, particularly regarding the execution of malicious or unsafe natural language prompts. In addition, ensuring that task plans, team formations, and task allocation outputs from LLMs are adequately examined, refined, or rejected is crucial for maintaining system integrity. In this paper, we introduce SafePlan, a multi-component framework that combines formal logic and chain-of-thought reasoners to enhance the safety of LLM-based robotics systems. Using the components of SafePlan, including the Prompt Sanity COT Reasoner and the Invariant, Precondition, and Postcondition COT reasoners, we examined the safety of natural language task prompts, task plans, and task allocation outputs generated by LLM-based robotic systems as a means of investigating and enhancing the system's safety profile. Our results show that SafePlan outperforms baseline models, yielding a 90.5% reduction in harmful task prompt acceptance while still maintaining reasonable acceptance of safe tasks.
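The abstract describes a layered gate: a prompt sanity stage that can reject an instruction outright, followed by invariant, precondition, and postcondition checks on the generated task plan. Below is a minimal, hypothetical sketch of how such layered gating might be wired. The actual SafePlan components are LLM chain-of-thought reasoners combined with formal logic; here each stage is replaced by a simple stand-in predicate purely for illustration, and all names (`UNSAFE_TERMS`, `safeplan_gate`, etc.) are assumptions, not the paper's API.

```python
# Hypothetical sketch of SafePlan-style layered safety gating.
# The real framework uses LLM chain-of-thought reasoners and formal
# logic; each stage here is a placeholder predicate for illustration.

UNSAFE_TERMS = {"harm", "weapon", "attack"}  # toy stand-in lexicon

def prompt_sanity_check(prompt: str) -> bool:
    """Stage 1: reject prompts that signal unsafe intent (stand-in)."""
    return not any(term in prompt.lower() for term in UNSAFE_TERMS)

def check_plan(plan, pre, post, inv) -> bool:
    """Stage 2: verify a task plan step by step.

    `plan` is a list of (action, state_after) pairs; `pre` must hold
    before the first step, `inv` after every step, and `post` on the
    final state.
    """
    if not pre():
        return False
    for _action, state in plan:
        if not inv(state):
            return False
    return post(plan[-1][1]) if plan else True

def safeplan_gate(prompt, plan, pre, post, inv) -> bool:
    """Accept a task only if every stage approves."""
    return prompt_sanity_check(prompt) and check_plan(plan, pre, post, inv)

# Example: a mobile robot must keep battery above 10% (invariant).
plan = [("move_to_shelf", {"battery": 80}), ("pick_item", {"battery": 60})]
ok = safeplan_gate(
    "fetch the item from the shelf",
    plan,
    pre=lambda: True,
    post=lambda s: s["battery"] > 0,
    inv=lambda s: s["battery"] > 10,
)
```

The design choice to illustrate is that rejection can happen at either layer independently: an unsafe prompt never reaches plan verification, and a plan that violates an invariant is rejected even when the prompt itself looks benign.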
Similar Papers
Safety Aware Task Planning via Large Language Models in Robotics
Robotics
Makes robots safer by checking their plans.
A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents
Artificial Intelligence
Makes robots safer by teaching them risks.
Graphormer-Guided Task Planning: Beyond Static Rules with LLM Safety Perception
Robotics
Robots learn to avoid danger while working.