A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents
By: Yuting Huang, Leilei Ding, Zhipeng Tang, and more
Potential Business Impact:
Makes robots safer by teaching them about risks.
Large Language Models (LLMs) exhibit substantial promise in enhancing task-planning capabilities within embodied agents due to their advanced reasoning and comprehension. However, the systemic safety of these agents remains an underexplored frontier. In this study, we present Safe-BeAl, an integrated framework for the measurement (SafePlan-Bench) and alignment (Safe-Align) of LLM-based embodied agents' behaviors. SafePlan-Bench establishes a comprehensive benchmark for evaluating task-planning safety, encompassing 2,027 daily tasks and corresponding environments distributed across 8 distinct hazard categories (e.g., Fire Hazard). Our empirical analysis reveals that even in the absence of adversarial inputs or malicious intent, LLM-based agents can exhibit unsafe behaviors. To mitigate these hazards, we propose Safe-Align, a method designed to integrate physical-world safety knowledge into LLM-based embodied agents while maintaining task-specific performance. Experiments across a variety of settings demonstrate that Safe-BeAl provides comprehensive safety validation, improving safety by 8.55% to 15.22% compared to embodied agents based on GPT-4, while ensuring successful task completion.
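The abstract describes SafePlan-Bench as checking whether an agent's task plan triggers any of 8 hazard categories. As a rough illustration of that idea only, the sketch below shows one way a plan might be scored against per-category safety rules. Every class, rule, and hazard name other than "Fire Hazard" is an assumption for illustration, not the authors' implementation.

```python
# Minimal illustrative sketch (not the authors' code): scoring an LLM agent's
# task plan against hazard-category rules, in the spirit of SafePlan-Bench.
# All names, categories (besides "Fire Hazard"), and rules are hypothetical.
from dataclasses import dataclass

HAZARD_CATEGORIES = [
    "Fire Hazard", "Electrical Hazard", "Water Hazard", "Chemical Hazard",
    "Sharp Object Hazard", "Fall Hazard", "Suffocation Hazard", "Poisoning Hazard",
]  # assumed list; the paper names 8 categories but the abstract only cites "Fire Hazard"

@dataclass
class PlanStep:
    action: str   # e.g. "turn_on"
    target: str   # e.g. "stove"

@dataclass
class SafetyRule:
    category: str             # one of HAZARD_CATEGORIES
    trigger: tuple[str, str]  # (action, target) pair that is unsafe without a precondition
    precondition: str         # "action:target" step that must occur earlier for safety

def score_plan(plan: list[PlanStep], rules: list[SafetyRule]) -> dict:
    """Return per-category violation counts and an overall safety flag."""
    violations = {c: 0 for c in HAZARD_CATEGORIES}
    executed: set[str] = set()
    for step in plan:
        executed.add(f"{step.action}:{step.target}")
        for rule in rules:
            if (step.action, step.target) == rule.trigger and rule.precondition not in executed:
                violations[rule.category] += 1
    return {"violations": violations, "safe": sum(violations.values()) == 0}

if __name__ == "__main__":
    rules = [SafetyRule("Fire Hazard", ("leave_room", "kitchen"), "turn_off:stove")]
    plan = [PlanStep("turn_on", "stove"), PlanStep("leave_room", "kitchen")]
    print(score_plan(plan, rules))  # flags a Fire Hazard violation: stove left on
```

In this toy setup, a plan counts as unsafe if it reaches a hazardous (action, target) pair before its required precondition; a benchmark of this kind would aggregate such violations over many tasks per hazard category.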
Similar Papers
Safety Aware Task Planning via Large Language Models in Robotics
Robotics
Makes robots safer by checking their plans.
SafeLawBench: Towards Safe Alignment of Large Language Models
Computation and Language
Tests AI for safe and legal answers.
AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions
Cryptography and Security
Tests whether robots follow safe instructions and reject hazardous ones.