Subtle Risks, Critical Failures: A Framework for Diagnosing Physical Safety of LLMs for Embodied Decision Making
By: Yejin Son , Minseo Kim , Sungwoong Kim and more
Potential Business Impact:
Tests robots to make sure they don't do dangerous things.
Large Language Models (LLMs) are increasingly used for decision making in embodied agents, yet existing safety evaluations often rely on coarse success rates and domain-specific setups, making it difficult to diagnose why and where these models fail. This obscures our understanding of embodied safety and limits the selective deployment of LLMs in high-risk physical environments. We introduce SAFEL, the framework for systematically evaluating the physical safety of LLMs in embodied decision making. SAFEL assesses two key competencies: (1) rejecting unsafe commands via the Command Refusal Test, and (2) generating safe and executable plans via the Plan Safety Test. Critically, the latter is decomposed into functional modules, goal interpretation, transition modeling, action sequencing, enabling fine-grained diagnosis of safety failures. To support this framework, we introduce EMBODYGUARD, a PDDL-grounded benchmark containing 942 LLM-generated scenarios covering both overtly malicious and contextually hazardous instructions. Evaluation across 13 state-of-the-art LLMs reveals that while models often reject clearly unsafe commands, they struggle to anticipate and mitigate subtle, situational risks. Our results highlight critical limitations in current LLMs and provide a foundation for more targeted, modular improvements in safe embodied reasoning.
Similar Papers
A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents
Artificial Intelligence
Makes robots safer by teaching them risks.
Safety Not Found (404): Hidden Risks of LLM-Based Robotics Decision Making
Artificial Intelligence
AI robots avoid dangerous places in emergencies.
Safety Aware Task Planning via Large Language Models in Robotics
Robotics
Makes robots safer by checking their plans.